Boulder Future Salon News Bits

Thumbnail
Standup comedian Jon the Robot likes to tell his audiences that he does lots of auditions but has a hard time getting bookings. "They always think I'm too robotic."

Jon the Robot is the brainchild of Oregon State University researcher Naomi Fitter and recently wrapped up a 32-club tour of LA and Oregon. 32 clubs -- that's pretty good for a human, let alone a robot. This playlist has some clips from the tour.

Thumbnail
Fidget spinner as medical diagnostic centrifuge. Some clever Korean researchers have figured out how to combine a fidget spinner with a microfluidic chip to get a hand-powered centrifuge good enough for fast medical diagnoses.

Thumbnail
Microwave signal stability boosted a hundredfold. So, unbeknownst to me, "optical" atomic clocks, which work by detecting optically detectable electronic state transitions in atoms and ions such as ytterbium, strontium, and aluminum, are more accurate than the traditional microwave-based caesium-133 atomic clock. These "optical" state transitions have frequencies that can exceed 1,000 THz. That's terahertz -- remember your metric prefixes go kilo-, mega-, giga-, tera-, each a factor of 1,000 higher than the one before. Furthermore, this means the clocks can reach 10^-16 accuracy within a matter of seconds, unlike regular caesium atomic clocks, which require month-long averaging to achieve similar accuracy. That 10^-16 number matters for the metric system (SI), which strives for that level of accuracy in its definition of the second.

By taking this accuracy and translating it back into the microwave domain, this has the potential to massively increase the accuracy of many microwave applications, but two in particular the researchers thought were the most exciting: Doppler radar and VLBI astronomy. Doppler radar sensitivity is strongly affected by the amount of noise in the transmitted signal, and using the sub-femtosecond precision of optical atomic clocks could tremendously improve the ability to control the phase of those transmitted signals. As for VLBI astronomy, it might first help to recap what VLBI is: VLBI stands for very-long-baseline interferometry, and the idea is that you combine the signals from multiple radio telescopes to create the equivalent of one very large radio telescope. In conventional interferometry, the telescopes have to be physically connected so the light from the various telescopes can be brought together and combined in a way that the light wave patterns interact, or "interfere", with each other. With VLBI, you ditch the physical connections and instead record the phase of the incoming light as accurately as you can using atomic clocks. You then simulate the "interference" of the light from the multiple telescopes with calculations, giving the equivalent of one large telescope. By putting radio telescopes all over the globe, you can get the equivalent of a planet-sized radio telescope. But as you can see, the system depends on the accuracy of your atomic clocks, so vastly more accurate atomic clocks improve the precision of these telescopes. The real invention here, though, isn't the more accurate atomic clocks themselves, but the optical-to-microwave conversion system that makes them accessible to Doppler radars and VLBI radio telescopes. VLBI is also used for geodesy, which is the precise measurement of the shape of Earth.

"The researchers used the 'ticking' of two of NIST's ytterbium lattice clocks to generate light pulses, as well as frequency combs serving as gears to translate the higher-frequency optical pulses accurately into lower-frequency microwave signals. Advanced photodiodes converted light pulses into electrical currents, which in turn generated a 10 gigahertz (GHz, or a billion cycles per second) microwave signal that tracked the clocks' ticking exactly, with an error of just one part in a quintillion (1 followed by 18 zeros). This performance level is on par with that of both optical clocks and 100 times more stable than the best microwave sources."

"In converting stable optical waves to microwaves, the researchers tracked the phase -- the exact timing of the waves -- to ensure they were identical, and not shifted relative to one another. The experiment tracked phase changes with a resolution corresponding to just one millionth of a cycle."

"Some components of the NIST system, such as the frequency combs and detectors, are ready to be used in field applications now, lead researcher Frank Quinlan said. But NIST researchers are still working on transferring state-of-the-art optical clocks to mobile platforms. The ytterbium clocks, which operate at frequencies of 518 terahertz (trillion cycles per second), currently occupy large tables in highly controlled laboratory settings."

Thumbnail
Mathematical model for divvying up the Arctic region. "The Arctic region is too rich in natural resources to remain ignored: although it makes up less than 6% of the earth’s landmass, the Arctic region accounts for about 13% and 30% of untapped oil and gas, respectively."

To make the model, he did Google searches with "country name + Arctic + resource" to gauge the level of "interest" of a country in a resource. For example, "USA Arctic Oil". The six countries were "Russia", "Canada", "Iceland", "Norway", "Greenland/Denmark", and "USA", and the four resources considered were "gas", "oil", "fish", and "sea routes". He rounded the results to a 1-7 scale.

The mathematical model divides the Arctic region into 50 square kilometer pieces and tries to allocate them so that each piece is adjacent to the shore of the state that receives it, while making the overall solution as "envy free" as possible. "Envy free" is defined as no nation wanting to exchange its territories with another's. Spoiler: no "envy free" solution could be found.
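To make the "envy free" idea concrete, here's a toy sketch (my own, with made-up valuation numbers, not the paper's model) of how you'd check whether a given allocation of pieces is envy free:

# Minimal envy-freeness check; the countries, pieces, and scores below are illustrative only.

def is_envy_free(values, allocation):
    """values[c][p] = how much country c values piece p (think of the 1-7
    interest scores summed over the resources in that piece).
    allocation[c] = set of pieces assigned to country c.
    Envy-free: no country values another country's bundle more than its own."""
    for c in values:
        own = sum(values[c][p] for p in allocation[c])
        for other in allocation:
            if other == c:
                continue
            if sum(values[c][p] for p in allocation[other]) > own:
                return False  # country c envies country 'other'
    return True

values = {
    "Russia": {0: 7, 1: 3, 2: 2},
    "Norway": {0: 4, 1: 6, 2: 1},
    "Canada": {0: 2, 1: 2, 2: 5},
}
allocation = {"Russia": {0}, "Norway": {1}, "Canada": {2}}
print(is_envy_free(values, allocation))  # True for this toy example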

Thumbnail
"This robot can guess how you're feeling by the way you walk." The way the system works is, it represents a human as a stick figure with 16 joints. The first step in the process is to an existing pose estimation system to go from video to a 16-joint stick figure model. There's actually two substeps to this step, which are, first Structure-Aware PoseNet to get the pose estimation, and then using Temporal PoseNet to "correct" a time series of poses by looking at how the poses change over time. The next step is to get the emotion from the time series of poses, and for this they found two data sets with a total of 2,177 gait samples, labeled with just 4 labels: angry, sad, happy, and neutral. For the final classification of the person's gait, they actually translate the 3D stick figure into a 2D image so they can use a convolutional neural network on the 2D image. Their justification for doing this is just that 2D convolutions are faster and more efficient. A mean accuracy and F1 score are calculated to verify that the system works. For some reason the system is more accurate for angry (95.9%) and happy (94.5%) gaits than sad (83.4%) and neutral (81.3%) gaits.

The researchers envision the system being used by robots to understand the mood of pedestrians and maneuver more carefully around them if needed.

Thumbnail
Blood hemoglobin can now be measured with just a smartphone, without drawing blood. You just need photos of your inner eyelid taken with the smartphone camera. The inner eyelid is used because it's one spot on the body with no pigmentation molecules. The system uses a 2-step approach. The first step goes from the smartphone camera's RGB values to a full spectrum using a matrix transformation derived from a statistical analysis of that camera's spectral response. This has to be done for every specific smartphone camera -- for this study, they used the Samsung Galaxy J3. The second step goes from the spectrum to blood hemoglobin levels. This is another statistical model, this one based on the characteristic absorption spectrum of blood hemoglobin, further trained on a dataset of spectra reflected by eyelids. Hemoglobin has distinct spectral signatures in the visible range. The system is validated against a clinical laboratory hemoglobin test. The final output is hemoglobin in grams per deciliter (g/dL).
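A minimal sketch of the two-step pipeline: the calibration matrix, number of spectral bands, and training data below are placeholders I made up, and a plain linear regressor stands in for the study's statistical models.

import numpy as np
from sklearn.linear_model import LinearRegression

n_bands = 31                      # e.g. 400-700 nm in 10 nm steps (my assumption)
M = np.random.rand(n_bands, 3)    # stand-in for the per-phone RGB -> spectrum matrix

def rgb_to_spectrum(rgb):
    # step 1: reconstruct a full visible spectrum from the camera's RGB values
    return M @ np.asarray(rgb)

# step 2: map spectra to hemoglobin (g/dL) with a model trained on eyelid
# spectra paired with lab hemoglobin values (synthetic placeholders here)
train_spectra = np.random.rand(100, n_bands)
train_hgb = np.random.uniform(8, 17, size=100)
model = LinearRegression().fit(train_spectra, train_hgb)

spectrum = rgb_to_spectrum([0.61, 0.35, 0.30])   # RGB from an inner-eyelid photo
print(model.predict(spectrum.reshape(1, -1)))    # predicted hemoglobin, g/dL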

Thumbnail
An artificial eye that is better than human eyes has been invented. Well, at least in principle. It's called an "electrochemical" eye, but part of what makes it work is that it's hemispherical. Commercial cameras work with something called a charge-coupled device (CCD), which is manufactured using semiconductor microfabrication processes, which means it's always flat. In addition to being hemispherical, this system doesn't use a CCD for light detection; it uses a perovskite nanowire array. Perovskites, you'll recall (or not), are those funky heavy-metal minerals that everyone is talking about using for multi-color solar cells. They are basically crystals that share the cubic structure of calcium titanate (CaTiO3) but can be made with a variety of other elements substituted in. Obviously, if people want to use them for solar cells, they must be sensitive to light, which is what matters here.

To mimic the nerve fibers behind the retina, they used a liquid electrolyte and liquid-metal wires to contact the perovskite nanowire array.

Testing showed the electrochemical eye has a reaction speed comparable to the human eye -- actually slightly faster: 32.0 milliseconds response time and 40.8 milliseconds recovery time, versus 40 to 150 milliseconds for both in the human eye. It has a low detection limit comparable to the human eye -- it responds to as few as 86 photons, on par with human cone cells -- and an input-output signal gain comparable to the human eye. Its field of view is wide, though less than the human eye's: 100.1 degrees compared with 155 degrees for the human eye, but better than most CCD cameras, which are about 70 degrees. The spectrum it can pick up is similar to the human eye's -- a little weaker on the blue end (400 to 500 nanometers), and extending a bit beyond the human eye on the red end (to about 800 nanometers).

But in one respect it's dramatically better than the human eye: resolution. The perovskite nanowire density is much higher than the density of photoreceptors in the human retina. Unfortunately, the current version doesn't actually capture a signal from each photoreceptor, so that potential isn't realized yet.

This artificial eye has a complex manufacturing process. First a thin aluminum sheet is shaped into a hemisphere. This becomes the porous aluminum oxide membrane that the array of perovskite nanowires is grown inside. Indium is used for adhesion, while mercury and lead are used to make the perovskite. A bismuth compound is used as a sealant. Copper is used to make electrodes and nickel is used to make the nanowires. Ion etching, vapor deposition, and electrochemical deposition are all used to get these chemicals in the right places. An electron microscope is used to verify that the manufacturing process works correctly.

Thumbnail
Biological neural networks are "spiking" neural networks while artificial neural networks aren't, and "spiking" neural networks may need sleep. "Yijing Watkins, a computer scientist at Los Alamos National Laboratory in New Mexico, and her colleagues experimented with programming neuromorphic processors to learn to reconstruct images and video based on sparse data, a bit like how the human brain learns from its environment during childhood development. 'However, all of our attempts to learn eventually became unstable,' said study senior author Garrett Kenyon, also a computer scientist at Los Alamos."

"The scientists ran computer simulations of a spiking neural network to find out what happened. They found that although it could learn to identify the data it was trained to look for, when such training went uninterrupted long enough, its neurons began to continuously fire no matter what signals they received."

"Watkins recalled that 'almost in desperation,' they tried having the simulation essentially undergo deep sleep. They exposed it to cycles of oscillating noise, roughly corresponding to the slow brain waves seen in deep sleep, which restored the simulation to stability. The researchers suggest this simulation of slow-wave sleep may help 'prevent neurons from hallucinating the features they're looking for in random noise.'"

Thumbnail
Microsoft built OpenAI's "dream system," an Azure supercomputer that ranks among the top 5 in the world. "The announcement comes less than a year after Microsoft invested $1 billion in OpenAI and vowed to create a computational platform of 'unprecedented scale' to accelerate the development of artificial intelligence."

"The supercomputer, hosted in an undisclosed Microsoft Azure datacenter, has more than 285,000 CPU cores and 10,000 GPUs, with 400 gigabits per second of network connectivity for each GPU server."

Thumbnail
Pop songs written by OpenAI's deep-learning algorithm. "OpenAI trained Jukebox on 1.2 million songs, using the raw audio data itself rather than an abstract representation of pitch, instrument, or timing. But this required a neural network that could track so-called dependencies -- a repeating melody, say -- across the three or four minutes of a typical pop song, which is hard for an AI to do. To give a sense of the task, Jukebox keeps track of millions of time stamps per song, compared with the thousand time stamps that OpenAI's language generator GPT-2 uses when keeping track of a piece of writing."

"You will notice that the results, while technically impressive, are pretty deep in the uncanny valley."

Thumbnail
"The robots-are-taking-our-jobs threat gets real. The coronavirus is hastening the need for a labor force that doesn't get sick or locked down." "Industrial robot exports from Japan, the heartland of automation machinery, grew to some parts of the world from the previous year. Fanuc, maker of robots used in factories for companies ranging from Apple Inc. to Amazon.com Inc., saw orders in the fourth quarter to March climb 7%. Its revenues in the U.S. and China rose, while inventories of components ticked down as demand edged up. Bookings also increased for Harmonic Drive, which makes parts for small robots. This interest was manifested while the world was consumed by the initial shock wave of Covid-19, suggesting how big a priority automation has become."

Thumbnail
"Sphero, best known for making remote-controlled BB-8 and R2-D2 toys for Disney, announced today that it's spun off a new company catering to the robotic needs of first responders, law enforcement, and military clients."

But all we know is they will be making a "lightweight, yet highly advanced robotic solution that provides critical awareness for those we depend on the most, including police, fire, EMT, military, and others with dangerous jobs."

Thumbnail
"Symbolic mathematics finally yields to neural networks." This is a more lay-person-friendly introduction to the neural network that does integrals that I posted about in February. This article also goes on to describe the effort underway to use a similar methodology to advance mathematics by making a neural net that can invent proofs.

"François Charton describes at least two ways their approach could move AI theorem finders forward. First, it could act as a kind of mathematician's assistant, offering assistance on existing problems by identifying patterns in known conjectures. Second, the machine could generate a list of potentially provable results that mathematicians have missed. 'We believe that if you can do integration, you should be able to do proving.'"

Thumbnail
Training deep neural networks with 1/10th the energy. "One forward pass of the ResNet50 model requires 4 GFLOPs (FLOPs: floating point operations) of computations and training requires 10^18 FLOPs, which takes 14 days on one state-of-the-art NVIDIA M40 GPU. As a result, training a state-of-the-art DNN model often demands considerable energy, along with the associated financial and environmental costs. For example, a recent report shows that training a single DNN can cost over $10K US dollars and emit as much carbon as five cars in their lifetimes."

The technique here relies on the observation that "dense, randomly-initialized networks contain small subnetworks which can match the test accuracy of original networks when trained alone themselves. These subnetworks are called winning tickets." The idea is to notice these "winning tickets" developing early on in the training process.

The technique relies on a concept called "mask distance". By mask, they mean a set of 0s and 1s that indicate which connections are temporarily "pruned" (deleted) -- "masked out" of the network. Given two masks, the system can calculate the "distance" between them, and the key observation is that as training proceeds, the mask distances between successive epochs rapidly decrease and then stay more or less unchanged after a certain point. The masks at that point identify the "early bird" winning tickets, and the system can prune the masked-out connections and stop training them.
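Here's a minimal sketch of the mask-distance idea -- masks as 0/1 vectors over the network's connections, distance as the fraction of entries that differ. The magnitude-based pruning rule and the numbers below are my own simplifications, not the paper's exact procedure.

import numpy as np

def prune_mask(weights, keep_ratio=0.5):
    """1 = keep the connection, 0 = prune it; keep the largest-magnitude fraction."""
    k = int(weights.size * keep_ratio)
    threshold = np.sort(np.abs(weights.ravel()))[-k]
    return (np.abs(weights) >= threshold).astype(np.uint8)

def mask_distance(m1, m2):
    return np.mean(m1 != m2)   # fraction of connections whose keep/prune decision changed

w_epoch3 = np.random.randn(1000)
w_epoch4 = w_epoch3 + 0.01 * np.random.randn(1000)   # weights barely changed between epochs
m3, m4 = prune_mask(w_epoch3), prune_mask(w_epoch4)
print(mask_distance(m3, m4))   # small value -> the "early bird" ticket has emerged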

Thumbnail
"Scientists have trained a computer to analyse different types of brain scan and predict the age of the human brain." By "different types of brain scan", they mean an anatomical MRI, a functional MRI (shows changes in brain activity over time), and magnetoencephalography (MEG).

"They trained their computer model with a subset of data from the Cam-CAN database, which holds MEG, MRI and neuropsychological data for 650 healthy people aged between 17 and 90 years old. They then compared different versions of the model with the standard anatomical MRI scan, and models that had additional information from functional MRI (fMRI) scans and MEG tests. They found that adding either the MEG or fMRI scan to the standard MRI led to a more accurate prediction of brain age. When both were added, the model was enhanced even further."

"Next, they looked at a marker of brain age (called brain age delta) and studied how this related to different brain functions that are measured by MEG and fMRI. This confirmed that MEG and fMRI were each providing unique insights about the brain's function, adding further power to the overall model."

"However, when they tested their model against the full Cam-CAN database of 650 people, some of whom did not have MRI, fMRI and MEG data available, they found that, even with the missing data, the computer model using what was available was still more accurate than MRI alone."

"In fact, as most hospitals use electroencephalography (EEG) rather than MEG tests, another important finding was that the most powerful brain function measurement that MEG tests provide to the model can also be accurately measured by EEG."

The article doesn't say what the model is, but it's a combination of ridge regression and random forests. Ridge regression is similar to linear regression, but adds a penalty on the size of the coefficients, which lets it cope with input variables that are correlated with each other ("multicollinearity") rather than assuming they're independent. A random forest is an ensemble of decision trees -- hierarchical sets of decisions learned from the input data -- whose outputs are combined. If the trees and the forest are constructed using randomness in a particular way, the combined system is prevented from overfitting the training data.
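One plausible way to combine the two, sketched with scikit-learn on synthetic stand-in data (the features, ages, and hyperparameters below are made up; in the real model each ridge regression would presumably see one modality's features):

import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge

X = np.random.randn(650, 50)              # 650 subjects x 50 imaging features (fake)
y = np.random.uniform(17, 90, size=650)   # ages 17-90

# Ridge regressions as base learners, a random forest stacking their predictions.
model = StackingRegressor(
    estimators=[("ridge_a", Ridge(alpha=1.0)), ("ridge_b", Ridge(alpha=10.0))],
    final_estimator=RandomForestRegressor(n_estimators=200, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:5]))               # predicted brain ages for 5 subjects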

Thumbnail
AI to generate to-do lists from your emails. They started with 938,035 emails between 279 employees, annotated with various metadata. From there, the AI works in three stages: identifying "commitment" sentences, identifying additional sentences that are helpful in writing the to-do item, and generating the actual to-do items. For identifying the "commitment" sentences, it uses a classifier that is a recurrent neural network (RNN) with word embeddings (vectors that represent the meanings of words) and a self-attention mechanism, trained on "commitment" sentences hand-labeled by 3 human judges. To identify additional "helpful" sentences, a second system scores all the sentences in the email by their relevance to the sentence chosen as the "commitment" sentence; the relevance score is calculated using embeddings. The final phase of generating the actual to-do item is a sequence-to-sequence neural network with an "attention" mechanism and a "copy" mechanism that can copy from the input, so the to-do item uses the exact words and phrases from the email as much as possible. The "commitment" sentence and "helpful" sentences are encoded by a Long Short-Term Memory (LSTM) network, and the decoder can then either generate vocabulary words based on the embedding or copy words from the input.
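As a small illustration of the middle step -- scoring the other sentences in an email by relevance to the commitment sentence -- here's a toy sketch using cosine similarity over placeholder embeddings (the real system uses learned embeddings and richer features; the sentences are taken from the first example email below):

import numpy as np

def embed(sentence):
    # stand-in for real learned embeddings: a deterministic random vector per sentence
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(300)

def rank_helpful(commitment, other_sentences):
    c = embed(commitment)
    scored = []
    for s in other_sentences:
        v = embed(s)
        cosine = float(np.dot(c, v) / (np.linalg.norm(c) * np.linalg.norm(v)))
        scored.append((s, cosine))
    return sorted(scored, key=lambda x: x[1], reverse=True)   # most "helpful" first

email = ["Thank you for helping me prepare the paper draft for ACL conference.",
         "Attached is the TeX file.",
         "Please feel free to make any changes to the revised version."]
print(rank_helpful("I'll keep you posted.", email))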

----

From: John Carter
To: Helena Watson; Daniel Craig; Rupert Grint
Subject: Thanks

Thank you for helping me prepare the paper draft for ACL conference. Attached is the TeX file.

Please feel free to make any changes to the revised version. I sent to my other collaborators already and am waiting for their suggestions. I'll keep you posted. Thanks, John.

----

To-do item from the AI: Keep Helena posted about ACL conference.

To-do item from human judges: Keep Helena posted about paper draft for ACL conference.

----

From: Raymond Jiang
To: support@company.com
Subject: Bug 62

Hi, there is a periodic bug 62 appearing in my cellphone browser, whenever I choose to open the request. It might be a JavaScript issue on our side, but it would be nice if you take a look. Thanks, Ray.

From: Craig Johnson
To: Raymond Jiang
Subject: Bug 62

Good Morning Ray, I shall take a look at it and get back to you.

----

To-do item from the AI: Take a look at periodic and get back to Raymond.

To-do item from human judges: Take a look at Bug 62 and get back to Raymond.