Boulder Future Salon Recent News Bits

Thumbnail 9,000 years ago, you could walk from Britain to Denmark without leaving dry land. The land in between, called Doggerland, was as large as Great Britain.
Thumbnail "Semiconducting nanotubes that assemble automatically in solutions of metallic nanocrystals and certain ligands." By ligand here I think they mean functional groups in chemistry that bind to a central metal atom, rather than the biochemistry meaning of the word that refers to molecules that bind to proteins or DNA and change their shape.

"The tubes have between three and six walls that are perfectly uniform and just a few atoms thick -- making them the first such nanostructures of their kind."

"What's more, the nanotubes possess photoluminescent properties: they can absorb light of a specific wavelength and then send out intense light waves of a different color, much like quantum dots and quantum wells. That means they can be used as fluorescent markers in medical research, for example, or as catalysts in photoreduction reactions, as evidenced by the removal of the colors of some organic dyes, based on the results of initial experiments."

The nanotubes are not carbon nanotubes; they are cadmium selenide (CdSe), which is a semiconductor. It has a zincblende crystal structure, a type of crystal structure with a cubic unit cell. The ligands used in the synthesis are short-chain acetate ligands, i.e. ligands derived from acetic acid (CH3COOH). The nanotubes start from plate-like "nanoseeds", which combine into "nanosheets", which curve into nanotubes.

The nanotubes have light absorption and emission peaks at 460 nanometers, which is blue visible light. They act as a photocatalyst for Rhodamine B, a red dye used a lot in chemistry, getting it to change form when visible light is shone on it.
Thumbnail Plant mitochondrial gene editing is now possible. "Nuclear DNA was first edited in the early 1970s, chloroplast DNA was first edited in 1988, and animal mitochondrial DNA was edited in 2008. However, no tool previously successfully edited plant mitochondrial DNA."

"Researchers used their technique to create four new lines of rice and three new lines of rapeseed (canola)."

"The animal mitochondrial genome is a relatively small molecule contained in a single circular structure with remarkable conservation between species." "Plant mitochondrial genomes are a different story." "The plant mitochondrial genome is huge in comparison, the structure is much more complicated, the genes are sometimes duplicated, the gene expression mechanisms are not well-understood, and some mitochondria have no genomes at all."
Thumbnail "Why is it so hard to integrate machine learning into real business applications?" "Targeted product recommendations is one of the most common methods to increase revenue, computers make suggestions based on users' historical preferences, product to product correlations and other factors like location (e.g. proximity to a store), weather and more. Building such solutions requires analyzing historical transactions and creating a model. Then when applying it to production you'll want to incorporate fresh data such as the last transactions the customer made and re-train the model for accurate results. Machine learning models are rarely trained over raw data. Data preparation is required to form feature vectors which aggregate and combine various data sources into more meaningful datasets and identify a clear pattern. Once the data is prepared, we use one or more machine learning algorithms, conduct training and create models or new datasets which incorporate the learnings."
Thumbnail The Artificial Intelligence Applications to Autonomous Cybersecurity Challenge, abbreviated AI ATAC and conveniently pronounced "AI attack". "The Navy's Information Assurance and Cybersecurity Program Office seeks to automate the Security Operations Center (SOC) using artificial intelligence and machine learning (AI/ML) beginning with the endpoint. Modern malware strains, especially sophisticated malware created by advanced persistent threat (APT) groups, have shown capabilities that mutate faster than signature-based protection tools can adapt."

Submit your tool along with an explanatory white paper by September 30, 2019.
Thumbnail Artificial muscles for robots: fibers made of conductive nanowires, controlled with heat.
Thumbnail "By 2030, China wants to be the world's leading AI power, with an AI industry valued at $150 billion. How does China plan to achieve this?"

"Take health care. Ping An, a large Chinese conglomerate, has unveiled AI doctors. It has launched clinics known as 'One-Minute Clinic,' where AI doctors diagnose symptoms and propose medications. Within three years, Ping An plans to build hundreds of thousands of these clinics across China."

"Could China export 10,000 AI doctors to Russia? Such a move would transform geopolitics. The biggest impact is that it would shift the China-Russia relationship, from energy and currency, areas that the U.S. can influence, to Chinese AI, over which the U.S. has no control. The AI doctors may make Russian society more China-centric, and future generations in Russia may be more familiar with Ping An than with IBM or Intel."
Thumbnail "Using Watson algorithms, the HR team developed and patented a program that looks at patterns in data from all over IBM and predicts which employees are most likely to quit in the near future. The algorithms then recommend actions -- like more training or awarding an overdue promotion -- to keep them from leaving."
Thumbnail "Soldiers are slated to fire at targets next year using a platoon of robotic combat vehicles they will control from the back of modified Bradley Fighting Vehicles."

"The monthlong operational test is scheduled to begin in March at Fort Carson, Colorado, and will provide input to the Combat Capabilities Development Command's Ground Vehicle Systems Center on where to go next with autonomous vehicles."

"The upgraded Bradleys, called Mission Enabler Technologies-Demonstrators, or MET-Ds, have cutting-edge features such as a remote turret for the 25 mm main gun, 360-degree situational awareness cameras and enhanced crew stations with touchscreens."

"Initial testing will include two MET-Ds and four robotic combat vehicles on M113 surrogate platforms. Each MET-D will have a driver and gunner as well as four Soldiers in its rear, who will conduct platoon-level maneuvers with two surrogate vehicles that fire 7.62 mm machine guns."
Thumbnail Robot umpire. "It came on the first pitch in an all-star game in York, Pennsylvania, in front of a few thousand fans. York Revolution starting pitcher Mitch Atkins fired a fastball just off the center of the plate. The home plate umpire signaled a 'strike.' But a computerized radar system actually made the call -- for the first time in professional baseball history."

"The pitches are tracked through a large Doppler radar screen high above home plate. The radar system measures a player's height and creates a strike zone."
Thumbnail "Moxi, which was designed and built by the Austin-based company Diligent Robotics, isn't trying to act like a nurse. Instead, Diligent Robotics founders Andrea Thomaz and Vivian Chu have designed their robot to run the approximately 30% of tasks nurses do that don't involve interacting with patients, like running errands around the floor or dropping off specimens for analysis at a lab."

"Moxi is equipped with a robotic arm and a set of wheels on its base, and can be preprogrammed to run errands around the hospital. It works like this: Moxi is hooked into the hospital's electronic health record system. Nurses can set up rules and tasks so that the robot gets a command for an errand when certain things change in a patient's record on Moxi's floor. For instance, if a patient has been discharged and their room is marked clean in the health record, Moxi will get a command to take an admission bucket -- a set of fresh supplies for a new patient -- to the room so that it's all ready to go for the next person."
Thumbnail Vision-based AI system performs a completely automatic landing of an airplane. The airplane, a small Diamond DA42, was landed at Wiener Neustadt East Airport in May. The system uses visible light in good weather and infrared in poor weather, and enables airplanes to land themselves without any ground assistance.
Thumbnail "When a neuron receives input, the branches of the elaborate tree-like receptors extending from the neuron, known as dendrites, functionally work together in a way that is adjusted to the complexity of the input."

"The strength of a synapse determines how strongly a neuron feels an electric signal coming from other neurons, and the act of learning changes this strength. By analyzing the 'connectivity matrix' that determines how these synapses communicate with each other, the algorithm establishes when and where synapses group into independent learning units from the structural and electrical properties of dendrites. In other words, the new algorithm determines how the dendrites of neurons functionally break up into separate computing units and finds that they work together dynamically, depending on the workload, to process information."

"To date traditional learning algorithms (such as those currently used in AI applications) assume that neurons are static units that merely integrate and re-scale incoming signals. By contrast, the results show that the number and size of the independent subunits can be controlled by balanced input or shunting inhibition. The researchers propose that this temporary control of the compartmentalization constitutes a powerful mechanism for the branch-specific learning of input features."
Thumbnail Adversarial objects. "To make robot grasping more robust, researchers are designing objects that are as difficult as possible for robots to manipulate."

"There's been a bunch of research recently into adversarial images, which are images of things that have been modified to be particularly difficult for computer vision algorithms to accurately identify." Researchers "have been extending this concept to robot grasping, with physical adversarial objects carefully designed to be tricky for conventional robot grippers to pick up."

"In one of the examples, you can see a cube with some shallow pyramids on three of the six sides -- the smallest pyramid has a slope of just 10 degrees. The side opposite each pyramid is a regular, flat face, and the result is that there are no directly opposing faces on the cube. This causes problems for two-finger grippers, which work by pinching things, and if you're trying to pinch against an angled surface, the force you exert will tend to cause the object to twist, often leading to a failed grasp."
Thumbnail "While we may not all be able to build our own machine learning models from scratch, new tools like Runway ML and Joel Simon's forthcoming Artbreeder are opening up access to these machine learning models for everyone. Will this flood our screens with infinite images of deep kitsch? Or can machine learning augment human creativity on a larger scale and point towards a new direction for art?"

"My high school art teacher Marco Marchi, taught me that creativity can start with new tools, but it should never stop there. This is especially true with machine learning models that can apply eye candy like filters at the push of a button, only then to devolve into derivative and contrived visual effects. Marco also taught me that the beauty in making art is in the discovery process. Likewise, art appreciation is about unpacking the artist's discovery process and finding your own discoveries along the way."
Thumbnail The Shining starring Jim Carrey. Made you do a double-take? It's a deepfake. The Shining actually stars Jack Nicholson.
Thumbnail AI beats professionals at 6-player poker. The AI, called Pluribus, learned poker through self-play, like AlphaZero, AlphaStar, and OpenAI Five, but unlike those systems, Pluribus first computes a strategy offline, called the "blueprint strategy", then tries to improve on it in real time during the game. Because there are too many decision points to reason about individually, it "buckets" similar actions and information together to reduce the complexity of the game, but it has a system for making real-time, last-second decisions that fall outside the buckets.

The actual training involves a "regret" system. When the system plays against itself, one player is chosen as the player being trained. That player learns not only from what actually happens during the game and whether it ultimately wins or loses, but from what would have happened if it had made different decisions. This can be calculated because, during self-play, the strategies of all the other players are known (unlike in real life), so the system knows what they would have done and what would have happened if the main player had played differently. This enables "counterfactual regret" to be calculated, and it is this difference between what was actually achieved and what could have been achieved that the player learns from.
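
Here's the regret-matching core of that idea in the simplest possible setting, rock-paper-scissors self-play (a toy sketch; Pluribus layers abstraction, Monte Carlo sampling, and real-time search on top of this):

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
def payoff(a, b):
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy(regrets):
    # Regret matching: play each action in proportion to its positive regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

def sample(probs):
    return random.choices(range(ACTIONS), weights=probs)[0]

regrets = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS
for _ in range(100_000):
    probs = strategy(regrets)
    me, opp = sample(probs), sample(probs)   # self-play: same strategy
    actual = payoff(me, opp)
    for alt in range(ACTIONS):
        # Counterfactual regret: how much better would action `alt` have
        # done against what the opponent actually played?
        regrets[alt] += payoff(alt, opp) - actual
    for a in range(ACTIONS):
        strategy_sum[a] += probs[a]

total = sum(strategy_sum)
print([round(s / total, 3) for s in strategy_sum])  # -> roughly [1/3, 1/3, 1/3]
```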

Pluribus assumes the other players can change strategy during the game, so in guessing how they will play it actually consults 4 different strategies: its own "blueprint strategy", plus modified forms of the blueprint strategy biased towards folding, towards calling, and towards raising. This keeps Pluribus from settling into a predictable strategy its opponents could exploit, and keeps them guessing.

The blueprint strategy for Pluribus was computed in 8 days on a 64-core server, which is much less computing power than the other aforementioned "superhuman" AI systems, which used giant GPU farms. During live play, Pluribus only uses a single machine with 128 GB of memory and can play a hand every 20 seconds on average, about twice as fast as professional human players. For comparison, AlphaGo used 1,920 CPUs and 280 GPUs in its 2016 matches against top Go professional Lee Sedol, and Libratus used 100 CPUs in its 2017 matches against top professionals in two-player poker.

This represents a breakthrough in AI systems capable of handling multi-player, imperfect-information games -- games where, unlike two-player zero-sum games, playing a Nash equilibrium strategy isn't guaranteed to be a winning approach even in theory (and computing one is intractable anyway).

The AI "defeated poker professional Darren Elias, who holds the record for most World Poker Tour titles; and Chris 'Jesus' Ferguson, winner of six World Series of Poker events. Each pro separately played 5,000 hands of poker against five copies of Pluribus."

"In another experiment involving 13 pros, all of whom have won more than $1 million playing poker, Pluribus played five pros at a time for a total of 10,000 hands and again emerged victorious."

"Playing a six-player game rather than head-to-head requires fundamental changes in how the AI develops its playing strategy. We're elated with its performance and believe some of Pluribus' playing strategies might even change the way pros play the game."

"Pluribus' algorithms created some surprising features in its strategy. For instance, most human players avoid 'donk betting' -- that is, ending one round with a call but then starting the next round with a bet. It's seen as a weak move that usually doesn't make strategic sense. But Pluribus placed donk bets far more often than the professionals it defeated."
Thumbnail Polarized light camera invented. If you're thinking, don't polarized light cameras already exist? Yes, but not as small as this one -- potentially small enough to fit in a cell phone. The article doesn't say how it works (it's too busy telling you how great polarized cameras are and all the potential uses of a very small and light camera -- the press release writer was probably asked to "make this relevant to the public"), but a click through to the actual research paper reveals they did it all optically, with diffraction gratings: gratings carefully designed so that it's possible, with a boatload of math, to calculate something known as the full-Stokes polarization vector, considered the most complete description of light's polarization.
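
For reference, the Stokes vector is standard optics (not specific to this paper): four numbers built from intensity measurements behind different polarizers,

$$ S = \begin{pmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{pmatrix} = \begin{pmatrix} I_{0^\circ} + I_{90^\circ} \\ I_{0^\circ} - I_{90^\circ} \\ I_{45^\circ} - I_{135^\circ} \\ I_{\mathrm{RCP}} - I_{\mathrm{LCP}} \end{pmatrix} $$

where $I_\theta$ is the intensity transmitted through a linear polarizer at angle $\theta$, and RCP/LCP are the right- and left-circularly polarized components. "Full-Stokes" means the camera recovers all four components, including circular polarization, at every pixel.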
Thumbnail "When zebrafish sleep, they can display two states that are similar to those found in mammals, reptiles and birds: slow-wave sleep and paradoxical, or rapid eye movement, sleep." This suggests that this pattern of brain activity "evolved at least 450 million years ago, before any creatures crawled out of the ocean."

"To study the zebrafish, common aquarium dwellers also known as danios, the researchers built a benchtop fluorescent light-sheet microscope capable of full-fish-body imaging with single-cell resolution. They recorded brain activity while the fish slept in an agar solution that immobilized them. They also observed the heart rate, eye movement and muscle tone of the sleeping fish using a fluorescence-based polysomnography that they developed."

"They named the sleep states they observed 'slow bursting sleep,' which is analogous to slow-wave sleep, and 'propagating wave sleep,' analogous to REM sleep. Though the fish don't move their eyes during REM sleep, the brain and muscle signatures are similar. (Fish also don't close their eyes when they sleep, as they have no eyelids.)"

"The researchers found another similarity between fish and human sleep. By genetically disrupting the function of melanin-concentrating hormone, a peptide that governs the sleep-wake cycle, and observing neural expressions as the fish slept, the researchers determined that the hormone's signaling regulates the fish's propagating wave sleep the way it regulates REM sleep in mammals."
Thumbnail A pair of supermassive black holes headed for a collision have been spotted. "Each black hole's mass is more than 800 million times that of our sun. As the two gradually draw closer together in a death spiral, they will begin sending gravitational waves rippling through space-time."

"Even before the destined collision, the gravitational waves emanating from the supermassive black hole pair will dwarf those previously detected from the mergers of much smaller black holes and neutron stars."

"Supermassive black hole binaries produce the loudest gravitational waves in the universe,' says co-discoverer Chiara Mingarelli, an associate research scientist at the Flatiron Institute's Center for Computational Astrophysics in New York City. Gravitational waves from supermassive black hole pairs 'are a million times louder than those detected by LIGO."

"The two supermassive black holes are especially interesting because they are around 2.5 billion light-years away from Earth. Since looking at distant objects in astronomy is like looking back in time, the pair belong to a universe 2.5 billion years younger than our own."

At this point it might sound like we're going to detect the gravitational waves any day now, but the two black holes are still 430 parsecs apart, which is about 1,400 light-years, so it's impossible for them to collide in the next 1,400 years: they'd have to be headed directly at each other at the speed of light, and they're not. In fact the astronomers estimate the collision will happen in about 2.5 billion years. (2.5 billion is, just coincidentally, about the same as the number of light-years the galaxy is away from us.) Oh, the difference between human timescales and astronomical timescales.
Thumbnail Muons have been used to map out an ancient underground building in Russia, thought to be an early Christian church. Muons are subatomic particles similar to electrons but with greater mass. They are created from the collision of high-energy cosmic rays with the atmosphere, and have enough energy to penetrate underground, which is why they make underground imaging possible.
Thumbnail "Hinton's grand vision of AI has always been that there are simple general principles of learning, analogous to the Navier-Stokes equations of fluid flow, from which complex general intelligence emerges. I think Hinton under-estimates the complexity required for a general learning mechanism, but I agree that we are searching for some general (i.e., minimal-bias) architecture. For the following reasons I believe that vector quantization is an inevitable component of the architecture we seek."

"Do the objects of reality fall into categories? If so, shouldn't a learning architecture be designed to categorize? A standard theory of language learning is that the child learns to recognize certain things, like mommy and doggies, and then later attaches these learned categories to the words of language. It seems natural to assume that categorization precedes language in both development and evolution. The objects of reality do fall into categories and every animal must identify potential mates, edible objects, and dangerous predators."

"It is not clear that the vector quanta used in VQ-VAE-2 correspond to meaningful categories. It is true, however, that the only meaningful distribution models of ImageNet images are class-conditional. A VQ-VAE with a vector quanta for the image as a whole at least has the potential to allow class-conditioning to emerge from the data."

"Vector quantization shifts the interpretability question from that of interpreting linear threshold units to that of interpreting emergent symbols -- the embedded tokens that are the emergent vector quanta."

"A fundamental issue is whether the vectors being quantized actually fall into natural discrete clusters. This form of interpretation is often done with t-SNE. But if vectors naturally fall into clusters then it seems that our models should seek and utilize that clustering. Interpretation can then focus on the meaning of the emergent symbols."

Ok, if this doesn't make sense, the key to understanding it is that "vector quantization" is a compression technique: continuous vectors get snapped to the nearest entry in a finite "codebook" of representative vectors, so each one can be stored as just an index. It's analogous to, for example, jpeg, which is used to compress images, except VQ-VAE-2 applies the quantization hierarchically, compressing at multiple levels from low-level (fine detail) to high-level (big-picture structure).
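
A minimal sketch of the quantization step itself (numpy; the codebook here is random, whereas in a VQ-VAE it is learned jointly with the encoder and decoder, and all sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 64))   # 256 code vectors, 64 dims each

def quantize(vectors):
    # Squared distance from every input vector to every codebook entry.
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d2.argmin(axis=1)          # index of the nearest code vector
    return indices, codebook[indices]    # compressed form, reconstruction

vecs = rng.normal(size=(200, 64))        # e.g. encoder outputs for an image
idx, approx = quantize(vecs)
print(idx[:8])  # each 64-dim float vector is now a single integer code
```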

VQ-VAE-2 is a system that uses vector quantization to actually generate new images. In other words, instead of decoding a set of vectors made from a real image, you decode a set of vectors not made from a real image, and thus generate a new image. Its creators claim it can make images as photorealistic as generative adversarial networks (GANs), the image generation system that gets all the headlines.

The writer of the blog post is saying, in so many words, that, in whatever system ultimately turns out to be best for enabling machines to learn to see and learn language and learn to connect the two together, it will incorporate vector quantization in some form.
Thumbnail Robot-ants that can jump, communicate and work together. Actually they don't look like ants, they look like circuit-board stick figures.
Thumbnail DeepMind's AlphaStar will be playing StarCraft II anonymously on Battle.net against players who click the opt-in button.

"The StarCraft community will not know which matches AlphaStar is playing, to help ensure that all games are played under the same conditions. AlphaStar plays with built-in restrictions that the DeepMind team has defined in consultation with pro players. A win or a loss against AlphaStar will affect your MMR as normal."
Thumbnail "In the many cities without real-time forecasts from the transit agency, we heard from surveyed users that they employed a clever workaround to roughly estimate bus delays: using Google Maps driving directions. But buses are not just large cars. They stop at bus stops; take longer to accelerate, slow down, and turn; and sometimes even have special road privileges, like bus-only lanes."

"To develop our model, we extracted training data from sequences of bus positions over time, as received from transit agencies' real time feeds, and aligned them to car traffic speeds on the bus's path during the trip. The model is split into a sequence of timeline units -- visits to street blocks and stops -- each corresponding to a piece of the bus's timeline, with each unit forecasting a duration. A pair of adjacent observations usually spans many units, due to infrequent reporting, fast-moving buses, and short blocks and stops."

"This structure is well suited for neural sequence models like those that have recently been successfully applied to speech processing, machine translation, etc. Our model is simpler. Each unit predicts its duration independently, and the final output is the sum of the per-unit forecasts."
Thumbnail New fast.ai course: a code-first introduction to natural language processing, "following the fast.ai teaching philosophy of sharing practical code implementations and giving students a sense of the 'whole game' before delving into lower-level details. Applications covered include topic modeling, classification (identifying whether the sentiment of a review is positive or negative), language modeling, and translation.

"The course teaches a blend of traditional NLP topics (including regex, SVD, naive Bayes, tokenization) and recent neural network approaches (including RNNs, seq2seq, attention, and the transformer architecture), as well as addressing urgent ethical issues, such as bias and disinformation.

"All the code is in Python in Jupyter Notebooks, using PyTorch and the fastai library."
Thumbnail Whole ecosystems are shifting dramatically north in the Great Plains, according to analysis of birds.

"The northernmost ecosystem boundary shifted more than 365 miles north, with the southernmost boundary moving about 160 miles from the 1970 baseline."

"To arrive at their conclusions, the researchers analyzed 46 years' worth of avian data collected for the North American Breeding Bird Survey, a US Geological Survey program designed to track bird populations. That survey included more than 400 bird species found within a 250-mile-wide transect stretching from Texas to North Dakota."

"The team then separated bird species into groups based on their body masses and searched for gaps in the distribution of the groups. Those gaps effectively act like the DNA signature of an ecosystem, allowing the team to identify where one ecosystem ends and another begins."

"Over their study area, and over time, the researchers identified three distinct ecosystem boundaries, with a fourth -- and thus a fourth ecosystem regime -- appearing in the final decade."

"The fact that the northernmost boundary shifted more than its southernmost counterpart reflects a well-documented phenomenon known as Arctic amplification, suggesting that climate change is at play."
Thumbnail Neural stem cells damaged by electronic cigarettes. "Using cultured mouse neural stem cells, the UC Riverside researchers identified the mechanism underlying EC-induced stem cell toxicity as 'stress-induced mitochondrial hyperfusion.'"

They say stress-induced mitochondrial hyperfusion is supposed to be a transient survival response, so the increased mitochondrial oxidative stress is supposed to be temporary, but with continuous exposure to e-liquids, aerosols, or nicotine, it could have long-term repercussions.
Thumbnail "Machine learning has been used to automatically translate long-lost languages." The way it works is by mapping the unknown language to its closest related known language, constrained by some assumed rules about how languages evolve.

They demonstrate this with two languages, Ugaritic and Linear B. Ugaritic and Hebrew are both derived from the same proto-Semitic language, so Ugaritic is mapped to Hebrew. Linear B is mapped to Mycenaean Greek.
Thumbnail Estimated Tesla Autopilot miles as of July 5th, 2019: 1,557,569,997. Estimated miles on Autopilot hardware version 2: 867,910,942.
Thumbnail What's the best temperature for civilization? I thought this video was going to be about what's the best temperature for agriculture, but it's actually about what's the best temperature for humans to work outside.
Thumbnail Could the US invade Iran? Commissar Binkov, who apparently is a talking puppet with a Russian accent who wears a military uniform, presents his analysis.
Thumbnail "An algorithm with no training in materials science can scan the text of millions of papers and uncover new scientific knowledge."

The team "collected 3.3 million abstracts of published materials science papers and fed them into an algorithm called Word2vec. By analyzing relationships between words the algorithm was able to predict discoveries of new thermoelectric materials years in advance and suggest as-yet unknown materials as candidates for thermoelectric materials."

"Without telling it anything about materials science, it learned concepts like the periodic table and the crystal structure of metals. That hinted at the potential of the technique. But probably the most interesting thing we figured out is, you can use this algorithm to address gaps in materials research, things that people should study but haven't studied so far."

"The Berkeley Lab team took the top thermoelectric candidates suggested by the algorithm, which ranked each compound by the similarity of its word vector to that of the word 'thermoelectric.' Then they ran calculations to verify the algorithm's predictions."

"Of the top 10 predictions, they found all had computed power factors slightly higher than the average of known thermoelectrics; the top three candidates had power factors at above the 95th percentile of known thermoelectrics."

"Next they tested if the algorithm could perform experiments 'in the past' by giving it abstracts only up to, say, the year 2000. Again, of the top predictions, a significant number turned up in later studies -- four times more than if materials had just been chosen at random. For example, three of the top five predictions trained using data up to the year 2008 have since been discovered and the remaining two contain rare or toxic elements."
Thumbnail "The ability to read electrical activities from cells is the foundation of many biomedical applications, such as brain activity mapping and neural prosthetics. To achieve the most accurate readings and finest control of prosthetic limbs, electronic devices need to gain direct access to the interior of cells for intracellular recording. The most widely used conventional method for intracellular recording is the patch-clamp electrode, although its micrometer scale tip causes irreversible damage to the cells and it can only record a few cells at a time. To address these issues, we have developed a scalable way to create large arrays of 'hairpin'-like nanoscale transistor devices and used these to read electrical activity from the interior of multiple cells at the same time."

"The straight silicon nanowires, with very high length-to-width ratio, are as flexible as cooked noodles. To incorporate them into device arrays, we first made patterned a silicon wafer with U-shaped trenches, and then 'combed' the nanowires over the trenches. In addition to removing tangles from the nanowire hair, the combing shear force deforms the nanowires to conform to the designed U-shapes of the trenches, thus forming an array of 'hairpin'-like U-shaped nanoscale devices. The tip of each U-shaped nanowire, which points up from the chip surface by virtue of 'designed' strain in the bimetallic interconnects, is modified to act as a small transistor that, similar to the V-shaped nanowires, can be inserted into neuronal and cardiac cells for intracellular recording with signals comparable to the quality of those obtained with the gold-standard patch-clamp electrodes. Because the nanowire tips are so small and coated with a layer of molecules that mimic the cell membrane, they can be inserted into multiple cells in parallel without causing damage."
Thumbnail The connectomes of the nervous systems of both sexes of the C. elegans worm -- every neuron -- have now been mapped out. If you've heard that the connectome of C. elegans was already mapped, that was just the female/hermaphrodite form; the male wasn't mapped out until now. Although the subheading of the article is "Genes for the humble C. elegans turn up in autism, schizophrenia and other human disorders", the article has nothing to say about autism or schizophrenia, and neither does the research paper -- the news here is the connectome map for the male. If you click the link from the Scientific American article, you can get "guest" access to the full research paper on nature.com.
Thumbnail Neural Code Search (NCS) "takes natural language queries and returns relevant code snippets retrieved directly from a large codebase."

"The NCS model uses embeddings to map code snippets and natural language queries as vectors in the same vector space, and calculates the cosine similarities between embedded code snippets and the given query to locate and deliver the most relevant code as the output."

"As an unsupervised model, NCS can be trained quickly and easily to learn embeddings directly from the search corpus. NCS however struggles in situations where there is no overlap of words between the queries and the source code. To solve that problem researchers added the Embedding Unification model (UNIF)."
Thumbnail Enhancing satellite imagery through deep learning. Running satellite images through Deep Image Prior and then Decrappify.

Deep Image Prior "hardly conforms to conventional deep learning-based super-resolution approaches." "Typically, we would create a dataset of low and high-resolution image pairs, following which we train a model to map a low-resolution image to its high-resolution counterpart. However, this particular model does none of the above, and as a result, does not have to be pre-trained prior to inference time. Instead, a randomly initialized deep neural network is trained on one particular image. That image could be of one of your favorite sports star, a picture of your pet, a painting that you like or even random noise. Its task is then, to optimize its parameters to map the input image to the image that we are trying to super-resolve. In other words, we are training our network to overfit to our low-resolution image."

Decrappify "has a U-net architecture with a pre-trained ResNet backbone. But the part that is really interesting is in the loss function, which has been adapted from this paper. The objective of this model is to produce an output image of higher quality, such that when it is fed through a pre-trained VGG16 model, it produces minimal 'style' and 'content' loss relative to the ground truth image."
Thumbnail "On July 1st, California became the first state in the nation to try to reduce the power of bots by requiring that they reveal their 'artificial identity' when they are used to sell a product or influence a voter. Violators could face fines under state statutes related to unfair competition. Just as pharmaceutical companies must disclose that the happy people who say a new drug has miraculously improved their lives are paid actors, bots in California -- or rather, the people who deploy them -- will have to level with their audience."
Thumbnail "Siri, is artificial intelligence biased?" Discussion between Rachel Thomas (fast.ai/LeanIn.org), Meredith Broussard (NYU/author of Artificial Unintelligence: How Computers Misunderstand the World), Rediet Abebe (Cornell/Mechanism Design for Social Good/Black in AI), Stephanie Dinkins (Stony Brook/AI artist), and Max Tegmark (MIT/Future of Life Institute/author of Life 3.0: Being Human in the Age of Artificial Intelligence).

AI "impacts who gets jobs, who gets houses, who gets loans, who's put in jail and who isn't." "It's moving the discussion about unconscious bias and stereotypes and how they work, forward because it feels dire, like if we don't solve it now, we're just going to push it forward and we're going to exacerbate it."
Thumbnail How to spot visualization lies. Truncated axis, dual axes, totals that don't add up, seeing absolute instead of relative values, limited scope, odd choice of binning, area sized by a single dimension.
Thumbnail "Magnus Carlsen made only one queen move in the entire game and that was to the corner square a8 from where the queen was a long-distance general for his attack against Anish Giri's king on the other side of the board. Giri failed to foresee it. This original strategy drew comparisons with the neural network program Alphazero, which Carlsen called his 'hero' in a recent interview."
Thumbnail "On February 11, 2019, the President signed Executive Order 13859, Maintaining American Leadership in Artificial Intelligence. This order launched the American AI Initiative, a concerted effort to promote and protect AI technology and innovation in the United States."

"Strategy 1: Make long-term investments in AI research." "Strategy 2: Develop effective methods for human-AI collaboration. Strategy 3:" "Understand and address the ethical, legal, and societal implications of AI." "Strategy 4: Ensure the safety and security of AI systems." "Strategy 5: Develop shared public datasets and environments for AI training and testing." "Strategy 6: Measure and evaluate AI technologies through standards and benchmarks." "Strategy 7: Better understand the national AI R&D workforce needs." "Strategy 8: Expand public-private partnerships to accelerate advances in AI."
Thumbnail "In April of this year, the US Food and Drug Administration (FDA) released a discussion paper, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML) -- Based Software as a Medical Device (SaMD), which proposed a novel regulatory framework for artificial intelligence (AI)-based medical devices."

"In its discussion paper, FDA recognized that its current approach to the regulation of medical devices -- which is based on devices that are static in nature with planned, discrete changes -- is ill-suited for AI algorithms."

"The framework proposed a total lifecycle approach, based on four principles: (1) good ML practices (GMLP) from software development through distribution; (2) initial pre-market review that would include a pre-determined plan for modifications; (3) risk management approach to modifications after pre-market review; and (4) post-marketing monitoring and reporting of product performance."

"The public comments vary in support of FDA's proposed framework."
Thumbnail AI audio-based aggression detector designed to be installed in schools and hospitals gets triggered by students playing Pictionary. "Laughter sometimes set it off, especially raucous guffaws that the detector apparently mistook for belligerent shouts."

"'Such findings aren't surprising,' said Shae Morgan, an assistant professor and audiology expert at the University of Louisville's medical school. 'Happy or elated speech shares many of the same signatures as hot anger,' he said. By contrast, 'cold anger' -- quiet, detached fury, often expressed without the markers of voice strain -- wouldn't be picked up."
Thumbnail Is nuclear energy from thorium reactors the solution to producing energy without carbon emissions?
Thumbnail Facebook open-sourced a deep learning recommendation model. It's a neural network designed to work with sparse categorical data.

"Deep Learning Recommendation Model (DLRM) advances on other models by combining principles from both collaborative filtering and predictive analytics-based approaches, which enables it to work efficiently with production-scale data and provide state-of-art results."

"In the DLRM model, categorical features are processed using embeddings, while continuous features are processed with a bottom multilayer perceptron (MLP). Then, second-order interactions of different features are computed explicitly. Finally, the results are processed with a top MLP and fed into a sigmoid function in order to give a probability of a click."
Thumbnail Baidu and Intel announced a new partnership to work together on Intel's new Nervana Neural Network Processor for training. "Baidu and Intel's collaboration on the NNP-T involves working together on both the hardware and software side of this custom accelerator to ensure that it's optimized for use with Baidu's PaddlePaddle deep learning framework, which will complement existing work that Intel has done to ensure that PaddlePaddle is set up to perform best on its existing Intel Xeon Scalable processors. The NNP-T optimization will specifically focus on applications of PaddlePaddle that focus on distributed training of neural networks, to complete other types of AI applications."
Thumbnail Compass & ruler problems that took over 2,000 years to solve. Doubling a cube, trisecting an angle, constructing regular heptagons (7-sided polygon), and squaring a circle.
Thumbnail AI boom or doom? Interview (podcast) with Stuart Russell. Topics: AI risk, literal goals, instrumental goals, how general-purpose "tools" can combine to make unexpected innovations, the many more moves in StarCraft plus hidden information and whether StarCraft will require fundamental innovations (such as an AI that figures out hierarchical sub-goals), cognitive hierarchies and "chunking", how all of us have "inherited" goals and subgoals from "civilization", the "standard model" of AI, how humans interact in multi-agent problem solving, figuring out human values from human behavior instead of from (poorly chosen) words, military uses of AI (narrowly targeting drones), the effect on the job market (technological advancement increases employment at a particular occupation at first, but continued advancement eventually reduces it), standards bodies establishing standards for provably safe AI systems, small problems functioning as wake-up calls, "counter-arguments" like "calculators didn't take over the world", and the nuclear industry in the 1970s as an example of not taking technology risks seriously.
Thumbnail AI system classifies people's emotions from the way they walk. "The researchers selected four emotions -- happy, sad, angry, and neutral -- for their tendency to 'last an extended period' and their 'abundance' in walking activity. Then they extracted gaits from multiple walking video corpora to identify affective features and extracted poses using a 3D pose estimation technique. Finally, they tapped a long short-term memory (LSTM) model -- capable of learning long-term dependencies -- to obtain features from pose sequences, which they combined with a random forest classifier (which outputs the mean prediction of several individual decision trees) to classify examples into the aforementioned four emotion categories."
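
A sketch of that two-stage pattern (random tensors stand in for real pose sequences, and the paper's LSTM is trained rather than random):

```python
import torch
from torch import nn
from sklearn.ensemble import RandomForestClassifier

# Stage 1: an LSTM turns a pose sequence (here, 48 joint coordinates per
# frame) into a fixed-length feature vector via its final hidden state.
lstm = nn.LSTM(input_size=48, hidden_size=32, batch_first=True)
poses = torch.rand(200, 75, 48)            # 200 walks, 75 frames each
with torch.no_grad():
    _, (h, _) = lstm(poses)
features = h[-1].numpy()                   # (200, 32) deep gait features

# Stage 2: a random forest classifies the features into the four emotions.
labels = torch.randint(0, 4, (200,)).numpy()   # happy/sad/angry/neutral
clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
print(clf.predict(features[:5]))
```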
Thumbnail Skills from video. Motion capture is the cornerstone technique for character animation in movies and physics-based character animation in video games, but researchers have developed a way for AI to learn acrobatic skills by watching YouTube videos. As such it can learn more diverse and natural behaviors from environments that aren't heavily instrumented and that would be prohibitively expensive to motion capture, such as large-scale outdoor sports. The technique combines existing 3D pose estimation techniques with reinforcement learning that takes into account the physical constraints of the character and environment.

The motion reconstruction is improved to generate reference motions that are more amenable to imitation by a simulated character. The reinforcement learning system dynamically updates the initial states it trains from, which improves its long-horizon performance in reproducing a desired motion.

This "dynamic curriculum generation" "substantially outperforms standard methods when learning from lower-fidelity reference motions constructed from video tracking sequences." "Our framework is able to reproduce a significantly larger repertoire of skills and higher fidelity motions from videos than has been demonstrated by prior methods."

"The resulting controllers are robust to perturbations, can be adapted to new settings, can perform basic object interactions, and can be retargeted to new morphologies via reinforcement learning. We further demonstrate that our method can predict potential human motions from still images, by forward simulation of learned controllers initialized from the observed pose."
Thumbnail "Astrophysicists have used artificial intelligence techniques to generate complex 3D simulations of the universe. The results are so fast, accurate and robust that even the creators aren't sure how it all works."

"The speed and accuracy of the project, called the Deep Density Displacement Model, or D3M for short, wasn't the biggest surprise to the researchers. The real shock was that D3M could accurately simulate how the universe would look if certain parameters were tweaked -- such as how much of the cosmos is dark matter -- even though the model had never received any training data where those parameters varied."

"Study co-author Shirley Ho, a group leader at the Flatiron Institute's Center for Computational Astrophysics in New York City and an adjunct professor at Carnegie Mellon University, Siyu He, a Flatiron Institute research analyst, and their colleagues honed the deep neural network that powers D3M by feeding it 8,000 different simulations from one of the highest-accuracy models available."

"After training D3M, the researchers ran simulations of a box-shaped universe 600 million light-years across and compared the results to those of the slow and fast models. Whereas the slow-but-accurate approach took hundreds of hours of computation time per simulation and the existing fast method took a couple of minutes, D3M could complete a simulation in just 30 milliseconds."

"D3M also churned out accurate results. When compared with the high-accuracy model, D3M had a relative error of 2.8 percent. Using the same comparison, the existing fast model had a relative error of 9.3 percent."
Thumbnail If you connect arrows end-to-end and make each one rotate at a constant velocity, depending on the initial angles and lengths you choose, you can draw complex pictures. Figuring out how only requires one very simple formula: the complex Fourier series.
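
That formula, for a drawing traced out by a periodic complex-valued function $f(t)$ over one loop:

$$ f(t) = \sum_{n=-\infty}^{\infty} c_n e^{2\pi i n t}, \qquad c_n = \int_0^1 f(t)\, e^{-2\pi i n t}\, dt $$

Each term is one arrow: $|c_n|$ is its length, $\arg(c_n)$ its starting angle, and it rotates at $n$ cycles per loop.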
Thumbnail Style transfer in 3D, in real time, in 4K resolution, and with temporal coherence.
Thumbnail 10 Boston Dynamics "Spot" robots pull a truck.
Thumbnail Playing Mario Kart with Deep-Q-Learning.
Thumbnail "Although it is now possible to decompose a piece of brain tissue into billions of pixels, the analysis of these electron microscope images takes many years. This is due to the fact that the standard computer algorithms are often too inaccurate to reliably trace the neurons' wafer-thin projections over long distances and to identify the synapses. For this reason, people still have to spend hours in front of computer screens identifying the synapses in the piles of images generated by the electron microscope."

"The Max Planck scientists led by Jörgen Kornfeld have now overcome this obstacle with the help of artificial neural networks. These algorithms can learn from examples and experience and make generalizations based on this knowledge. They are already applied very successfully in image process and pattern recognition today. 'So it was not a big stretch to conceive of using an artificial network for the analysis of a real neural network.'" "The resulting SyConn network can now identify these structures autonomously and extremely reliably."
Thumbnail A new programming language for AI. It's called "Gen" and is a "probabilistic" programming language. "Users write models and algorithms from multiple fields where AI techniques are applied -- such as computer vision, robotics, and statistics -- without having to deal with equations or manually write high-performance code. Gen also lets expert researchers write sophisticated models and inference algorithms -- used for prediction tasks -- that were previously infeasible."

"The researchers demonstrate that a short Gen program can infer 3D body poses, a difficult computer-vision inference task that has applications in autonomous systems, human-machine interactions, and augmented reality. Behind the scenes, this program includes components that perform graphics rendering, deep-learning, and types of probability simulations. The combination of these diverse techniques leads to better accuracy and speed on this task than earlier systems developed by some of the researchers."

"Due to its simplicity -- and, in some use cases, automation -- the researchers say Gen can be used easily by anyone, from novices to experts."

"The researchers also demonstrated Gen's ability to simplify data analytics by using another Gen program that automatically generates sophisticated statistical models typically used by experts to analyze, interpret, and predict underlying patterns in data. That builds on the researchers' previous work that let users write a few lines of code to uncover insights into financial trends, air travel, voting patterns, and the spread of disease, among other trends. This is different from earlier systems, which required a lot of hand coding for accurate predictions."
Thumbnail Robot tightly arranges cube-shaped objects in a shipping box. "The key is to make minimal but effective hardware choices and focus on robust algorithms and software."

"Tightly packing products picked from an unorganized pile remains largely a manual task, even though it is critical to warehouse efficiency."

"The researchers used visual data and a simple suction cup, which doubles as a finger for pushing objects. The resulting system can topple objects to get a desirable surface for grabbing them. Furthermore, it uses sensor data to pull objects toward a targeted area and push objects together. During these operations, it uses real-time monitoring to detect and avoid potential failures."
Thumbnail Robot that can "taste" its environment using bacteria. Actually it's a robot arm combined with bacteria, a genetically engineered E. coli that responds to a chemical called IPTG by producing a fluorescent protein.

"When IPTG crosses the membrane into the chamber, the cells fluoresce and electronic circuits inside the module detect the light. The electrical signal travels to the gripper's control unit, which can decide whether to pick something up or release it."

"As a test, the gripper was able to check a laboratory water bath for IPTG then decide whether or not to place an object in the bath."

The article doesn't say what IPTG is, perhaps because if they did they might feel some need to explain why it was used in this experiment. IPTG (isopropyl β-D-1-thiogalactopyranoside, C9H18O5S) is a chemical that mimics allolactose, which switches on genes for metabolism of lactose in E. coli, but which isn't broken down by the enzyme that breaks down allolactose -- which is presumably why it was used here: it's the standard "switch" chemical for turning on engineered lac-promoter genes, and because the cells can't digest it, the signal doesn't fade.
Thumbnail Modular self-programming, self-verifying, and self-composing robots for making robots safe to work alongside humans in factories. "After assembling standardized modules, the created robot programmed itself based on standardized information stored in each module. This allowed us to provide out-of-the-box functionality for a given reference trajectory by generating a model of the robot on the fly."

"To account for dynamically changing environments, the robot formally verified, by itself, whether any human could be harmed through its planned actions during its operation. A planned motion was verified as safe if none of the possible future movements of surrounding humans leads to a collision. Because uncountable possible future motions of surrounding humans exist, we bound the set of possible motions using reachability analysis."

"We automatically chose the best composition of modules for given robot tasks through optimization. The main goals were to minimize cycle time and to reduce energy consumption of the modular robot."
Thumbnail Open questions about generative adversarial networks (GANs). What are the trade-offs between GANs and other generative models? What sorts of distributions can GANs model? How can we scale GANs beyond image synthesis? What can we say about the global convergence of the training dynamics? How should we evaluate GANs and when should we use them? How does GAN training scale with batch size? What is the relationship between GANs and adversarial examples?
Thumbnail Brain cells for 3D vision discovered -- in praying mantises. "In a specially-designed insect cinema, the mantises were fitted with 3D glasses and shown 3D movies of simulated bugs while their brain activity was monitored. When the image of the bug came into striking range for a predatory attack, scientist Dr Ronny Rosner was able to record the activity of individual neurons."

"Praying mantises use 3D perception, scientifically known as stereopsis, for hunting. By using the disparity between the two retinas they are able to compute distances and trigger a strike of their forelegs when prey is within reach."

"Despite their tiny size, mantis brains contain a surprising number of neurons which seem specialised for 3D vision. This suggests that mantis depth perception is more complex than we thought. And while these neurons compute distance, we still don't know how exactly."

"We've also found some feedback loops within the 3D vision circuit which haven't previously been reported in vertebrates. Our 3D vision may well include similar feedback loops, but they are much easier to identify in a less complex insect brain and this provides us with new avenues to explore."

"We find that the praying mantis brain harbours at least four classes of neuron that are tuned to binocular disparities. These are the first neurons discovered in any invertebrate with properties suitable for supporting stereoscopic vision. The binocular response fields of several neurons show clear evidence of centre-surround mechanisms and are similar to disparity-tuned neurons in the vertebrate visual cortex."

The four classes are called TAcen, TMEcen, COcom, and TAOpro, which stand for: TAcen = centrifugal tangential neuron of the anterior lobe, TAOpro = tangential projection neuron of the anterior and outer lobes, TMEcen = centrifugal tangential neuron of the medulla, and COcom = columnar commissural neuron of the outer lobes.
Thumbnail RoboBee can fly without a tether now. World's lightest flying robot at 259 mg (less than a paper clip). Its new actuators and wings give it enough lift to carry the 6 solar cells it needs for power. So it can fly until the sun goes down.
Thumbnail Today seems to be the day for weird quantum physics news. Think heat flows from hot objects to cold objects? Not necessarily in the weird world of quantum physics. If the quantum states of the objects are correlated in the right way ahead of time, heat will flow from cold to hot.

"Correlations can be said to represent information shared among different systems. In the macroscopic world described by classical physics, the addition of energy from outside can reverse the flow of heat in a system so that it flows from cold to hot. This is what happens in an ordinary refrigerator, for example. It's possible to say that in our nanoscopic experiment, the quantum correlations produced an analogous effect to that of added energy. The direction of flow was reversed without violating the second law of thermodynamics. On the contrary, if we take into account elements of information theory in describing the transfer of heat, we find a generalized form of the second law and demonstrate the role of quantum correlations in the process.

The website insists that I can only republish quotes from the article if I also tell you the name of the author of the article is José Tadeu Arantes, that the research was done by Kaonan Micadei, John P. S. Peterson, Alexandre M. Souza, Roberto S. Sarthour, Ivan S. Oliveira, Gabriel T. Landi, Tiago B. Batalhão, Roberto M. Serra and Eric Lutz and can be read at http://www.nature.com/articles/s41467-019-10333-7 and that the article was published by FAPESP Agency, so there you go.
Thumbnail Negative temperatures in quantum physics. "The group observed never-before-seen negative temperature states of quantum vortices in an experiment."

"Despite being important for modern understanding of turbulent fluids, these states have never been observed in nature. They contain significant energy, yet appear to be highly ordered, defying our usual notions of disorder in statistical mechanics."

"They created a superfluid by cooling a gas of rubidium atoms down to nearly absolute zero temperature, and holding it in the focus of laser beams. The optical techniques developed allow them to precisely stir vortices into the fluid."

"The cores of the vortices created in our system are only about 1/10 of the diameter a human blood cell."

"One of the more bizarre aspects of Nobel Laureate Lars Onsager's theory is that the more energy you add to the system of vortices, the more concentrated the giant vortices become. It turns out if you consider the vortices as a gas of particles moving around inside the superfluid, vortex clusters exist in absolutely negative temperature states, below absolute zero."

"This aspect is really weird. Absolute negative temperature systems are sometimes described as 'hotter than hot' because they really want to give up their energy to any normal system at positive temperature. This also means that they are extremely fragile." "Our study counters this intuition by showing that since the vortices are sufficiently isolated inside the superfluid, the negative-temperature vortex clusters can persist for nearly ten seconds."
Thumbnail fast.ai has released a second course, "Deep Learning from the Foundations."

"The first five lessons use Python, PyTorch, and the fastai library; the last two lessons use Swift for TensorFlow, and are co-taught with Chris Lattner, the original creator of Swift, clang, and LLVM."

"The purpose of Deep Learning from the Foundations is, in some ways, the opposite of part 1. This time, we're not learning practical things that we will use right away, but are learning foundations that we can build on. This is particularly important nowadays because this field is moving so fast."
Thumbnail Certain neurons in the dorsal raphe nucleus, a region in the middle of the brainstem (the part of the brain that connects to the spinal cord), have been discovered to activate in response to heat. Furthermore, when these cells are artificially stimulated optogenetically, they activate the body's temperature-regulation machinery (generating heat, dilating blood vessels, etc).

But because these cells in the dorsal raphe nucleus have previously been found to play a role in regulating hunger, the press release writer wrote about how this discovery might someday be used to create weight loss drugs, and even got a few choice quotes from the researchers about that. That's not what the actual research was about, though: it established that this particular group of cells is central to regulating temperature. It's true that the same cells play a role in regulating hunger, but whether that means weight loss drugs are in the cards is anybody's guess.
Thumbnail Get yourself up to speed on all things Mars, with Abigail Fraeman & Elizabeth Barrett of the NASA Jet Propulsion Laboratory. Abigail Fraeman starts with the dust storm that covered up Opportunity's solar panels. Opportunity is a robot geologist exploring Perseverance Valley, and it has made it to 5,000 Martian days (Sol 5,000). The dust storm was bad for Opportunity but great for Curiosity, which has a complete mobile weather station. If Opportunity is a robot geologist, Curiosity is a robot chemist, with the ability to drill and analyze rock samples.

Elizabeth Barrett talked about InSight. InSight has no wheels and stays in one place. It has a seismometer and a heat flow probe that buries itself up to 5 meters in the ground. It can measure Mars's magnetic field and, using precise radio tracking, measure the "wobble" of Mars to determine the planet's interior structure.
Thumbnail The theory that "adversarial examples are not bugs, they are features" led to an improved neural style transfer algorithm. "Adversarial examples", you'll recall, are examples where you can make very small perturbations, say to an image, so that to a human it still looks like a panda but the neural network's classification changes from "panda" to "gibbon".
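For the curious, the panda-to-gibbon perturbation is typically produced with something like the fast gradient sign method: nudge every pixel a tiny step in the direction that increases the classifier's loss. A minimal PyTorch sketch, assuming you already have a trained model, a batched image tensor, and its true label:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.007):
        """Fast gradient sign method: a per-pixel nudge too small for a
        human to notice that can still flip the model's prediction."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step each pixel by +/- epsilon in the direction that raises the loss.
        return (image + epsilon * image.grad.sign()).detach()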

The "features not bugs" theory is that with adversarial examples, neural networks are finding genuine features that can be used for classification, just not features humans are sensitive to. This is demonstrated by 1) making a "robustified" version of the training set which demonstrates that the adversarial vulnerability is not the fault of the neural network but a property of the dataset, and 2) making a "non-robustified" version of the training set which looks identical to the original to humans but has only adversarial examples, and is labeled according to the adversarial examples, so all the labels look wrong to humans -- and then demonstrating that training on this whacky dataset improves accuracy on new, unseen, unmodified test images. (!)

The next step is to redo style transfer using a network robustified against adversarial examples, and, surprisingly, the results look better: the images have more fine-grained artifacts, yet look better to humans.
Thumbnail High-level overview of the difference between neural networks that use "attention" and neural networks that came before that pass memory from step to step (recurrent neural networks). The "attention" neural networks are the ones that are used now to do language translation. The famous GPT-2 neural network that can write text that seems like it was written by a human is a type of "attention" based network called a "transformer".
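The core attention operation itself is only a few lines: every position in the sequence scores every other position and takes a weighted average, instead of passing a memory state along step by step the way a recurrent network does. A minimal sketch of scaled dot-product attention (my illustration, not code from the article):

    import torch
    import torch.nn.functional as F

    def attention(q, k, v):
        """q, k, v: (batch, seq_len, dim). Every position attends to every
        other position in one step; no hidden state is carried along."""
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
        return F.softmax(scores, dim=-1) @ v  # weighted average of values

    q = k = v = torch.randn(1, 10, 64)
    print(attention(q, k, v).shape)  # torch.Size([1, 10, 64])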
Thumbnail A few days ago Mark Finnern posted about this. It claims Google's quantum computing processors are making doubly exponential gains. Unfortunately I have not been able to find out any more about it. I've been unable to find a research paper with any details, either by Hartmut Neven, the director of the Quantum Artificial Intelligence lab quoted in the article, or by anyone else in Google AI research. The name of the processor isn't given in the article, so I had nothing to search for there. I have no idea if this is true or not. I'm going to pass it along, but for now it should be considered a rumor.

"In December 2018, scientists at Google AI ran a calculation on Google's best quantum processor. They were able to reproduce the computation using a regular laptop. Then in January, they ran the same test on an improved version of the quantum chip. This time they had to use a powerful desktop computer to simulate the result. By February, there were no longer any classical computers in the building that could simulate their quantum counterparts. The researchers had to request time on Google's enormous server network to do that."

"Somewhere in February I had to make calls to say, 'Hey, we need more quota.' We were running jobs comprised of a million processors."

"That rapid improvement has led to what's being called 'Neven's law,' a new kind of rule to describe how quickly quantum computers are gaining on classical ones. The rule began as an in-house observation before Neven mentioned it in May at the Google Quantum Spring Symposium. There, he said that quantum computers are gaining computational power relative to classical ones at a 'doubly exponential' rate."
Thumbnail "They welcomed a robot into their family, now they're mourning its death." "Jibo sat in Kenneth Williams' bedroom, on his desk, where every day, it greeted him in the morning and ran through the weather and his calendar. Williams, 44, asked Jibo questions, requested music, and played its games. Jibo couldn't do much, really, but its most redeeming feature, the one that cemented it as a robot darling in its owner's heart, was its facial recognition. Unlike a Google Home or an Amazon Echo, Jibo noticed every time Williams entered the room and swiveled its head to say hello or crack a joke."
Thumbnail All the materials for UC Berkeley's Unsupervised Deep Learning course are online.
Thumbnail Robots that can imagine the feeling of touching an object just by looking at it. "The team used a KUKA robot arm with a special tactile sensor called GelSight, designed by another group at MIT. Using a simple web camera, the team recorded nearly 200 objects, such as tools, household products, fabrics, and more, being touched more than 12,000 times. Breaking those 12,000 video clips down into static frames, the team compiled 'VisGel,' a dataset of more than 3 million visual/tactile-paired images."
Thumbnail SlothBot crawls on wires and can switch between wires.
Thumbnail "Oregon's Department of Corrections has banned dozens of introductory technology and coding books from state prisons over security concerns."

"The banned titles include 'Windows 10 for Dummies,' 'Python Programming For Beginners' and 'Blockchain Revolution,' a narrative explaining how blockchain technology works and its application in finance, business, computing and elections."
Thumbnail "While fans of other racing series fawn over world-class drivers, in Roborace it's programmers who are the real stars." Interview with Bryn Balcombe, Roborace's chief strategy officer. "Roborace is a completely new type of motor sport that we've been developing for the last 2 1/2 years, focusing on the mega-trends that are happening in the automotive industry, so electric, connected, and autonomous technologies."

"The main thing is to drive the advancement in the software. That's what we're focused on this year. So anything that we find creates competition between the teamsand moves that software development further forward, that's what we're focused on."
Thumbnail An AI system looks at Google Street View images, identifies "stop" and "give way" signs with 95% accuracy, identifies their type with 97% accuracy, and determines their precise geolocation, all from 2D images.
Thumbnail Blood-brain barrier on a chip. Duplicates the blood-brain barrier of individuals using their own cells.

The team "collected blood cells from individuals and genetically manipulated them into stem cells (induced pluripotent stem cells), which they used to make the neurons, blood-vessel cells and support cells of the blood-brain barrier."

"They placed these various types of cells inside microfluidic chips that mimic the environment in which cells interact with each other and with blood. The living cells formed functioning blood-brain barriers that blocked entry of certain drugs."

"Significantly, when the BBB-Chip was derived from cells of patients with Allan-Herndon-Dudley syndrome or Huntington disease, the barrier malfunctioned in the same way that it does in patients with these diseases."
Thumbnail A new way of mapping cell populations and visualizing how biomolecules, including different sequences of DNA and RNA, are organized spatially in cells and tissues, has been developed. "The approach, dubbed DNA microscopy, doesn't require any optical or other specialized equipment, but instead uses nucleic acid barcodes to pinpoint molecules' relative positions within a sample. Carried out using standard laboratory equipment, the process requires just the sample of cells, some reagents and pipettes, and allows large numbers of samples to be processed simultaneously."

"DNA microscopy is an entirely new way of visualizing cells that captures both spatial and genetic information simultaneously from a single specimen. It will allow us to see how genetically unique cells -- those comprising the immune system, cancer, or the gut, for instance -- interact with one another and give rise to complex multicellular life."
Thumbnail Meet the RoboMaster S1 educational robot. Interactive curriculum to learn how to code. Infrared beam to battle with your friends. Pressure sensitive sensors so the robot can feel impact. International robotics competition.
Thumbnail The microspine-enhanced hexapod. Microspines distribute the load across many small hooks, giving the robot the ability to climb walls. The RHex hexapod already has great ground mobility, so they combined the wall-climbing concept with it to create T-RHex.
Thumbnail "Machine learning is capable of identifying insects that spread the incurable disease called Chagas with high precision, based on ordinary digital photos." "Atificial intelligence can recognize 12 Mexican and 39 Brazilian species of kissing bugs."
Thumbnail Creating 3D hologram images with regular ink. "Lumii uses complex algorithms to precisely place tens of millions of dots of ink on two sides of clear film to create light fields that achieve the same visual effects as special films and lenses."

"You can formulate this as a machine learning problem or a signal processing problem, but basically at the end of the day we think of it as an optimization problem. To produce a three-dimensional image, you could place dots of ink so that you get a perfect rendition of a three-dimensional image from one perspective. Then you could rotate the print and say, 'Well now the perspective is off, so I need to readjust all of the dots,' and that will mess things up from the first perspective. We make it possible to have a three-dimensional image using just two layers of ink from as many perspectives as possible.'"
Thumbnail Finding hidden objects in dense point clouds. The system can only find objects for which it has a predefined 'template' to search for, but it is fast and accurate.
Thumbnail Robotic lionfish with synthetic blood and arteries. "The fish 'blood' that runs through it serves as both the robot's power source and its movement control."

"Instead of actual blood, the researchers used an electrolyte cocktail that can conduct electricity throughout the lionfish body and fins, much like a system of wires. When this cocktail flows over electrode stations, embedded in the fins, it generates electricity."

"These two components form what's known as a flow battery. Different than the lithium ion batteries in your phone or computer, flow batteries are generally safer because they don't typically overheat, but are less energy dense."
Thumbnail "A provocative paper, Energy and Policy Considerations for Deep Learning in NLP by Emma Strubell, Ananya Ganesh and Andrew McCallum has been making the rounds recently. While the paper itself is thoughtful and measured, headlines and tweets have been misleading, with titles like 'Deep learning models have massive carbon footprints'. One especially irresponsible article summarized the finding as 'an average, off-the-shelf deep learning software can emit over 626,000 pounds of carbon dioxide' which is an egregious misinterpretation." But "Just because model training is probably not a major carbon producer today doesn't mean that we shouldn't be looking at what its impact might be in the future."

"Although the typical machine learning practitioner might be only using eight GPUs to train models, at Google, Facebook, OpenAI and other large organizations the usage can be much, much higher."

"The US Department of Energy bought 27,648 Volta GPUs for their Oak Ridge supercomputer that they plan to use for deep learning which would consume around a megawatt at 100% utilization."

"The recent trend in deep learning is clearly to orders of magnitude more compute."

"Until recently models were generally thought to be data bound and many worried that large companies had an unassailable advantage in having the most data. But researchers are still able to make progress on high quality open datasets like ImageNet. Startups were able to build the best machine learning applications on the data that was available to them."

"In a world where researchers and companies are compute bound it's hard to imagine how they will compete or even collaborate with large companies."
Thumbnail Amazon AWS and Digital Ocean IPs are banned in Russia. Russia apparently banned Telegram and, as part of that, banned whole chunks of the IP address ranges that Telegram uses on AWS and Digital Ocean, taking down many websites not associated with Telegram. For example they banned all of 206.189.x.x (the entire 206.189.0.0/16 block, 65,536 addresses).
Thumbnail Discussion on robot comedy (audio) with NYU Computer Science professor He He and Purdue University professor Julia Rayz. A greyhound stopped to get a hare-cut.
Thumbnail Bosstown Dynamics robot fights back. (CGI).
Thumbnail Robot that washes dishes. For restaurants, not your kitchen.
Thumbnail Smartphone app that purports to enable anyone to get a robot to perform complex tasks without programming.
Thumbnail "It might surprise you to hear that the number and locations of helipads in the United States are a huge known unknown for the urban air mobility industry."

"The solution I developed rests on retraining a CNN to recognize helipads in aerial images. This serves as the engine for scanning an area defined by the user and returning the latitudinal and longitudinal coordinates of a suspected helipad. The method used is called transfer learning, which takes an existing CNN trained to recognize distinctive features in imagery, and retrains the final classification layer of the network."

"In my case, this retraining distinguishes between images of 'helipads' and 'not helipads'. While the FAA database isn't perfect, I did find around 3k images of helipads by passing the coordinates listed in the database through the Google Maps API and storing the results. The 'not helipads' category was an interesting challenge: I needed a dataset of aerial images at a similar scale and context (predominantly urban areas) which would have an extremely low probability of containing helipad images."
Thumbnail Neural network to detect Photoshopped faces. "Right off the bat it must be noted that this project applies only to Photoshop manipulations, and in particular those made with the 'Face Aware Liquify' feature, which allows for both subtle and major adjustments to many facial features. A universal detection tool is a long way off, but this is a start."
Thumbnail "In the game The Witness, players solve puzzles by tracing patterns that begin with circles, and continue in continuous line to and endpoint point. At first these patterns appear only on panels, but players eventually realize that the entire island is filled with these patterns, and the real goal is to recognize these patterns in the surrounding environment. And after finishing, many players, myself included, began seeing these patterns in the real world as well."

"The Witness puzzle patterns in panels, in the environment, and in the real world! I trained a deep learning model to identify and label those puzzle patterns in The Witness screenshots."

"First, I went through the game and took several screenshots of each environmental puzzle: about 300 in total. Thanks to IGN for this comprehensive guide to the locations of all the puzzles, and SaveGameWorld for a save file so I didn't have to actually gain access to all the locations again! I made sure to capture screenshots from different angles, plenty of positive examples with the puzzle fully visible, and negative examples where parts are obscured or interrupted."
Thumbnail "Gradient descent not fast enough? Tired of managing memory and juggling template parameters to interface with your favorite nonlinear solver in C++?"

"OpTorch lets you write your cost functions as PyTorch modules and seamlessly optimize them in ceres, Google's industrial strength solver."
Thumbnail "In my opinion, PyTorch's automatic differentiation engine, called Autograd is a brilliant tool to understand how automatic differentiation works."

"All mathematical operations in PyTorch are implemented by the torch.nn.Autograd.Function class. This class has two important member functions we need to look at."

"The first is it's forward function, which simply computes the output using it's inputs. The backward function takes the incoming gradient coming from the the part of the network in front of it. As you can see, the gradient to be backpropagated from a function f is basically the gradient that is backpropagated to f from the layers in front of it multiplied by the local gradient of the output of f with respect to it's inputs. This is exactly what the backward function does."