Boulder Future Salon Recent News Bits

Thumbnail Steep declines in insect populations have happened over the last two decades in places as geographically distinct as the US, Germany, the Netherlands, Sweden, the British Isles, Puerto Rico, and Costa Rica. You've probably heard explanations like pesticides and habitat loss, but one explanation you might not have heard is light pollution. "Modern light pollution is no longer confined to urban centers, but radiates outwards through the atmosphere and along road networks that run into or around otherwise pristine areas. Since 1992, levels of light pollution have doubled in high biodiversity areas, and are likely to continue to rise. By 2014, over 23% of the land surface of the planet experienced artificially elevated levels of night sky brightness; by comparison, agricultural crops cover approximately 12%."

Ways light pollution can interfere with insects: some insects have a "fatal attraction" to light, while others seek to avoid it. Light pollution can amplify polarized light that fools aquatic insects into laying eggs on non-aquatic surfaces. Light pollution can obscure natural nocturnal light sources such as the astronomical cues some insects use to navigate. Light pollution can interfere with the bioluminescent signals some insects use to reproduce. Light pollution can alter the circadian cycle of rest and activity, affecting foraging and pollinating activity.

The researchers think insects have poor evolutionary adaptation to light pollution because most other disturbances are similar to situations that have happened in the evolutionary past, but light pollution has never happened before in the planet's existence. The climate has changed before, habitats have fragmented before, invasive species have happened before, plants have invented new defenses similar to pesticides, and so on. Yet in the billions of years the planet has existed, the daily night/day cycle and the lunar cycle of light and dark have been the same, until now, and insects have not evolved any adaptation to the change.
Thumbnail The National Popular Vote Interstate Compact, a plan to make the Electoral College obsolete.
Thumbnail "DeepFovea is a new AI-powered foveated rendering system for augmented and virtual reality displays. It renders images using an order of magnitude fewer pixels than previous systems, producing a full-quality experience that is realistic and gaze-contingent."

"This is the first practical generative adversarial network (GAN) that is able to generate a natural-looking video sequence conditioned on a very sparse input. In our tests, DeepFovea can decrease the amount of compute resources needed for rendering by as much as 10-14x while any image differences remain imperceptible to the human eye."

"When the human eye looks directly at an object, it sees it in great detail. Peripheral vision, on the other hand, is much lower quality, but because the brain infers the missing information, humans don't notice. DeepFovea uses recent advances in generative adversarial networks (GANs) that can similarly 'in-hallucinate' missing peripheral details by generating content that is perceptually consistent."
Thumbnail Nvidia has released a PyTorch library to accelerate 3D deep learning research. It has common 3D data manipulation functions for meshes, point clouds, signed distance functions, voxel grids, and the like, so as to take those burdens off you. In addition it has functions for rendering, lighting, shading, and view warping that are fully differentiable and ready for deep learning systems. It has visualization functionality to render 3D results, and it includes many state-of-the-art 3D deep learning architectures.
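Since "signed distance function" may be unfamiliar: here's a minimal plain-NumPy illustration (not the Nvidia library's API) of sampling a sphere's SDF on a voxel grid. The value is negative inside the surface, zero on it, and positive outside.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Sample the SDF on a 32^3 voxel grid spanning [-1, 1]^3.
n = 32
axis = np.linspace(-1, 1, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid, center=np.zeros(3), radius=0.5)

occupancy = sdf < 0          # voxels inside the sphere
```

Differentiable versions of representations like this are what let gradients flow through 3D operations in a deep learning pipeline.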
Thumbnail LISA Pathfinder, a spacecraft designed to test the feasibility of detecting gravitational waves in space (LISA stands for Laser Interferometer Space Antenna), was so sensitive it could detect "micrometeorites", particles only micrometers in size, i.e. dust, hitting the spacecraft. Those dust particles are believed to have come from distant comets as they moved through the solar system.
Thumbnail "I started working on my own video game '1982'. The setting was a personal one for me: The Lebanese Civil War. I became obsessed with understanding the bloody events my parents had to live through and was reading all the military and historical journals I could find. I wanted to turn the different tactics that the warlords and politicians used and turn them into a Civil War Simulator."

"I thought I'd program some simple AI to play against. But, every time I would make a design change, I would need to go update my AI and over time this was making me furious."

"So the idea of Yuri was to hook a Reinforcement Learning engine to a new game and then train the agent automatically whenever a design change was made."

"Yes, Reinforcement Learning is expensive but it'll save you a lot of time."
Thumbnail "Early scientific experimentation on animal learning was one of the inspirations that prompted later researchers to explore the use of reinforcement learning in artificial intelligence. The mechanism behind reinforcement learning is simple: allowing an agent to freely interact with an environment, and assigning reward functions if the agent succeeds in a task and negative rewards for failed attempts. What does reinforcement learning have to offer AI? Doina Precup, Research Team Lead at DeepMind, provided four answers: Growing knowledge and abilities in an environment, learning efficiently from one stream of data, reasoning at multiple levels of abstraction, and adapting quickly to new situations."
Thumbnail A new way of measuring the expansion rate of the universe has been invented. The expansion rate is called the Hubble constant, named for Edwin Hubble, who made observations of Cepheid variable stars in other galaxies from Mount Wilson Observatory, home to the world's most powerful telescope at the time, and discovered galaxies were receding, and the further away they were, the faster they were receding. Since then another method has been invented, called the baryon acoustic oscillations method. Fluctuations in the density of the normal visible matter (baryonic matter) are used to calculate acoustic density waves in the primordial plasma of the early universe. The problem is that these two methods of measurement don't agree.

In the new method, interactions between gamma rays and the extragalactic background light are used. The extragalactic background light is a diffuse radiation field that fills the universe from ultraviolet through infrared wavelengths, and is mainly produced by star formation over cosmic history. Gamma-ray and extragalactic background light photons annihilate and produce electron-positron pairs. This interaction process generates an attenuation in the spectra of gamma-ray sources above a critical energy. How exactly this is used to calculate the Hubble constant, I don't understand (it involves a lot of math). But the result they get is 67.5 kilometers per second per megaparsec, which is closer to the baryon acoustic oscillations method than the Cepheid variable stars method. That number means that for every megaparsec (about 3.2 million light years) you go from here, that galaxy will be moving away from us at an additional 67.5 km/s. (Which is 243,000 kilometers per hour, or 151,000 miles per hour).
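The unit conversions in that last sentence are easy to check:

```python
H0 = 67.5                 # Hubble constant: km/s per megaparsec

kmh = H0 * 3600           # km/s -> km/h: 243,000 km/h
mph = kmh / 1.609344      # km/h -> mph: about 151,000 mph

v_100 = H0 * 100          # a galaxy 100 megaparsecs away recedes at 6,750 km/s
per_mly = H0 / 3.26       # ~20.7 km/s of recession per million light years
```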
Thumbnail The golden age of the internet is over, according to gamer Glink.
Thumbnail Robots reading Vogue. Data mining in fashion. "Few magazines can boast being continuously published for over a century" (1892 to today), "familiar and interesting to almost everyone, full of iconic pictures -- and also completely digitized and marked up as both text and images. What can you do with over 2,700 covers, 400,000 pages, 6 TB of data?"

Histograms of color patterns, cover averages, n-gram search on words and phrases in ads and articles, topic modeling organized by word co-occurrence, advertisements by frequency, date, and industry, statistics on circulation, ratio of articles to advertisements, price per issue, and number of pages per year, colormetric (hue, saturation and lightness) analysis, fabric analysis using word embedding models (word2vec), and algorithmically-generated memos in the style of Vogue editor-in-chief Diana Vreeland.
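The n-gram search is the easiest of those to sketch. Here's a minimal bigram counter over a made-up line of ad copy (hypothetical text, not actual Vogue data):

```python
from collections import Counter

def ngrams(text, n=2):
    """All n-word sequences in a text, lowercased."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Hypothetical snippet standing in for a page of ad copy.
page = "silk evening gown silk scarf evening gown for the evening"
counts = Counter(ngrams(page, 2))
top = counts.most_common(2)
```

Run over 400,000 digitized pages, counts like these let you chart the rise and fall of words and phrases across a century of issues.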
Thumbnail In the latest Scimago Institutions Rankings for artificial intelligence, "the UK is leading AI developments in Europe while Iran is leading in the Middle-East."

The SCImago ranking is based on a technique called eigenvector centrality measurement that is supposed to account for the number of scientific papers, the number of citations, and the "importance" of the citations. But the chart in the article just goes by counting papers, putting China on top. You can see an H index column indicating the US still has the most impact. H index is still basically counting citations; it doesn't take into account the "importance" of the citations the way the SCImago ranking is supposed to.
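For reference, the H index is simple enough to compute in a few lines: it's the largest h such that at least h papers have at least h citations each.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i       # the i-th best paper still has >= i citations
        else:
            break
    return h

h_index([10, 8, 5, 4, 3])   # -> 4
```

Eigenvector centrality, by contrast, needs the whole citation graph, because a citation counts for more when it comes from a paper that is itself heavily cited.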
Thumbnail The average Facebook user can't differentiate between misinformation and real news. The question is, am I above average? Are you above average? We'd have to be a lot above average because only 17% of Facebook users could do better than chance when tested.

Facebook's handling of vaccine-related news and ads has come under scrutiny. Seems to me vaccines are an interesting focal point for the whole "fake news" issue since actual children's lives are at stake.
Thumbnail Boeing 737-MAX Congressional testimony. This is over 8 hours long (spanning two days) and no, I don't expect you all to watch all of it (there won't be a test). I watched it because the math educator Matt Parker, who is also the author of a book called "Humble Pi: When Math Goes Wrong in the Real World", says that the aviation industry is an industry that has a "blame the process" culture rather than a "blame the person" culture. The medical industry (and he is talking about the UK medical industry here) has a "blame the person" culture -- if somebody dies because of a medical accident -- surgery gone wrong, equipment used incorrectly, too much medication given, etc -- the person responsible is identified, fired, and possibly sued or subject to criminal charges. So my interest in watching the Congressional testimony related to the Boeing 737-MAX accidents was to see if this was true and if so, what it looks like.

I would say it does seem true, more or less. It is true that Boeing identified and fired an employee (Mark Forkner, their top test pilot for the Boeing 737-MAX), however, no one is acting like he caused the problem or that firing him fixed it. What got him fired was an email where he used the phrase "Jedi mind-tricking". What he was referring to was "Jedi mind-tricking" the FAA into accepting certain modifications to the training procedures and manuals for Boeing 737-MAX pilots. It's not clear whether he was fired for embarrassing Boeing or for having a cavalier attitude towards safety or for lying to the FAA (though no specific evidence of lying came out in the Congressional testimony other than this phrase in one email). The fired employee is being subjected to a separate lawsuit and advised by his lawyers to not say anything, so we have no information from the fired employee himself.

Regarding the aforementioned training procedures and manuals, the Boeing executives were repeatedly hammered with questions about why the Maneuvering Characteristics Augmentation System (MCAS) system was omitted from the training and manuals. They repeatedly replied that more information is not necessarily better. As a software developer myself, I could sympathize. It is trivially easy with modern computers to overwhelm users with more information than they can process. The fact that Boeing was trying to minimize the amount of information that the pilots had to process didn't seem nefarious to me. But the Congresspeople with the benefit of 20/20 hindsight felt otherwise.

Getting back to the "blame the process" culture question, in the video the Boeing executives outline many changes to their processes they are making as a result of the accidents. They changed their safety review board structure, and the safety review boards now make weekly safety reports directly to the CEO (Dennis Muilenburg, who was part of the testimony). They created a whole new safety organization, headed by Elizabeth Pasztor, newly appointed Vice President of Safety, Security and Compliance for Boeing Commercial Airplanes, reporting to John Hamilton, Chief Engineer of Boeing's Commercial Airplanes division (who was also part of the testimony). The board created a new aerospace safety committee, chaired by retired Admiral and former Vice Chairman of the Joint Chiefs of Staff Edmund Giambastiani Jr and retired Admiral John M. Richardson, said to have a "deep background in safety". They realigned the entire engineering organization of approximately 50,000 engineers to report directly to the Chief Engineer (John Hamilton). They created anonymous hotlines where any employee anywhere in the company can report a safety issue, and they started an investigation into reevaluating "human factors" assumptions. (The "human factors" part is related to the fact that Boeing assumed the pilots would respond to an alarm in under 10 seconds, with a typical time being 4 seconds, which might have been true if the MCAS alarm had been the only alarm the pilots had to deal with, but data from the crashes show multiple alarms went off at once and the pilots were confused and did not react the way Boeing engineers assumed.) In addition to all of that on the Boeing side, on the regulatory side, the FAA appointed a Joint Authorities Technical Review (JATR), which issued an additional set of recommendations for changes to the FAA's processes.

So it does appear that the aviation industry has a "blame the process" culture. The same couldn't be said of the Congresspeople who repeatedly called for the CEO's resignation, thinking the solution to the problem was to blame a person (the CEO) and get rid of him. The CEO repeatedly told them that he felt leaving his position was "running away from the problem" and that he intended to figure out the underlying cause of the problem, fix it, and "see it through".

It got me thinking that the question essentially boils down to this: if someone did 99 things right but made one mistake, is the solution to get rid of that person and replace them with someone who won't make that mistake but possibly will make a different one, or is the solution for that person to learn from the mistake? If a person is motivated to learn from the mistake, and if they're supported by others making an effort to help them learn from it, maybe the best thing to do is to actually keep that person in place. It does appear that the CEO and the company prioritized rushing planes to market and making short-term profits over long-term safety, but after this episode, which cost the company over $9 billion in cancelled plane sales and money they have to repay the airlines for the cost of their planes being grounded, it seems unlikely that they will do that going forward. It looks like they will be highly motivated to prioritize safety. As paradoxical as it may seem to people immersed in "blame the person" culture, the way to minimize loss of life in aviation accidents going forward may be to keep the CEO and everyone else at Boeing in place.

As a final thought, I don't really have an explanation for why the aviation industry would have a "blame the process" culture while the medical industry would have a "blame the person" culture. One cannot attribute this simply to the fact that lives are at stake, because lives are at stake in the medical industry also. I noticed that under the YouTube videos were people calling for prison time, and it got me thinking that "blame the person" culture is probably simply the human species' default. That raises the question: how did the aviation industry manage to carve out an exception? I have no explanation and would appreciate any thoughts on the question.
Thumbnail "As for what I am going to be doing with the rest of my time: When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague 'line of sight' to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn't in sight. I decided that I should give it a try before I get too old."

"I'm going to work on artificial general intelligence (AGI)."
Thumbnail Making a camera that can see wi-fi. Point it at a building and the bright spots in the image show you where the wi-fi routers are.
Thumbnail The interstellar magnetic field's north pole is aligned with our solar system's magnetic north pole. The interstellar magnetic field is actually stronger than the magnetic field in the solar system. The end of the heliosphere is non-uniform. If it's wavy, it could be due to solar cycles. Radiation from cosmic rays is much greater in interstellar space than it is in the solar system. And other discoveries from Voyager 2, which is still working (!) after 42 years.
Thumbnail System for handing over objects to a robot. The researchers found if the robot delayed too long it would be perceived as "not warm" but no delay at all was perceived as "discomforting", so they put in a short delay to make the handoff comfortable for humans.
Thumbnail Robot with artificial skin which is a giant hexagonal grid of sensors that can sense proximity, pressure, temperature, and acceleration. The cells have LED lights so you can see it when they're touched. It's another of these designs that doesn't send sensory signals from cells that aren't sensing anything.
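The "don't send signals from cells that aren't sensing anything" idea is event-driven sensing. Here's a toy sketch of the concept (my illustration, not the robot's actual firmware): a cell transmits only when its reading has changed by more than a threshold since its last report, so idle cells generate no traffic.

```python
class SkinCell:
    """Event-driven sensor cell: reports only when its reading changes
    by more than a threshold since the last report."""
    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.last_reported = 0.0

    def update(self, reading):
        if abs(reading - self.last_reported) > self.threshold:
            self.last_reported = reading
            return reading          # event: send to the controller
        return None                 # silent: no traffic

cell = SkinCell()
events = [cell.update(r) for r in [0.0, 0.01, 0.2, 0.21, 0.0]]
```

Across thousands of cells, this is what keeps the skin's total data rate manageable: bandwidth scales with how much is happening, not with how many sensors exist.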
Thumbnail "The idea is to trigger the censorship in China and let the authority delete or block the content you want it to block. Lots of people already knew this trick and used it nicely. For example, a mainlander stole the T-shirt design from a Taiwanese and sold the T-shirt on Taobao. The designer complained many times but the seller refused to withdraw the T-shirt. Then the designer claimed that he is for 'Taiwan Independence' (台独). After that the seller withdraw the T-shirt immediately because he's frightened of having anything to do with 'Taiwan Independence'."

"Some Japanese also know that trick. To prevent the mainlanders pirating their works, they just write the 'Tiananmen Square' and 'Xi Jinping' on some pages. Then the regime will 'help' them to protect the intellectual property."
Thumbnail Giant compendium of free online courses, many from top schools (Stanford, Yale, MIT, Harvard, Berkeley, Oxford, etc).
Thumbnail Intergroup contact has long been considered an effective strategy to reduce prejudice between groups. But when you bring people from different groups together online, that doesn't happen. At least, in this study, when they studied people from different groups interacting online, they actually became more polarized. The groups in question weren't Democrats and Republicans or Christians and atheists, they were Denver Nuggets fans and Portland Trailblazers fans. And other NBA fans. The researchers got all the data from the Reddit subreddit r/nba. When speaking to other members of their own group, people would use 4-letter words like "help" and "thank". (Wait, that last one was 5 letters.) When speaking to the other group, they'd use 4-letter words like "refs".
Thumbnail RLCard is a toolkit for reinforcement learning in card games, developed by Texas A&M University. "Card games are ideal testbeds with several desirable properties. First, card games are played by multiple agents who must learn to compete or collaborate with each other. For example, in Dou Dizhu, peasants need to work together to fight against the landlord in order to win the game. Second, card games have huge state space. For instance, the number of states in UNO can reach 10^163. The cards of each player are hidden from the other players. A player not only needs to consider her own hand, but also has to reason about the other players' cards from the signals of their actions. Third, card games may have large action space. For example, the possible number of actions in Dou Dizhu can reach 10^4 with an explosion of card combinations. Last, card games may suffer from sparse reward. For example, in Mahjong, winning hands are scarce. We observe one winning hand every five hundred games if playing randomly. Moreover, card games are easy to understand with huge popularity. Games such as Texas Hold'em, UNO and Dou Dizhu are played by hundreds of millions of people. We usually do not need to spend efforts on learning the rules before we can dive into algorithm development."

Currently has Blackjack, Leduc Hold'em, Limit Texas Hold'em, No-limit Texas Hold'em, Dou Dizhu, Mahjong, and UNO.
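Numbers like 10^163 come from counting card combinations. A quick back-of-the-envelope with Python's math.comb, using poker-style numbers as a stand-in (the exact Dou Dizhu counting is more involved):

```python
from math import comb

# Number of distinct 7-card hands from a 52-card deck:
hands_7 = comb(52, 7)            # 133,784,560

# Hidden information: if an opponent holds 2 unseen cards out of the
# 45 cards you can't see, there are comb(45, 2) = 990 possible
# holdings to reason about from the signals of their actions.
opponent_holdings = comb(45, 2)
```

Full game states multiply hand combinations across all players and the deck, which is how the counts explode to numbers like 10^163.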
Thumbnail Introduction to adversarial machine learning. The author, Arunava Chakraborty, wrote a library of PyTorch code for making adversarial machine learning easy. This tutorial walks you through black box and white box targeted and untargeted attacks. Black box means the AI model you are fooling is a "black box" that you don't look inside -- you figure out how to fool it entirely from the outside. White box means you can see inside the model and know exactly how its parameters are tuned, and use that knowledge to construct your attack. Untargeted means you only care that it gets it wrong, but you don't care how it gets it wrong -- you just want to make sure the neural network mistakes the stop sign for anything that's not a stop sign. Targeted means you want to fool it in a way you've decided in advance. You've decided you want to fool the neural network into thinking the stop sign is a speed limit sign.
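To make the white box / untargeted case concrete, here's a minimal FGSM-style attack on a hand-rolled logistic regression (my toy model, not the tutorial's PyTorch library). Because we can see the weights, we know the gradient of the logit exactly, and we nudge each input coordinate in the direction that flips the prediction.

```python
import numpy as np

# A tiny "white box" model: logistic regression with weights we can read.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability of class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_untargeted(x, eps):
    """One-step FGSM-style attack: move each coordinate by eps in the
    direction that pushes the logit toward the opposite class.
    For logistic regression the gradient of the logit is just w."""
    direction = -np.sign(w) if predict(x) >= 0.5 else np.sign(w)
    return x + eps * direction

x = np.array([1.0, 0.0, 0.0])
adv = fgsm_untargeted(x, eps=0.5)   # prediction flips from class 1 to class 0
```

With images, the same one-step trick spreads a far smaller per-pixel perturbation across thousands of pixels, which is why the change can be imperceptible; the large eps here is only to make the flip visible in three dimensions.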

The tutorial concludes with DeepFool, an untargeted white box attack system that figures out the absolute minimum modifications necessary to an image to fool the neural network. The original and modified photos literally look identical to the human eye.

This shows that neural networks do not "see" what is in images anything like what human brains are doing.
Thumbnail Robot "mini cheetahs" with a soccer ball doing somersaults.
Thumbnail Neural networks at Tesla. Very vertically integrated: they build their own cars, arrange the sensors around the vehicle, collect all of the data, label all the data, train the networks on on-premise GPU clusters, and run them on custom hardware when deployed to the fleet. The images from 8 cameras are processed by convolutional networks. They call their networks "hydranets" because they have a "shared backbone" but multiple "heads". These feed into a recurrent neural network that produces a "top down view", along with parallel networks for other tasks. In total they train 48 neural networks that output about 1,000 predictions (output tensors) and take 70,000 GPU hours to train. None of them can regress -- ever. They have an automated workflow system that automates everything.
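The "shared backbone, multiple heads" structure is easy to sketch. Here's a toy NumPy version (shapes and task names are made up for illustration, not Tesla's architecture): one shared feature extractor feeds several small task-specific heads, so the expensive computation is done once per image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hydranet": one shared backbone feeding several task heads.
W_backbone = rng.normal(size=(64, 16))        # shared feature extractor
heads = {
    "lane_lines": rng.normal(size=(16, 4)),   # hypothetical task heads
    "traffic_lights": rng.normal(size=(16, 3)),
    "objects": rng.normal(size=(16, 8)),
}

def forward(image_features):
    shared = np.maximum(image_features @ W_backbone, 0)   # ReLU backbone, computed once
    return {name: shared @ W for name, W in heads.items()}

out = forward(rng.normal(size=(1, 64)))
```

The design also lets each head be retrained or extended without touching the others, which matters when you maintain dozens of networks that must never regress.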
Thumbnail AI clones your voice after listening for just 5 seconds. It's an improvement on DeepMind's Tacotron and WaveNet techniques.
Thumbnail People were put in a brain imaging machine and asked to use chopsticks. When they used their dominant hand, one hemisphere of the brain became active but when they used their non-dominant hand, both hemispheres became active.
Thumbnail The politics of AI on PBS FRONTLINE. This program is only about the politics of AI, as its subtitle suggests, no technical details. How AI will deepen inequality, challenge democracy, and divide the world into two AI superpowers, the US vs China, with AlphaGo as China's "Sputnik moment".
Thumbnail Gravitational waves from 10 black hole mergers have been detected to date, but scientists are still trying to explain the origins of those mergers. "The largest merger detected so far seems to have defied previous models because it has a higher spin and mass than the range thought possible." New simulations suggest that "such large mergers could happen just outside supermassive black holes at the center of active galactic nuclei. Gas, stars, dust and black holes become caught in a region surrounding supermassive black holes known as the accretion disk. The researchers suggest that as black holes circle around in the accretion disk, they eventually collide and merge to form a bigger black hole, which continues to devour smaller black holes, becoming increasingly large in what Rochester Institute of Technology Assistant Professor Richard O'Shaughnessy calls 'Pac-Man-like' behavior."

"It offers a natural way to explain high mass, high spin binary black hole mergers and to produce binaries in parts of parameter space that the other models cannot populate. There is no way to get certain types of black holes out of these other formation channels."
Thumbnail "The microbiota is not accidental. The microbiota has co-evolved with us over very long periods of time, and it performs beneficial functions for us, just as we perform beneficial functions for it... We are all working together as an ecological unit." "Since our species -- and all of life -- has existed on earth, we have been in the company of microbes: microbes reside in our intestines, on our skin, and in the environments we live in. These bugs have at times been opportunistic pathogens, preying on the vulnerabilities of individuals and populations, but they have more frequently been some of our oldest evolutionary friends. The trouble is, the microbiota as we've known it is disappearing."
Thumbnail 21% of adolescents and 32% of young adults said they had used prescription opioids in the past year, according to a National Survey on Drug Use and Health conducted by the Substance Abuse and Mental Health Services Administration. "Adolescents" are defined as 12-17 years old and "young adults" are defined as 18-25 years old.
Thumbnail Russian universities have the best performance track record in the world over the last 20 years in the International Collegiate Programming Contest (ICPC), a contest that dates back to 1977 and has 50,000 students from over 3,000 universities participating. The final round will be held in Russia for the first time next year, at the Moscow Institute of Physics and Technology, in June 2020.
Thumbnail DeepMind published how their AlphaStar system works. Unfortunately the system combines so many algorithms it's hard to summarize. First there is a neural network with a self-attention system to pay attention to the player's and opponent's units. It uses something called a scatter connection system to "integrate spatial and non-spatial information", whatever that means. It uses a long short-term memory (LSTM) system to remember sequences of observations and deal with partial observability of the game arena. It uses a combination of something called an auto-regressive policy and a recurrent pointer network to manage the "structured, combinatorial" action space.

The learning occurs in three stages: supervised learning, reinforcement learning, and multi-agent reinforcement learning. In the supervised learning stage, the system was trained to predict exactly the actions that a human player took. The supervised learning stage was considered necessary because reinforcement learning by self-play was deemed to be incapable of discovering the wide variety of strategies needed to master the game. After the supervised learning stage, the system did use self-play with agents initialized to the parameters of the supervised agents. It was necessary to further extend this with a multi-agent reinforcement learning system, however, because the system could get stuck in cycles. By "cycles", we mean a situation where agent A beats agent B, agent B beats agent C, and agent C beats agent A, leading to a cycle where the system gets stuck indefinitely and makes no progress. The solution they came up with was inspired by an algorithm called Fictitious Self Play that avoids cycles by computing a best response against a uniform mixture of all previous strategies which converges to a Nash equilibrium. In their system, they used a non-uniform mixture of opponents. They used strategies from both the current iteration and previous ones, and gave each agent opponents that were tailored specifically for that agent. Agents were categorized into three categories: main agents, exploiter agents, and league exploiter agents -- I'll skip trying to explain what exactly the differences were but the point was to create diverse opponents with diverse strategies for agents to train against.
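Rock-paper-scissors is the smallest example of that A-beats-B-beats-C-beats-A cycle, and fictitious play on it fits in a few lines of Python (my sketch of the general idea, not DeepMind's league system): pure best responses chase each other around the cycle forever, but the empirical mixture of play converges toward the uniform Nash equilibrium.

```python
from collections import Counter

# Rock-paper-scissors: the canonical A-beats-B-beats-C-beats-A cycle.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
BEATEN_BY = {loser: winner for winner, loser in BEATS.items()}
MOVES = list(BEATS)

def best_response(mixture):
    """The move scoring best against an opponent's empirical mixture."""
    return max(MOVES, key=lambda m: mixture[BEATS[m]] - mixture[BEATEN_BY[m]])

# Fictitious play: repeatedly best-respond to the whole history of play.
history = Counter({m: 1 for m in MOVES})       # uniform prior
for _ in range(3000):
    total = sum(history.values())
    mixture = {m: history[m] / total for m in MOVES}
    history[best_response(mixture)] += 1

total = sum(history.values())
freqs = {m: history[m] / total for m in MOVES}  # approaches the 1/3 Nash mix
```

AlphaStar's league is far more elaborate, but this is the core trick: respond to a mixture of past strategies rather than just the latest one, so the training can't chase its own tail around a cycle.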

The agents were handicapped in ways intended to make the games "fair" to human opponents. The AI agents had a limited camera view rather than being able to see the full game all at once, they had action-per-minute (APM) limits so they couldn't act at superhuman speed, and they had delays added to simulate human reaction time. These rules were developed in consultation with professional StarCraft II players and Blizzard employees.

AlphaStar won all the Protoss-vs-Terran games it played against humans, 99.91% of the Protoss-vs-Protoss, 99.94% of the Protoss-vs-Zerg, 99.83% of the Terran-vs-Protoss, 99.92% of the Terran-vs-Terran, 99.82% of the Terran-vs-Zerg, 99.70% of the Zerg-vs-Protoss, 99.51% of the Zerg-vs-Terran, and 99.96% of the Zerg-vs-Zerg. It's considered to be within the top 0.15% of StarCraft II players.
Thumbnail The suicide rate for 10-14 year olds went from 0.9 to 2.5 per 100,000 (a 2.8x increase) between 2007 and 2017.
Thumbnail Fish should move poleward as climate warms. "Warming waters have less oxygen and, therefore, fish have difficulties breathing in such environments. In a catch 22-type situation, such warming, low-oxygen waters also increase fish's oxygen demands because their metabolism speeds up."

"Fish's gills extract oxygen from the water to sustain the animal's body functions. As fish grow into adulthood their demand for oxygen increases because their body mass becomes larger. However, the surface area of the gills does not grow at the same pace as the rest of the body because it is two-dimensional, while the rest of the body is three-dimensional. The larger the fish, the smaller its surface area relative to the volume of its body."

This theory is called Gill-Oxygen Limitation Theory, or GOLT. Great acronym?
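The square-cube argument in the quote is worth making numeric: scale a fish's length by a factor k and gill area grows as k^2 while body volume (and thus oxygen demand) grows as k^3, so supply per unit demand falls as 1/k.

```python
def gill_scaling(k):
    """Scale a fish's linear size by k: gill area grows as k^2,
    body volume (oxygen demand) as k^3, so the supply-to-demand
    ratio is k^2 / k^3 = 1/k."""
    area = k ** 2
    volume = k ** 3
    return area / volume

# Doubling a fish's length halves its gill area relative to body volume.
ratio_small = gill_scaling(1.0)   # 1.0
ratio_big = gill_scaling(2.0)     # 0.5
```

Warmer water squeezes this already-shrinking margin from both sides: less dissolved oxygen available, and a faster metabolism demanding more of it.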
Thumbnail New record for high temperature superconductivity -- 161 K, in a thorium compound (thorium hydride). But you have to use insane pressures (175 gigapascals -- atmospheric pressure is around 100 kilopascals) and magnetic fields of 45 tesla. I guess the most powerful MRI machines are around 10 tesla, so that's a very powerful magnetic field.
Thumbnail Human population animation. Agriculture was invented more or less immediately after the last ice age ended 11,600 years ago -- I'm not going to posit an explanation why, I'll leave that to you to ponder -- and took a long time to get going, and my understanding is that the total human population on the entire planet before the invention of agriculture was about 3 million, which looks about right on the graph they show in the introduction on this video. They say the population was 170 million in 1 AD, which is the year the actual animation starts. On the animation, each dot equals 1 million people. The animation starts to really speed up after about 1400 and starts to sizzle after about 1700. Maybe you already knew all that, but what I didn't know was where the population was -- in the ancient world, human population was much more concentrated in India and China than I had realized. The video also notes that there is only one period in all human history where population declined -- the Black Death (bubonic plague) in the 1300s.
Thumbnail Realistic video game character movement from neural networks. The system used motion capture from humans as a starting point, then data augmentation was used to expand the tasks the animated characters would learn, then objects would be swapped out with new objects with the same contact points, for example switching to a different size chair. The user can interactively control the characters with control signals. The network automatically makes movements and transitions. Movements that it can do include walking, running, sitting, carrying objects, opening doors, and climbing. The system can adapt to different geometries, for example sitting on different chairs and carrying different size objects.
Thumbnail Video game world sizes.
Thumbnail "Understanding OpenAI's robot hand that solves the Rubik's cube." "What did OpenAI not do? 1. Use artificial intelligence to solve the puzzle." "2. Manipulate the cube using computer vision." "3. Choose to solve the task one-handedly to make it harder."

"Removing the above misconceptions should not create the impression that OpenAI's work lacked a purpose, but prepare us to appreciate the actual contributions. In fact, OpenAI's work was pioneering. This is what probably made it hard to make it understandable to the public and led them to misrepresenting it."

"The objective was that of creating a general-purpose robotic arm. The hand could have been performing any sort of task; solving the Rubik's cube just made for a well-defined problem that required quick reflexes and skilful manipulation."

"This research area is today little-understood and unexplored."

"The main theoretical contribution of this work towards general-purpose robotics was a technique called adaptive domain randomization. This technique builds upon two older concepts in the AI literature: domain randomization and curriculum learning."
Thumbnail OpenAI has released the full-sized (1.5 billion parameter) GPT-2 AI model. GPT-2 is the text-generating system that generates the most convincing text (to humans). They did a staged release over 9 months to watch for misuse. Having seen no strong evidence of misuse, they decided to release the full-sized model, and they thought there were benefits to releasing it, including: software engineering code autocompletion, grammar assistance, autocompletion-assisted writing, creating or aiding literary art, poetry generation, entertainment, video games, chatbots, and health and medical question-answering systems. They've partnered with outside groups to study the effects of GPT-2 and other advanced language models. Cornell University is studying human susceptibility to text generated by language models. The Middlebury Institute of International Studies Center on Terrorism, Extremism, and Counterterrorism (CTEC) is exploring how GPT-2 could be misused by, you guessed it, terrorists and extremists. The University of Oregon is developing a series of 'bias probes' to analyze bias in GPT-2. The University of Texas at Austin is studying the statistical detectability of GPT-2, including after it's been retrained on domain-specific datasets and text in different languages.

They are working with the Partnership on AI (PAI) to develop guidelines on responsible publication for machine learning and AI. They recommend building frameworks that take the trade-offs of benefits vs harms into consideration, engaging outside researchers and the public, and giving outside researchers early access.

The GROVER system did the best at detecting machine-generated text, successfully detecting 97% of fake Amazon reviews. A tool called GLTR assists humans and increases humans' ability to detect AI-generated text. GPT-2's output was found to have biases with respect to gender, race, religion, and language preference. This is a reflection of the data it was trained on (text from the web).

They comment on future trends in language models: Language models are moving to mobile devices, they allow for greater control of text generation, they will have improved usability, and will be subject to greater risk analysis.
Thumbnail Learning is optimized when we fail 15% of the time. "We learn best when we are challenged to grasp something just outside the bounds of our existing knowledge. When a challenge is too simple, we don't learn anything new; likewise, we don't enhance our knowledge when a challenge is so difficult that we fail entirely or give up."

"So where does the sweet spot lie? According to the new study in the journal Nature Communications, it's when failure occurs 15% of the time. Put another way, it's when the right answer is given 85% of the time."

You would think, from this description, that they got this from doing empirical observations on thousands of human beings. But that's not what they did. They looked at neural networks. Specifically, those trained by the mathematical algorithm known as gradient descent. And they didn't even look at thousands of neural networks. They studied gradient descent itself and worked out an exact solution from first principles. That solution is:

Ideal failure rate = (1 - erf(1 / sqrt(2))) / 2

where erf is the Gauss error function, which is related to the Gaussian (normal) distribution. The answer comes out to about 0.1586553. That's where they get the 15%.
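That formula is easy to check with Python's standard library:

```python
from math import erf, sqrt

# Probability mass beyond one standard deviation on one side of a
# Gaussian -- the paper's closed-form optimal error rate for
# gradient-descent learners on binary tasks.
ideal_failure_rate = (1 - erf(1 / sqrt(2))) / 2

print(round(ideal_failure_rate, 7))  # 0.1586553
```

Round that to two digits and you get the 15% (more precisely, 15.87%) headline number.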
Thumbnail Supramolecules of single-handedness, or chirality, have been reliably, reproducibly synthesized, possibly revealing something about the origin of life. Apparently people have been able to do this before one time here, one time there, but never reliably and reproducibly. What this is about is how some molecules have right and left mirror images. If you've ever seen names with "L" and "D" prefixes, like "L-phenylalanine" and "D-phenylalanine", the "L" stands for "left" and the "D" stands for "right". And if "D" standing for "right" doesn't make sense... well it does in Latin. Anyway, you could also think of "L" and "D" as "living" and "dead" as life uses the "L" forms exclusively. Why life uses the "L" forms exclusively is a mystery, labeled the mystery of the "homochirality of life".

Anyway, until now, nobody connected large-scale rotation with molecule-scale rotation, but that's what this experiment does. What they did is pretty complicated but I'll try to summarize. They took something called phthalocyanines, wrapped them in a monolayer of long alkyl chains, and arranged them into stacks held together with what are known as pi-pi bonds. Phthalocyanine is a large organic compound with the formula (C8H4N2)4H2, which was chosen because of its ability to have right, left, or no chirality. The alkyl chains are carbon-hydrogen chains which generally don't have rings, unlike the phthalocyanines, and their purpose here is to help form the stacks. Pi-pi bonds form when electron orbitals in two atoms line up and essentially merge into a new orbital, called a pi orbital, shared between them; pi orbitals in neighboring molecules can then line up in such a way that the molecules stack (by alternating positive and negative electric charges). Once you're able to stack molecules in this manner, you have what's known as a "supramolecule". At this point you should understand the first word in the first sentence.

What they did next was stir the supramolecules with a magnetic stirrer. Then they gradually removed the solvent and tested the resulting compound for chirality (using a technique called circular dichroism spectroscopy), and the test came back indicating clockwise or anticlockwise rotation depending on how they had rotated the magnetic stirrer.

What this has to do with the origin of life is that the researchers speculate that large-scale rotation, such as the vortex motion induced by the rotation of the Earth itself, called the Coriolis effect, might have locked in an initial chirality that life still uses to this day. At this point you should be able to understand the first sentence.
Thumbnail Tiny, self-propelled robots that remove radioactive uranium from simulated wastewater. "To make their self-propelled microrobots, the researchers designed ZIF-8 rods with diameters about 1/15 that of a human hair. The researchers added iron atoms and iron oxide nanoparticles to stabilize the structures and make them magnetic, respectively. Catalytic platinum nanoparticles placed at one end of each rod converted hydrogen peroxide 'fuel' in the water into oxygen bubbles, which propelled the microrobots at a speed of about 60 times their own length per second. In simulated radioactive wastewater, the microrobots removed 96% of the uranium in an hour. The team collected the uranium-loaded rods with a magnet and stripped off the uranium, allowing the tiny robots to be recycled."

The key is the invention of micromotors based on metal organic frameworks which can operate in hydrogen peroxide. By ZIF-8, they mean zeolitic imidazolate framework-8 doped with iron. ZIFs are zeolite-like frameworks in which metal ions, usually zinc or cobalt, are linked by imidazolate molecules, and here the framework was doped with iron. The magnetism is provided by the iron oxide nanoparticles, and the propulsion by the platinum nanoparticles. Don't know how exactly these machines extract uranium. It has something to do with the iron in the ZIF-8 metal organic framework, in particular the Fe(II) form.
Thumbnail A brain region that helps build memories during deep sleep has been identified. It's called the nucleus reuniens and it "connects two other brain structures involved in creating memories -- the prefrontal cortex and the hippocampus -- and may coordinate their activity during slow-wave sleep."

"We found that the nucleus reuniens is responsible for coordinating synchronous, slow-waves between these two structures. This means that the reuniens may play an essential role for sleep-dependent memory consolidation of events."

"Slow-wave sleep is the deepest stage of sleep, during which the brain oscillates at a very slow, once-per-second rhythm. It is crucial for muscle and brain recovery, and has been shown to play a role in memory consolidation."

In the study they used optogenetics to activate the nucleus reuniens in rats and chemogenetics to inhibit it.

More precisely, they used hSyn-ChR2-EYFP for the optogenetic experiments, hSyn-hM4Di-HA-mCitrine for the chemogenetic experiments, and hSyn-mCherry as a control, whatever those are. Some kind of plasmids -- DNA molecules that exist in cells separately from the chromosomes.
Thumbnail "3D printer that is so big and so fast it can print an object the size of an adult human in just a couple of hours." But how?

"HARP (high-area rapid printing) uses a new, patent-pending version of stereolithography, a type of 3D printing that converts liquid plastic into solid objects. HARP prints vertically and uses projected ultraviolet light to cure the liquid resins into hardened plastic. This process can print pieces that are hard, elastic or even ceramic. These continually printed parts are mechanically robust as opposed to the laminated structures common to other 3D-printing technologies. They can be used as parts for cars, airplanes, dentistry, orthotics, fashion and much more."

"A major limiting factor for current 3D printers is heat. Every resin-based 3D printer generates a lot of heat when running at fast speeds -- sometimes exceeding 180 degrees Celsius. Not only does this lead to dangerously hot surface temperatures, it also can cause printed parts to crack and deform. The faster it is, the more heat the printer generates. And if it's big and fast, the heat is incredibly intense."

"The Northwestern technology bypasses this problem with a nonstick liquid that behaves like liquid Teflon. HARP projects light through a window to solidify resin on top of a vertically moving plate. The liquid Teflon flows over the window to remove heat and then circulates it through a cooling unit."
Thumbnail Drone that can fly into disaster areas and tell living people from dead people. "Using a new technique to monitor vital signs remotely, engineers from the University of South Australia and Middle Technical University in Baghdad have designed a computer vision system which can distinguish survivors from deceased bodies from 4-8 metres away."

The system works by using OpenPose to find human forms lying on the ground and locate the chest region. It then uses wavelet analysis to look for motion of the chest wall due to cardiopulmonary activity.
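The paper's wavelet analysis is more sophisticated than I can summarize, but the basic idea -- find the dominant frequency of tiny intensity changes in the chest region -- can be sketched with a plain FFT instead. The function name and parameters here are my own invention, not the paper's:

```python
import numpy as np

def breathing_rate_hz(chest_roi_frames, fps=30.0):
    """Estimate the dominant oscillation frequency of a chest region.

    chest_roi_frames: array of shape (n_frames, h, w) -- grayscale
    crops of the chest region located by the pose estimator.
    """
    # Average intensity per frame: chest-wall motion modulates this signal.
    signal = chest_roi_frames.reshape(len(chest_roi_frames), -1).mean(axis=1)
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Skip the zero-frequency bin, pick the strongest remaining peak.
    peak = np.argmax(spectrum[1:]) + 1
    return freqs[peak]

# Synthetic test: a "chest" oscillating at 0.25 Hz (15 breaths/min).
fps, seconds = 30.0, 20
t = np.arange(int(fps * seconds)) / fps
frames = 100 + 5 * np.sin(2 * np.pi * 0.25 * t)[:, None, None] * np.ones((1, 8, 8))
rate = breathing_rate_hz(frames, fps)
print(rate)  # ~0.25
```

A real disaster scene would add camera shake and lighting noise, which is presumably why the researchers reached for wavelets rather than a single global FFT.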

I imagine the system doesn't work if enough of the human body is occluded, say from debris from the earthquake or whatever the disaster is. But it looks like a good start.
Thumbnail "Expecting the unexpected: a new model for cognition." So this is about a concept called "predictive coding", which is that your brain learns simply by trying to predict what it will experience next, and then comparing the prediction with reality, and AI can be developed that learns the same way. It's pretty straightforward to make a recurrent neural network (RNN) work that way, and this research is an improvement on RNNs called PV-RNN (apparently the "P" is for "predictive-coding" and the "V" is for "variational"), which allows the network, instead of making a definite prediction, to make probabilistic predictions.

The article suggests that "the model may enable robots to 'socialize' by predicting and imitating each other's behaviors" and may offer insights into autism spectrum disorders (ASD). "An intriguing consideration is that the current results showing that the generalization capability of PV-RNN depend on the setting of the metaprior w bears parallels to observational data about autism spectrum disorders (ASD) and may suggest possible accounts of its underlying mechanisms." "Recently, there has been an emerging view suggesting that deficits in low-level sensory processing may cascade into higher-order cognitive competency, such as in language and communication. [Researchers] have suggested that ASD might be caused by overly strong top-down prior potentiation to minimize prediction errors (thus increasing precision) in perception, which can enhance capacities for rote learning while resulting in the loss of the capacity to generalize what is learned."
Thumbnail AI giving fashion advice. Fashion++ takes the outfit you have and gives suggestions for small changes that make it more fashionable. "Fashion++ was trained using more than 10,000 images of outfits shared publicly on online sites for fashion enthusiasts. Finding images of fashionable outfits was easy, said graduate student Kimberly Hsiao. Finding unfashionable images proved challenging. So, she came up with a workaround. She mixed images of fashionable outfits to create less-fashionable examples and trained the system on what not to wear."

"As fashion styles evolve, the AI can continue to learn by giving it new images, which are abundant on the internet."
Thumbnail The latest RoboBee can crash into walls, fall on the floor, and collide with other RoboBees without being damaged, thanks to the invention of soft actuators.
Thumbnail Top 15 biggest companies by market capitalization 1993-2019. Watch the world transition from the non-tech world of GE, Exxon Mobil, Walmart, Coca-Cola, Merck, Procter & Gamble, etc, to the high-tech world of Microsoft, Apple, Amazon, Google, Facebook, etc.
Thumbnail Most popular websites 1996-2019. Easy to forget AOL was once way ahead of everybody. Also in the beginning, the numbers were racing up for everybody, but racing up faster for the top sites (AOL, Yahoo, MSN, etc). It took until 2005 before the top site, Yahoo, actually started to see its numbers go down. At the very end of the video, for this year, you can see Facebook's numbers have actually started going down. YouTube's numbers are going down, too. These are both things I did not know. Instagram, which is owned by Facebook, is still going up, though. Google, which owns YouTube, is still going up. Wikipedia is going down, which I never would have expected.
Thumbnail Flies execute four perfectly timed maneuvers to land upside down: "they increase their speed, complete a rapid body rotational maneuver (likened to a cartwheel), perform a sweeping leg extension and, finally, land through a leg-assisted body swing when their feet are firmly planted on the ceiling."
Thumbnail "Researchers are proposing a framework that would allow users to understand the rationale behind artificial intelligence (AI) decisions. The work is significant, given the push to move away from 'black box' AI systems -- particularly in sectors, such as military and law enforcement, where there is a need to justify decisions."

"One thing that sets our framework apart is that we make these interpretability elements part of the AI training process. For example, under our framework, when an AI program is learning how to identify objects in images, it is also learning to localize the target object within an image, and to parse what it is about that locality that meets the target object criteria. This information is then presented alongside the result."

"In a proof-of-concept experiment, researchers incorporated the framework into the widely-used R-CNN AI object identification system. They then ran the system on two, well-established benchmark data sets. The researchers found that incorporating the interpretability framework into the AI system did not hurt the system's performance in terms of either time or accuracy."

Here's where I would describe how the system works, but I didn't understand it. It constructs something called an "and-or graph". In a regular graph, nodes are connected with edges, and you can take any edge from one node to another, provided it exists and is pointed the right way (if the graph is directed, and the graphs they make here are directed and also acyclic, which means they don't have cycles in them). As such, regular graphs are "or" graphs. With an "and-or" graph, some of the edges exiting a node can be linked together in such a way that if you take one of the edges, you're also required to take the other edge or edges. The name is inspired by AND and OR logic gates. In the case of the convolutional neural network used to look at images, R-CNN, they've somehow linked this and-or graph (AOG, not to be confused with OAG which is Overly Attached Girlfriend) with the system in R-CNN for identifying the "region of interest", which is what produces the bounding box around the cat's face (assuming it's looking at a picture of a cat, which, of course, is frequently the case if the picture came from the internet). The AOG that gets created is actually a hierarchical tree of image regions, created by something they call a "top-down grammar model". How this leads to "interpretability" of the neural network's decision is beyond me.
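To make the and-or idea concrete, here's a minimal toy sketch of an and-or graph as a data structure -- my own invented example, not the paper's grammar model. The point is that a satisfied branch doubles as an explanation, because it names exactly which sub-parts justified the decision:

```python
# Each node maps to a list of alternatives (OR); each alternative is a
# list of children that must ALL hold (AND). Leaves are terminal
# evidence, e.g. detected image parts. All names are hypothetical.
AOG = {
    "cat_face": [["ears", "eyes"], ["whiskers"]],  # (ears AND eyes) OR whiskers
    "ears":     [["left_ear"], ["right_ear"]],
    "eyes":     [["eye_pair"]],
}

def explain(node, evidence):
    """Return the first satisfied AND-branch under `node`, or None."""
    if node not in AOG:                       # leaf: ground it in evidence
        return [node] if node in evidence else None
    for branch in AOG[node]:                  # OR over branches
        parts = [explain(child, evidence) for child in branch]
        if all(parts):                        # AND within a branch
            return [node] + [p for part in parts for p in part]
    return None

result = explain("cat_face", {"left_ear", "eye_pair"})
print(result)  # ['cat_face', 'ears', 'left_ear', 'eyes', 'eye_pair']
```

So instead of a bare "cat face, confidence 0.93", the output is a parse of which parts, in which regions, led to the decision -- which is presumably where the claimed interpretability comes from.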
Thumbnail The coming era of extreme ultraviolet (EUV).
Thumbnail Perovskites have liquid-solid duality, at least when it comes to electrons. "The researchers used a multi-dimensional electronic spectrometer (MDES) -- a unique instrument hand-built at McGill -- to observe the behaviour of electrons in cesium lead iodide perovskite nanocrystals. The MDES that made these observations possible is capable of measuring the behaviour of electrons over extraordinarily short periods of time -- down to 10 femtoseconds, or 10 millionths of a billionth of a second. Perovskites are seemingly solid crystals that first drew attention in 2014 for their unusual promise in future solar cells that might be cheaper or more defect tolerant."

"Since childhood we have learned to discern solids from liquids based on intuition: we know solids have a fixed shape, whereas liquids take the shape of their container. But when we look at what the electrons in this material are actually doing in response to light, we see that they behave like they typically do in a liquid. Clearly, they are not in a liquid -- they are in a crystal -- but their response to light is really liquid-like. The main difference between a solid and a liquid is that a liquid has atoms or molecules dancing about, whereas a solid has the atoms or molecules more fixed in space, as on a grid."

Well, that sounds weird and interesting and simple enough until you look at the paper and find it's all about polaron formation. A polaron is a quasiparticle, which is when the emergent properties of a physical system behave in a particle-like way even though there is no actual particle, but you can think of it that way. Here, electrons in a dielectric crystal (a dielectric is an insulator that can be characterized by a dielectric constant telling you how much it gets polarized in an electric field) get the atoms to move from their equilibrium positions in such a way as to screen the charge of an electron. What physicists mean by "screen" is this: imagine you have a plasma that's net electrically neutral, a soup of electrons and positive ions. The plasma is hot enough that the electrons and ions can't combine into atoms, but because the ions attract the electrons, an ion can be surrounded by electrons, and from the outside the charge of the ion becomes invisible -- "screened" by the electrons. Except here it's the charge of the electrons that is getting screened by atoms in a crystal moving from their equilibrium positions.

"Figure 3d shows that the anti-diagonal width of the CsPbI3 nanocrystals quickly broadens from its initial homogeneous value to its final value at 400 fs. In contrast, as seen in Fig. 3e, the width of the CdSe nanocrystals stays constant on average, indicating that no spectral diffusion is taking place. Instead, the width is modulated at the LO phonon frequency. Figure 3f shows the lineshape dynamics for Nile Blue in ethanol. By comparing Fig. 3d, f, it becomes apparent that the lineshape behavior of the CsPbI3 nanocrystals qualitatively mimics that of the molecular dye in solution, albeit with a slower timescale and without the strong coherent vibronic modulations arising from the ring distortion mode of Nile Blue. These data reveal that perovskite nanocrystals undergo spectral dynamics that are consistent with liquids and inconsistent with covalent solids."
Thumbnail Enzymes, and possibly all proteins, are capable of conducting electricity. "Until quite recently, proteins were regarded strictly as insulators of electrical current flow. Now, it seems, their unusual physical properties may lead to a condition in which they are sensitively poised between an insulator and a conductor. (A phenomenon known as quantum criticality may be at the heart of their peculiar behavior.)"

In earlier research, "the protein was hooked up via its two so-called active sites. These are the regions of a protein that bind selected molecules, often resulting in a conformational change in the molecule's complex 3D structure and the completion of the protein's given task."

"This time, the biomolecule was sensitively wired to the electrodes by means of alternate binding sites on the enzyme, leaving the active sites available to bind molecules and carry out natural protein function."

"The enzyme molecule chosen for the experiments is one of the most important for life. Known as a DNA polymerase, this enzyme binds with successive nucleotides in a length of DNA and generates a complementary chain of nucleotides, one by one. This versatile nanomachine is used in living systems for copying DNA during cell replication as well as for repairing breaks or other insults to the DNA."

"The study describes techniques for affixing the DNA polymerase to electrodes so as to generate strong conductance signals by means of two specialized binding chemicals known as biotin and streptavidin. When one electrode was functionalized using this technique, small conductance spikes were generated as the DNA polymerase successively bound and released each nucleotide, like a grasping hand catching and releasing a baseball. When both electrodes were outfitted with streptavidin and biotin, much stronger conductance signals, measuring 3-5 times as large, were observed."
Thumbnail "Pancreatic beta cells were engineered with a gene that encodes a photoactivatable adenylate cyclase (PAC) enzyme. The PAC produces the molecule cyclic adenosine monophosphate (cAMP) when exposed to blue light, which in turn cranks up the glucose-stimulated production of insulin in the beta cell. Insulin production can increase two- to three-fold, but only when the blood glucose amount is high. At low levels of glucose, insulin production remains low. This avoids a common drawback of diabetes treatments which can overcompensate on insulin exposure and leave the patient with harmful or dangerously low blood sugar (hypoglycemia)."

"Researchers found that transplanting the engineered pancreatic beta cells under the skin of diabetic mice led to improved tolerance and regulation of glucose, reduced hyperglycemia, and higher levels of plasma insulin when subjected to illumination with blue light."

So, a possible solution to Type I diabetes, as long as people don't mind embedding blue LEDs in their bodies?
Thumbnail A simple mathematical model, involving center of mass and center of pressure, enables human movements to be scaled down to a bipedal robot.
Thumbnail "New neural network could solve the three-body problem 100 million times faster." Quite a headline, but this is more of a proof-of-concept than anything practical. They assumed all three bodies have the same mass and start at 0 velocity. They put one at (1,0), one within the unit circle with x < 0, and the third one such that the center of mass of the whole system was at (0, 0), and then ran a "brute-force" system called Brutus that can calculate with an arbitrary number of decimal places and with arbitrarily short time steps to create the training data and to verify how accurate the neural network was. The neural network was a 10-layer fully connected network with 128 nodes in each layer and ReLU for the activation layer between the fully connected layers.
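For what it's worth, the described architecture is easy to write down. Here's a sketch in plain NumPy -- the input/output sizes are my reading of the setup (time plus the free initial coordinates in, planar positions of two bodies out, with the third body implied by the fixed center of mass), and the weights here are random rather than trained:

```python
import numpy as np

def make_mlp(layer_sizes, rng):
    """Random (untrained) weights for a fully connected network."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    """ReLU between fully connected layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)            # ReLU activation
    return x

rng = np.random.default_rng(0)
# 10 hidden layers of 128 units each, as described in the paper.
net = make_mlp([3] + [128] * 10 + [4], rng)
out = forward(net, np.array([[0.5, -0.4, 0.3]]))
print(out.shape)  # (1, 4)
```

The interesting part isn't the network, which is vanilla; it's that Brutus, the arbitrary-precision integrator, generated enough trustworthy trajectories to train and verify it.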
Thumbnail "Players of the science-fiction video game StarCraft II faced an unusual opponent this summer. An artificial intelligence (AI) known as AlphaStar -- which was built by Google's AI firm DeepMind -- achieved a grandmaster rating after it was unleashed on the game's European servers, placing within the top 0.15% of the region's 90,000 players."

"DeepMind first pitted AlphaStar against high-level players in December 2018, in a series of laboratory-based test games. The AI played -- and beat -- two professional human players. But critics asserted that these demonstration matches weren't a fair fight, because AlphaStar had superhuman speed and precision."

"Before the team let AlphaStar out of the lab and onto the European StarCraft II servers, they restricted the AI's reflexes to make it a fairer contest. In July, players received notice that they could opt-in for a chance to potentially be matched against the AI. To keep the trial blind, DeepMind masked AlphaStar's identity."
Thumbnail An AI system triaged postoperative ICU patients as well as a human. "The algorithm used in the pilot study included 87 clinical variables and 15 specific criteria related to the appropriateness of admission to the ICU within 48 hours of surgery. An admission to the ICU was considered appropriate if one of the 15 criteria was met. The criteria included: intubation for more than 12 hours, reintubation, respiratory or circulatory arrest, call for rapid response or code, blood pressure below 100/60 mmHg for two consecutive hours, heart rate below 60 bpm or above 110 bpm for two consecutive hours, use of pressors, placement of a central venous line or Swan-Ganz catheter, echocardiogram, new onset of cardiac arrhythmia, myocardial infarction, return to the operating room, blood transfusion requiring more than four units, or readmission to the ICU after a prior admission."

"AI correctly triaged 41 of the 50 patients in the study (82 percent). Surgeons had an accuracy triage rate of 70 percent (35 patients), intensivists 64 percent (32 patients), and anesthesiologists 58 percent (29 patients)."

"The rate of under-triage was similar for AI (12 percent) and surgeons (10 percent); the rate of over-triage was much lower for AI (6 percent) than for the clinicians, whose rates ranged from 20 percent to 40 percent. Furthermore, AI achieved a positive predictive rate of 50 percent and negative predictive rate of 86 percent."
Thumbnail When a vehicle has one driver and that driver makes an error, the driver is blamed, regardless of whether the driver is a machine or a human; but in the case of human-machine shared-control vehicles, humans are blamed more, and the blame attributed to the machine is reduced.
Thumbnail "Compact depth sensor inspired by spiders."

Jumping spiders "have impressive depth perception despite their tiny brains, allowing them to accurately pounce on unsuspecting targets from several body lengths away."

"Each principal eye has a few semi-transparent retinae arranged in layers, and these retinae measure multiple images with different amounts of blur. For example, if a jumping spider looks at a fruit fly with one of its principal eyes, the fly will appear sharper in one retina's image and blurrier in another. This change in blur encodes information about the distance to the fly."

"Instead of using layered retina to capture multiple simultaneous images, as jumping spiders do, the metalens splits the light and forms two differently-defocused images side-by-side on a photosensor."

"An ultra-efficient algorithm, developed by Todd Zickler's group, then interprets the two images and builds a depth map to represent object distance."
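As I understand the depth-from-differential-defocus idea behind this, the difference between the two differently defocused images, divided by the Laplacian of their mean, varies with object distance. A rough sketch, leaving out the optics-dependent constants that would map the ratio to actual metres:

```python
import numpy as np

def depth_cue(img_near, img_far, eps=1e-6):
    """Per-pixel depth cue from two differently defocused images.

    img_near, img_far: 2D grayscale arrays of the same scene with two
    different defocus settings (here, the metalens's side-by-side
    images). Returns a raw cue; calibrating it to distance depends on
    the lens, so no attempt is made here.
    """
    mean = 0.5 * (img_near + img_far)
    # Discrete Laplacian of the mean image via shifted copies.
    lap = (np.roll(mean, 1, 0) + np.roll(mean, -1, 0) +
           np.roll(mean, 1, 1) + np.roll(mean, -1, 1) - 4 * mean)
    return (img_far - img_near) / (lap + eps)  # eps avoids divide-by-zero

rng = np.random.default_rng(2)
a, b = rng.random((32, 32)), rng.random((32, 32))
cue = depth_cue(a, b)
print(cue.shape)  # (32, 32)
```

The "ultra-efficient" part is that this is local arithmetic per pixel -- no stereo matching or search -- which fits a tiny-brained jumping spider and a low-power sensor alike.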
Thumbnail Elastic energy robots. Soft robot can catch a beetle in 120 milliseconds, like a chameleon's tongue. Another can close like a venus flytrap in 50 ms. Works by fabrication of prestressed soft actuators.
Thumbnail If you put sensors in a chair, a machine learning algorithm can distinguish eSports professionals from amateurs. "The experiment involved a total of 19 players, including 9 professionals and 10 amateurs, who were asked to play Counter-Strike: Global Offensive (CS:GO) for 30 to 60 minutes. Their skills were evaluated in game hours, similarly to pilots, whose skills are assessed in flight hours. The data were collected using an accelerometer and a gyroscope embedded in the chair."

"The patterns extracted from each session were used to evaluate the players' behavior and check how intensively and how often they moved or turned around along each of the three axes and leaned back in the chair. A total of 31 patterns were obtained for each player, and the 8 most important features were defined using statistical techniques. Machine learning methods were then applied to the key features. The popular Random Forest method displayed the best performance, correctly determining the player's skill level from a 3-minute session in 77% of cases. Also, the results showed that professional players move around more often and more intensively than beginners, while sitting perfectly still during shootings and other game events."
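The 31 patterns aren't spelled out in the summary, but here's a sketch of the kind of per-axis movement-intensity features you might extract from the chair's accelerometer and gyroscope before handing them to a Random Forest. The feature choices are illustrative, not the paper's:

```python
import numpy as np

def movement_features(accel, gyro):
    """Per-axis movement features from chair-mounted sensors.

    accel, gyro: arrays of shape (n_samples, 3), one column per axis.
    Returns variance (how intensively the player moves) and mean
    absolute sample-to-sample change (how often) for each axis.
    """
    feats = []
    for signal in (accel, gyro):
        feats.extend(signal.var(axis=0))
        feats.extend(np.abs(np.diff(signal, axis=0)).mean(axis=0))
    return np.array(feats)

# Synthetic sanity check: a fidgety sitter vs. a nearly still one.
rng = np.random.default_rng(1)
fidgety = movement_features(rng.normal(0, 2.0, (6000, 3)),
                            rng.normal(0, 2.0, (6000, 3)))
still = movement_features(rng.normal(0, 0.1, (6000, 3)),
                          rng.normal(0, 0.1, (6000, 3)))
print(fidgety.sum() > still.sum())  # True: the fidgety sitter scores higher
```

Feature vectors like these, one per session window, are what a Random Forest classifier would then be trained on to separate professionals from amateurs.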
Thumbnail Robotic gripper for soft objects like eggs. The gripper uses repulsion between magnets to adjust the stiffness of its grip and absorb energy from collisions (i.e. humans bumping into it).
Thumbnail "Inspired by octopuses, researchers have developed a structure that senses, computes and responds without any centralized processing -- creating a device that is not quite a robot and not quite a computer, but has characteristics of both."

"At the core of the soft tactile logic prototypes is a common structure: pigments that change color at different temperatures, mixed into a soft, stretchable silicone form. That pigmented silicone contains channels that are filled with metal that is liquid at room temperature, effectively creating a squishy wire nervous system."

"Pressing or stretching the silicone deforms the liquid metal, which increases its electrical resistance, raising its temperature as current passes through it. The higher temperature triggers color change in the surrounding temperature-sensitive dyes. In other words, the overall structure has a tunable means of sensing touch and strain." "The researchers also developed soft tactile logic prototypes in which this same action -- deforming the liquid metal by touch -- redistributes electrical energy to other parts of the network, causing material to change colors, activating motors or turning on lights. Touching the silicone in one spot creates a different response than touching in two spots; in this way, the system carries out simple logic in response to touch."
Thumbnail A shadow-sensing system that can sometimes detect whether a moving object is hidden around a corner can get an autonomous car to stop half a second earlier than a lidar system. The caveat: the tests were conducted indoors, under consistent lighting, with cars going slower than they would in real life.

The system was built by modifying ShadowCam, an existing system for detecting shadows that looks for changes in light intensity over time, from image to image. The new system takes into account the rapid movement of the car, and was trained in an area lined with AprilTags, which are similar to QR codes. The AprilTags enable the robotic vehicle to place the tags in 3D space and determine its own precise location in 3D space. Using this, the system learned which pixels in images to zero in on, and how to detect shadows by warping multiple images to the same viewpoint and then amplifying the color changes to boost the signal-to-noise ratio. Afterward the AprilTag system was removed and a visual odometry system called Direct Sparse Odometry was used in its place.
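The core idea -- difference registered frames and amplify the tiny intensity changes a creeping shadow causes -- can be sketched in a few lines. This is an illustrative toy, not the authors' code; frames here are flat lists of invented pixel intensities, and the gain and threshold are arbitrary:

```python
# Toy sketch of ShadowCam-style dynamic-shadow detection: average the
# frames, amplify each frame's deviation from that average, and flag
# motion when the amplified change is large. All numbers are invented.

def detect_shadow(frames, gain=20.0, threshold=1.0):
    """Return True if amplified frame-to-frame change suggests motion."""
    mean = [sum(px) / len(px) for px in zip(*frames)]  # per-pixel mean image
    score = 0.0
    for frame in frames:
        # Amplify each frame's deviation from the mean image
        score += sum(abs(p - m) * gain for p, m in zip(frame, mean))
    return score / (len(frames) * len(frames[0])) > threshold

static = [[100, 100, 100]] * 4            # no change between frames
shadowed = [[100, 100, 100], [100, 99, 100],
            [99, 98, 100], [98, 97, 99]]  # a shadow creeping across
print(detect_shadow(static), detect_shadow(shadowed))
```

The amplification matters because a shadow cast around a corner changes ground pixels by only a few intensity levels -- far too little to threshold directly.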
Thumbnail Lab-grown meat closer to the texture of real meat, thanks to a gelatin "scaffolding" system.

The way the system works is that gelatin fibers are spun using an immersion rotary jet spinning (iRJS) system, chosen because it has the potential to scale up to industrial production levels. The researchers say the approach has the potential to reduce land and water use for meat production by more than 80% compared with the current system of producing meat from livestock. The iRJS system can also use materials other than gelatin, such as polysaccharides, though only gelatin was used for these experiments.

Different concentrations of gelatin produce microfibrous scaffolds with different characteristics: fiber diameter, scaffold porosity, and scaffold coherency, by which they mean how much the fibers align with each other. The microfibrous gelatin scaffolding mimics the extracellular matrix in which muscle cells grow in living animals. Once the gelatin scaffolds are produced, they are seeded with either bovine aortic smooth muscle cells or rabbit skeletal myoblast cells. No idea why those two were chosen.

"The researchers used mechanical testing to compare the texture of their lab-grown meat to real rabbit, bacon, beef tenderloin, prosciutto, and other meat products." "When we analyzed the microstructure and texture, we found that, although the cultured and natural products had comparable texture, natural meat contained more muscle fibers, meaning they were more mature. Muscle and fat cell maturation in vitro are still a really big challenge that will take a combination of advanced stem cell sources, serum-free culture media formulations, edible scaffolds such as ours, as well as advances in bioreactor culture methods to overcome."
Thumbnail Using deep learning to design a 'super compressible' material. The system uses a less commonly used method called Bayesian machine learning. The researcher (Miguel Bessa, Assistant Professor in Materials Science and Engineering at Delft University of Technology) thought probabilistic techniques were the way to go when analyzing or designing structure-dominated materials because they deal with uncertainties, which he categorizes as "epistemic" and "aleatoric". Normal deep learning methods are non-probabilistic.

"Epistemic or model uncertainties affect how certain we are of the model predictions (this uncertainty tends to decrease as more data is used for training). Aleatoric uncertainties arise when data is gathered from noisy observations (for example, when different material responses are observed due to uncontrollable manufacturing imperfections)."

Structure-dominated materials "are often strongly sensitive to manufacturing imperfections because they obtain their unprecedented properties by exploring complex geometries, slender structures and/or high-contrast base material properties."
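The epistemic/aleatoric split in the quote above can be shown with simple arithmetic. This is a minimal numeric illustration (not Bessa's actual framework), using Bayesian estimation of a single mean with known observation noise; all numbers are invented:

```python
# Predictive variance = epistemic part + aleatoric part.
# Epistemic uncertainty shrinks as 1/n with more training data;
# aleatoric uncertainty is a noise floor that never shrinks.

NOISE_VAR = 4.0  # aleatoric: irreducible observation-noise variance

def predictive_variance(n_obs):
    """Total predictive variance after n_obs observations."""
    epistemic = NOISE_VAR / n_obs  # uncertainty about the true mean
    aleatoric = NOISE_VAR          # noise in any future observation
    return epistemic + aleatoric

for n in (1, 10, 1000):
    print(n, predictive_variance(n))
```

However much data you collect, the prediction never gets more certain than the aleatoric floor -- which is exactly why manufacturing imperfections matter for structure-dominated materials.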
Thumbnail Robot joints invented by the Korea University of Technology and Education (Koreatech) about two years ago, but which I hadn't seen until today. I've never seen robot joints like these before.
Thumbnail LaMa is another new simultaneous localization and mapping (SLAM) system for robots. It looks less powerful than the system I posted a few days ago (Kimera), which looked like state-of-the-art to me, but LaMa can run on far less powerful hardware -- they say "The minimum viable computer to run our localization and SLAM solutions is a Raspberry Pi 3 Model B+." It can run with or without Robot Operating System (ROS), and it offers two algorithms: sparse-dense mapping and a particle filter.
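To give a feel for the particle-filter half, here is a toy 1-D localization sketch (this is not LaMa's code; the corridor, sensor model, and noise levels are all invented). A cloud of particles is moved, weighted by agreement with a range sensor, and resampled:

```python
import random

# Toy 1-D particle-filter localization: a robot moves along a corridor
# toward a wall at a known position, with a noisy range sensor.

WALL = 10.0          # known map: a wall at x = 10
TRUE_X = 2.0         # robot's actual (unknown to the filter) position

def sense(x):
    return WALL - x + random.gauss(0, 0.1)   # noisy range reading

def step(particles, move, reading):
    # 1. Motion update: shift every particle, with motion noise
    particles = [p + move + random.gauss(0, 0.05) for p in particles]
    # 2. Weight each particle by agreement with the sensor reading
    weights = [1.0 / (1e-6 + abs((WALL - p) - reading)) for p in particles]
    # 3. Resample in proportion to weight
    return random.choices(particles, weights, k=len(particles))

random.seed(1)
particles = [random.uniform(0, 10) for _ in range(500)]
x = TRUE_X
for _ in range(20):
    x += 0.3                       # robot moves 0.3 each step
    particles = step(particles, 0.3, sense(x))
estimate = sum(particles) / len(particles)
print(round(estimate, 2), round(x, 2))   # estimate should track x
```

Particle filters suit a Raspberry Pi-class computer because the cost scales linearly with the particle count, which you can dial down to fit the hardware.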
Thumbnail "Foodvisor tries to evaluate the distance between your plate and your phone using camera autofocus data. It then calculates the area of each food item. The company then tries to extrapolate the volume of each item depending on the type of food."

"If Foodvisor got something wrong, you can manually correct it before you log your meal. Many people give up on nutrition trackers because it's too demanding. Foodvisor's technology is all about making the data entry process as seamless as possible."

"After that, you get a list of nutrition facts about what you just ate -- calories, proteins, carbs, fats, fibers, etc."
Thumbnail Google is rolling out BERT (Bidirectional Encoder Representations from Transformers) for search. So you see, all those posts I did about BERT had a point: there was something to this technology.

"Pandu Nayak, Google's vice president of search, gave another example at a press event yesterday, using the query 'How old was Taylor Swift when Kanye went on stage?' Before BERT, Google surfaced videos of the 2009 event during which the rapper interrupted the pop star's acceptance speech at the MTV Video Music Awards. After BERT, Google presents as its first result a snippet from a BBC article, which states: 'A 19-year-old Swift had just defeated Beyoncé to win Best Female Video for her country-pop teen anthem You Belong With Me.' Google's search returns automatically highlighted '19-year-old' for emphasis."
Thumbnail "For seven years, AI researchers have been struggling with an unusual challenge: shooting cartoon birds at cartoon pigs." "It's trickier than it looks. One of the paper's authors, Ekaterina Nikonova, currently a PhD candidate at the Australian National University, tells me that in chess, for example, there's a much smaller number of choices on every turn -- and the outcome is knowable in advance, making it easier to plan ahead. The same thing is true for Go -- but the cartoon worlds of Angry Birds are far less predictable."

"So Nikonova teamed up with a researcher on another continent -- Jakub Gemrot, a lecturer on game development in the Czech Republic at Charles University in Prague." "The two intrepid researchers describe how they carefully represented the state of the playing field and the available actions -- and then incorporated a system of rewards, based on the scores achieved."

"Then they applied the tried-and-true tactic of deep reinforcement learning, using an architecture based on Google's DeepMind Deep Q-network, which had achieved some notoriety for its use in experiments with several Atari games."
Thumbnail A course in artificial intelligence is being offered in a prison in Turku, a city in the south of Finland. "The course was originally designed at the University of Helsinki as a more accessible version of an Introduction to AI curriculum for computer science students."

"The Finnish government embraced the scheme as a way of supporting inmates reintegrating into a digital-first employment market once they left prison."
Thumbnail Machine learning for scent. "We leverage graph neural networks (GNNs), a kind of deep neural network designed to operate on graphs as input, to directly predict the odor descriptors for individual molecules, without using any handcrafted rules."

"Since molecules are analogous to graphs, with atoms forming the vertices and bonds forming the edges, GNNs are the natural model of choice for their understanding."

They describe a process for transforming the graph into a fixed-length vector, which is then fed into a fully connected deep neural network. The output of this neural network is labels like "fruity", "citrusy", "creamy", "sweet", "baked", "spicy", "vanilla", "clean", "alcoholic", "beefy", "chocolaty", etc.
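The graph-to-vector step can be sketched as one round of neighbor averaging ("message passing") followed by sum pooling, which yields the same output length for any graph size. This is a bare-bones illustration, not Google's model; the toy "molecule" and its 2-D atom features are invented:

```python
# Minimal GNN-style readout: message passing, then sum pooling.

def gnn_readout(features, edges):
    """features: {node: [f0, f1]}, edges: list of (u, v) bonds."""
    neighbors = {n: [] for n in features}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    # Message passing: each node averages itself with its neighbors
    updated = {}
    for n, feat in features.items():
        group = [feat] + [features[m] for m in neighbors[n]]
        updated[n] = [sum(col) / len(group) for col in zip(*group)]
    # Sum pooling: collapse all nodes to one fixed-length vector
    return [sum(col) for col in zip(*updated.values())]

# A 3-atom toy "molecule": two bonds, per-atom features [charge, mass]
vec = gnn_readout({0: [1.0, 12.0], 1: [0.0, 1.0], 2: [0.0, 16.0]},
                  [(0, 1), (0, 2)])
print(vec)  # always length 2, no matter how many atoms
```

The fixed-length output is what lets the downstream fully connected network accept molecules of any size.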

To evaluate it, they used a statistical technique called AUROC on a dataset from something called the DREAM Olfaction Prediction Challenge. It beat out random forests, which were the previous best technique.
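For the curious, AUROC has a direct pairwise definition: the probability that a randomly chosen positive example is scored above a randomly chosen negative one (ties count half). The scores and labels below are invented to show the computation:

```python
# AUROC from its pairwise definition (fine for small datasets;
# production code would use a rank-based formula instead).

def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# e.g. does a model's "fruity" score rank fruity molecules on top?
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auroc(labels, scores))
```

A score of 0.5 means the ranking is no better than chance; 1.0 means every positive outranks every negative.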
Thumbnail Mark Zuckerberg testified before the House Financial Services Committee about the Libra cryptocurrency. I haven't posted about Libra here because I haven't had time to finish reading the white paper. My impression so far was that it had a better chance of succeeding than most cryptocurrencies because it had the backing of many established financial institutions, but my understanding is that many of them have since pulled out. Anyway, questioning about Libra begins at 28:14 in the video.
Thumbnail Facebook has released "the most comprehensive and modular open source platform for creating AI-based reasoning systems."

"One of the most important lessons we learned from using Horizon over the past year was that, though it provided resources for training production-ready reinforcement learning models, and then improving them during deployment, the platform's core library was more applicable to in-development models than existing ones. So we built ReAgent as a small C++ library that can be embedded in any application. And where Horizon focused on models that already had enough data to start improving through trial and error, ReAgent better addresses the difficulty of creating new systems, with included models that assist in beginning the process of gathering relevant learning data."

"Previously, the open source reinforcement learning model libraries available to researchers typically required them to fill in substantial gaps before evaluating and deploying their systems. Researchers and engineers have also had access to various decision services that offer extensive functionality, but these work more as services than toolkits, making their resources difficult to integrate into existing models or projects. ReAgent strikes a balance between these offerings, with modular tools that can be used for complete, end-to-end development of a new model, or to evaluate or optimize existing systems, avoiding the need to repeat substantial work with each new evaluation or implementation."
Thumbnail I haven't had time to go through the 'quantum supremacy' paper that everyone's talking about but thought I'd pass along this commentary by Scott Aaronson. Issues he addresses: IBM saying that they could simulate the Google experiment in 2.5 days, rather than the 10,000 years that Google had estimated, direct versus indirect verification, the asymptotic hardness of spoofing Google's benchmark, why use linear cross-entropy, controlled-Z versus iSWAP gates, and Gil Kalai's claim that the Google experiment must've been done wrong.

IBM argues that "by commandeering the full attention of Summit at Oak Ridge National Lab, the most powerful supercomputer that currently exists on Earth -- one that fills the area of two basketball courts, and that (crucially) has 250 petabytes of hard disk space -- one could just barely store the entire quantum state vector of Google's 53-qubit Sycamore chip in hard disk. And once one had done that, one could simulate the chip in ~2.5 days, more-or-less just by updating the entire state vector by brute force, rather than the 10,000 years that Google had estimated on the basis of my and Lijie Chen's 'Schrödinger-Feynman algorithm' (which can get by with less memory)."

"I don't know why the Google team didn't consider how such near-astronomical hard disk space would change their calculations, probably they wish they had."

"Neither party here -- certainly not IBM -- denies that the top-supercomputers-on-the-planet-level difficulty of classically simulating Google's 53-qubit programmable chip really is coming from the exponential character of the quantum states in that chip, and nothing else."
Thumbnail "Is building a telescope in space worth it? From a risk-reduction perspective the answer is a definite yes, says Nick Siegler, the chief technologist of NASA's exoplanet exploration program and a coauthor of the study."

"In the NASA study, Siegler and his colleagues explored the hypothetical assembly of a 20-meter telescope in space. About three times the size of JWST and twice the size of the Gran Telescopio Canarias, the largest optical telescope on Earth, this imaginary instrument could be used to look for exoplanets, which means it has to be incredibly stable and precise. According to Siegler, this was the 'hardest case possible.'"
Thumbnail Elon Musk Deepfake: 2021 A SpaceX Odyssey.
Thumbnail Bike designed with artificial intelligence breaks world speed records. "On September 13th, pilot Ilona Peltier bicycled down a 200-meter track at 126.5 kilometers per hour, setting a new world record for women's cycling speed across all categories. The previous day, her teammate Fabian Canal set a new men's university world record with 136.7 km/h. The speeds were recorded at the 2019 edition of the World Human Powered Speed Challenge (WHPSC) in Nevada, USA, in which teams of university students take advantage of the flat desert roads to compete for the title of fastest human-powered vehicle."

"Peltier and Canal competed as part of the IUT Annecy team, led by Guillaume de France, and both rode a bike designed using a software application developed by CVLab spin-off Neural Concept. The pod-like vehicle, which houses two wheels and is pedaled by a reclining pilot, can travel at speeds faster than many motor-driven vehicles thanks to an optimized, aerodynamic shape proposed by Neural Concept's artificial intelligence (AI)-driven computer program."

But drivers passing by will be like, "What's that funky elongated egg doing in the street?" And the pilot can't see out directly -- you have to steer through a video camera.
Thumbnail What if devices were covered with artificial skin? You could tickle them.
Thumbnail "Polymers promise a more flexible artificial retina. Organic semiconductors can link up with brain cells to send and receive signals."

"Retinitis pigmentosa targets the rod and cone cells almost exclusively. The disease does minimal damage to the retina's many other neurons, which process signals from the rods and cones and convey the results to the optic nerve. So in principle, fixing vision is just a matter of going in through the very back of the eye, where the ravaged rods and cones originally formed a layer just 100 micrometers thick, and replacing them with a device that will generate electrical pulses in response to light. Pulses from various points on the device can then communicate with the retina's surviving neurons in a natural way."

"Because semiconductive polymers bend and flex like natural tissues, physicist Guglielmo Lanzani of the Italian Institute of Technology (IIT) in Milan says, 'they are biocompatible.' In tests in the lab and in animals, the polymer retina seems to coexist with them quite happily, with no adverse reactions at all."
Thumbnail An artificial leaf takes water and carbon dioxide, combined with energy from sunlight, and produces a mixture of hydrogen and carbon monoxide called 'syngas', which is combustible in engines, releasing the energy and recycling the carbon in a carbon-neutral cycle.

It uses halide perovskite, a material commonly used in solar cells, along with bismuth vanadate, a yellow solid, to absorb the sunlight, and the reaction is catalyzed with a cobalt porphyrin. Porphyrins are rings of pyrrole subunits connected by methine bridges.

"What we'd like to do next, instead of first making syngas and then converting it into liquid fuel, is to make the liquid fuel in one step from carbon dioxide and water."
Thumbnail Hiders and Seekers from Open AI on Two Minute Papers with Károly Zsolnai-Fehér. "The goal of the project was to pit two AI teams against each other, and hopefully, see some interesting emergent behavior." "Whenever one team discovers a new strategy, the other one has to adapt." "The results are magnificent, amusing, weird."
Thumbnail The New York Times is using a recommendation algorithm called contextual multi-armed bandits. Sounds much simpler than what others (e.g. Facebook, YouTube) are using. "The algorithm we used is based on a simple linear model that relates contextual information -- like the country or state a reader is in -- to some measure of engagement with each article, like click-through rate. When making new recommendations, it chooses articles more frequently if they have high expected engagement for the given context."
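In the same spirit as that description, here is a minimal contextual-bandit sketch: keep a per-context click-rate estimate for each article, recommend the current best most of the time, and explore occasionally. This is not the Times's actual algorithm; the contexts, articles, and click-through rates below are invented:

```python
import random

# Epsilon-greedy contextual bandit over (reader country, article).

EPS = 0.1
TRUE_CTR = {("US", "politics"): 0.10, ("US", "recipes"): 0.05,
            ("FR", "politics"): 0.02, ("FR", "recipes"): 0.08}
ARTICLES = ["politics", "recipes"]

clicks = {k: 0 for k in TRUE_CTR}
shows = {k: 0 for k in TRUE_CTR}

def recommend(ctx):
    if random.random() < EPS:     # explore: try a random article
        return random.choice(ARTICLES)
    # exploit: highest estimated click-through rate for this context
    return max(ARTICLES,
               key=lambda a: clicks[(ctx, a)] / max(1, shows[(ctx, a)]))

random.seed(3)
for _ in range(20_000):           # simulated reader visits
    ctx = random.choice(["US", "FR"])
    art = recommend(ctx)
    shows[(ctx, art)] += 1
    clicks[(ctx, art)] += random.random() < TRUE_CTR[(ctx, art)]

best = {ctx: max(ARTICLES,
                 key=lambda a: clicks[(ctx, a)] / max(1, shows[(ctx, a)]))
        for ctx in ("US", "FR")}
print(best)
```

The "contextual" part is simply that the click-rate estimates are kept per context, so US and French readers end up with different recommendations from the same algorithm.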
Thumbnail Could a robot outdo the runners at the 2020 Tokyo Olympics? Contenders include MIT's Cheetah, Boston Dynamics' Petman, Handle, and WildCat, Michigan Robotics' MABEL, the University of Cape Town's Baleka, and the Institute for Human & Machine Cognition (IHMC)'s Planar Elliptical Runner (PER) and HexRunner.
Thumbnail Darknet: open source neural networks in C. I think I'll stick with Python and TensorFlow (and PyTorch once I've learned it) for the time being, but it's good to know this option is out there. It can use GPUs through CUDA and can run state-of-the-art neural networks like YOLO ("you only look once", a real-time object detection system).
Thumbnail Trump vs RoboTrump. Take the quiz.
Thumbnail "When AngularJS framework was released, it sky rocketed to being the most popular front-end development framework. But after a few years, React, a competing front-end framework open sourced by Facebook, quickly gained traction and is now the most popular one."

"A few years later, I began working on deep learning projects and began using the most popular framework at the time, Tensorflow 1.0, open sourced by Google. Google announced early 2019 the release of Tensorflow 2.0, which is a major, non-backward compatible rewrite of TF1.0. One of the most important changes is that TF2.0 default mode is now « eager execution », which basically means you write your neural networks as functions and not as graphs. This pattern (procedural rather than declarative) is widely considered to be more intuitive (at least for python people who are numerous in deep learning community) and closely follows PyTorch, a competing deep learning framework open sourced by Facebook, that is progressively overtaking Tensorflow as the most popular framework."

"Once again, the rewrite is not backward compatible, once again it aligns on the usage pattern of a competing framework, and once again this competing framework comes from Facebook."
Thumbnail Benchmarking transformers: PyTorch and TensorFlow. Transformers are a type of neural network architecture used for natural language processing tasks like text classification, information extraction, question answering, and text generation.

Basically, the result from the benchmarks is that PyTorch on a GPU with TorchScript is about as fast as TensorFlow on a GPU with XLA. TorchScript was designed to take models created in PyTorch and remove their dependency on Python so they can be moved into production. XLA, which stands for Accelerated Linear Algebra, is in contrast an actual compiler. TorchScript improved the performance of some models but not others, while XLA increased the performance of all models.
Thumbnail Kimera is a C++ library for robots to map their environment and figure out where the robot is using only camera images and inertial data. (This is called SLAM, "simultaneous localization and mapping".) It works on robots that use Robot Operating System (ROS). The input is camera and inertial data, and the output is trajectory estimates, odometry information, loop closures (which refers to when the robot has looped back to a location it has been in before and needs to update its beliefs about that location), and three meshes describing the 3D environment the robot is in. The first mesh is updated very fast (less than 20 milliseconds) so the robot can avoid obstacles. The second is updated very slowly but maps the environment out most extensively (not just the area in the robot's immediate vicinity) and has semantic labels for everything (in other words, it differentiates between walls, floors, furniture such as tables, etc.), which is useful for long-term planning. The third is in between the other two: generated semi-quickly (around 1 second), covering the local area in the robot's immediate vicinity, with semantic labels. It does this by combining a lot of different algorithms, too many to list here. The only place neural networks are used is for the semantic labeling. This enables the system to run fast without GPUs. It also doesn't require depth (RGB-D) cameras.
Thumbnail "The researchers invented a 'visual deprojection' model that uses a neural network to 'learn' patterns that match low-dimensional projections to their original high-dimensional images and videos. Given new projections, the model uses what it's learned to recreate all the original data from a projection."

"In experiments, the model synthesized accurate video frames showing people walking, by extracting information from single, one-dimensional lines similar to those produced by corner cameras. The model also recovered video frames from single, motion-blurred projections of digits moving around a screen."
Thumbnail Subsystems that can be plugged together by robots to build large-scale structures. "The underlying vision is that just as the most complex of images can be reproduced by using an array of pixels on a screen, virtually any physical object can be recreated as an array of smaller three-dimensional pieces, or voxels, which can themselves be made up of simple struts and nodes. The team has shown that these simple components can be arranged to distribute loads efficiently; they are largely made up of open space so that the overall weight of the structure is minimized. The units can be picked up and placed in position next to one another by the simple assemblers, and then fastened together using latching systems built into each voxel."

"The robots themselves resemble a small arm, with two long segments that are hinged in the middle, and devices for clamping onto the voxel structures on each end. The simple devices move around like inchworms, advancing along a row of voxels by repeatedly opening and closing their V-shaped bodies to move from one to the next."

The hope is the system can be used to produce large-scale structures, from airplanes to bridges to buildings.
Thumbnail Coanda-effect hovercraft. Instead of blowing air underneath, it blows air over the outside. It works, but before someone builds a life-size version and offers you a ride on it, you might consider its tendency to flip over.
Thumbnail "According to a new study from Oracle and Future Workplace, a clear majority of Americans (64 percent) would trust a robot more than a human manager, and 32 percent think that a machine will eventually replace their boss."

"What can robots actually do better than living, breathing managers? Here's the full list: Provide unbiased information, maintain work schedules, problem-solve, manage a budget, answer confidential questions, and evaluate team performance. However, respondents didn't think machines were better than human managers at 'understanding feelings' and professional coaching."
Thumbnail "Ono Food Co. announced this week that the first mobile restaurant powered entirely by robotic technology, called Ono Blends, will open later this month in Venice, California. Not coincidentally, the company was founded by two people who know quite a bit about robotics and automation. CEO Stephen Klein came from robotic coffee bar Café X in San Francisco, and previously worked at Instacart. CTO Daniel Fukuba directed the engineering team at a firm that provided automation for Zume, SpaceX, Tesla, Apple and more."

"Every step of Ono Blends' assembly process is monitored by hundreds of sensors to ensure no spillage, cross-contamination or inconsistencies. He adds that Ono's technology creates 60 blends per hour, versus the industry standard of about 20, and uses about 28 times less water because of its cleaning system."