| "Not all reinforcement learning agents are created equal, and it is not obvious what type of reinforcement learning agent should be deployed in specific scenarios. We know that a particular self-learning RL agent can master Go through self-play, but will it be any good at driving a car? Aside from tedious trial-and-error, there are no practical answers to this question.
Or maybe there are. DeepMind has just released a collection of "experiments" that you can do on your reinforcement learning agent.
For example, there is an experiment called "memory length" that "is designed to test the number of sequential steps an agent can remember a single bit."
Another experiment, called "deep sea", tests the agent's depth of exploration of an NxN grid. The idea is that there is a cost to "exploration", yet the agent must explore its environment if it is going to learn.
The complete list of experiments (though I'm skipping explanations) is: bandit, bandit noise, bandit scale, cartpole, cartpole noise, cartpole scale, cartpole swingup, catch, catch noise, catch scale, deep sea, deep sea stochastic, discounting chain, memory len, memory size, mnist, mnist noise, mnist scale, mountain car, mountain car noise, mountain car scale, umbrella distract, and umbrella length.
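The "memory length" idea is easy to sketch as a toy environment (this is an illustration of the concept, not DeepMind's actual bsuite code): the agent sees a random bit on the first step, gets blank observations for the next N-1 steps, and is rewarded only if its final action matches the bit it saw at the start.

```python
import random

class MemoryLenEnv:
    """Toy 'memory length' experiment: remember one bit for N steps."""

    def __init__(self, memory_length, seed=0):
        self.n = memory_length
        self.rng = random.Random(seed)

    def reset(self):
        self.bit = self.rng.randint(0, 1)
        self.t = 0
        return self.bit          # the observation on step 0 reveals the bit

    def step(self, action):
        self.t += 1
        if self.t < self.n:
            return 0, 0.0, False   # blank observation, no reward yet
        # final step: reward 1 only if the recalled action equals the bit
        return 0, float(action == self.bit), True

# A perfect-memory agent always scores 1.0; a guessing agent averages 0.5.
env = MemoryLenEnv(memory_length=10)
bit = env.reset()
done, reward = False, 0.0
while not done:
    _, reward, done = env.step(bit)   # this agent "remembers" perfectly
```

Sweeping N upward until the agent's score collapses to chance gives you the number the experiment is named after.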
| Not Jordan Peterson. So with this website, you can type in any text, and it will make Jordan Peterson say it.
"The technology used to generate audio on this site is a combination of two neural network models that were trained using audio data of Dr. Peterson speaking, along with the transcript of his speech."
"The first model, developed at Google, is called Tacotron 2. It takes as input the text that you type and produces what is known as an audio spectrogram, which represents the amplitudes of the frequencies in an audio signal at each moment in time. The model is trained on text/spectrogram pairs, where the spectrograms are extracted from the source audio data using a Fourier transform."
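The spectrogram extraction the quote describes rests on the short-time Fourier transform: slide a window along the waveform and take the FFT magnitude of each frame. A minimal NumPy sketch (window length and hop size here are arbitrary illustrative choices):

```python
import numpy as np

def spectrogram(waveform, frame_len=256, hop=128):
    """Magnitude spectrogram: FFT magnitude of overlapping windowed
    frames -- amplitude per frequency, at each moment in time."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(waveform) - frame_len) // hop
    frames = np.stack([waveform[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))   # shape: (time, frequency)

# A pure 440 Hz tone sampled at 8 kHz concentrates energy near one bin.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
```

Tacotron 2 is trained to predict arrays like `spec` from text; the vocoder's job is the harder inverse step of turning such an array back into a waveform.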
"The second model, developed at NVIDIA, is called Waveglow. It acts as a vocoder, taking in the spectrogram output of Tacotron 2 and producing a full audio waveform, which is what gets encoded into an audio file you can then listen to. The model is trained on spectrogram/waveform pairs of short segments of speech."
To give it a whirl, I punched in, "Never gonna give you up. Never gonna let you down. Never gonna run around and desert you. Never gonna make you cry. Never gonna say goodbye. Never gonna tell a lie and hurt you."
It made me wait 20 minutes for the result, evidently because the site was under heavy load. But I put the audio I got back on my web server, so you can follow the link below to hear it.
|Raisim is a physics engine for robotics and AI research from ETH Zürich that does rigid-body dynamics simulation. "It features an efficient implementation of recursive algorithms for articulated system dynamics (Recursive Newton-Euler and Composite Rigid Body Algorithm)."|
| "New US Air Force kit that can turn a conventional aircraft into a robotic one has completed its maiden flight. Developed by the Air Force Research Laboratory (AFRL) and DZYNE Technologies Incorporated as part of the Robotic Pilot Unmanned Conversion Program, the ROBOpilot made its first two-hour flight on August 9 at the Dugway Proving Ground in Utah after being installed in a 1968 Cessna 206 small aircraft."
ROBOpilot works by "replacing the pilot seat (and pilot) with a kit consisting of all the actuators, electronics, cameras, and power systems needed to fly a conventional aircraft, plus a robotic arm for the manual tasks. In this way, ROBOpilot can operate the yoke, rudder, brakes, throttle, and switches while reading the dashboard gauges and displays like a human pilot."
|Multiply Labs is making a robot that makes pharmaceuticals with multiple customized doses.|
|The DeepMind podcast: Coming soon.|
| A month ago someone used OpenAI's GPT-2 to create a fake Reddit where the AI wrote all the posts and all the comments. It was trained on a massive corpus of real Reddit posts. Somewhere in the middle of all that, the AI asked itself, "Do you think AI will be the downfall of humanity or the savior?", and wrote all the answers.
"The downfall of humanity. The salvation of humanity. The rise of the (presumably) benevolent AI in our place." "The downfall of humanity could be considered an existential crisis if it truly did not learn our 'codes of conduct'."
|AI-generated personal finance blog.|
|Put your address into this website and it'll tell you what Native American tribe's territory you're living on. It says I'm living on Apache territory.|
|Robo-shorts that make walking and running easier (they claim). They can distinguish between walking and running by detecting when your center of mass changes position relative to your stride. From there, motors assist your glute muscles.|
| Interview with Paola Arlotta on brain development. We mostly study mice brains, not human brains. A human brain is built in human time -- 9 months of gestation plus 20 years to become brains that can have this type of conversation. A mouse brain takes 20 days. If you put mouse brain stem cells in a dish, they form faster than human brain stem cells. The brain starts as a neural tube. The stem cells start out all the same, but over time become heterogeneous, and then diverge further into non-stem cells and the actual cells of the brain. Neurons are made first and then glial cells. The tube curls and one end becomes expanded for the brain and mechanical forces shape the brain as well. We don't know the entirety of how the genetic code controls brain development, but it is very well controlled. We only know how some parts of it work, like the development of some cell types. New cell types are developed all the way until birth. Before birth, our cells have no myelin, and are myelinated after birth. This continues until we're 25-30. Some of our most recently evolved cells, though, that give us a lot of the cognition that we have and mice don't, have very little myelin. Less myelin may allow for more flexibility of functions.
Nature vs nurture: always both. The genes incorporate your 20 plus years of interacting with the environment into your brain. If you are born without vision, your visual cortex will develop to do something different. Her kids have very different personalities, because they have different genetics, even if they have the same parents, but also have amazing plasticity.
Organoids: organoids are not brains. They are cellular systems developed from stem cells in a culture dish that mimic some aspect of the development of the brain. They are 4-5 mm in size. They are our best way of studying the development of the brain. You can take cells from an autistic person and study brain cell development in an autistic person. Organoids are very different from each other, unlike brains. When we are all born our brains look very similar. Different parts of the brain have different cells and researchers have been able to make organoids that mimic some aspect of development of parts of the brain. The cerebral cortex is the part that really makes us human. If you grow the organoids for a long enough time, many cells of the cortex appear in culture. The astrocytes also appear. Astrocytes are support cells that also guide the development of synapses.
Neurodevelopmental disorders can come from cells that don't work properly or cells that are not "born" at all. Something could go wrong in the cell maturation process. We can compare the gene expression of a single cell in a normal person and a person with a neurodevelopmental disease. You can make an organoid of a brain from a specific person with a neurodegenerative disease using their stem cells and use it for screening drugs to find what would help that person.
Stem cell biology (how to turn a skin cell into an embryonic stem cell that can become a brain or an organ) and technologies for studying the properties of single cells, millions of cells at a time, are growing exponentially.
|McGyver-ing robot. Robot who needs a squeegee and spatula but doesn't have them, but has other parts, can figure out how to make them by sticking the other parts together.|
|Deep learning for semantic data type detection. They call types like "string", "integer", and "boolean" "atomic" data types, but what they want to identify are "semantic" data types, like "name", "birthdate", "weight", "rank", "location", "elevation", "grade", "product", "album", etc. -- "semantic" data types, in their terminology, reflect the meaning of the data. Some "semantic" types, like ISBNs and credit card numbers, can be identified by mathematical formulas. For everything else, you need deep learning.|
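As an example of the "mathematical formula" route: credit card numbers carry a Luhn checksum, so the "credit card number" semantic type can be verified with a few lines and no learning at all (a standard algorithm, sketched here for illustration):

```python
def luhn_valid(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 from any doubled result over 9, and require the
    total to be divisible by 10."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# "79927398713" is the classic valid Luhn test number.
```

Types like "name" or "product" have no such closed-form test, which is where their deep-learning model comes in.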
| "Exploring DNA with deep learning." Interview (written) with Lex Flagel who just published "The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference". People have been using deep learning in genetics research, but generally what they've been doing is combining a whole bunch of traditional statistics and running them through a classifier. Using one statistic on its own isn't good enough because there can sometimes be other things that cause that statistic to change other than the thing you're looking for. For example there's a statistic called Tajima's D that is supposed to detect population bottlenecks. If the DNA indicates that there was a recent bottleneck in a population, Tajima's D is supposed to go negative. The trouble is it can be fooled by positive selection instead of population size changes. So one statistic by itself isn't so good, but if you can calculate a whole bunch of them, and then feed those into a neural network, maybe it can tell you what's going on with the population.
What Lex Flagel did, though, was convert the genetic sequence alignment data into images and then feed the images into a convolutional neural network, the type of neural network designed for images. Doing it this way, you don't need to pre-calculate any statistics -- the neural network learns them for itself.
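The alignment-as-image idea is simple to sketch: code each site in each haplotype as 0 (ancestral allele) or 1 (derived allele), giving a binary matrix that a CNN can consume like a one-channel image. This toy encoding is for illustration; it is not Flagel's actual pipeline.

```python
import numpy as np

def alignment_to_image(haplotypes):
    """Encode a haplotype alignment as a binary 'image':
    rows = individuals, columns = segregating sites,
    1 = derived allele ('T' here), 0 = ancestral allele ('A')."""
    return np.array([[1 if allele == 'T' else 0 for allele in hap]
                     for hap in haplotypes], dtype=np.float32)

# Four haplotypes over five sites.
img = alignment_to_image(["AATAT", "AAAAT", "TATAT", "AAAAA"])
# img has shape (4, 5) and can be fed to a CNN as a single-channel image.
```

The convolution filters then learn whatever spatial patterns in the matrix are informative, playing the role the hand-crafted summary statistics used to.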
But as he explains in the interview, "However, there's a catch. Though neural networks can automate feature extraction, they are poor at giving explanations of how they did it. For example, it's pretty easy to train a neural network to distinguish pictures of cats from pictures of dogs. But it's really hard to get them to tell you why they think a certain picture is a cat, or what generally distinguishes cats from dogs. Our method suffers from this problem as well. So the neural networks we built can make some pretty stunning inferences, which is great, but they can't teach you new theory or lead you to new equations. In contrast, because many classical methods in population genetics are derived from theory, they can explain themselves in the terms of that theory, which is really useful for understanding and learning."
|Whole genome sequencing from DNA from 2,308 people from 493 families found 69 genes that increase the risk of autism spectrum disorder. The genes found are primarily related to ion transport and the microtubule cytoskeleton in the brain. However, genes in people with no family history of autism were related to transcriptional and chromatin regulation.|
|A huge genome-wide association study on asthma with 37,846 British individuals with asthma, 9,433 of whom had asthma as children, and a control group of 318,237 people without asthma found that 61 independent genes are related to asthma, 56 of which are related to childhood-onset asthma and 19 of which are related to adult-onset asthma. "Childhood-onset genes were highly expressed in epithelial cells (skin). Both childhood-onset and adult-onset asthma genes were highly expressed in blood (immune) cells."|
| "A genome-wide association study (GWAS) and bioinformatic analysis of more than 165,000 US veterans confirms a genetic vulnerability to post-traumatic stress disorder (PTSD), specifically noting abnormalities in stress hormone response and/or functioning of specific brain regions."
"In the European American group, the scientists found eight distinct genetic regions with strong associations between PTSD and how the brain responds to stress. It highlighted the role of one specific kind of brain cell: striatal medium spiny neurons, which are prevalent in a region of the brain responsible for, among other things, motivation, reward, reinforcement and aversion."
| "The loss of a single gene two to three million years ago in our ancestors may have resulted in a heightened risk of cardiovascular disease in all humans as a species, while also setting up a further risk for red meat-eating humans."
"Naturally occurring coronary heart attacks due to atherosclerosis are virtually non-existent in other mammals, including closely related chimpanzees in captivity which share human-like risk factors, such as high blood lipids, hypertension and physical inactivity."
"Mice modified to be deficient (like humans) in a sialic acid sugar molecule called Neu5Gc showed a significant increase in atherogenesis compared to control mice, who retain the CMAH gene that produces Neu5Gc."
"Human-like elimination of CMAH and Neu5Gc in mice caused an almost 2-fold increase in severity of atherosclerosis compared to unmodified mice."
|Whole-genome sequencing has emerged as the most effective way to trace salmonella outbreaks back to their source (through our increasingly complex supply chain), and the most effective way to prevent outbreaks going forward.|
|Humans are capable of echolocation.|
|"Failure to launch" in the US comparable to hikikomori in Japan?|
|Clip from a talk by Andrej Karpathy. Unfortunately I couldn't find video of the original talk -- always prefer to track material down to the original source. Andrej Karpathy is the person in charge of Tesla's AI for driving cars. He talks about the challenge of getting cars to fuse information from many sensors over time looking for many things.|
|The case for re-writing history. The part of history before recorded history. (Warning: watching this video entails risk of becoming a conspiracy crackpot.)|
| "In the first study, the researchers asked 300 people if they would rather see a colleague replaced by a human or a robot -- 62 percent of respondents chose the human. The researchers also asked the same group how they would feel if it was their own job at stake -- this time, only 37 percent chose the human option."
"In a second study, the researchers asked 251 people to rate how much negativity they felt about losing a job to a robot versus to another person. They report that respondents generally showed more negativity toward robots replacing colleagues' jobs than if they were losing their own. The people in this group expressed that they felt less threatened by the thought of losing a job to a robot versus another person."
"In a third study, the researchers asked 296 people who worked in manufacturing if they thought they were going to lose their jobs someday due to being replaced by some form of technology. They report that roughly a third of respondents felt like it was a real possibility."
|$949 hexabot. Each leg has 3 motors and 3 degrees of freedom. Head can spin. 720p camera, 3-axis accelerometer, distance measuring sensor, infrared transmitter. Has dual-core 1 GHz ARM processor, USB, audio, I2C, ADC, and GPIO ports as well as Wi-fi. Battery lasts 45-180 minutes. Comes with SDK, 3D simulator, and mobile app.|
| "CARLA has been developed from the ground up to support development, training, and validation of autonomous driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites, environmental conditions, full control of all static and dynamic actors, maps generation and much more."
But what does CARLA stand for?
|Beard trimming robot.|
|Vibration-minimizing system for robots. This is from Disney Research and will make it easier for Disney to make animatronic characters. The way it works is, instead of making animatronic characters with very rigid materials to minimize vibration, you can make them with rigid and deformable materials, as long as you can get the simulator to characterize their movements. The movement optimizations are all worked out in the simulator, not with AI, but with tons of regular math, 3D vector calculus such as Jacobians.|
| "This startup built a treasure trove of crop data by putting AI in the hands of Indian farmers." "Tushar Kamble knew there was something wrong with his chilli plants when he noticed they were smaller than average, and the leaves were starting to curl." He "asked his neighboring farmers for advice one morning at the border between his 7 acres of land and theirs."
"Kamble, 38, got lots of different opinions about aphids and disease. Then he tried an app called Plantix and used it to take a photo of his chilli plant. The app cross-referenced it against a database of 50 different species using image recognition, a type of machine learning, and within two minutes he had a different answer: His chillis weren't getting enough water, and they'd benefit from a micronutrient spray, too."
"Within a few weeks, Kamble's chillies had grown to a decent size."
| Human-monkey chimeras have been made in China. The researchers are in the US and Spain (Salk Institute in the US and Murcia Catholic University in Spain), but the actual chimeras were made in China to avoid US and Spanish laws. "Researchers led by Spanish scientist Juan Carlos Izpisúa have created for the first time a human-monkey hybrid in a laboratory in China -- an important step towards using animals for human organ transplants, project collaborator Estrella Núñez confirmed to EL PAÍS." Juan Carlos Izpisúa is currently at the Salk Institute for Biological Studies in La Jolla, California.
"The team, made up of members of the Salk Institute in the United States and the Murcia Catholic University (UCAM) in Spain, genetically modified monkey embryos to deactivate genes that are essential to the formation of organs. The scientists then injected human stem cells, which are capable of creating any type of tissue, into the embryo. The product of this work is a monkey with human cells that has not been born, because researchers stopped the process."
The article goes on to describe human-pig and rat-mouse chimeras. The human-pig chimeras were largely unsuccessful (one human cell for every 100,000 pig cells), while the rat-mouse chimeras were much more successful (stem cells capable of generating organs), presumably because rats and mice are more closely related species than humans and pigs. That is why someone thought the logical next thing to try was a chimera pairing humans with a species more closely related to us. The article also comments on the relevant laws and on which ethical considerations are deemed acceptable. It does not, however, reveal anything about what happened in the human-monkey chimeras, other than hinting they were more successful than the human-pig ones.
|Helium shortage update.|
|Artificial neural networks don't learn the way the brain learns. The key algorithm governing learning in the brain is thought to be an algorithm called spike-timing dependent plasticity (STDP), in which the synapse is sensitive to when neurons on both sides of it (output and input neurons) fire at the same time. An algorithm to do the same thing in computers has shown itself to be a faster learner than regular neural networks, though it has only been compared with the most simple neural networks (single-layer neural networks called perceptrons) so far.|
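The STDP rule itself is simple to state: if the presynaptic neuron fires shortly before the postsynaptic one, strengthen the synapse; if it fires shortly after, weaken it, with the effect decaying exponentially in the timing gap. A minimal sketch (the amplitudes and time constant here are illustrative, not taken from the paper):

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change under spike-timing-dependent plasticity.
    Times are in milliseconds.
    dt > 0 (pre fires before post) -> potentiation (strengthen);
    dt <= 0 (pre fires after post) -> depression (weaken)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)     # causal pairing
    return -a_minus * math.exp(dt / tau)        # acausal pairing

# A pre-spike 5 ms before the post-spike strengthens the synapse;
# a pre-spike 5 ms after it weakens the synapse.
```

Notice there is no global loss or backpropagated error: each synapse updates from purely local timing information, which is exactly how this differs from standard artificial neural network training.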
| The brain processes information best when the networks of neurons in it are close to a critical point -- or so brain researchers thought based on theoretical models. A critical point is the point where a physical system switches from one mode to another, such as when a solid becomes a liquid. In the case of neurons, we're talking about a switch from one pattern of activity to a different one. It is possible for large areas of the brain to become active nearly simultaneously in an avalanche-like fashion, but this happens much less often than expected.
An analysis of 155 nerve cells in a macaque monkey suggests there is a second type of criticality in the brain that also leads to large-scale coordinated behavior, but it has escaped detection by electroencephalograms (EEGs) or local field potential (LFPs -- like EEGs but measure electrical potential in the extracellular space in brain tissue instead of from the scalp) because those instruments detect the additive effect of large numbers of neurons, and the new second type of criticality doesn't produce electrical potentials that add up in the way those instruments expect.
|Taking visual input from a camera and sending an image directly into the brain of a blind person. Very low-resolution now but likely will get better.|
| "When I was a baby, my grandma knit me an impressive range of little booties and blankets. I've been worried that I will utterly fail at my grandmotherly duties when I eventually have grandchildren because I am complete rubbish at knitting."
"But a team of computer scientists at MIT have seen my despair, and they want to help. They've developed a software called InverseKnit that will allow anyone to design a pattern on a computer (no coding skills required!) or upload a picture of an existing knitted item. That image can be sent to any existing 3D knitting machine to produce the final product. You might sketch out some hip fingerless gloves or a beanie with a unicorn on it, and a few minutes later, the garment will appear as if by magic. Or you might upload a picture of an infinity scarf, tweak the color on the computer, and presto! Your scarf is done."
| Treasure-trove of previously unknown ancient massive galaxies. "The light from these galaxies is very faint with long wavelengths invisible to our eyes and undetectable by Hubble. So we turned to the Atacama Large Millimeter/submillimeter Array (ALMA), which is ideal for viewing these kinds of things." "It took further data from the imaginatively named Very Large Telescope in Chile to really prove we were seeing ancient massive galaxies where none had been seen before."
They found 39 very massive galaxies with radio telescopes at a wavelength of 870 micrometres, much longer than visible light; these galaxies are completely invisible in the normal visible spectrum. Because their light took most of the universe's history to reach us, we are looking at galaxies that existed very early on, when massive, dusty galaxies with extremely high star-formation rates were the norm; they were probably progenitors of clusters of galaxies such as the Local Group our galaxy is part of today. It is speculated they could have massive dark matter halos, and that this type of galaxy might play a role in the distribution of dark matter in the universe.
|Most books published before 1964 are in the public domain and are now available online. Where to find time to read them?|
| "A novel dataset of more than 1,200 AI stumping questions" has been "generated by humans and computers working in conjunction."
"When humans write questions, they don't know what specific elements of their question are confusing to the computer. When computers write the questions, they either write formulaic, fill-in-the blank questions or make mistakes, sometimes generating nonsense."
"To develop their novel approach of humans and computers generating questions together, Jordan Boyd-Graber, associate professor of computer science at UMD and senior author of the paper, and his team created a computer interface that reveals what a computer is 'thinking' as a human writer types a question. The writer can then edit his or her question to exploit the computer's weaknesses."
"In the new interface, when a human author types a question, the computer's guesses appear on the screen ranked in order of correctness. And the words that led the computer to make its guesses are highlighted."
"For example, if the author writes 'What composer's Variations on a Theme by Haydn was inspired by Karl Ferdinand Pohl?' and the system correctly answers 'Johannes Brahms,' the interface highlights the words 'Ferdinand Pohl' to show that this phrase led it to the answer. Using that information, the author can edit the question to make it more difficult for the computer without altering the question's meaning. In this example, the author replaced the name of the man who inspired Brahms, 'Karl Ferdinand Pohl,' with a description of his job, 'the archivist of the Vienna Musikverein,' and the computer was unable to answer correctly. However, human experts could still easily answer the edited question correctly."
| The ultrafast rotation of a molecule has been filmed with precisely tuned pulses of laser light. "The resulting 'molecular movie' tracks one and a half revolutions of carbonyl sulphide -- a rod-shaped molecule consisting of one oxygen, one carbon and one sulphur atom -- taking place within 125 trillionths of a second, at a high temporal and spatial resolution." "The different stages of the molecule's periodic rotation repeat after about 82 picoseconds."
"In the realm of molecules, you normally need high-energy radiation with a wavelength of the order of the size of an atom in order to be able to see details. So Jochen Küpper's team took a different approach: they used two pulses of infrared laser light which were precisely tuned to each other and separated by 38 trillionths of a second (picoseconds), to set the carbonyl sulphide molecules spinning rapidly in unison (i.e. coherently). They then used a further laser pulse, having a longer wavelength, to determine the position of the molecules at intervals of around 0.2 trillionths of a second each." "Since this diagnostic laser pulse destroys the molecules, the experiment had to be restarted again for each snapshot."
"The peculiar features of quantum mechanics can be seen in several of the movie's many images, in which the molecule does not simply point in one direction, but in various different directions at the same time -- each with a different probability."
| "Symantec said it had seen three cases of seemingly deepfaked audio of different chief executives used to trick senior financial controllers into transferring cash."
"Corporate videos, earning calls, media appearances as well as conference keynotes and presentations would all be useful for fakers looking to build a model of someone's voice."
| Antarctic ice velocity map. "Constructed from a quarter century's worth of satellite data." "The map is 10 times more accurate than previous renditions, covering more than 80 percent of the continent."
"By utilizing the full potential of interferometric phase signals from satellite synthetic-aperture radars, we have achieved a quantum leap in the description of ice flow in Antarctica."
"To chart the movement of ice sheets across the surface of the enormous land mass, the researchers combined input from six satellite missions: the Canadian Space Agency's Radarsat-1 and Radarsat-2; the European Space Agency's Earth remote sensing satellites 1 and 2 and Envisat ASAR; and the Japan Aerospace Exploration Agency's ALOS PALSAR-1."
"The team was able to compose a map that resolves ice movement to a level of 20 centimeters (a little over half a foot) per year in speed and 5 degrees in annual flow direction for more than 70 percent of Antarctica."
|Doom Man upscaled by a neural network.|
| "Robots are solving banks' very expensive research problem." "As lawmakers in Brasilia debated a controversial pension overhaul for months, a robot more than 5,000 miles away in London kept a close eye on all 513 of them. The algorithm, designed by technology startup Arkera Inc., tracked their comments in Brazilian newspapers and government web pages each day to predict the likelihood the bill would pass."
"Weeks before the legislation cleared its biggest obstacle in July, the machine's data crunching allowed Arkera analysts to predict the result almost to the letter, giving hedge fund clients in New York and London the insight to buy the Brazilian real near eight-month lows in May."
| Using AI to give doctors a 48-hour head start on acute kidney injury. "Affecting up to one in five hospitalised patients in the UK and the US, the condition is notoriously difficult to spot, and deterioration can happen quickly. Experts believe that up to 30% of cases could be prevented if a doctor intervenes early enough."
"Working with the VA, the DeepMind team applied AI technology to a comprehensive de-identified electronic health record dataset collected from a network of over a hundred VA sites. The research shows that the AI could accurately predict AKI in patients up to 48 hours earlier than it is currently diagnosed. Importantly, the model correctly predicted 9 out of 10 patients whose condition deteriorated so severely that they then required dialysis. This could provide a window in the future for earlier preventative treatment and avoid the need for more invasive procedures like kidney dialysis."
"To address the 'black box' problem -- one of the key barriers for the implementation of AI in clinical practice -- the model also provides the clinical information that was most important in making its predictions of deteriorating kidney function. It also provides predicted future results for several relevant blood tests."
|How birds land is being studied with high-speed cameras and a sensor-packed perch. The birds studied are Pacific parrotlets. They do the same thing with their wings regardless of the perch -- only the foot and claws act differently depending on the surface. The claws can react to the surface on millisecond timescales if the claws discover the surface is slippery or has other irregularities. The researchers developed a mathematical model of how the grasping works, which they think could be used for aerial robots.|
| "Colleges must prepare students for an AI world." Op-ed by Ritu Agarwal. "Change is needed now. Progress in these technologies will continue to unfold exponentially. We simply must accelerate our response in higher education."
"First, every student should be required to take a course on AI/ML -- call it AI Literacy 101. The course will demystify AI, separating hype from reality."
"Second, while the benefits of diversity and ability to work in diverse teams have always been important, they become front and center in an AI/ML world."
"Third, we must prepare students to work in concert with machine intelligence -- whether in a substitution or supplemental mode."
|Neural network gradients turned into sound. When the system is learning normally you get a jumble of sounds, but when the learning rate is too high (for stochastic gradient descent) and the network diverges you get the same low note repeating, and an even lower repeating note if you get NaNs (the gradient has exploded to infinity in essence).|
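The mapping can be sketched like this: take each training step's gradient norm, turn healthy values into a varying pitch, and give pathological values (divergence, NaN) fixed low notes so they stand out by ear. This is an illustrative scheme of my own, not the original author's code; the threshold and frequencies are arbitrary.

```python
import math

def grad_to_pitch(grad_norm, base_hz=220.0, nan_hz=55.0, blowup_hz=110.0):
    """Map a gradient norm to a frequency in Hz.
    NaN/inf gradients -> one fixed very low note;
    exploding gradients -> another fixed low note;
    healthy gradients  -> a pitch that varies with the norm,
    so normal training sounds like a jumble and failure drones."""
    if math.isnan(grad_norm) or math.isinf(grad_norm):
        return nan_hz
    if grad_norm > 1e3:                 # crude divergence threshold
        return blowup_hz
    return base_hz * (1.0 + math.log1p(grad_norm))

pitches = [grad_to_pitch(g) for g in [0.3, 1.2, float('inf'), float('nan')]]
```

The repeating identical low note described in the item falls out naturally: once the norm is stuck at NaN or huge values, every step maps to the same frequency.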
|Underwater "transformer" robot can transform from one form that is streamlined and can travel smoothly a long way to another form that enables it to manipulate machinery underwater.|
| Depression detection algorithm. The press release says, "A realistic scenario is to have people use an app that will collect voice samples as they speak naturally. The app, running on the user's phone, will recognize and track indicators of mood, such as depression, over time. Much like you have a step counter on your phone, you could have a depression indicator based on your voice as you use the phone." Then it says, "Such a tool could prove useful to support work with care providers or to help individuals reflect on their own moods over time." But the primary use of the technology described in the actual research paper is not to help people monitor their own moods or discuss their moods with their therapist, but to detect depression in people who are trying to hide it.
The way the system works is it divides audio into 20-second chunks, computes a set of spectral features from the sound, and feeds it into the analysis system. Paradoxically, while a deep learning network was most accurate when asked to simply specify yes or no for depression, a random forest was most accurate when asked to quantify the severity of depression.
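A minimal sketch of that pipeline's shape, with a synthetic tone standing in for real speech and some toy statistics (mean magnitude, RMS energy, zero-crossing rate) standing in for the paper's actual spectral features; the sample rate is an assumption:

```python
import math

SR = 8000                     # assumed sample rate (Hz); illustrative only
CHUNK_SECONDS = 20

def chunks(signal, sr=SR, seconds=CHUNK_SECONDS):
    """Split a 1-D audio signal into fixed-length 20-second chunks."""
    size = sr * seconds
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def spectral_features(chunk):
    """Toy stand-in for the paper's spectral features."""
    n = len(chunk)
    mean_abs = sum(abs(x) for x in chunk) / n
    rms = math.sqrt(sum(x * x for x in chunk) / n)
    zcr = sum(1 for a, b in zip(chunk, chunk[1:]) if a * b < 0) / (n - 1)
    return (mean_abs, rms, zcr)

# 60 seconds of a synthetic 440 Hz tone in place of real speech
signal = [math.sin(2 * math.pi * 440 * t / SR) for t in range(SR * 60)]
features = [spectral_features(c) for c in chunks(signal)]
```

Each per-chunk feature vector would then be fed to the classifier (the deep net for yes/no, the random forest for severity).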
|"The last robot-proof job in America?" "There is one thing, however, that the sophisticated logistics system cannot do: pick out a fish. If Warren Buffett orders a red snapper, the company needs to insure that his fish is fresh, fairly priced, and actually an American red snapper -- and not some other, day-old red fish that a vender is trying to pass off. (According to the ocean-conservation organization Oceana, more than twenty per cent of the seafood in restaurants and grocery stores in America is misidentified.) For this task, the company has enlisted one of the old-timers: Robert DiGregorio, a forty-seven-year veteran of the business, known in the marketplace as Bobby Tuna. DiGregorio, sixty-eight, is the author of 'Tuna Grading and Evaluation,' an industry standby. He possesses a blend of discernment and arcane fish knowledge that, so far, computers have yet to replicate. Over the years, he said recently, 'I've bought and sold literally millions of pounds of fish.'"|
| "Marty the grocery store robot is a glimpse into our hell-ish future."
"Apparently my supermarket has just gotten a robot. Its name is Marty. It detects spills. Doesn't clean them, just starts shouting if it sees one."
| Hitesh Nair misses The Office. So he made an AI write him new scripts. With the small version of OpenAI's GPT-2 retrained on all the scripts from The Office.
"Michael: Well, I mean, do you have to be perfect?"
"Michael: We, at Dunder Mifflin, are a team. And we all know that we are going to fail. And that is the greatest gift that a salesman can give to a salesman."
|Course materials for Stanford CS234: Reinforcement Learning taught by Emma Brunskill are available online for free.|
| "A chemical clue to how life started on Earth. Every living thing stems from the same limited set of 20 amino acids, and now scientists may know why." Because of "selective incorporation of proteinaceous over nonproteinaceous cationic amino acids in model prebiotic oligomerization reactions."
"Cationic" means positively-charged amino acids. "Oligomerization reactions" means chemical reactions that combine the amino acids together (going from monomers to polymers). "Proteinaceous" means we are talking about the amino acids that today are part of proteins in living organisms, as opposed to those evolution decided to leave out (the "nonproteinaceous"). "Selective incorporation" means the amino acids that today form proteins have shown themselves to be selectively incorporated into these combinations (at least for positively-charged amino acids), while the others aren't. "Model" and "prebiotic" mean this was done in an experiment that was designed to model the conditions before life arose on the planet.
"Why that specific set? Scientists know there are many more amino acids out there. In fact, meteorites with up to 80 amino acids have landed on Earth."
"The new study suggests that life's dependence on these 20 amino acids is no accident. The researchers show that the kinds of amino acids used in proteins are more likely to link up together because they react together more efficiently and have few inefficient side reactions."
"The researchers knew water evaporation could have created the conditions necessary for amino acids to link together on early Earth, so they used a drying reaction -- water evaporates and heat is applied -- to mimic the natural conditions that cause amino acids to form peptides."
"Their experiments showed that proteinaceous amino acids are more likely to spontaneously link to form large 'macromolecules' without requiring any other ingredients, such as enzymes or activating agents. This linkage is an important step in forming a protein."
"The proteinaceous amino acids seemed to prefer reactivity through a part of their structure called the alpha-amine. They mostly formed linear, protein-like backbone 'topologies' (geometric formations)."
| New state of matter discovered. Well, apparently it was predicted in the 1960s but not observed until now. "In the 1960s, it was proposed that in small indirect band-gap materials, excitons can spontaneously form because the density of carriers is too low to screen the attractive Coulomb interaction between electrons and holes."
"The research team employed scanning tunnelling microscopy and spectroscopy to show that the enhanced Coulomb interaction in quantum-confined elemental antimony nanoflakes drives the system to the excitonic insulator state."
"The unique feature of the excitonic insulator, a charge density wave without periodic lattice distortion, was directly observed. Furthermore, spectroscopy shows a gap induced by the charge density wave near the Fermi surface."
Unfortunately, while I know what a band gap is, I don't know what an "indirect" band gap is, so my understanding of this discovery stops there. But the article goes on to speculate as to what it might be good for (which usually I ignore but this time I'm wondering).
"Excitonic insulators have been predicted to host many novel properties, including crystallized excitonium, superfluidity, and excitonic high-temperature superconductivity."
|Zach aka "MajorPrep" does a better job than Eliza Hayes at explaining why I'm wrong.|
| "When it comes to generating the end of a story, most current algorithms tend to favor generic sentences, such as 'They had a great time,' or 'He was sad.' Alan Black, a professor in CMU's Language Technologies Institute, believes that writing algorithms need to incorporate some keywords into the ending that are related to those used early in the story. The algorithmic model also needs to be rewarded for using some rare words in the ending, in hopes of choosing an ending that is not totally predictable."
"Megan was new to the pageant world. In fact, this was her very first one. She was really enjoying herself, but was also quite nervous. The results were in and she and the other contestants walked out."
"Existing algorithms generated the following possible endings: 'She was disappointed the she couldn't have to learn how to win,' and 'The next day, she was happy to have a new friend.' The CMU algorithm produced this ending: 'Megan won the pageant competition.'"
| "Luke Lavis has spent years developing fluorescent dyes in every color of the rainbow. The super-bright, long-lasting dyes have been used in labs around the world and have helped make Nobel Prize-winning microscopy advances possible."
"Not all of the team's dyes have been absolute winners. Some are made just to showcase new methods or test out specific hypotheses, and sometimes they don't glow in colors optimal for bioimaging, says Lavis, a Senior Group Leader at the Howard Hughes Medical Institute's Janelia Research Campus. But for Group Leader Eric Schreiter, one of these 'bonus' dyes -- one Lavis's team made just to round out the spectrum -- was exactly right. The misfit dye is now a cornerstone of a powerful new brain imaging tool developed in Schreiter's lab."
"The tool, called Voltron, lets researchers track neuron activity in living animals more precisely and for far longer time periods than was once possible." "Schreiter's team paired Lavis's dye with a specially engineered protein that makes the intensity change when specific neurons switch on, allowing researchers to detect neural signals throughout the brain."
If you're wondering why it's called Voltron, who knows. But "Voltron" is a catchier name than "Ace2N fused to HaloTag with five amino acids removed at their junction".
Also, by "far longer time periods than was once possible," they mean, like, 15 minutes.
| 3D printing the human heart. They didn't fully re-create a human heart, though -- what was accomplished in this research was an important step in accomplishing that goal, or 3D printing any other organ. What they accomplished was 3D printing of collagen, the major structural protein of the human body. "Collagen is an ideal material for biofabrication because of its critical role in the extracellular matrix, where it provides mechanical strength, enables structural organization of cell and tissue compartments, and serves as a depot for cell adhesion and signaling molecules. However, it is difficult to 3D-bioprint complex scaffolds using collagen in its native unmodified form because gelation is typically achieved using thermally driven self-assembly, which is difficult to control. Researchers have used approaches including chemically modifying collagen into an ultraviolet–cross-linkable form, adjusting pH, temperature, and collagen concentration to control gelation and print fidelity, and/or denaturing it into gelatin to make it thermoreversible. However, these hydrogels are typically soft and tend to sag, and they are difficult to print with high fidelity beyond a few layers in height. Instead, we developed an approach that uses rapid pH change to drive collagen self-assembly within a buffered support material, enabling us to (i) use chemically unmodified collagen as a bio-ink, (ii) enhance mechanical properties by using high collagen concentrations of 12 to 24 mg/ml, and (iii) create complex structural and functional tissue architectures. To accomplish this, we developed a substantially improved second generation of the freeform reversible embedding of suspended hydrogels (FRESH v2.0) 3D-bioprinting technique used in combination with our custom-designed open-source hardware platforms. 
FRESH works by extruding bio-inks within a thermoreversible support bath composed of a gelatin microparticle slurry that provides support during printing and is subsequently melted away at 37°C."
"The original version of the FRESH support bath, termed FRESH v1.0, consisted of irregularly shaped microparticles with a mean diameter of ~65 μm created by mechanical blending of a large gelatin block. In FRESH v2.0, we developed a coacervation approach to generate gelatin microparticles with (i) uniform spherical morphology, (ii) reduced polydispersity, (iii) decreased particle diameter of ~25 μm, and (iv) tunable storage modulus and yield stress."
Coacervation is a chemical process where a liquid containing polymers (molecules with many repeated subunits) separates into two phases, a dense, polymer-rich phase and a very dilute, polymer-deficient phase, which then recombine to form a colloid, a mixture in which insoluble particles are dispersed and suspended throughout another substance.
By polydispersity, they mean the degree of variation of polymer sizes in a mixture. So by reducing the polydispersity, they are reducing the variation of the polymers.
"What we've shown is that we can print pieces of the heart out of cells and collagen into parts that truly function, like a heart valve or a small beating ventricle."
|How modern wireless telecommunication was envisioned in the 1960s by Maxwell Smart.|
| A neural net was trained to classify images based on emotion and then compared to fMRI scans of the same images from actual human brains. The similarity shows emotion processing begins as soon as our brains start processing images, in the visual cortex, and the brain doesn't wait until the information reaches parts of the brain we thought were for emotion processing.
"To further test and refine EmoNet, the researchers then brought in 18 human subjects. As a functional magnetic resonance imaging (fMRI) machine measured their brain activity, they were shown 4-second flashes of 112 images. EmoNet saw the same pictures, essentially serving as the 19th subject."
"When activity in the neural network was compared to that in the subjects' brains, the patterns matched up."
"We found a correspondence between patterns of brain activity in the occipital lobe and units in EmoNet that code for specific emotions. This means that EmoNet learned to represent emotions in a way that is biologically plausible, even though we did not explicitly train it to do so."
"The brain imaging itself also yielded some surprising findings. Even a brief, basic image -- an object or a face -- could ignite emotion-related activity in the visual cortex of the brain. And different kinds of emotions lit up different regions."
"This shows that emotions are not just add-ons that happen later in different areas of the brain. Our brains are recognizing them, categorizing them and responding to them very early on."
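For the curious, "the patterns matched up" presumably means something like correlating activity patterns. Here's a toy illustration with entirely made-up numbers (the study's actual analysis is more involved than a single correlation):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two activity patterns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical activity of EmoNet units vs. occipital-lobe voxels for the
# same images (invented values, for illustration only)
emonet_units = [0.2, 0.9, 0.4, 0.7, 0.1]
voxels       = [0.3, 0.8, 0.5, 0.6, 0.2]
r = pearson_r(emonet_units, voxels)
```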
|Cockroach robot can handle being stepped on.|
| When 3D printers print a curved surface, you get a "stairstep" effect that requires manual sanding afterwards if you want to make the surface truly curved. "This imperfection affecting objects with gently inclined surfaces was seen as being inherent to the manufacturing technology itself, given that it involves the material being gradually deposed in flat layers having a uniform thickness. First a desired object is modelled in 3D and then a software computes the horizontal movement instructions driving the printing head in order to manufacture each flat layer."
"It is possible, however, to move the nozzle of a conventional filament printer vertically during deposition, enabling it to depose melted material along paths curved in all three directions. This feature is rarely used, given the difficulty in obtaining an algorithm capable of dividing any part into curved slices without introducing the risk of collisions between the printer and the part being manufactured. Other constraints must also be taken into account, including the minimum and maximum deposition thicknesses. Until recently, there was no algorithm that would enable conventional printers to take advantage of this vertical movement during deposition for any input model."
| A new distributed matrix computation technology can analyze 100 times more data 14 times faster. "This method forms matrix multiplication in a 3D hexahedron and then partitions and processes to multiple pieces called cuboid. The optimal size of the cuboid is flexibly determined depending on the characteristics of the matrices, i.e., the size, the dimension, and sparsity of matrix, so as to minimize the communication cost. CuboidMM not only includes all the existing methods but also can perform matrix multiplication with minimum communication cost."
When CuboidMM is combined with GPUs, they call the system DistME.
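The core partitioning idea can be sketched on a single machine: block the i × k × j multiplication volume into "cuboids." In CuboidMM each cuboid (with its dimensions tuned to minimize communication cost) would go to a different worker; here the loop nest just visits them in order.

```python
def blocked_matmul(A, B, bi=2, bj=2, bk=2):
    """Multiply dense A (n x m) by B (m x p), partitioning the i x k x j
    computation volume into bi x bk x bj blocks ("cuboids"). Each cuboid
    contributes a partial product to one tile of the result."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i0 in range(0, n, bi):
        for k0 in range(0, m, bk):
            for j0 in range(0, p, bj):
                # one "cuboid" of the multiplication volume
                for i in range(i0, min(i0 + bi, n)):
                    for k in range(k0, min(k0 + bk, m)):
                        for j in range(j0, min(j0 + bj, p)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
C = blocked_matmul(A, B)   # same result as an ordinary matrix multiply
```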
| New software "allows robots to quickly, accurately and fully automatically build high-quality three-dimensional computer models of engine parts and carry out subsequent actions set by their control programs."
"FEFU scientists have solved the problem of the tedious programming of industrial robots that do not know how to independently adapt to changing production conditions. Now robots will no longer need to be retrained manually, which previously led to greater time spent on preparing for the production launch."
"The speed of information processing is achieved through a special set of mathematical methods, which was implemented in the computer program. The mathematical apparatus works with a voluminous array of points that make up a 3D image, lays it out on a plane, and then quickly calculates possible scanning deficiencies."
| "Perception is not objective reality. Case in point: The above image is stationary and flat...just try telling your brain that. In his new book, The Case Against Reality, UCI cognitive scientist Donald Hoffman applies this concept to the whole of human consciousness -- how we see, think, feel and interact with the world around us. And he thinks we've been looking at it all wrong."
"I'm interested in understanding human conscious experiences and their relationship to the activity of our bodies and brains as we interact within our environment -- and that includes the technical challenge of building computer models that mimic it, which is why I'm working on creating a model that explains consciousness."
|Slides and source code for the SIGGRAPH course "CreativeAI: Deep Learning for Computer Graphics" are available online for free.|
|A 16-year-old, Kyle Giersdorf aka "Bugha", won $3 million in the Fortnite World Cup, the highest individual payout, beating out 100 players in the finals and 40 million competitors in the whole competition.|
|To become, or not to become... a neuron. So, a little piece of the puzzle of how neurons are formed, and a hard one to explain. Basically glial cells become neurons, and when they do, they counterintuitively stop paying attention to external signals. What this research shows is that something called Bcl6 (a transcription factor, which is a protein that controls the rate that DNA is transcribed onto RNA) kicks off the creation of new neurons (called neurogenesis), and in the process switches on genes that lead the cell to become a neuron and off genes that would lead the cell to respond to any other external signals.|
| "The intention to say specific words can be extracted from brain activity and converted into text rapidly enough to keep pace with natural conversation."
"In its current form, the brain-reading software works only for certain sentences it has been trained on, but scientists believe it is a stepping stone towards a more powerful system that can decode in real time the words a person intends to say."
"To date there is no speech prosthetic system that allows users to have interactions on the rapid timescale of a human conversation."
The system works by monitoring two areas of the brain, called the superior temporal gyrus, which is activated during listening to speech, and the ventral sensorimotor cortex, which is activated when producing speech. The signals are not decoded with a deep learning neural network, however. They are decoded using Viterbi decoding combined with hidden Markov models.
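Viterbi decoding over an HMM looks like this in miniature. The states and probabilities below are invented for illustration; in the real system the hidden states would correspond to speech units and the observations to neural signal features.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        V.append({s: max(((V[-1][prev][0] * trans_p[prev][s] * emit_p[s][o], prev)
                          for prev in states), key=lambda x: x[0])
                  for s in states})
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Toy HMM: hidden "speech unit" states, observed signal symbols (all invented)
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.1, "y": 0.9}}
decoded = viterbi(("x", "y", "y"), states, start_p, trans_p, emit_p)
```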
| "Der Sprecher des Untersuchungsausschusses hat angekündigt, vor Gericht zu ziehen, falls sich die geladenen Zeugen weiterhin weigern sollten, eine Aussage zu machen." "(Machine translation to English: 'The spokesman of the Committee of Inquiry has announced that if the witnesses summoned continue to refuse to testify, he will be brought to court.')."
"But, when we apply a subtle change to the input sentence, say from geladenen to the synonym vorgeladenen, the translation becomes very different (and in this case, incorrect):"
"Der Sprecher des Untersuchungsausschusses hat angekündigt, vor Gericht zu ziehen, falls sich die vorgeladenen Zeugen weiterhin weigern sollten, eine Aussage zu machen." "(Machine translation to English: 'The investigative committee has announced that he will be brought to justice if the witnesses who have been invited continue to refuse to testify.')."
"This lack of robustness in NMT models prevents many commercial systems from being applicable to tasks that cannot tolerate this level of instability." "In 'Robust Neural Machine Translation with Doubly Adversarial Inputs' (to appear at ACL 2019), we propose an approach that uses generated adversarial examples to improve the stability of machine translation models against small perturbations in the input."
"It does this using an algorithm called Adversarial Generation (AdvGen), which generates plausible adversarial examples for perturbing the model and then feeds them back into the model for defensive training. While this method is inspired by the idea of generative adversarial networks (GANs), it does not rely on a discriminator network, but simply applies the adversarial example in training, effectively diversifying and extending the training set."
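A heavily simplified sketch of the defensive-training idea: a linear regression stands in for the translation model, and an FGSM-style signed perturbation stands in for AdvGen's learned perturbations. Everything here is illustrative, not Google's actual method.

```python
import random

rng = random.Random(0)
target = lambda x: x[0] + 2.0 * x[1]          # "true" function to learn
data = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(40)]

def forward(w, x):
    return w[0] * x[0] + w[1] * x[1]

def mean_loss(w):
    return sum((forward(w, x) - target(x)) ** 2 for x in data) / len(data)

w = [0.0, 0.0]
lr, eps = 0.05, 0.05
loss_before = mean_loss(w)
for _ in range(100):
    for x in data:
        err = forward(w, x) - target(x)
        # adversarial input: nudge each coordinate the way that grows the loss
        adv = tuple(xi + eps * (1 if err * wi > 0 else -1)
                    for xi, wi in zip(x, w))
        for x_train in (x, adv):              # defensive training on both
            err_t = forward(w, x_train) - target(x)
            w = [wi - lr * 2 * err_t * xt for wi, xt in zip(w, x_train)]
loss_after = mean_loss(w)
```

Training on the perturbed inputs alongside the clean ones is what "effectively diversifying and extending the training set" means here.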
|Chinese CPU for AI. "Alibaba's chip subsidiary Pingtouge (平头哥) yesterday introduced its first product, a RISC-V (Reduced Instruction Set Computer) processor. The Xuantie 910 will be used as a core IP to produce high-end edge-based microcontrollers (MCUs), CPUs, and systems-on-chips. It is tailored for 5G, artificial intelligence, and Internet-of-things (IoT). Pingtouge says the processor will be open-sourced in the near future."|
| "One field that so far has not been greatly impacted by automatic differentiation tools is evolutionary computation. The reason is that most evolutionary algorithms are gradient-free: they do not follow any explicit mathematical gradient (i.e., the mathematically optimal local direction of improvement), and instead proceed through a generate-and-test heuristic. In other words, they create new variants, test them out, and keep the best."
"Recent and exciting research in evolutionary algorithms for deep reinforcement learning, however, has highlighted how a specific class of evolutionary algorithms can benefit from auto-differentiation. Work from OpenAI demonstrated that a form of Natural Evolution Strategies (NES) is massively scalable, and competitive with modern deep reinforcement learning algorithms."
"To more easily prototype NES-like algorithms, Uber AI researchers built EvoGrad, a Python library that gives researchers the ability to differentiate through expectations (and nested expectations) of random variables, which is key for estimating NES gradients. The idea is to enable more rapid exploration of variants of NES, similar to how TensorFlow enables deep learning research."
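The gradient EvoGrad computes by differentiating through the expectation can be written down by hand for a toy 1-D fitness. This sketch uses the NES score-function estimator with antithetic (mirrored) sampling to tame the variance; the fitness function and all constants are invented.

```python
import random

def nes_gradient(f, theta, sigma=0.1, n_pairs=2500, seed=0):
    """Estimate d/dtheta E[f(theta + sigma * eps)], eps ~ N(0, 1), via the
    score-function estimator with antithetic (mirrored) sampling."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        eps = rng.gauss(0.0, 1.0)
        total += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps / (2 * sigma)
    return total / n_pairs

fitness = lambda x: -(x - 3.0) ** 2   # toy fitness, peaked at x = 3
g = nes_gradient(fitness, 0.0)        # true gradient of the expectation is 6.0
```

The estimate points uphill toward the peak without ever differentiating `fitness` itself, which is the sense in which NES is "gradient-free" yet still follows a gradient of the search distribution.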
| "Like random search, Population Based Training also starts with multiple networks initiated with random hyperparameters. Networks are evaluated periodically and compete with each other for 'survival' in an evolutionary fashion. If a member of the population is underperforming, it's replaced with the 'progeny' of a better performing member. The progeny is a copy of the better performing member, with slightly mutated hyperparameters. Population Based Training doesn't require us to restart training from scratch, because each progeny inherits the full state of its parent network, and hyperparameters are updated actively throughout training, not at the end of training. Compared to random search, Population Based Training spends more of its resources training with good hyperparameter values."
"The first experiments that DeepMind and Waymo collaborated on involved training a network that generates boxes around pedestrians, bicyclists, and motorcyclists detected by our sensors -- named a 'region proposal network.'"
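Here's a toy version of that loop, with every number invented: members "train" a scalar weight under a learning-rate hyperparameter, and every few steps the bottom performers are replaced by mutated copies of the top performers, inheriting the parent's full state.

```python
import random

rng = random.Random(0)

def evaluate(m):
    # Fitness is best when lr is near 0.1 and the weight is near 1.0
    return -abs(m["lr"] - 0.1) - 0.01 * abs(m["weight"] - 1.0)

def train_step(m):
    m["weight"] += m["lr"] * (1.0 - m["weight"])   # move weight toward 1

population = [{"lr": rng.uniform(0.01, 1.0), "weight": 0.0} for _ in range(8)]
initial_best = max(evaluate(m) for m in population)

for step in range(20):
    for m in population:
        train_step(m)
    if step % 5 == 4:                               # periodic exploit/explore
        population.sort(key=evaluate, reverse=True)
        for loser in population[-2:]:
            winner = rng.choice(population[:2])
            loser.update(winner)                    # inherit parent's full state
            loser["lr"] *= rng.choice([0.8, 1.2])   # then perturb the hyperparameter

final_best = max(evaluate(m) for m in population)
```

Note that nothing restarts from scratch: progeny copy the parent's weight as well as its hyperparameters, which is the point of PBT.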
| DeepSORT: Deep learning to track custom objects in a video. Challenges: occlusion, variations in viewpoint, a non-stationary camera, and annotating training data.
"The Kalman filter is a crucial component in deep SORT. Our state contains 8 variables (u, v, a, h, u′, v′, a′, h′), where (u, v) is the centre of the bounding box, a is the aspect ratio, and h is the height; the other variables are the respective velocities of the variables." "The variables have only absolute position and velocity factors, since we are assuming a simple linear velocity model. The Kalman filter helps us factor in the noise in detection and uses prior state in predicting a good fit for bounding boxes."
"We have an object detector giving us detections, Kalman filter tracking it and giving us missing tracks, the Hungarian algorithm solving the association problem. So, is deep learning really needed here?"
"The answer is Yes. Despite the effectiveness of Kalman filter, it fails in many of the real world scenarios we mentioned above, like occlusions, different view points etc."
"So, to improve this, the authors of Deep sort introduced another distance metric based on the 'appearance' of the object."
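The association step is an assignment problem over a cost matrix: rows are existing tracks, columns are new detections, and each entry could be 1 − IoU or (in Deep SORT) an appearance distance. For a 3×3 example, brute force stands in for the Hungarian algorithm:

```python
from itertools import permutations

def best_assignment(cost):
    """Brute-force stand-in for the Hungarian algorithm: find the
    track->detection matching with minimum total cost. Fine for a 3x3
    example; real trackers use the O(n^3) Hungarian method."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[track][det] for track, det in enumerate(perm))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return list(best_perm), best_cost

# Rows = tracks, columns = detections; entries are made-up distances
cost = [[0.9, 0.1, 0.8],
        [0.2, 0.7, 0.9],
        [0.6, 0.8, 0.1]]
assignment, total = best_assignment(cost)
```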
| AI system that identifies road networks from satellite imagery. "Map With AI includes an editor interface, RapiD, which allows mapping experts to easily review, verify, and adjust the map as needed."
"We used this system to map all the previously unmapped roads in Thailand -- more than 300,000 miles' worth -- in OpenStreetMap (OSM), a community-based effort to create freely available, editable maps of the world. We were able to complete this project in 18 months -- less than half the time it would have taken a team of 100 mapping experts to do it manually."
"In extracting roads from satellite imagery, we've leveraged recent advances in using fully convolutional neural networks for semantic segmentation in conjunction with large-scale weakly supervised learning. Road detection is a straightforward application of semantic segmentation where the road is the foreground and the rest of the image is the background."
"As part of our Thailand road-mapping project, we had human experts review and correct the road networks that the AI system identified. We then used these manually corrected maps as training data for the model."
"Maps of many other countries still contain substantial gaps; therefore, we explored new ways to get high-quality, geographically diverse training data. Drawing inspiration from our previous work on weakly supervised image classification and training building detection models on OSM data, we experimented with translating these weakly supervised training ideas from classification to semantic segmentation. This experiment required identifying regions with adequate, accurate data coverage and then converting the OSM database's road vectors into rasterized semantic segmentation labels."
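Converting road vectors into rasterized labels amounts to burning each polyline into a label grid. A minimal sketch, handling only axis-aligned segments (real pipelines rasterize arbitrary vectors with a road width in metres):

```python
def rasterize_roads(polylines, width, height):
    """Burn road polylines (lists of integer grid points) into a binary
    label mask: 1 = road (foreground), 0 = background."""
    mask = [[0] * width for _ in range(height)]
    for line in polylines:
        for (x0, y0), (x1, y1) in zip(line, line[1:]):
            if y0 == y1:                       # horizontal segment
                for x in range(min(x0, x1), max(x0, x1) + 1):
                    mask[y0][x] = 1
            elif x0 == x1:                     # vertical segment
                for y in range(min(y0, y1), max(y0, y1) + 1):
                    mask[y][x0] = 1
    return mask

# An L-shaped road on a 5x5 grid
mask = rasterize_roads([[(0, 0), (3, 0), (3, 4)]], width=5, height=5)
```

The resulting masks are exactly the semantic-segmentation targets the network is trained against.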
|AI newscaster voice from Amazon.|
| "New research suggests 'f' and 'v' are recent speech sounds worldwide." "Some sounds common to modern languages likely arose only recently, as a result of diet-induced changes in the human bite." "Our findings belie the popular hypothesis that spoken languages have remained unchanged since the emergence of our species."
"While the teeth of ancient humans once met in an edge-to-edge bite due to a tougher diet at the time, more recent softer diets allowed modern humans to retain the juvenile overbite that had previously disappeared by adulthood, with the upper teeth slightly more in front than the lower teeth and at a slight outward angle from the mouth. This shift led to the rise of a new class of speech sounds now found in half the world's languages: 'labiodentals,' or sounds made by touching the lower lip to the upper teeth. Predominant among these are 'f' and 'v'."
"Human speech is spectacularly diverse, ranging from ubiquitous sounds like 'm' and 'a' to the rare click consonants in some languages of Southern Africa." "This range is generally thought to have been fixed by biological constraints since the emergence of Homo sapiens ca. 300,000 years ago." "However, we underestimate the power of our biological conditions: here, we see how changes in our diet bring about changes in our bite and that this in turn shaped how modern languages sound. In Europe, for example, our data suggests that the use of labiodentals has increased dramatically only in the last couple of millennia, correlated with the rise of food processing technology such as industrial milling."
"The project was sparked when the team became intrigued by an observation made by linguist Charles Hockett back in 1985. Hockett noticed that languages which foster labiodentals are often found in societies with access to softer foods." "But there are plenty of superficial correlations involving language which are spurious, and linguistic behavior, such as pronunciation, doesn't fossilize."
|In an effort to replicate the "social deficits" of autism spectrum disorder in mice, scientists altered a gene called SHANK3 that controls the glutamatergic synapses in the pyramidal neurons of the anterior cingulate cortex. That disrupted the normal function of excitatory synapses in that part of the brain. The resulting mice displayed social interaction deficits.|
| "Of the 1,500 active volcanoes worldwide, up to 85 erupt each year. Due to the cost and difficulty to maintain instrumentation in volcanic environments, less than half of the active volcanoes are monitored with ground-based sensors, and even less are considered well-monitored. Volcanoes considered dormant or extinct are commonly not instrumentally monitored at all, but may experience large and unexpected eruptions, as was the case for the Chaitén volcano in Chile in 2008 which erupted after 8,000 years of inactivity."
Their solution is an AI system that processes data from satellites in space (Sentinel-1 Synthetic Aperture Radar SAR, Sentinel-2 Short-Wave InfraRed SWIR, Sentinel-5P TROPOMI) and ground-based seismic data (GEOFON and USGS global earthquake catalogues). The combination of multiple sensors enables them to detect many types of volcanic activity, from subsurface magma intrusion, to surface eruptive deposit emplacement, pre/syn-eruptive morphological changes, and gas propagation into the atmosphere, without being fooled by different climate settings. The AI system is a convolutional neural network trained on synthetically generated data produced by adding strong deformation to interferograms.
"It provides near-real-time access to surface deformation, heat anomalies, SO2 gas emissions, and local seismicity at a number of volcanoes around the globe, providing support to both scientific and operational communities for volcanic risk assessment. Results are visualized on an open-access website where both geocoded images and time series of relevant parameters are provided, allowing for a comprehensive understanding of the temporal evolution of volcanic activity and eruptive products."
|Some data recovery companies 'recover' data encrypted by ransomware by simply paying the ransom. "'Pleased to confirm that we can recover your encrypted files' for $3,950 -- four times as much as the agreed-upon ransom."|
| CRISPR-Cas9 and Cas12 edit DNA, while CRISPR-Cas13 edits RNA. "Targeting disease-linked mutations in RNA, which is relatively short-lived, would avoid making permanent changes to the genome. In addition, some cell types, such as neurons, are difficult to edit using CRISPR/Cas9-mediated editing, and new strategies are needed to treat devastating diseases that affect the brain."
The Cas13 system has been enhanced to include a programmable enzyme that can flip cytosine into uridine. This means CRISPR targets now include modifiable positions in proteins, such as phosphorylation sites. "Such sites act as on/off switches for protein activity and are notably found in signaling molecules and cancer-linked pathways."
"The team took the new platform into human cells, showing that they could target natural RNAs in the cell, as well as 24 clinically relevant mutations in synthetic RNAs."
| "The most extensive and systematic insect monitoring program ever undertaken in North America shows that butterfly abundance in Ohio declined yearly by 2%, resulting in an overall 33% drop for the 21 years of the program."
The article goes on to say "the findings also are in line with those of butterfly monitoring programs in multiple European countries," and that it is likely all insects are declining. Butterflies are the easiest to track because people like them and are willing to keep track of them, but that also means the tracking happens only in human-inhabited areas, so we only know the decline is happening in human-inhabited areas.
|State of AI report 2019 (slide show). Reinforcement learning conquers new territory: OpenAI surpassed human performance at Montezuma's Revenge, DeepMind beat a world-class StarCraft II player 5-0, humans surpassed at Quake III Arena Capture The Flag, OpenAI Five (Dota 2) improves. What's next: play-driven robots, learning dexterity, curiosity-driven exploration, online planning, RL moving from research to production. Machine learning in life science: AlphaFold predicts de novo 3D structure of folded proteins. Natural language processing: pretrained language models, GLUE benchmark, machine translation without bidirectional texts, learning common sense from text. Federated learning. Data privacy. Deep learning in medicine: diagnosis and referral, cardiologist-level performance classifying arrhythmia from ECGs, >600K chest x-rays but little improvement, reconstructing speech from brainwaves, limb control for the disabled, learning to synthesize chemical molecules. AutoML: evolutionary algorithms improve, Google MnasNet accurate on CNN models, Facebook DNAS performs as well at lower cost. GANs: image quality improves, full body synthesis, realistic speech synthesis. Learning 3D shapes from single images. Immense growth in research paper publication output. Google dominates at NeurIPS by number of papers submitted. 88% of researchers at NeurIPS, ICML, and ICLR were men. Compensation of top engineers at top companies approaches $1,000,000. Huge growth in $1.47/hour data labelling jobs. Pioneers of neural networks won Turing Award, highest award in computer science. Europe publishes the most AI papers, but China is growing fastest. $1 billion investment in AI at MIT. University course enrollment in AI is growing, especially in China. Gender diversity is unequal among professors and students. Job applicants are 71% male. 44% of researchers earned a PhD in the US vs 11% China and 6% UK. Five countries, US, China, UK, Germany, Canada, account for 72% of researchers.
Tech giants have frozen research talent hiring, looking to put research into production. Global VC investment in AI over $27 billion in 2018. Startups get acquired by the tech giants. Robotic process automation took off in 2017-18. Brain Corp and Walmart are scaling up robotic cleaning and in-store operations. Boston Dynamics robots do parkour hopping. ABB invested $150M to build world's most advanced robotics factory in Shanghai. Bright Machines founded by Autodesk and Flex veterans raised $179M to make robots that build products. Robot-enabled supply chains. US factories are installing record numbers of robots. Amazon is massively scaling up its fulfillment infrastructure and rolling out more robots. Self-driving car companies are selling for billion-dollar valuations. Waymo drove > 1M miles in 2018. Self-driving miles still a fraction of all miles. GM/Cruise missed launch dates and other big players are silent. Machine learning used for demand forecasting for energy, water, travel, local business, logistics, retail. Advances in NLP applied in reading financial data. FDA cleared 3 AI diagnostic products. Pharma companies partnering with AI companies for drug development. Nutrition: using data from genetics, metabolism, and meals to predict individual response to food. AI patents growing faster than scientific publications, with computer vision the most popular patenting area. Big tech monetizes cloud services but not hosted AI services. AI hardware: benchmarking performance of mobile chipsets for AI tasks, Google Edge TPU and Nvidia Nano push compute to the edge, Amazon SageMaker trains for edge devices in the cloud. 5G.
Public attitudes towards AI: attitudes towards developing AI for warfare, public doesn't know who should be ultimately responsible for how AI systems are designed and deployed, companies should have AI ethics review boards, early AI ethics boards have difficulties, public expects "machines able to perform almost all tasks that are economically relevant today better than the median human today" by 2028 (9 years), strongest predictors of enthusiasm for AI are being male, having >$100K income, and having computer science experience, public trusts only university researchers to develop AI, military less trustworthy, rate Facebook as least trustworthy, public concerned about AI surveillance violating privacy, civil liberties, AI used to spread fake information, AI cyberattacks. AI nationalism: Germany "AI Made In Germany" plan, Finland 1% scheme, European Union "CERN for AI" aspiration, Trump's "American AI Initiative". US export controls. AI and dual use issues. New challenges: mass surveillance becoming more sophisticated, natural language processing used for fraud, deepfakes in politics. Interesting new ideas: responsible AI licenses, Google's elimination of gender bias in Google Translate. ACLU finds gender bias in Amazon's Rekognition algorithm. Andrew Yang's presidential bid focuses on AI and UBI. New "AI ethics" think tanks. Georgetown department focused on AI and policy. China is reducing the friction of everyday consumer face recognition experiences. Chinese internet companies expanded into farming. Chinese R&D increasing, but still behind rest of world. China ramping up semiconductor sales and reducing trade deficit. Some Chinese industrial companies have already automated away 40% of their human workforce. Chinese companies like JD already use robotic warehousing. Chinese companies continue to IPO on US stock market. Chinese groups own the most patents but most are utility model and design patents and not invention patents. 
Chinese investors let majority of patents lapse in 5 years. China's research publishing is more "high impact".|
| Audio classification model that detects whether a key or pick has been inserted into a lock. "We will take in live audio from a microphone placed next to our lock, cut the audio at every 5 second mark and pass those last 5 seconds to our pre-trained model. We then print out any events we may detect, including 'static', 'pick', or 'key'. We will also log the date and time of the event, and save the audio clip of the incident."
"I achieved a little more than 90% accuracy on both training and validation sets using the code posted below. I should also note that this code is almost exactly the same as a typical image classifier, which I found pretty interesting! Only needed a couple tweaks."
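The detection loop the author describes can be sketched as glue code like the following. This is a hypothetical illustration, not the author's actual code: `classify` stands in for the pre-trained model and `chunks` stands in for the live microphone feed cut at 5-second marks.

```python
from datetime import datetime

def monitor(chunks, classify, log=print):
    """Classify each 5-second audio chunk and record non-static events.

    `classify` is a stand-in for the article's pre-trained model;
    it should return one of "static", "pick", or "key".
    """
    events = []
    for clip in chunks:
        label = classify(clip)
        if label != "static":
            # Log the date and time of the event, as the article does.
            stamp = datetime.now().isoformat(timespec="seconds")
            # Keep the clip alongside the event so it could be saved to disk.
            events.append((stamp, label, clip))
            log(f"{stamp} detected: {label}")
    return events
```

With a real model, `classify` would wrap the trained network's prediction call, and the clip in each event tuple would be written out as an audio file of the incident.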
| "The godfather of China's Silicon Valley was Chen Chunxian, a Sichuan physicist who visited the original Silicon Valley on a state-sponsored trip to California in 1979. He returned to found the non-government Advanced Technology Service Association in Zhongguancun in 1980, and although the company didn't last long, it laid the foundations for what was to come."
"As the economic reforms of 1978 invited foreign investors to pump money into China, Zhongguancun quickly became the country's high-tech hub. Some began selling consumer goods on this 'electronics avenue', while others started companies of their own. Zhongguancun exploded. Even the government saw its potential, designating the area the 'Beijing High-Technology Industry Development Experimental Zone' in 1988. A simpler nickname stuck: China's Silicon Valley."
|Greenland ice sheet is sliding. Sliding completely dominates ice motion all winter, despite a hard bedrock below and no surface meltwater. This was determined by 212 GPS "tilt" sensors, which report their tilt relative to gravity, embedded in boreholes in the ice sheet.|
| Improvements in satellite imaging and remote sensing equipment allow scientists to measure ice mass in greater detail than ever before.
"Snowfall is dominant over Antarctica and will stay that way for the next few decades. And we've seen that as the atmosphere warms due to climate change, that leads to more snowfall, which somewhat mitigates the loss of ice sheet mass there. Greenland, by contrast, experiences abundant summer melt, which controls much of its present and future ice loss."
"In years past, climate models would have been unable to render the subtleties of snowfall in such a remote area. Now, thanks to automated weather stations, airborne sensors and Earth-orbit satellites such as NASA's Gravity Recovery and Climate Experiment (GRACE) mission, these models have been improved considerably. They produce realistic ice sheet surface mass balance, allow for greater spatial precision and account for regional variation as well as wind-driven snow redistribution -- a degree of detail that would have been unheard of as recently as the early 2000s."
"Ground-based radar systems and ice core samples provide a useful historical archive, allowing scientists to go back in time and observe changes in the ice sheet over long periods of time. But while current technologies allow for greater spatial monitoring, they lack the ability to measure snow density, which is a crucial variable to translate these measurements into mass changes."
"The biggest opportunity may lie in cosmic ray counters, which measure surface mass balance directly by measuring neutrons produced by cosmic ray collisions in Earth's atmosphere, which linger in water and can be read by a sensor. Over long periods of time, an array of these devices could theoretically provide even greater detail still."
|Robot so tiny it's almost invisible. It moves by harvesting externally applied vibrations. It has 3D-printed legs that capture the vibrations for movement. A normal 3D printer can't print legs that small, so it uses a 3D-printing technique called two-photon polymerization lithography, which borrows photolithography technology from the semiconductor industry. It's called "two photon" because the photoresist doesn't react unless it gets two photons of different frequencies. Traditional photolithography is subtractive, but this technique is additive, enabling it to be used for 3D printing. The bot, called the "bristle-bot", is about 2 mm long and weighs 5 mg. It can achieve a speed up to 4 times its body length per second when driven by sound at its resonant frequency, which is around 6.3 kHz but can be tuned by changing the bot's geometry.|
| A system has been invented for the automated design of actuators, the robot parts (motors and anything else that physically moves and controls things). It was inspired by animals like the cuttlefish, sharks, comb jellies, and centipedes, which can control multiple abilities with the same part, such as physical deformation, visual appearance, and hydrodynamic drag. Optimizing existing actuation systems built from identical actuators for power consumption, low footprint, and process reliability takes substantial time, yet nature produces comparable systems from nonuniform actuator arrays of far greater design complexity, optimized through evolution.
The new design system works by using simulated annealing on a 3D model that accounts for both the operation of the part once it's completed and the complexities of the 3D printing process. Simulated annealing is a technique that uses randomness to explore a large possibility space and zero in on a solution nearly as good as the theoretical optimum.
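As a concrete illustration of the search technique (a generic toy example on a one-dimensional cost function, not the authors' actuator-design code), simulated annealing looks like this: perturb the current design at random, always accept improvements, and sometimes accept regressions with a probability that shrinks as the "temperature" cools.

```python
import math
import random

def simulated_annealing(cost, start, steps=5000, temp0=1.0, seed=0):
    """Minimize `cost` over a 1-D design variable by random search
    with a cooling acceptance schedule."""
    rng = random.Random(seed)
    x, best = start, start
    for step in range(steps):
        # Linear cooling schedule; tiny floor avoids division by zero.
        temp = temp0 * (1 - step / steps) + 1e-9
        candidate = x + rng.gauss(0, 0.5)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept regressions with
        # probability exp(-delta / temp), which vanishes as temp -> 0.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = candidate
        if cost(x) < cost(best):
            best = x
    return best
```

Early on, high temperature lets the search hop between regions of the design space; late on, it settles into refining the best region it found, which is the behavior that makes the technique useful for rugged design landscapes like the actuator problem.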
| Beliefs affect perception. Prior beliefs affect how the brain interprets new information. What the researchers did here was train monkeys using a game called "Ready, set, go" to believe an indicator light told them whether the upcoming time interval would be "short" or "long". The way the game works is there are two flashes of light, "ready" and "set", and you're supposed to wait the exact same interval before pushing the button for "go". If the light indicated the "short" scenario, the time delay was between 480 and 800 milliseconds. If the light indicated the "long" scenario, the delay was between 800 and 1,200 milliseconds. Ah, but what if the time interval is *exactly* 800 milliseconds?
"If animals believed the interval would be short, and were given an interval of 800 milliseconds, the interval they produced was a little shorter than 800 milliseconds. Conversely, if they believed it would be longer, and were given the same 800-millisecond interval, they produced an interval a bit longer than 800 milliseconds." "Trials that were identical in almost every possible way, except the animal's belief led to different behaviors."
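One common way to account for this kind of bias is Bayesian estimation: a noisy measurement of the interval gets combined with the believed prior range, and the posterior mean is pulled into that range, so an 800 ms interval is reproduced slightly shorter under a "short" prior and slightly longer under a "long" prior. Here is a minimal numerical sketch of that idea; the Bayesian framing, the Gaussian noise level, and the uniform prior are my illustration, not details taken from the paper.

```python
import math

def posterior_mean(observed, prior_low, prior_high, noise=80.0):
    """Posterior-mean estimate of a time interval (ms), combining a
    Gaussian-noise measurement with a uniform prior over the range
    the animal believes is in play (1 ms grid)."""
    num = den = 0.0
    t = float(prior_low)
    while t <= prior_high:
        # Likelihood of the measurement given true interval t.
        lik = math.exp(-((observed - t) ** 2) / (2.0 * noise ** 2))
        num += t * lik
        den += lik
        t += 1.0
    return num / den

short_est = posterior_mean(800, 480, 800)    # "short" belief: pulled below 800
long_est = posterior_mean(800, 800, 1200)    # "long" belief: pulled above 800
```

The same 800 ms stimulus yields two different estimates purely because the prior differs, mirroring the behavioral result quoted above.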
But that's not all. The researchers recorded activity from about 1,400 neurons in a region of the frontal cortex. The change in prior belief was reflected in changes in the activity of these 1,400 neurons.
| AI research in Africa. "Being in an environment where the challenges are unique in many ways gives us an opportunity to explore problems that maybe other researchers in other places would not be able to explore."
"Before founding its AI lab in Ghana, for example, Google began working with farmers in rural Tanzania to understand some of the struggles they faced in maintaining consistent food production. The researchers learned that crop disease can significantly reduce yield, so they created a machine-learning model that could diagnose early stages of disease in the cassava plant, an important staple crop in the region. The model, which works directly on farmers' phones without needing access to the internet, helps them intervene earlier to save their plants."
|Five cities, five different approaches to algorithms in government. New York City, New York: introducing oversight during implementation (Automated Decision Systems Task Force). Columbus, Ohio: grants drive experimental implementations (smart city grant programs hosted by the US Department of Transportation). Los Angeles, California: jumping into the deep end (automated traffic sensors, dynamic tolling, automated license plate readers, PredPol predictive policing system, Axon AI research partner company with access to LAPD data). Somerville, Massachusetts: pressing pause to study the issue (banned facial recognition technology). Spokane, Washington: discovering the true cost of data (tossed development of pre-trial risk assessment tool because the ongoing costs of collecting data points and upkeep were too high).|
| MixNets are state-of-the-art among image classification models small enough to run on mobile devices.
The main idea behind MixNets is that, in a convolutional neural network, the accuracy of the whole neural network does not necessarily improve as you increase the size of the convolution kernel, e.g. accuracy might improve when you go from 3x3 to 5x5, but get worse when you go from 7x7 to 9x9. In the limit case, the size of the kernel is the same as the whole image, in which case the convolutional network is exactly the same as a fully connected network and there is no point in having a convolutional network at all. So, instead of having a single kernel size, MixNets mix a variety of kernel sizes in a single convolution operation.
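To make the mixing concrete, here is a minimal pure-Python sketch of a MixConv-style operation in 1-D: the channels are split into groups and each group is convolved depthwise with its own kernel size. Uniform averaging kernels stand in for learned weights, and the real MixNets operate on 2-D feature maps, so this is an illustration of the idea rather than the paper's implementation.

```python
def conv1d_same(signal, kernel):
    """Depthwise 1-D convolution with zero 'same' padding."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]

def mixconv(channels, kernel_sizes):
    """Split `channels` into len(kernel_sizes) groups and convolve
    each group depthwise with its own kernel size."""
    groups = len(kernel_sizes)
    per_group = len(channels) // groups
    out = []
    for g, k in enumerate(kernel_sizes):
        kernel = [1.0 / k] * k  # stand-in for learned weights
        for ch in channels[g * per_group:(g + 1) * per_group]:
            out.append(conv1d_same(ch, kernel))
    return out
```

A single MixConv layer thus sees the input at several receptive-field sizes at once, instead of committing the whole layer to one kernel size.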
|Reinforcement Learning Specialization at Coursera. Series of 4 courses offered by the Alberta Machine Intelligence Institute at the University of Alberta.|
|Magnitude 4.3 earthquake aftershock as seen by the fiber optic cable connecting Ridgecrest, CA and Inyokern Airport on July 16, 2019.|
|So, faster than light travel is impossible (probably, sorry for the spoiler if you haven't listened to the Miguel Alcubierre interview I posted yesterday). What about slower than light travel? Could we, for example, accelerate at 1 G for 1 year, then turn around and decelerate at 1 G for 1 year to arrive at our destination? Fraser Cain answers.|
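For a back-of-the-envelope answer, the standard relativistic rocket equations (textbook formulas, not numbers from the video) give the distance covered under constant proper acceleration a over proper time τ as d = (c²/a)(cosh(aτ/c) − 1), with Earth-frame elapsed time t = (c/a)sinh(aτ/c). A quick sketch of the two-leg trip:

```python
import math

# Constants (SI): speed of light, 1 g acceleration, seconds per year.
C = 2.998e8
G = 9.81
YEAR = 3.156e7
LIGHT_YEAR = C * YEAR

def brachistochrone_leg(proper_time):
    """Distance and Earth-frame time for one leg of constant
    proper acceleration at 1 g, from the relativistic rocket equations."""
    phi = G * proper_time / C  # rapidity reached at the end of the leg
    distance = (C ** 2 / G) * (math.cosh(phi) - 1)
    coord_time = (C / G) * math.sinh(phi)
    return distance, coord_time

# Accelerate for 1 year of ship time, then a mirror-image deceleration.
d, t = brachistochrone_leg(1 * YEAR)
total_ly = 2 * d / LIGHT_YEAR
total_earth_years = 2 * t / YEAR
```

Under these assumptions the two-year (ship time) round of acceleration and deceleration covers a bit over one light year, while somewhat more than two years pass on Earth; the relativistic payoff only becomes dramatic for longer burns.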
|The full text of the new book Mathematics for Machine Learning by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong is available online for free.|