|China clamping down.|
| Overton "automates the life cycle of model construction, deployment, and monitoring." "The machine itself fixes and adjusts machine learning models in response to external stimuli, making it more accurate and repairing logical flaws that might lead to an incorrect conclusion. The idea is that humans can then focus on the high-end supervision of machine learning models."
With Overton, the schema defines what the model computes but not how the model computes it. In other words, Overton is free to embed sentences using an LSTM or a Transformer, or change hyperparameters, like hidden state size. Overton engineers focus on how to monitor the application quality and improve supervision of the deep learning models, but not on building the model. Overton is responsible for training the model and producing a production-ready binary.
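To make "schema says what, not how" concrete, here is a hypothetical Overton-style schema sketched in Python. The field names are my invention, not Overton's actual format: the point is that it declares inputs, tasks, and supervision sources, and says nothing about architectures or hyperparameters.

```python
# Hypothetical Overton-style schema: declares WHAT to compute, never HOW.
schema = {
    "inputs": {"query": "text"},
    "tasks": {
        "intent": {"type": "classification",
                   "labels": ["weather", "music", "other"]},
        "pos_tags": {"type": "tagging"},  # auxiliary task alongside the main one
    },
    "supervision": ["labeled_queries.tsv", "weak_rules.py"],
    # No mention of LSTM vs. Transformer, hidden sizes, etc. --
    # the system is free to choose and change those on its own.
}
```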
Overton can accept supervision at multiple granularities, and Overton models often perform other tasks alongside the main task, for example part-of-speech tagging.
Overton is often good at training models even with low-quality supervision.
Engineers can define "slices", such as "nutrition-related queries" (to Siri) or "queries with complex disambiguation". The engineer defines what supervision would improve the slice and Overton figures out how to improve the model.
Overton has already powered industry-grade systems at Apple for more than a year and reduced the error rates of those systems, enabled small teams to perform the same duties as several, larger teams, and improved product turn-around times.
|Larry Ellison proclaims Oracle Linux is "autonomous." "It provisions itself, it scales itself, it tunes itself, it does it all while it's running. It patches itself. While it's running." "Once a vulnerability is discovered, we fix it while it's running."|
|Face recognition technology in China beaten by plastic surgery. "Huan said she discovered she had been logged out of the online shopping and payment gateways she used because the secure identification process, backed by facial recognition technology, simply did not know who she was. Huan said her work was also affected as she could no longer sign in and off work by scanning her face. Checking in to hotels and boarding high-speed trains had also become a problem as she had used facial recognition to register on those platforms."|
| "If a fishing vessel had steamed past the area last October, the crew might have glimpsed half a dozen or so 35-foot-long inflatable boats darting through the shallows, and thought little of it. But if crew members had looked closer, they would have seen that no one was aboard: The engine throttle levers were shifting up and down as if controlled by ghosts. The boats were using high-tech gear to sense their surroundings, communicate with one another, and automatically position themselves so, in theory, .50-caliber machine guns that can be strapped to their bows could fire a steady stream of bullets to protect troops landing on a beach."
"The secretive effort -- part of a Marine Corps program called Sea Mob -- was meant to demonstrate that vessels equipped with cutting-edge technology could soon undertake lethal assaults without a direct human hand at the helm. It was successful: Sources familiar with the test described it as a major milestone in the development of a new wave of artificially intelligent weapons systems soon to make their way to the battlefield."
| "When humans face a complex challenge, we create a plan composed of individual, related steps. Often, these plans are formed as natural language sentences."
"Facebook AI has developed a new method of teaching AI to plan effectively, using natural language to break down complex problems into high-level plans and lower-level actions. Our system innovates by using two AI models -- one that gives instructions in natural language and one that interprets and executes them -- and it takes advantage of the structure in natural language in order to address unfamiliar tasks and situations. We've tested our approach using a new real-time strategy game called MiniRTSv2, and found it outperforms AI systems that simply try to directly imitate human gameplay."
"MiniRTSv2 is a streamlined strategy game designed specifically for AI research. In the game, a player commands archers, dragons, and other units in order to defeat an opponent."
"Though MiniRTSv2 is intentionally simpler and easier to learn than commercial games such as DOTA 2 and StarCraft, it still allows for complex strategies that must account for large state and action spaces, imperfect information (areas of the map are hidden when friendly units aren't nearby), and the need to adapt strategies to the opponent's actions."
"We used MiniRTSv2 to train AI agents to first express a high-level strategic plan as natural language instructions and then to act on that plan with the appropriate sequence of low-level actions in the game environment. This approach leverages natural language's built-in benefits for learning to generalize to new tasks. Those include the expressive nature of language -- different combinations of words can represent virtually any concept or action -- as well as its compositional structure, which allows people to combine and rearrange words to create new sentences that others can then understand. We applied these features to the entire process of planning and execution, from the generation of strategy and instructions to the interface that bridges the different parts of the system's hierarchical structure."
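To give a rough sense of the instructor/executor split described above, here is a purely schematic sketch, with rule-based stand-ins where Facebook uses learned models: one function emits a natural-language instruction, the other interprets it into low-level game actions.

```python
def instructor(observation: dict) -> str:
    """High-level model: emits a natural-language instruction.
    (Rule-based stand-in for the learned instructor.)"""
    if observation["enemy_visible"]:
        return "attack the enemy with archers"
    return "build more archers"

def executor(instruction: str, observation: dict) -> list:
    """Low-level model: turns an instruction into game actions.
    (Also a stand-in; the real executor is learned.)"""
    if "attack" in instruction:
        return [("move", unit, observation["enemy_pos"])
                for unit in observation["archers"]]
    return [("train", "archer")]

obs = {"enemy_visible": True, "enemy_pos": (4, 7), "archers": ["a1", "a2"]}
actions = executor(instructor(obs), obs)
```

The interface between the two levels is plain language, which is what lets the approach recombine familiar words into plans for unfamiliar situations.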
|Pinterest claims Pinterest Lens, their visual search technology that lets you snap a photo of a product in the real world and Pinterest will tell you what it is and where you can find it or something just like it, "can identify more than 2.5 billion objects."|
|"10 reasons why PyTorch is the deep learning framework of the future." "1. PyTorch is Pythonic." "2. Easy to learn." "3. Higher developer productivity." "4. Easy debugging." "5. Data Parallelism." "6. Dynamic Computational Graph Support." "7. Hybrid Front-End." "8. Useful Libraries." "9. Open Neural Network Exchange support." "10. Cloud support."|
| "Aristo was tested on 119 questions from the eighth-grade exam and was correct on over 90 percent of them, a remarkable performance. It was also correct on over 83 percent of 12th-grade questions. While the Times reported that Aristo 'passed the test,' the AI2 team noted that the actual tests New York students take include questions that refer to diagrams, as well as 'direct answer' questions, neither of which Aristo was able to handle."
"This is exciting progress, but we must keep in mind that a high score on a particular data set does not always mean that a machine has actually learned the task its human programmers intended. Sometimes the data used to train and test a learning system has subtle statistical patterns -- I'll call these giveaways -- that allow the system to perform well without any real understanding or reasoning."
"For example, one neural-network language model -- similar to the one Aristo uses -- was reported in 2019 to capably determine whether one sentence logically implies another. However, the reason for the high performance was not that the network understood the sentences or their connecting logic; rather, it relied on superficial syntactic properties such as how much the words in one sentence overlapped those in the second sentence."
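The kind of "giveaway" described above is easy to build deliberately: a classifier that judges entailment purely from word overlap, with no understanding at all. A toy illustration (the 0.5 threshold is made up):

```python
def word_overlap(premise: str, hypothesis: str) -> float:
    """Jaccard overlap between the word sets of two sentences."""
    a = set(premise.lower().split())
    b = set(hypothesis.lower().split())
    return len(a & b) / len(a | b)

def naive_entailment(premise: str, hypothesis: str) -> bool:
    # No logic, no understanding: just a surface-similarity threshold.
    return word_overlap(premise, hypothesis) > 0.5

# High overlap fools it in both directions:
naive_entailment("the cat sat on the mat", "the cat sat on the hat")   # True, wrongly
naive_entailment("all dogs bark", "some animals make noise")           # False, wrongly
```

If the test set happens to correlate overlap with entailment, this "model" scores well while understanding nothing, which is exactly the trap with benchmark leaderboards.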
| "A few years ago, Joelle Pineau, a computer science professor at McGill, was helping her students design a new algorithm when they fell into a rut." "Pineau's students hoped to improve on another lab's system. But first they had to rebuild it, and their design, for reasons unknown, was falling short of its promised results. Until, that is, the students tried some 'creative manipulations' that didn't appear in the other lab's paper."
"Lo and behold, the system began performing as advertised. The lucky break was a symptom of a troubling trend, according to Pineau."
"Pineau is trying to change the standards. She's the reproducibility chair for NeurIPS, a premier artificial intelligence conference. Under her watch, the conference now asks researchers to submit a 'reproducibility checklist' including items often omitted from papers, like the number of models trained before the 'best' one was selected, the computing power used, and links to code and datasets. That's a change for a field where prestige rests on leaderboards -- rankings that determine whose system is the 'state of the art' for a particular task -- and offers great incentive to gloss over the tribulations that led to those spectacular results."
| "DeepMind has quietly open sourced three new impressive reinforcement learning frameworks." OpenSpiel, SpriteWorld, and bsuite.
"OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games."
"Spriteworld is a python-based RL environment that consists of a 2-dimensional arena with simple shapes that can be moved freely."
"bsuite is a collection of experiments designed to highlight key aspects of agent scalability."
| Grover's algorithm for quantum computers allows a search of an unordered list of items to be done in time proportional to the square root of the number of items. "Despite the interest, implementing Grover's algorithm has taken time because of the significant technical challenges involved. The first quantum computer capable of implementing it appeared in 1998, but the first scalable version didn't appear until 2017, and even then it worked with only three qubits. So new ways to implement the algorithm are desperately needed."
"Today Stéphane Guillet and colleagues at the University of Toulon in France say this may be easier than anybody expected. They say they have evidence that Grover's search algorithm is a naturally occurring phenomenon. 'We provide the first evidence that under certain conditions, electrons may naturally behave like a Grover search, looking for defects in a material.'" "Free electrons naturally implement the Grover search algorithm when moving across the surface of certain crystals."
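The square-root speed-up mentioned above is easy to see by simulating the algorithm's state vector classically. A numpy sketch (this simulates the math, of course, not actual quantum hardware): roughly (π/4)·√N oracle calls concentrate nearly all probability on the marked item.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Classically simulate Grover's algorithm on a full state vector."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(N))  # ~(pi/4)*sqrt(N) oracle calls
    for _ in range(iterations):
        state[marked] *= -1                   # oracle: flip the marked phase
        state = 2 * state.mean() - state      # diffusion: reflect about the mean
    probs = state ** 2
    return int(np.argmax(probs)), float(probs[marked])

found, p = grover_search(10, 700)  # searches 1024 items in 25 oracle calls
```

A classical search of 1024 unordered items needs ~512 lookups on average; here 25 iterations suffice.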
|Experimental AI in Games (EXAG) conference papers. Each link goes to a research paper on AI in video games.|
| Patents on AI -- have an opinion? The United States Patent and Trademark Office and Department of Commerce are soliciting comments on patenting artificial intelligence inventions. You need to submit your comments by October 11, 2019.
"Inventions that utilize AI, as well as inventions that are developed by AI, have commonly been referred to as 'AI inventions.' What are elements of an AI invention?"
"What are the different ways that a natural person can contribute to conception of an AI invention and be eligible to be a named inventor?"
"Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the conception of an invention?"
"Should an entity or entities other than a natural person, or company to which a natural person assigns an invention, be able to own a patent on the AI invention?"
"Are there any patent eligibility considerations unique to AI inventions?"
"Are there any disclosure-related considerations unique to AI inventions?"
"How can patent applications for AI inventions best comply with the enablement requirement, particularly given the degree of unpredictability of certain AI systems?"
"Does AI impact the level of a person of ordinary skill in the art?"
"Are there any prior art considerations unique to AI inventions?"
"Are there any new forms of intellectual property protections that are needed for AI inventions, such as data protection?"
"Are there any other issues pertinent to patenting AI inventions that we should examine?"
"Are there any relevant policies or practices from other major patent agencies that may help inform USPTO's policies and practices regarding patenting of AI inventions?"
| DeepMind has developed a neural network that acts as a drop-in replacement for Schrödinger's equation approximators in quantum chemistry systems. "Any physicist will tell you that chemistry is just a branch of applied physics. Well to be honest they will tell you that just about any other subject is a branch of applied physics, but in the case of chemistry they are closer to being right. Chemical reactions are basically all about electrons and how they orbit atoms and molecules. The differences in energies control how things react and the orbits of electrons in molecules determine the shape, and hence the properties, of the substance."
"In principle then, chemistry is easy. All you have to do is write down the Schrödinger equation for the reactants and solve it. In practice, this isn't possible because multi-body Schrödinger equations are very difficult to solve. In fact, the only atom we can solve exactly is the Hydrogen atom with one proton and one electron. All other atoms are solved by approximations called perturbation techniques. As for molecules -- well we really don't get off the starting blocks and quantum chemists have spent decades trying to perfect approximations that are fast to compute and give accurate results. While progress has been impressive, many practical calculations are still out of reach and in these situations chemistry reverts to guesswork and intuition."
"Neural networks can be thought of as function approximators. That is, you give a neural network a function to learn and it will. You present the inputs, the x, and then train it to produce f(x), where x in this case stands in for a lot of different variables. The idea is that it learns to produce the known values of f(x), i.e. the ones you give it as the training set, but it also manages to produce good results when you give it an x it has never seen."
"What exactly the neural network does is find an approximate multi-electron wave function that satisfies Fermi-Dirac statistics, i.e. the wave function is antisymmetric. All that is needed are the initial electron configurations -- how many electrons there are, and so on -- and the neural network will then output the wave function for any configuration. The network was trained to find the approximation to the ground state wave function by minimizing its energy state. The ground state is by definition the state with the lowest energy, and the network varied the parameters until it reached a minimum."
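The function-approximation idea in the passage above can be shown with a toy sketch: a one-hidden-layer network trained by gradient descent to fit f(x) = x², then queried at an x it never saw. (Numpy only; this has nothing to do with DeepMind's actual wave-function network.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs x and the known values f(x) we want learned.
x = np.linspace(-1, 1, 64).reshape(-1, 1)
y = x ** 2

# One hidden layer of 16 tanh units.
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backward pass: mean-squared-error gradients.
    dpred = 2 * err / len(x)
    dW2 = h.T @ dpred; db2 = dpred.sum(0)
    dh = dpred @ W2.T * (1 - h ** 2)
    dW1 = x.T @ dh; db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The net now approximates f at a point it never saw during training.
x_new = np.array([[0.37]])
approx = float(np.tanh(x_new @ W1 + b1) @ W2 + b2)
```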
| The size of the proton is 0.833 femtometers. So there.
"The proton's radius was not trivial to chase down. To deduce its value, Eric Hessels of York University in Toronto and colleagues had to measure the Lamb shift: the difference between hydrogen's first and second excited energy levels, called the 2S and 2P states."
"By firing a laser into a cloud of hydrogen gas, Hessels and his team caused electrons to jump from the 2S state to the 2P state, where the electron never overlaps the proton. Pinpointing the energy required for the electron to make this jump revealed how weakly bound it was in the 2S state, when residing partly inside the proton. This directly revealed the proton's size."
|Yann LeCun, AI Chief Scientist at Facebook, interviewed by Lex Fridman (MIT). Topics discussed: HAL 9000 as a value-misalignment problem. The legal system as the designed objective function for humans. The fact that you can build gigantic neural nets, train them on relatively small amounts of data with stochastic gradient descent, and have it actually work breaks everything you read in every textbook -- every pre-deep-learning textbook. (You need to have fewer parameters than you have data samples; if you have a non-convex objective function you have no guarantee of convergence; all those things textbooks tell you to stay away from -- they are all wrong.) It was obvious to him that it would work (because of how the brain works) before he read those textbooks. What is learning? Discrete mathematics -- the math you do in computer science -- is incompatible with learning. Machine learning is "the science of sloppiness". Different types of memory in the brain and different types of reasoning. Humans are not very good at causal reasoning. How, in his early work, he could not distribute code because lawyers would not allow it. Convolutional neural networks used to be patented; today the industry does not believe in patents, though they are still filed for legal reasons. He dislikes the term "artificial general intelligence" (AGI) because he thinks it implies human intelligence is general, when human intelligence is actually very specialized. Supervised vs. unsupervised vs. self-supervised learning. Self-supervised learning means learning to reconstruct a part of the input that has been masked out. Getting machines to give not a single output but a whole set of outputs. Machines don't learn like humans and animals: the best current reinforcement learning algorithms would have to drive a car off a cliff hundreds of times before learning not to do that, whereas humans and animals learn models of the world.
We learn "gravity" as small babies (around 8 or 9 months) and don't need to drive cars off cliffs as adults to know what will happen. Sophia the robot vs. the movie "Her". Common sense is impossible from language alone; it requires interaction with the real world to understand how it works. Predicting future emotional states (basal ganglia). The impossibility of intelligence without emotion.|
| John Carmack, legendary game designer (Wolfenstein, Doom, Quake, etc) went on Joe Rogan's podcast. He said he likes Beatsaber because your actions in reality are exactly the same as in the game, because you're swinging light sabers. With most games, you feel like a mime, because of the lack of haptic feedback.
Further topics discussed include: the challenges of making a VR system that is immersive and doesn't make people feel sick or disoriented. People doing mundane things in VR like watching Netflix. The decision to open source Doom. E-sports bigger than UFC. Companies having to be more conservative with modern games because they cost so much. What kind of headset people could wear all day or a lot, and what augmented reality systems like Microsoft Hololens are actually useful for. Neuralink and brain-computer interface (BCI) technologies and their usefulness for disabled people vs. games. Artificial general intelligence (AGI). Moore's law and the economics of semiconductors. Working 13-hour days: toxic culture vs. people being obsessed and wanting to work extremely hard. Staying in programming vs. management. The "physicality" of time: trying to do things fast (SpaceX) vs. taking your time (Blue Origin). Hobbies such as turbocharging Ferrari cars (but now he has a Tesla). Rockets. Making the world better by using virtual reality to give people the world they can't have in the real world due to lack of resources.
| "Alex Zhavoronkov, CEO of Insilico Medicine, a startup that generates potential drugs using artificial intelligence, was recently given a challenge by one of his pharma company partners. His team would see how quickly Insilico's AI could identify new molecules that bind with a protein associated with tissue scarring. Then they'd put the molecules to the test, synthesizing a few of them in the lab to see if the AI was onto something, or only dreaming."
"The team, along with collaborators at the University of Toronto, took 21 days to generate 30,000 designs for molecules targeting a protein involved in fibrosis. They synthesized six in the lab, of which four showed potential promise in initial tests. Two were then tested in cells, and the most promising one in mice. The team found their AI-generated molecule was both potent against the targeted protein and also displayed qualities that could be considered 'drug-like.'"
| "ElasticDL is a Kubernetes-native deep learning framework built on top of TensorFlow 2.0 that supports fault-tolerance and elastic scheduling."
"TensorFlow has its native distributed computing feature that is fault-recoverable. In the case that some processes fail, the distributed computing job would fail; however, we can restart the job and recover its status from the most recent checkpoint files."
"ElasticDL, as an enhancement of TensorFlow's distributed training feature, supports fault-tolerance. In the case that some processes fail, the job would go on running. Therefore, ElasticDL doesn't need to checkpoint nor recover from checkpoints."
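The checkpoint-restart pattern that ElasticDL makes unnecessary looks roughly like this -- a generic sketch in plain Python, not TensorFlow's actual checkpointing API: the training loop periodically saves its state, and after a crash, a restarted job resumes from the most recent checkpoint instead of from scratch.

```python
import json
import os
import tempfile

CKPT = os.path.join(tempfile.gettempdir(), "train_ckpt.json")

def save_checkpoint(step, weights):
    """Persist training state so a restarted job can resume."""
    with open(CKPT, "w") as f:
        json.dump({"step": step, "weights": weights}, f)

def load_checkpoint():
    """Resume from the most recent checkpoint, or start fresh."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)
    return {"step": 0, "weights": [0.0]}

def train(total_steps=100, save_every=10):
    state = load_checkpoint()
    step, weights = state["step"], state["weights"]
    while step < total_steps:
        weights = [w + 0.01 for w in weights]  # stand-in for a real update
        step += 1
        if step % save_every == 0:
            save_checkpoint(step, weights)     # survives a process failure
    return step, weights
```

With fault tolerance built in, as in ElasticDL, surviving processes simply keep going, so neither the checkpointing nor the restart is needed.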
| ChocoPy is a restricted subset of Python 3, which can easily be compiled to RISC-V, designed for classroom use in undergraduate compilers courses. "The language is fully specified using formal grammar, typing rules, and operational semantics. ChocoPy is used to teach CS 164 at UC Berkeley."
"Familiar: ChocoPy programs can be executed directly in a Python (3.6+) interpreter. ChocoPy programs can also be edited using standard Python syntax highlighting. Safe: ChocoPy uses Python 3.6 type annotations to enforce static type checking. The type system supports nominal subtyping. Concise: A full compiler for ChocoPy can be implemented in about 12 weeks by undergraduate students of computer science. This can be a hugely rewarding exercise for students. Expressive: One can write non-trivial ChocoPy programs using lists, classes, and nested functions. Such language features also lead to interesting implications for compiler design. Bonus: Due to static type safety and ahead-of-time compilation, most student implementations outperform the reference Python implementation on non-trivial benchmarks."
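To give a flavor of the language, here is a toy program of my own in ChocoPy's style (not taken from the ChocoPy spec): lists, a nested function, and a type annotation on every variable and signature, which is also why it runs unmodified in a Python 3.6+ interpreter.

```python
def sum_squares(xs: [int]) -> int:
    def square(n: int) -> int:
        return n * n
    total: int = 0
    i: int = 0
    while i < len(xs):
        total = total + square(xs[i])
        i = i + 1
    return total

print(sum_squares([1, 2, 3]))  # 14
```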
| AI systems for interpreting video, such as for self-driving cars, are trained using data sets labeled with the correct answers (supervised learning), but this doesn't scale because of the meticulous labor involved in labeling the data.
Researchers at Google are hoping to circumvent this problem by correlating the actions in videos with the human speech in the videos.
The videos they are using for this are instructional videos, such as cooking, gardening, and vehicle repair.
They break the video into 1.5-second segments called "video tokens"; to verify the system, they give it text and have it predict the video tokens that come next.
|The vastness of our cosmos provides us with an endless source of mystery and intrigue. How lucky are we that the universe is not simple to understand and cannot be entirely known, for what a boring reality that would be? Poetic musings inspired by Carl Sagan.|
|AI has mastered 6-player poker. This is the paper for the Pluribus system. I posted a press release and a description based on a preprint back in July.|
|Deepfake Detection Challenge. Facebook is commissioning a database of deepfakes to use for an ongoing initiative to develop deepfake detection technology. Report includes some people seeking to develop counter-deepfake technology at universities and DARPA.|
|ALPHRED 2 can use its four symmetrical limbs to hop, walk, and run, or it can use one of its limbs as an arm for knocking on doors, or use two of them for picking up cardboard boxes.|
|The 1-pixel adversarial attack. Discussion of the premise that adversarial attacks can be exploited to make more robust models.|
| "North Dakota authorities relying on DNA collected from a cigarette butt have charged a man with engaging in a riot for his involvement in a Dakota Access pipeline protest three years ago."
"The charges relate to a Sept. 6, 2016, protest on the Standing Rock Indian Reservation. An affidavit says more than 100 demonstrators, many with their faces covered, halted construction and vandalized equipment."
|Google's code review guidelines have been open sourced.|
| "Paul Hildreth peered at a display of dozens of images from security cameras surveying his Atlanta school district and settled on one showing a woman in a bright yellow shirt walking a hallway."
"A mouse click instructed the artificial-intelligence-equipped system to find other images of the woman, and it immediately stitched them into a video narrative of her immediate location, where she had been and where she was going."
"There was no threat, but Hildreth's demonstration showed what's possible with AI-powered cameras. If a gunman were in one of his schools, the cameras could quickly identify the shooter's location and movements, allowing police to end the threat as soon as possible, said Hildreth, emergency operations coordinator for Fulton County Schools."
|The National Reconnaissance Office (NRO) is working on an AI system designed to remove human decisions from strategic defense. It's called Skyne... er, Sentient. It seeks to be self-aware... of available system assets and status, system performance, and capabilities. It seeks to be mission-aware, with the ability to apply priorities, historical knowledge, signatures, and patterns.|
| "It is 1986, you are behind the wheel of a Mercedes truck that cost 749 million euros to build. The truck has driven itself from Munich to Copenhagen and back and you only had to touch that wheel a few times in the whole trip. You are convinced that autonomous cars are a solved problem."
"A team of talented engineers come in with an autonomous driving business, the potential market is huge, your friends have been investing and you fear missing out the venture of the century. You decide to invest."
"It's been 30 years, 200 companies, 150 billion dollars invested, 0 autonomous cars."
| "It seems pretty clear to me by now that GPT-2 is not as dangerous as OpenAI thought (or claimed to think) it might be."
"The model is good at all sorts of interesting things, but arguably least good at the things required for disinformation applications and other bad stuff."
"It is best at the smallest-scale aspects of text -- it's unsettlingly good at style, and I frequently see it produce what I'd call 'good writing' on a phrase-by-phrase, sentence-by-sentence level. It is less good at larger-scale structure, like maintaining a consistent topic or (especially) making a structured argument with sub-parts larger than a few sentences."
"For example, when generating fiction-like prose, it will frequently fail to track which characters are in a given scene (e.g. character A has some dialogue yet character B refers to them as if they're not in the room), and has a shaky grasp of dialogue turn conventions (e.g. having the same character speak twice on successive lines). In nonfiction-like prose, it tends to maintain a 'topic' via repeating a set of key phrases, but will often make wildly divergent or contradictory assertions about the topic without noting the discontinuity."
| "The managing director of a British energy company, believing his boss was on the phone, followed orders one Friday afternoon in March to wire more than $240,000 to an account in Hungary, said representatives from the French insurance giant Euler Hermes, which declined to name the company."
"The request was 'rather strange,' the director noted later in an email, but the voice was so lifelike that he felt he had no choice but to comply."
|Micro-robot swarm for dental plaque.|
| "I took control of the giant robot hand Jeff Bezos called 'weirdly natural' -- and he was right." The robot hands are controlled remotely by a haptic-feedback glove, which means "not only do the robotic hands mirror the person's movements, they also relay the sensation of touch back to them."
"I was amazed at how quickly I got used to the hands, and taking the hoops off the stick was disarmingly easy. The dexterity and delicacy with which the hand was able to mirror my movements was amazing. When the time came to replace the hoops, the hand started to feel decidedly less dexterous."
|"Meet GammaChef: Your smart robotic cooker." They say, "GammaChef is robotic chef that is capable of preparing any one pot meal." Can cook risottos, pastas, and stew. Works by having bar-coded ingredients containers, both refrigerated and non-refrigerated, a water tank, a pot for mixing things, a robotic stirrer, an induction hot plate, and a touch screen.|
| A GPT-2 text adventure game. In an ordinary text adventure game, if it said:
You stand in front of the door.
and you said:
>open the door
it would respond with something like:
The door opens to reveal a cluttered office.
"GPT-2 is a predictive text model, which just means that it tries to predict what comes next after some text that you enter. That means if you give it some text, it will try to predict what happens next, based on its training data. Let the user choose their next action based on the response, and you have the makings of a text adventure game. This makes for a playable, if thoroughly weird, text adventure."
You are standing in a grassy field with a path along the stream, and a small crack across the floor at the bottom of a small pit.
The path heads north-south here, and appears to be a straight path.
Road near bridge.
Cross the bridge.
Cross the bridge and you will fall into a pit.
>fine, go into the pit
Fine, go into the pit.
Look around, and strange shapes (apparently made by an artist who never was in the room) are seen all over.
>talk to the shapes
Argue with the shapek fan, which appears to be hands-free. The cheer goes up! The heavens are now wide enough to allow for singing.
I'm scared that you have done something deliberately.
>get out of the pit
You are in the pit, so to speak, below the ramparts.
There is a narrow passage from east to west and a stone stairway leading down.
The room is deafeningly loud with an undetermined rushing sound.
The sound seems to reverberate from all of the walls, making it difficult even to think.
You are in the pit.
>go down the stairway
Go down the stairway is barred by evil spirits, who by some miracle of Zorkian technology have stopped the flow of the river.
>kill the spirits
The spirits, sensing a greater power, flee through the walls.
>go down the stairway
The pit is too narrow for most insects.
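The game loop behind a transcript like the one above is tiny; everything interesting lives in the model. A sketch with the model stubbed out -- a real version would sample a continuation from GPT-2 (e.g. via the `transformers` library), but `predict_continuation` here is a canned stand-in so the structure is visible:

```python
def predict_continuation(story: str) -> str:
    """Stand-in for GPT-2: returns a canned continuation.
    A real version would sample from the language model instead."""
    return "The door opens to reveal a cluttered office.\n"

def play_turn(story: str, command: str) -> str:
    # Append the player's command to the transcript, then let the model
    # "predict what comes next" -- that prediction is the game's reply.
    story = story + ">" + command + "\n"
    story = story + predict_continuation(story)
    return story

story = "You stand in front of the door.\n"
story = play_turn(story, "open the door")
```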
|When entering the PIN on the Ledger Blue cryptocurrency wallet, each button press creates a significant electromagnetic signal around the 169 MHz spectrum. "The idea was born to use TensorFlow/machine learning to automatically analyze these signals and using it to retrieve the PIN entered into the device - out of thin air!"|
|Play Street Fighter with body movements using Arduino and Tensorflow.js.|
|rlpyt (Reinforcement Learning + PyTorch) is a library of "modular, optimized implementations of common deep reinforcement learning algorithms in PyTorch, with unified infrastructure supporting all three major families of model-free algorithms: policy gradient, deep-q learning, and q-function policy gradient" from Berkeley Artificial Intelligence Research.|
| Neural Structured Learning is a new framework for TensorFlow for structured data. Essentially what it does is interpret the input as a graph and use neural graph learning on it.
If no explicit structure is given, it has tools to construct graphs, and even, they say, an "adversarial" system to "induce" implicit structured signals.
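Graph construction here typically means linking examples whose embeddings are similar. A conceptual numpy sketch of similarity-based graph building (not necessarily how NSL's actual graph tools work):

```python
import numpy as np

def build_similarity_graph(embeddings, threshold=0.8):
    """Connect pairs of examples whose cosine similarity exceeds threshold."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T  # all-pairs cosine similarity
    edges = []
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= threshold:
                edges.append((i, j, float(sim[i, j])))
    return edges

emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
edges = build_similarity_graph(emb)  # links only the two similar examples
```

The resulting edges then act as the "structured signal": training encourages neighboring examples to get similar predictions.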
|AI learns to park. Well, after attempting hundreds of thousands of times. (In simulation, obviously.) And still doesn't do super great.|
|"Does GDPR protect the privacy of your emotions?" Specifically as ascertained by AI systems that detect emotional expressions in facial scans. Carrington Malin thinks it does.|
|Imperial College London is offering an online Master's in Machine Learning (via Coursera).|
| "The European Space Agency (ESA) yesterday took action to avoid a collision with a SpaceX broadband satellite after a bug in SpaceX's on-call paging system prevented the company from getting a crucial update."
"For the first time ever, ESA has performed a 'collision avoidance maneuver' to protect one of its satellites from colliding with a 'mega constellation.'" "The 'mega constellation' ESA referred to is SpaceX's Starlink broadband system, which is in the early stages of deployment but could eventually include nearly 12,000 satellites."
"Action had to be taken because the ESA's Aeolus satellite and a Starlink satellite were on a course that carried more than a 1-in-10,000 chance of a collision."
| "I helped build a robot that teaches sign language to children with Autism." "Design guidelines for a robot to be used with autistic children: Simple form, consistent, structured, simple behaviour, positive, supportive, rewarding experience and environment, modular complexity, modularity specific to child's preferences."
"All our modifications followed the design guidelines: for example, we changed the robot's human-like voice to robotic, to give it a 'simple form' (guideline 1). To make the InMoov a better sign language teacher, we made a few big adjustments. We gave it new Ada hands, designed by Open Bionics, and built by Metropolia University of Applied Sciences. We also embedded a screen into its chest, and lights onto its arms. The screen was added to provide another mode of communication (photographs are often used in AAC), and lights were added to capture the child's attention."
"I was surprised by how differently each child interacted with the robot. A few of the children signed with near-perfect accuracy throughout the entire experiment, imitating all the robot's signs in a mere 6 minutes. Some took as long as 28 minutes, struggling with each sign. One particular child -- who was not too keen on signing -- could not stop laughing at the robot. The child kept attempting to either hug or lunge at the robot throughout the experiment, with the speech therapist and neuropsychologist lunging after him to stop him in time."
| "A magnetically steerable, thread-like robot that can actively glide through narrow, winding pathways, such as the labyrinthine vasculature of the brain."
"Over the past few years, the team has built up expertise in both hydrogels -- biocompatible materials made mostly of water -- and 3D-printed magnetically-actuated materials that can be designed to crawl, jump, and even catch a ball, simply by following the direction of a magnet."
"In this new paper, the researchers combined their work in hydrogels and in magnetic actuation, to produce a magnetically steerable, hydrogel-coated robotic thread, or guidewire, which they were able to make thin enough to magnetically guide through a life-size silicone replica of the brain's blood vessels."
"The core of the robotic thread is made from nickel-titanium alloy, or 'nitinol,' a material that is both bendy and springy." "The team coated the wire's core in a rubbery paste, or ink, which they embedded throughout with magnetic particles."
"Finally, they used a chemical process they developed previously, to coat and bond the magnetic covering with hydrogel -- a material that does not affect the responsiveness of the underlying magnetic particles and yet provides the wire with a smooth, friction-free, biocompatible surface."
More precisely, they made ferromagnetic particles from neodymium-iron-boron alloy (NdFeB), which can be magnetized and doesn't lose its induced magnetization, coated them in silica (SiO2), and used them to make a magnetized "tip" in a "thread" composed of either silicone (polydimethylsiloxane aka PDMS) or thermoplastic polyurethane (TPU). This was all done with injection molding. After that, the whole thing was coated in hydrogel (hydrophilic polydimethylacrylamide (PDMAA) polymers) that makes it self-lubricating and safe for biological systems. The purpose of the silica is to prevent the water in the hydrogel from corroding the ferromagnetic alloy. Once complete, magnetic fields of 20 to 80 milliteslas (mT) are used to steer it. The "thread" still has to be pushed from behind, though.
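The steering physics is just the torque on a magnetic dipole, tau = m x B: the field doesn't pull the tip along, it twists it into alignment. A back-of-envelope calculation (the moment magnitude below is an illustrative guess, not a figure from the paper):

```python
import numpy as np

# Torque on the magnetized tip: tau = m x B.
m = np.array([1e-4, 0.0, 0.0])   # magnetic moment (A*m^2) -- assumed value
B = np.array([0.0, 50e-3, 0.0])  # 50 mT field, mid-range of the 20-80 mT cited
tau = np.cross(m, B)             # torque vector (N*m)
# The tip bends to align m with B; the torque vanishes once they're aligned,
# which is why the thread still needs to be pushed from behind to advance.
```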
| "Our brains are perhaps the final privacy frontier. They're the seat of our personal identity and our most intimate thoughts. If those precious three pounds of goo in our craniums aren't ours to control, what is?"
"Facebook took care to note that all brain data in the study will stay onsite at the university."
"Facebook is already great at peering into your brain without any need for electrodes or fMRI or anything."
"One of the major problems with algorithmic decision-making systems is that as they grow in sophistication, they can become black boxes."
"Another big risk is that this neurotechnology might normalize a culture of mind-reading."
"Rubbing out the distinction between mind and machine also comes with more philosophical risks, like the risk that we might feel alienated from ourselves."
"It'll take time for politicians and lawmakers to catch up to the new realities that brain-reading tech makes possible."
| "We focus on another type of reward hacking called reward tampering. In reward tampering, the agent doesn't exploit a misspecified reward function. Instead, it actively changes the reward function. For example, some Super Mario environments have a bug that allows execution of arbitrary code by taking the right sequence of in-game actions. This could in principle be used to redefine the score of the game."
"While this type of hacking is beyond the capabilities of current RL agents in most environments, the general quest to build more capable agents may eventually lead us to build agents that can exploit such shortcuts. Understanding reward tampering therefore ties in well with our safety work on anticipating future failure modes and figuring out how to prevent them before they occur."
"We describe a more principled way to fix the reward tampering problem. Rather than trying to protect the reward function, we change the agent's incentives for tampering with it."
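A toy sketch of that incentive change (hypothetical names and a deliberately trivial environment; the post describes the idea abstractly): an agent that evaluates planned trajectories with its *current* reward function, rather than with whatever reward function the trajectory would install, no longer profits from tampering.

```python
def rollout_value(actions, reward_fn, eval_with_current_rf):
    """Score a sequence of actions. 'tamper' swaps in a corrupted reward
    function paying 10 per step; 'work' pays 1 under the true one."""
    current_rf = reward_fn   # the reward function the agent plans with
    active_rf = reward_fn    # the one actually installed in the environment
    total = 0.0
    for a in actions:
        if a == "tamper":
            active_rf = lambda act: 10.0  # corrupted reward function
        # A current-RF agent scores outcomes with the reward function it
        # started planning with, removing the incentive to tamper.
        rf = current_rf if eval_with_current_rf else active_rf
        total += rf(a)
    return total

true_rf = lambda act: 1.0 if act == "work" else 0.0
```

A naive agent prefers the tampering trajectory (it anticipates the inflated rewards), while the current-RF agent scores tampering with the untampered function and prefers honest work.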
|"Our data shows that impactful Chinese investment in AI research pre-dates their 2017 announcement regarding AI supremacy by more than a decade. By most measures, China is overtaking the US not just in papers submitted and published, but also in the production of high-impact papers as measured by the top 50%, top 10%, and top 1% most-cited papers. By projecting current trends, we see that China is likely to have more top-10% papers by 2020 and more top-1% papers by 2025."|
| "Deep learning can't progress with IEEE-754 floating point. Here's why Google, Microsoft, and Intel are leaving it behind." "The de facto standard for floating point is IEEE-754. It's available in all processors sold by Intel, AMD, IBM, and NVIDIA. But as the deep learning renaissance blossomed researchers quickly realized that IEEE-754 would be a major constraint limiting the progress they could make. IEEE floating point was designed 30 years ago when processing was expensive, and memory access was cheap. The current technology stack is reversed: memory access is expensive, and processing is cheap. And deep learning is memory bound."
"Google developed the first version of its Deep Learning accelerator in 2014, which delivered two orders of magnitude more performance than the NVIDIA processors that were used prior, simply by abandoning IEEE-754. Subsequent versions have incorporated a new floating-point format, called bfloat16, optimized for deep learning to further their lead."
"Now, even Intel is abandoning IEEE-754 floating point for deep learning. Its Cooper Lake Xeon processor, for example, offers Google's bfloat16 format for deep learning acceleration. Thus, it comes as no surprise that competitors in the AI race are all following suit and replacing IEEE-754 floating point with their own custom number systems. And researchers are demonstrating that other number systems, such as posits and Facebook's DeepFloat, can even improve on Google's bfloat16."
| Researchers "have used AI to mask the emotional cues in users' voices when they're speaking to internet-connected voice assistants. The idea is to put a 'layer' between the user and the cloud their data is uploaded to by automatically converting emotional speech into 'normal' speech."
"Our voices can reveal our confidence and stress levels, physical condition, age, gender, and personal traits. This isn't lost on smart speaker makers, and companies such as Amazon are always working to improve the emotion-detecting abilities of AI."
"An accurate emotion-detecting AI could pin down people's 'personal preferences, and emotional states and may therefore significantly compromise their privacy."
"Their method for masking emotion involves collecting speech, analyzing it, and extracting emotional features from the raw signal. Next, an AI program trains on this signal and replaces the emotional indicators in speech, flattening them. Finally, a voice synthesizer re-generates the normalized speech using the AI's outputs, which gets sent to the cloud. The researchers say that this method reduced emotional identification by 96 percent in an experiment, although speech recognition accuracy decreased, with a word error rate of 35 percent."
The article doesn't go into any more detail as to how the system works, but it's based on generative adversarial networks (GANs), more precisely CycleGAN-VC2. It first does a "spectral conversion" where the input is converted into sound frequencies. The spectral features are then mapped using CycleGAN from utterances spoken in emotional ways to corresponding features of emotionless speech. Then the spectral features are converted back into sound waveforms.
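A skeletal numpy version of that analyze-map-resynthesize pipeline (the real system's mapper is the trained CycleGAN-VC2; here it's an identity placeholder, and the framing is simplified to non-overlapping rectangular windows):

```python
import numpy as np

def mask_emotion(signal, frame=256, mapper=lambda mag: mag):
    """Sketch of the pipeline: convert speech frames to spectral features,
    map them (a trained CycleGAN in the real system; an identity
    placeholder here), then synthesize a waveform back."""
    n = len(signal) // frame * frame
    out = np.zeros(n)
    for start in range(0, n, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mag, phase = np.abs(spec), np.angle(spec)
        mag = mapper(mag)  # emotion-flattening feature mapping goes here
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * phase), frame)
    return out
```

With the identity mapper the round trip reconstructs the input, which makes the structure easy to check; the privacy win (and the 35% word error rate) comes entirely from what the learned mapper does to the magnitudes.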
|Computer screens (and color LED lights) don't actually make all the colors. They emit only red, green, and blue, and exploit the fact that the cells in your eyes can only detect specific wavelengths, and your brain uses that information to guess what the color is in the real world. I've noticed this from "rainbow glasses", those glasses that have diffraction gratings in them that turn light sources into rainbows. Well, they turn incandescent lights into rainbows. For LEDs, though, they turn them into 3 spots -- red, green, and blue. As this video demonstrates, when using RGB as a light source rather than just as a camera, this has implications that you may not have thought through.|
|Gravitational waves don't combine linearly, like light waves, near colliding black holes, because Einstein's theory of relativity is non-linear. And other facts about black hole collisions. Compressibility of spacetime. Conversion of matter to energy in black hole collisions. Michael Landry of LIGO Hanford interviewed by Dianna Cowern aka "Physics Girl".|
| A neural network was trained to classify photos of butterflies (subspecies of Heliconius). The training used triplets of images in which two photos came from the same subspecies and the third from a different one. The network learned to map each photo to a set of coordinates (an embedding) such that photos of the same subspecies end up close together and photos of different subspecies end up farther apart, so distances between coordinates quantify visual similarity.
The title, though, is "AI used to test evolution's oldest mathematical model", so you might be wondering what mathematical model this tests. That would be Müllerian mimicry theory. The theory is that species with common predators who have something to deter those predators, such as being poisonous or having sharp spines, will develop honest warning signals in parallel for their mutual benefit.
"We wanted to test Müller's theory in the real world: did these species converge on each other's wing patterns and if so how much? We haven't been able to test mimicry across this evolutionary system before because of the difficulty in quantifying how similar two butterflies are."
"Heliconius butterflies are well-known mimics, and are considered a classic example of Müllerian mimicry. They are widespread across tropical and sub-tropical areas in the Americas. There are more than 30 different recognisable pattern types within the two species that the study focused on, and each pattern type contains a pair of mimic subspecies."
"We found that these butterfly species borrow from each other, which validates Müller's hypothesis of mutual co-evolution. In fact, the convergence is so strong that mimics from different species are more similar than members of the same species."
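The triplet training described above can be sketched as a standard triplet loss (illustrative numpy, not the authors' code): each training step nudges the embedding so the same-subspecies pair sits closer than the odd one out by some margin.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet objective: pull the same-subspecies pair together and push
    the odd-one-out at least `margin` farther away in embedding space."""
    d_pos = np.sum((anchor - positive) ** 2)  # same subspecies
    d_neg = np.sum((anchor - negative) ** 2)  # different subspecies
    return max(0.0, d_pos - d_neg + margin)
```

Once trained this way, the distance between two butterflies' embeddings is exactly the similarity measure the researchers needed to quantify mimicry.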
| "The debate can finally be put to rest -- Lil Nas X's record-setting, chart-topping hit 'Old Town Road' is indeed country. But it's also a little rock 'n roll. And when you analyze the lyrics and chords together, it's straight-up pop."
"At least, that's according to an artificial intelligence tool developed by USC computer science PhD student Timothy Greer. Greer's method automatically predicts music genres by analyzing how lyrics and chords interact with one another throughout songs."
"The method classified 'Old Town Road' as country according to the lyrics; rock according to the chords (based on a Nine Inch Nails music sample); and pop according to the chords and lyrics combined."
|Bill Gates documentary.|
|Deep dive into deepfakes. RogueRocket News talks with deepfake producers Ctrl Shift Face, TheFakening, and DrFakenstein, as well as people developing deepfake detection technology at SRI.|
| Jeremy Howard of fast.ai interview. He starts off by saying one of the first programs he wrote was a program to find a better musical scale than the regular 12-tone musical scale (it's actually 11 tones -- the 12th tone is the one you started with an octave up). I actually wrote that program, too! (But not in BASIC on a Commodore 64.) In fact my interactive algorithmic music composition program gives you a choice of scales, including a 19-note scale that was derived by that program.
He loved Microsoft Access, because it combined a relational database with a programming environment, and since then it's become harder to program relational databases. I thought that was interesting because I tend to think technology always gets better, usually fast, but sometimes it gets worse. Delphi is a good programming environment. J is a good programming language. What's J? An array-oriented language, like numpy. He's very critical of languages, thinking we're using terrible languages. Not happy with Python. Hopeful Swift will be the next language in the AI domain. Swift is becoming an "infinitely hackable" language, which could make it the ideal language to make every other domain-specific language in. (I have to disagree with him on this -- I think domain-specific languages are a really bad idea -- they make code from every programmer unreadable to every other programmer, because everyone has to learn that programmer's "domain-specific language" first. The vast majority of computation people ask computers to do is pretty ordinary and doesn't justify a "domain-specific language". Having said that, he has a point that at this point in history, it may be justified in letting people experiment with inventing languages specifically for tensor computation, since the current languages are inadequate for things like sparse neural networks.)
fast.ai came out of an idea to use deep learning to do something about the worldwide doctor shortage. That was the impetus for his previous company Enlitic. But he realized deep learning was becoming the state of the art in every domain. To maximize the positive impact of deep learning, instead of him picking an area and trying to become good at it, like medicine, he should enable the domain experts who have already specialized in that area, and who already have the data, to learn deep learning. So the purpose of fast.ai is to figure out how to get deep learning into the hands of people who could benefit from it and help them to do so in as quick and easy and effective way as possible.
Researchers all work on the same thing because their need to publish means they have to work on things their peers will recognize as an advance. As a result, practical things useful in the real world, like transfer learning and "active" learning (human-in-the-loop learning), go under-researched.
fast.ai won DAWNbench, a competition to see how fast you could train a model, by using higher learning rates (in the gradient descent math) than anyone thought possible. He does everything on a single GPU that a normal person can use in everyday life, to prove you can do deep learning without being Google or Intel. Different parts of models need to be trained at different rates, and this is a useful area of research.
You don't need ImageNet anyway, most of us don't use 1.3 million images, we use smaller subsets of images. He's for developing techniques that can train on small datasets. One of fast.ai's students developed the world's state-of-the-art movie colorizer which can colorize a whole movie in a couple of hours on a single GPU. He thinks Google's "AutoML" approach is insane, the right approach is to practice training models so you gain intuition as to what will work.
TensorFlow is difficult for beginners, because you have to make a computation graph up front. With PyTorch you don't, but you still have to code training loops and such. So he made the fast.ai library, which is actually a set of libraries with progressively low-level control as you go down, and at the top level you can make a neural network in 3 lines of code. He's actively working on a Swift version of the fast.ai library.
The best advice for someone starting now is to use Python, PyTorch, and the fast.ai library. People with strong coding skills learn the fastest, but people with a strong traditional statistics background struggle the most, because everything is different from what they're used to and counter-intuitive. Tenacity is important.
On doing startups, he says do something you understand and care about, and it has to be something practical, not a PhD thesis. Stay away from venture capital money as long as possible, hopefully forever. You don't want people on your back saying, "grow, grow, grow, grow." If you're self-funded, you can go at a pace that makes sense.
He uses spaced repetition (Anki) for learning Chinese, but not for anything else.
Asked when computers will reach human-level intelligence, he says he doesn't know why people make predictions, as there is no data, nothing to go on. There are societally important problems to solve right now. AI might create a labor force displacement problem.
|"In China, foreign AI companies banned or disadvantaged." "One of China's largest AI providers, Megvii, disclosed in its IPO filing that: Foreign-owned entities are prohibited or disadvantaged in the relevant City IoT project bidding process in practice. In practice, when selecting service providers, many end users, as well as many direct customers (which are our system integrators) engaged by such end users to assist them in the supplier selection process, would set implicit requirements that the service provider must not have any foreign shareholder, or at least consider foreign ownership as a disadvantage in their decision making process. Some government agencies even explicitly set forth such requirements in their project bidding invitation documents."|
| Coursera just did its first acquisition, Rhyme Softworks. "As we expand our hands-on learning capabilities with Coursera Labs, we're thrilled to announce our acquisition of Rhyme Softworks, an online platform for hands-on projects. With Rhyme's virtual machines, beginner to intermediate-level learners can follow along with self-paced or live guided sessions while simultaneously completing a project or assignment -- all from one browser using pre-configured Windows or Linux cloud desktops. Rhyme truly embraces the concept of 'learn by doing' and, moving forward, we see big opportunities to use Rhyme to extend the capabilities we'll offer with Coursera Labs. With this acquisition, we will also expand Rhyme's office in Sofia, Bulgaria to focus on Labs-powered innovation efforts."
Rhyme Softworks makes a system that lets you learn by actively working on what is essentially a virtual machine in the cloud, accessed through a web browser and preloaded with all the software and data you need for your project, while simultaneously watching instructional videos; your instructor can monitor your progress in real time and send you messages.
|Crime... in... spaaaaaaaaaaace... allegedly.|
| "What is the point of racing driverless cars?" "It's an important way to assess the quality of the sensors and cameras which autonomous vehicles (AVs) will rely on," explains Bryn Balcombe, Roborace's chief strategy officer.
"And 'testing performance limits on real roads is not something, as a member of society, I'm 100% comfortable with.'"
"He plans to introduce obstacles for the DevBots to navigate, such as slower-moving lorries and tractors. 'Overtaking is the hardest race course task to automate,' says Johannes Betz, a post-doctoral researcher in charge of the Technical University of Munich's entry in the Roborace motorsport competition."
"The ultimate aim is to find out whether driverless cars can eventually 'perform at a level so you can't detect it's an AI.'"
|"OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games. OpenSpiel supports n-player (single- and multi- agent) zero-sum, cooperative and general-sum, one-shot and sequential, strictly turn-taking and simultaneous-move, perfect and imperfect information games, as well as traditional multiagent environments such as (partially- and fully- observable) grid worlds and social dilemmas."|
|VTOL cargo drone. Like a biplane with rotors on the end of each wing that turns sideways to take off and land.|
|"Waymo's robot taxi service is improving, but riders still have complaints." Weird drop-offs, circuitous routes, and shaky driving. More complaints in San Francisco than in Phoenix. San Francisco has tougher terrain, higher density, narrower roads, and more pedestrians and cyclists.|
| "The anthropologist of artificial intelligence. The algorithms that underlie much of the modern world have grown so complex that we can't always predict what they'll do. Iyad Rahwan's radical idea: The best way to understand them is to observe their behavior in the wild."
"Directly inspired by the Nobel Prize-winning biologist Nikolaas Tinbergen's four questions -- which analyzed animal behavior in terms of its function, mechanisms, biological development and evolutionary history -- machine behavior aims to empirically investigate how artificial agents interact 'in the wild' with human beings, their environments and each other. A machine behaviorist might study an AI-powered children's toy, a news-ranking algorithm on a social media site, or a fleet of autonomous vehicles. But unlike the engineers who design and build these systems to optimize their performance according to internal specifications, a machine behaviorist observes them from the outside in -- just as a field biologist studies flocking behavior in birds, or a behavioral economist observes how people save money for retirement."
|A robotic ship called Maxlimer is slated to be the first to cross the Atlantic. Maxlimer is remote controlled in port and autonomous on open sea. It launched in 2017 and spent a couple of years in testing, and in 2019 won an award for best ocean-mapping technology, did a cargo run between Britain and Belgium, hauling oysters and beer, and went to Norway and did an unmanned offshore commercial pipeline inspection using a small drone submarine. Since it doesn’t get hungry, tired, or sick, it could sail at a leisurely eight miles per hour for up to 9 months at a stretch, using 5% of the fuel of a standard ocean-going vessel, making it extraordinarily cheap. It can carry 2.5 tons of cargo.|
|Cracking the Code of Cicada 3301. This series of videos (I put the 4 parts on a handy playlist) is a profile of some of the people who have cracked Cicada 3301 puzzles (though they have been stumped by the last one), but doesn't go much into the puzzles themselves.|
| "About 200 years ago, the German physician Ernst Heinrich Weber made a seemingly innocuous observation which led to the birth of the discipline of Psychophysics -- the science relating physical stimuli in the world and the sensations they evoke in the mind of a subject. Weber asked subjects to say which of two slightly different weights was heavier. From these experiments, he discovered that the probability that a subject will make the right choice only depends on the ratio between the weights."
"For instance, if a subject is correct 75% of the time when comparing a weight of 1 Kg and a weight of 1.1 Kg, then she will also be correct 75% of the time when comparing two weights of 2 and 2.2 Kg -- or, in general, any pair of weights where one is 10% heavier than the other. This simple but precise rule opened the door to the quantification of behavior in terms of mathematical 'laws'."
"Weber's observations have since been generalised to all sensory modalities across many animal species, leading to what is now known as Weber's Law. It is the oldest and most firmly established law in psychophysics."
Now, a team of researchers has discovered that a similar rule applies for decision times -- at least in rats discriminating the relative loudness of a pair of sounds. A similar test was done in humans with the same result.
This rules out popular mathematical models that have been proposed to explain Weber's Law. The correct mathematical model must match both Weber's Law and what they call "time-intensity equivalence in discrimination" (TIED).
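The ratio rule means accuracy can be written as a function of the log-ratio of the two stimuli alone. A sketch with a logistic psychometric function (a common modeling choice; the sensitivity constant here is illustrative, not fitted from the paper):

```python
import numpy as np

def p_correct(w1, w2, k=11.5):
    """Weber-style psychometric function: accuracy depends only on the
    ratio of the two weights. k is an illustrative sensitivity constant
    chosen so a 10% difference lands near 75% correct."""
    return 1.0 / (1.0 + np.exp(-k * abs(np.log(w2 / w1))))
```

Because only the ratio enters, `p_correct(1.0, 1.1)` and `p_correct(2.0, 2.2)` give the same prediction, which is exactly the 1 kg / 1.1 kg versus 2 kg / 2.2 kg example in the quote.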
| "After two years of evaluating more than 1,000 studies, the team found that emotions are too nuanced to be identified strictly by facial expressions. Conversational AI is unique in that it doesn't just capture and evaluate the movement of facial muscles. It carefully evaluates a series of spoken and sometimes unspoken cues that represent emotion and intent."
Although the article is titled, "Voice is the safest and most accurate for emotion ai analysis," it doesn't really make the case that voice is accurate, only that face analysis is inaccurate. It mainly makes the case that voice is better because "we wear our faces everywhere" and can't control when cameras capture our facial movements, but "you consciously decide to speak and can control what you say, and to some degree, how you say it."
| "In the late 1970s, I sat down with technologist and entrepreneur, Gene Amdahl to discuss the prospect of building a 'super-chip.' Gene, like myself and others with a history of chip design, knew that using a whole wafer, via a method known as wafer scale integration, or WSI, would vastly improve performance. Ultimately, despite Gene's best efforts, his work on WSI was unsuccessful. The hypothesis was correct, but at that time there were simply too many uncharted, fundamental technical impediments to make wafer scale integration a reality."
"Three decades later, I sat down with another entrepreneur, Andrew Feldman, and to my surprise, had a similar discussion."
"Andrew and the Cerebras team have built that chip. Having successfully navigated issues of yield, power delivery, cross-reticle connectivity, packaging, and more, today they unveil the Cerebras Wafer Scale Engine (WSE), aka, the largest chip ever built. And it's remarkable. With a 1,000x performance improvement over what's currently available, the Cerebras WSE is comprised of more than 1.2 trillion transistors and is 46,225 square millimeters. It also contains 3,000 times more high speed, on-chip memory, and has 10,000 times more memory bandwidth."
Or so is claimed. We'll see if this chip takes over the market.
| A Complete List of SWAYAM Free Online Courses and MOOCs (2019). "SWAYAM is India's national MOOC platform, designed to achieve the three cardinal principles of India's Education Policy: access, equity and quality."
"Since its beta launch in July 2017, the platform has enrolled over 10 million learners. At the rate it's growing, in a few years, SWAYAM could become the world's largest MOOC provider, offering courses in a wide variety of disciplines from prestigious Indian institutions such as IITs and Central Universities."
"In July 2019, SWAYAM's homepage and course catalog were revamped, notably to include all courses offered by NPTEL, a group comprising some of India's most prominent engineering institutions. All these courses can be taken for free. Students can also choose to pay a small fee (about $15) to sit a proctored exam in an examination center in India to earn a certificate of completion."
"Certificates may, in turn, be used by students enrolled in India's higher education to earn academic credit for completing SWAYAM courses earmarked as credit-eligible by their universities."
Computer Science (36 courses), Programming (14 courses), Engineering (137 courses), Science (93 courses), Social Sciences (44 courses), Humanities (31 courses), Art & Design (17 courses), Health & Medicine (8 courses), Business (58 courses), Data Science (8 courses), Mathematics (25 courses), Personal Development (9 courses), and Education & Teaching (16 courses).
| Adversarial examples look different to an algorithm called SHAP. SHAP overlays images with a heatmap showing what parts of the image led to the prediction. The SHAP heatmaps are dramatically different for the "African elephant" image compared to its adversarial counterpart, which still looks like an African elephant to our eyes but the neural network mistakes for a ping pong ball. The article also shows comparisons for "ambulance" vs. "sleeping bag", "burrito" vs. "zebra", and "gas pump" vs. "forklift".
"The above image juxtaposes the explanations for a clean image and its adversarial counterpart. You will immediately notice that there is a significant discrepancy in the activations for clean image as opposed to perturbed image - SHAP values for the 'correct' class are much sharper compared to those for the targeted class."
"It seems that the SHAP values are consistently higher for the correct class. In comparison, the SHAP values for the targeted class are much smaller. This is despite the fact that the model's output probabilities for the targeted classes are very high (~99%)!"
Given this, you might wonder how the SHAP algorithm works. Well, first of all, SHAP stands for SHapley Additive exPlanations. That's not too helpful by itself. SHAP is less a single algorithm than a unifying framework: it shows that six existing explanation methods -- LIME (Locally Interpretable Model-agnostic Explanations), Shapley sampling values, DeepLIFT, QII (Quantitative Input Influence), layer-wise relevance propagation, and Shapley regression values -- are all "additive feature attribution methods", and that Shapley values from cooperative game theory are the unique attribution in that class satisfying a set of desirable consistency properties. The SHAP library then provides fast approximations of those values for particular model classes, such as TreeSHAP for tree ensembles and DeepSHAP for neural networks.
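Whatever the packaging, the central quantity is the Shapley value: a feature's credit is its average marginal contribution across all orderings in which features could be revealed to the model. For a handful of features it can be computed exactly by brute force (toy sketch; real SHAP implementations use fast approximations):

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values by enumerating all feature orderings: each
    feature's credit is its average marginal contribution to value()."""
    contrib = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        present = set()
        for f in order:
            before = value(frozenset(present))
            present.add(f)
            contrib[f] += value(frozenset(present)) - before
    return {f: c / len(orders) for f, c in contrib.items()}

# Toy model: the prediction is 1 whenever feature "a" is present,
# so all the attribution should land on "a" and none on "b".
v = lambda s: 1.0 if "a" in s else 0.0
```

A useful sanity property (the "efficiency" axiom): the Shapley values always sum to the difference between the full prediction and the empty-set baseline, which is what makes the heatmap comparisons in the article meaningful.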
| Reinforcement learning for data center job scheduling. Optimal scheduling of jobs in data centers is intractable for many complex jobs, and people do painstaking manual scheduler customization, but this reinforcement learning system automatically learns sophisticated scheduling policies. "Instead of a rigid fair sharing policy, it learns to give jobs different shares of resources to optimize overall performance, and it learns job-specific parallelism levels that avoid wasting resources on diminishing returns for jobs with little inherent parallelism."
"Reinforcement learning is well-suited to learning scheduling policies because it allows learning from actual workload and operating conditions without relying on inaccurate assumptions."
"To successfully learn high-quality scheduling policies, we had to develop novel data and scheduling action representations, and new RL training techniques. First, cluster schedulers must scale to hundreds of jobs and thousands of machines, and must decide among potentially hundreds of configurations per job (e.g., different levels of parallelism). This leads to much larger problem sizes compared to conventional RL applications (e.g., game-playing, robotics control), both in the amount of information available to the scheduler (the state space), and the number of possible choices it must consider (the action space)."
"We designed a scalable neural network architecture that combines a graph neural network to process job and cluster information without manual feature engineering, and a policy network that makes scheduling decisions."
"Our neural networks reuse a small set of building block operations to process job DAGs, irrespective of their sizes and shapes, and to make scheduling decisions, irrespective of the number of jobs or machines."
"Conventional RL algorithms cannot train models with continuous streaming job arrivals. The randomness of job arrivals can make it impossible for RL algorithms to tell whether the observed outcome of two decisions differs due to disparate job arrival patterns, or due to the quality the policy's decisions. Further, RL policies necessarily make poor decisions in early stages of training. Hence, with an unbounded stream of incoming jobs, the policy inevitably accumulates a backlog of jobs from which it can never recover. Spending significant training time exploring actions in such situations fails to improve the policy. To deal with the latter problem, we terminate training 'episodes' early in the beginning, and gradually grow their length. This allows the policy to learn to handle simple, short job sequences first, and to then graduate to more challenging arrival sequences. To cope with the randomness of job arrivals, we condition training feedback on the actual sequence of job arrivals experienced, using a recent technique for RL in environments with stochastic inputs."
|A soft pump for soft robotics. It can be twisted and stretched and has no moving parts. It works by applying an electric field to a special fluid containing electrically charged particles.|
| "Spriteworld is a python-based reinforcement learning environment that consists of a 2-dimensional arena with simple shapes that can be moved freely."
"Spriteworld sprites come in a variety of shapes and can vary continuously in position, size, color, angle, and velocity. The environment has occlusion but no physics, so by default sprites pass beneath each other but do not collide or interact in any way. Interactions may be introduced through the action space, which can update all sprites each timestep. For example, the DiscreteEmbodied action space implements a rudimentary form of physics in which an agent's body sprite can adhere to and carry sprites underneath it."
Example tasks: "Goal-finding task: The agent must bring the target sprites (squares) to the center of the arena." "Clustering task: The agent must arrange the sprites into clusters according to their color." "Sorting task: The agent must sort the sprites into goal locations according to their color (each color is associated with a different goal location)."
| "DeepMind, the AI-centric subsidiary of Alphabet, is bleeding money: last year, it lost $571 million, according to the Financial Times. If that wasn't enough, it owes its corporate parent roughly $1.4 billion (although there's apparently no risk of actual default)."
"In return, DeepMind has generated roughly $125 million in revenue, all of it from sales to Alphabet."
"If that budgetary crunch wasn't enough, there are new rumblings of internal issues. DeepMind co-founder Mustafa Suleyman was recently placed on leave for undisclosed reasons, according to Bloomberg. Not only does Suleyman oversee DeepMind's applied division, but he's also regarded as an ambassador for AI, explaining its potential (and ethics) to the public and government officials."
|Who was the Murphy in Murphy's Law? Someone who worked with John Paul Stapp, an Air Force flight surgeon who became the (crazy daredevil) test subject in his own experiments on the effects of G-forces on the human body, experiments that led to such things as safe ejection seats on aircraft and modern car safety. Edward Murphy Jr worked with John Paul Stapp as an aerospace engineer working on safety-critical systems, and had a philosophy of trying to think of every possible single point of failure, and asking what happens if that system failed. It was actually John Paul Stapp who compressed it down to "Anything that can go wrong will go wrong" and named it after Edward Murphy.|
| "Why is it that you can remember the name of your childhood best friend that you haven't seen in years yet easily forget the name of a person you just met a moment ago? In other words, why are some memories stable over decades, while others fade within minutes?"
"In the test, a mouse was placed in a straight enclosure, about 5 feet long with white walls. Unique symbols marked different locations along the walls -- for example, a bold plus sign near the right-most end and an angled slash near the center. Sugar water (a treat for mice) was placed at either end of the track. While the mouse explored, the researchers measured the activity of specific neurons in the mouse hippocampus (the region of the brain where new memories are formed) that are known to encode for places."
"When an animal was initially placed in the track, it was unsure of what to do and wandered left and right until it came across the sugar water. In these cases, single neurons were activated when the mouse took notice of a symbol on the wall. But over multiple experiences with the track, the mouse became familiar with it and remembered the locations of the sugar. As the mouse became more familiar, more and more neurons were activated in synchrony by seeing each symbol on the wall. Essentially, the mouse was recognizing where it was with respect to each unique symbol."
"To study how memories fade over time, the researchers then withheld the mice from the track for up to 20 days. Upon returning to the track after this break, mice that had formed strong memories encoded by higher numbers of neurons remembered the task quickly. Even though some neurons showed different activity, the mouse's memory of the track was clearly identifiable when analyzing the activity of large groups of neurons. In other words, using groups of neurons enables the brain to have redundancy and still recall memories even if some of the original neurons fall silent or are damaged."
| Generative Adversarial Network (GAN) tutorial. 50 lines of PyTorch.
"The models play two distinct (literally, adversarial) roles. Given some real data set R, G is the generator, trying to create fake data that looks just like the genuine data, while D is the discriminator, getting data from either the real set or G and labeling the difference. Goodfellow's metaphor (and a fine one it is) was that G was like a team of forgers trying to match real paintings with their output, while D was the team of detectives trying to tell the difference. (Except that in this case, the forgers G never get to see the original data -- only the judgments of D. They're like blind forgers.)"
"In the ideal case, both D and G would get better over time until G had essentially become a 'master forger' of the genuine article and D was at a loss, 'unable to differentiate between the two distributions.'"
"In practice, what Goodfellow had shown was that G would be able to perform a form of unsupervised learning on the original dataset, finding some way of representing that data in a (possibly) much lower-dimensional manner. And as Yann LeCun famously stated, unsupervised learning is the 'cake' of true AI."
| Waymo dataset. "This release contains data from 1,000 driving segments. Each segment captures 20 seconds of continuous driving, corresponding to 200,000 frames at 10 Hz per sensor."
"This dataset covers dense urban and suburban environments across Phoenix, AZ, Kirkland, WA, Mountain View, CA and San Francisco, CA capturing a wide spectrum of driving conditions (day and night, dawn and dusk, sun and rain)."
"Each segment contains sensor data from five high-resolution Waymo lidars and five front-and-side-facing cameras."
"The dataset includes lidar frames and images with vehicles, pedestrians, cyclists, and signage carefully labeled, capturing a total of 12 million 3D labels and 1.2 million 2D labels.
| "Not all reinforcement learning agents are created equal and it is not obvious what type of reinforcement learning agent should be deployed in specific scenarios. We know that a particular self-learning RL agent can master Go through self-play, but will it be any good at driving a car? Aside from tedious trail-and-error, there are no practical answers to this question.
Or maybe there are. DeepMind has just released a collection of "experiments" that you can do on your reinforcement learning agent.
For example, there is an experiment called "memory length" that "is designed to test the number of sequential steps an agent can remember a single bit."
Another experiment, called "deep sea", tests the agent's depth of exploration of an NxN grid. The idea is that there is a cost to "exploration", yet the agent must explore its environment if it is going to learn.
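The flavor of "deep sea" is easy to reproduce. Here's a toy version I wrote from the published description (a simplification, not DeepMind's bsuite code):

```python
class DeepSea:
    """Toy version of the 'deep sea' exploration problem.

    The agent descends one row per step in an N x N grid. Action 1 (right)
    costs a little; action 0 (left) is free. The only positive reward sits
    in the bottom-right corner, reachable only by choosing 'right' all N
    times -- so a uniformly random policy finds it with probability 2**-N.
    """

    def __init__(self, n):
        self.n = n
        self.reset()

    def reset(self):
        self.row, self.col = 0, 0
        return (self.row, self.col)

    def step(self, action):
        reward = -0.01 / self.n if action == 1 else 0.0
        if action == 1:
            self.col = min(self.col + 1, self.n - 1)
        else:
            self.col = max(self.col - 1, 0)
        self.row += 1
        done = self.row == self.n
        if done and self.col == self.n - 1:
            reward += 1.0  # the treasure
        return (self.row, self.col), reward, done

env = DeepSea(10)
env.reset()
total, done = 0.0, False
while not done:
    _, r, done = env.step(1)  # the always-right policy
    total += r
# total == 0.99: ten small costs of -0.001 each, plus the +1 treasure
```

A greedy agent learns early on that "right" costs reward and never finds the treasure; that's the exploration tension the experiment is designed to measure.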
The complete list of experiments (though I'm skipping explanations) is: bandit, bandit noise, bandit scale, cartpole, cartpole noise, cartpole scale, cartpole swingup, catch, catch noise, catch scale, deep sea, deep sea stochastic, discounting chain, memory len, memory size, mnist, mnist noise, mnist scale, mountain car, mountain car noise, mountain car scale, umbrella distract, and umbrella length.
| Not Jordan Peterson. So with this website, you can type in any text, and it will make Jordan Peterson say it.
"The technology used to generate audio on this site is a combination of two neural network models that were trained using audio data of Dr. Peterson speaking, along with the transcript of his speech."
"The first model, developed at Google, is called Tacotron 2. It takes as input the text that you type and produces what is known as an audio spectrogram, which represents the amplitudes of the frequencies in an audio signal at each moment in time. The model is trained on text/spectrogram pairs, where the spectrograms are extracted from the source audio data using a Fourier transform."
"The second model, developed at NVIDIA, is called Waveglow. It acts as a vocoder, taking in the spectrogram output of Tacotron 2 and producing a full audio waveform, which is what gets encoded into an audio file you can then listen to. The model is trained on spectrogram/waveform pairs of short segments of speech."
To give it a whirl, I punched in, "Never gonna give you up. Never gonna let you down. Never gonna run around and desert you. Never gonna make you cry. Never gonna say goodbye. Never gonna tell a lie and hurt you."
It made me wait 20 minutes, evidently because the site was under heavy load. But I put the audio I got back on my web server, so you can follow the link below to hear it.
|Raisim is a physics engine for robotics and AI research from ETH Zürich that does rigid-body dynamics simulation. "It features an efficient implementation of recursive algorithms for articulated system dynamics (Recursive Newton-Euler and Composite Rigid Body Algorithm)."|
| "New US Air Force kit that can turn a conventional aircraft into a robotic one has completed its maiden flight. Developed by the Air Force Research Laboratory (AFRL) and DZYNE Technologies Incorporated as part of the Robotic Pilot Unmanned Conversion Program, the ROBOpilot made its first two-hour flight on August 9 at the Dugway Proving Ground in Utah after being installed in a 1968 Cessna 206 small aircraft."
ROBOpilot works by "replacing the pilot seat (and pilot) with a kit consisting of all the actuators, electronics, cameras, and power systems needed to fly a conventional aircraft, plus a robotic arm for the manual tasks. In this way, ROBOpilot can operate the yoke, rudder, brakes, throttle, and switches while reading the dashboard gauges and displays like a human pilot."
|Multiply Labs is making a robot that makes pharmaceuticals with multiple customized doses.|
|The DeepMind podcast: Coming soon.|
| A month ago someone used OpenAI's GPT-2 to create a fake Reddit where the AI wrote all the posts and all the comments. It was trained on a massive corpus of real Reddit posts. Somewhere in the middle of all that, the AI asked itself, "Do you think AI will be the downfall of humanity or the savior?", and wrote all the answers.
"The downfall of humanity. The salvation of humanity. The rise of the (presumably) benevolent AI in our place." "The downfall of humanity could be considered an existential crisis if it truly did not learn our 'codes of conduct'."
|AI-generated personal finance blog.|
|Put your address into this website and it'll tell you what native American tribe's territory you're living on. It says I'm living on Apache territory.|
|Robo-shorts that make walking and running easier (they claim). It can distinguish between walking and running by detecting when your center of mass changes position relative to your stride. From there it has motors that assist your glute muscles.|
| Interview with Paola Arlotta on brain development. We mostly study mice brains, not human brains. A human brain is built in human time -- 9 months of gestation plus 20 years to become brains that can have this type of conversation. A mouse brain takes 20 days. If you put mouse brain stem cells in a dish, they form faster than human brain stem cells. The brain starts as a neural tube. The stem cells start out all the same, but over time become heterogeneous, and then diverge further into non-stem cells and the actual cells of the brain. Neurons are made first and then glial cells. The tube curls and one end becomes expanded for the brain and mechanical forces shape the brain as well. We don't know the entirety of how the genetic code controls brain development, but it is very well controlled. We only know how some parts of it work, like the development of some cell types. New cell types are developed all the way until birth. Before birth, our cells have no myelin, and are myelinated after birth. This continues until we're 25-30. Some of our most recently evolved cells, though, that give us a lot of the cognition that we have and mice don't, have very little myelin. Less myelin may allow for more flexibility of functions.
Nature vs nurture: always both. The genes incorporate your 20 plus years of interacting with the environment into your brain. If you are born without vision, your visual cortex will develop to do something different. Her kids have very different personalities, because they have different genetics, even if they have the same parents, but also have amazing plasticity.
Organoids: organoids are not brains. They are cellular systems developed from stem cells in a culture dish that mimic some aspect of the development of the brain. They are 4-5 mm in size. They are our best way of studying the development of the brain. You can take cells from an autistic person and study brain cell development in an autistic person. Organoids are very different from each other, unlike brains. When we are all born our brains look very similar. Different parts of the brain have different cells and researchers have been able to make organoids that mimic some aspect of development of parts of the brain. The cerebral cortex is the part that really makes us human. If you grow the organoids for a long enough time, many cells of the cortex appear in culture. The astrocytes also appear. Astrocytes are support cells that also guide the development of synapses.
Neurodevelopmental disorders can come from cells that don't work properly or cells that are not "born" at all. Something could go wrong in the cell maturation process. We can compare the gene expression of a single cell in a normal person and a person with a neurodevelopmental disease. You can make an organoid of a brain from a specific person with a neurodegenerative disease using their stem cells and use it for screening drugs to find what would help that person.
Stem cell biology (how to turn a skin cell into an embryonic stem cell that can become a brain or an organ) and technologies for studying the properties of single cells, millions of cells at a time, are growing exponentially.
|MacGyver-ing robot. A robot that needs a squeegee or spatula it doesn't have, but does have other parts on hand, can figure out how to make the missing tool by sticking those parts together.|
|Deep learning for semantic data type detection. They call types like "string", "integer", and "boolean" "atomic" data types, but what they want to identify are "semantic" data types, like "name", "birthdate", "weight", "rank", "location", "elevation", "grade", "product", or "album" -- "semantic" data types, in their terminology, reflect the meaning of the data. Some semantic types, like ISBNs and credit card numbers, can be identified by mathematical formulas. For everything else, you need deep learning.|
| "Exploring DNA with deep learning." Interview (written) with Lex Flagel who just published "The Unreasonable Effectiveness of Convolutional Neural Networks in Population Genetic Inference". People have been using deep learning in genetics research, but generally what they've been doing is combining a whole bunch of traditional statistics and running them though a classifier. Using one statistic on its own isn't good enough because there can sometimes be other things that cause that statistic to change other than the thing you're looking for. For example there's a statistic called Tajima's D that is supposed to detect population bottlenecks. If the DNA indicates that there was a recent bottleneck in a population, Tajima's D is supposed to go negative. The trouble is it can be fooled by positive selection instead of population size changes. So one statistic by itself isn't so good, but if you can calculate a whole bunch of them, and then feed those into a neural network, maybe it can tell you what's going on with the population.
What Lex Flagel did, though, was convert the genetic sequence alignment data into images and then feed the images into a convolutional neural network, the type of neural network designed for images. Doing it this way, you don't need to pre-calculate any statistics -- the neural network learns them for itself.
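Concretely, the "image" is just a matrix: each row is one sampled haplotype, each column a segregating site, each cell a 0/1 allele. Since row order is arbitrary and CNNs are not row-permutation-invariant, a common preprocessing step is to sort the rows by similarity (my sketch below; the paper's exact recipe may differ):

```python
def alignment_to_image(haplotypes):
    """Turn 0/1 haplotype strings into a row-sorted binary matrix.

    Rows (individuals) have no natural order, so sort them by Hamming
    distance to the first row to give the CNN a consistent layout.
    (A common preprocessing trick; not necessarily the paper's exact one.)
    """
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    rows = [[int(c) for c in h] for h in haplotypes]
    ref = rows[0]
    return sorted(rows, key=lambda row: hamming(row, ref))

# Four haplotypes across four segregating sites.
image = alignment_to_image(["0110", "0000", "0111", "0100"])
# Rows ordered by distance to "0110": itself first, "0000" last.
```

Stacking similar rows together turns population-genetic signals (shared haplotype blocks, sweeps) into spatially local patterns, which is what convolutional filters are good at picking up.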
But as he explains in the interview, "However, there's a catch. Though neural networks can automate feature extraction, they are poor at giving explanations of how they did it. For example, it's pretty easy to train a neural network to distinguish pictures of cats from pictures of dogs. But it's really hard to get them to tell you why they think a certain picture is a cat, or what generally distinguishes cats from dogs. Our method suffers from this problem as well. So the neural networks we built can make some pretty stunning inferences, which is great, but they can't teach you new theory or lead you to new equations. In contrast, because many classical methods in population genetics are derived from theory, they can explain themselves in the terms of that theory, which is really useful for understanding and learning."
|Whole genome sequencing from DNA from 2,308 people from 493 families found 69 genes that increase the risk of autism spectrum disorder. The genes found are primarily related to ion transport and the microtubule cytoskeleton in the brain. However, genes in people with no family history of autism were related to transcriptional and chromatin regulation.|
|A huge genome-wide association study on asthma with 37,846 British individuals with asthma, 9,433 of whom had asthma as children, and a control group of 318,237 people without asthma found that 61 independent genes are related to asthma, 56 of which are related to childhood-onset asthma and 19 of which are related to adult-onset asthma. "Childhood-onset genes were highly expressed in epithelial cells (skin). Both childhood-onset and adult-onset asthma genes were highly expressed in blood (immune) cells."|
| "A genome-wide association study (GWAS) and bioinformatic analysis of more than 165,000 US veterans confirms a genetic vulnerability to post-traumatic stress disorder (PTSD), specifically noting abnormalities in stress hormone response and/or functioning of specific brain regions."
"In the European American group, the scientists found eight distinct genetic regions with strong associations between PTSD and how the brain responds to stress. It highlighted the role of one specific kind of brain cell: striatal medium spinal neurons, which are prevalent in a region of the brain responsible for, among other things, motivation, reward, reinforcement and aversion."