Boulder Future Salon Recent News Bits

Thumbnail Minecraft reinforcement learning competition. "Competitors are tasked with developing artificial intelligence agents that can obtain 'diamond' rewards in the popular video game Minecraft. Although standard training procedures require months or more to bring a system to human-level performance in complex games such as StarCraft or Dota 2, the MineRL challenge training time is limited to only four days."

"In Minecraft, obtaining diamonds requires a sequence of eight steps, from wood gathering to diamond mining. The AI agent must determine how to efficiently perform these steps and their correct order. Organizers created an imitation learning dataset comprising over 60 million frames of human player data, along with video inputs that can help AI agents determine the logical relationship between various steps in a short time and solve three types of tasks  --  navigation, obtaining, and survival  --  which represent some of most difficult challenges in reinforcement learning, such as sparse rewards and hierarchical policies."
Thumbnail Person puts electrodes on his muscles, then uses the muscles to control a robotic arm that helps him lift things. A neural network interprets the sensor data and translates it into commands to the robot arm.
Thumbnail A system that creates realistic talking head videos. It works by using a "face landmark detector". It maps the "landmarks" onto a different video of the same person. The talking head models work well even for new view angles not present in the training data.

The system uses an "embedding" neural network to map landmarks into vectors. The vectors are then used to initialize the parameters of adapted layers inside a "generator" neural network which generates the synthesized video. The resulting video has to make it through a "discriminator" neural network which rejects video frames that either don't look realistic or don't preserve the pose and identity of the person.

The system works on selfies, photos of people who don't exist any more, and art.
Thumbnail When I see the name of a company I never heard of on AI research papers, it invariably turns out to be a gigantic company I never heard of -- because it's in China. For example ByteDance. If you've ever used or heard of TikTok, ByteDance is the company that owns it. TikTok is known as Douyin in China. If you've heard of musical.ly, musical.ly was acquired by ByteDance and the functionality incorporated into TikTok.

In China, ByteDance is best known for Toutiao (which means "Headlines" in English), an AI-driven news app. ByteDance uses AI to personalize everything in all their apps. The company is valued at $78 billion and is considered the world's most valuable startup. The company has existed only 7 years.
Thumbnail Marine scientists and robotics experts tested the effectiveness of a computer vision (CV) system in processing all the images from autonomous underwater vehicles (AUV) exploring the seafloor.

"One of the UK's national AUVs -- Autosub6000, deployed in May 2016 -- collected more than 150,000 images in a single dive from around 1200m beneath the ocean surface on the north-east side of Rockall Bank, in the North East Atlantic. Around 1,200 of these images were manually analysed, containing 40,000 individuals of 110 different kinds of animals (morphospecies), most of them only seen a handful of times."

"The accuracy of manual annotation by humans can range from 50 -- 95%, but this method is slow and even specialists are very inconsistent across time and research teams. This automated method reached around 80% accuracy, approaching the performance of humans with a clear speed and consistency advantage. This is particularly true for some morphospecies that the algorithms work very well with. For example, the model correctly identifies one animal (a type of xenophyophore) 93% of the time."
Thumbnail GPS isn't available underwater, so underwater robots use a mathematical technique called the Kalman filter to fuse sensor data from accelerometers, magnetometers, gyroscopes, doppler velocity logs, and, if there is a support ship near the surface, ultra-short baseline acoustic positioning.
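
For flavor, here's a minimal 1-D Kalman filter sketch. It's a generic textbook example (not any particular AUV's navigation stack): dead-reckon with a noisy velocity sensor, then correct whenever an occasional noisy position fix arrives.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0
x_true, v_true = 0.0, 1.0          # true position and (constant) velocity
x_est, p_est = 0.0, 1.0            # position estimate and its variance
q, r_vel, r_pos = 0.01, 0.05, 4.0  # process noise, velocity-sensor noise, position-fix noise

for t in range(20):
    x_true += v_true * dt
    # Predict: dead-reckon with the noisy velocity measurement (e.g. a doppler velocity log).
    v_meas = v_true + rng.normal(0, np.sqrt(r_vel))
    x_est += v_meas * dt
    p_est += r_vel * dt**2 + q
    # Update: every 5 steps a noisy position fix arrives (e.g. an acoustic baseline ping).
    if t % 5 == 4:
        z = x_true + rng.normal(0, np.sqrt(r_pos))
        k = p_est / (p_est + r_pos)          # Kalman gain
        x_est += k * (z - x_est)
        p_est *= (1 - k)

print(f"true={x_true:.2f}  estimated={x_est:.2f}  variance={p_est:.2f}")
```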
Thumbnail Prodo.ai claims to have an AI code review assistant. "It interacts with human developers directly on their peer-review platform, spotting issues and recommending changes. It works out of the box without any configuration required, learning how to suggest the most impactful changes by taking into account the context and previous interactions with other humans. Unlike traditional rule-based tools, it takes into account linguistic features of the code to identify when it does not implement the intended functionalities."
Thumbnail "Nearly 55% of total commercial robots shipped in 2024, over 915,000 units, will have at least one Robot Operating System (ROS) package installed," according to a press release from ABI Research.
Thumbnail Watch a drone not get hit by a soccer ball.

The drone has an obstacle detection and evasive maneuver system built around an event camera, a camera that detects per-pixel brightness changes in hardware with very low latency.
Thumbnail "Much of our work in robotics concentrates on self-supervision, in which systems learn directly from raw data (rather than from extensive structured training data specific to a particular task) so they can adapt to new tasks and new circumstances. To do this in robotics, we're advancing techniques such as model-based reinforcement learning (RL) to enable robots to teach themselves through trial and error using direct input from sensors."

"We are developing model-based RL methods to enable a six-legged robot to learn to walk -- without being given task-specific information or training."

"The robot starts learning from scratch with no information about its environment or its physical capabilities, and then it uses a data-efficient RL algorithm to learn a controller that achieves a desired outcome, such as moving itself forward."

"'Curious' AI systems are rewarded for exploring and trying new things, as well as for accomplishing a specific goal. Although previous similar systems typically explore their environment randomly, ours does it in a structured manner, seeking to satisfy its curiosity by learning about its surroundings and thereby reducing model uncertainty. We have applied this technique successfully in both simulations and also with a real-world robotic arm."

"We developed a new method for learning from touch to accomplish a new goal through self-supervised learning, without task-specific training data. And then we can, by assigning a new goal, use this model to decide what is the best sequence of actions to take. We took a predictive model originally developed for video input and used it instead to optimize deep model-based control policies that operate directly on the raw data -- which, in this case, consists of high-dimensional maps -- provided by a high-resolution tactile sensor. Our work shows that predictive models may be learned entirely without rewards, through diverse self-supervised exploratory interactions with the environment."

"Using this video prediction model, the robot was able to complete a series of complex tactile tasks: rolling a ball, moving a joystick, and identifying the right face of a 20-sided die. The model's success shows the promise of using video predictive models to create systems that understand how the environment will react to touch."
Thumbnail Today, May 20, 2019, is the official day the kilogram switches over to the new definition.

From today onward, ALL SI units are now defined by fundamental constants of nature:

- the transition frequency of caesium-133 (which defines the unit of time, the second, and the unit of frequency, the hertz, and from those, the unit of radioactive decays per unit time, the becquerel),
- the speed of light in a vacuum (which defines the unit of length, the meter, and, together with the second, the unit of equivalent dose of ionizing radiation, the sievert),
- the Planck constant (which now defines the unit of mass, the gram/kilogram, and from that we get the unit of force, the newton, the unit of pressure, the pascal, the unit of energy, the joule, and the unit of power, the watt),
- the charge of the electron (which defines the unit of charge, the coulomb, and from it, units of current and voltage, the ampere and the volt, as well as the unit of electrical capacitance, the farad, the unit of electrical resistance, the ohm, the unit of electrical conductance, the siemens, the unit of magnetic flux, the weber, the unit of magnetic flux density, the tesla, and the unit of electrical inductance, the henry),
- the Boltzmann constant (which defines the unit of temperature, the kelvin, from which we get... just other units of temperature, like the degree Celsius),
- the Avogadro constant (which defines the unit of chemical quantity, the mole, and from that the unit of catalytic activity, the katal), and
- the luminous efficacy of monochromatic radiation of 555 nm light (which defines the unit of brightness of light, the candela, which in turn gives us the unit of luminous flux, the lumen, and the unit of illuminance, the lux)


The official definitions from now on are:

- the unperturbed ground state hyperfine transition frequency of the caesium 133 atom is defined as 9,192,631,770 hertz,
- the speed of light in vacuum is defined as 299,792,458 meters per second,
- the Planck constant is defined as 6.62607015 x 10^-34 joule-seconds,
- the charge of the electron is defined as 1.602176634 x 10^-19 coulombs,
- the Boltzmann constant is defined as 1.380649 x 10^-23 joules per kelvin,
- the Avogadro constant is defined as 6.02214076 x 10^23 per mole, and
- the luminous efficacy of monochromatic radiation at 555 nanometers (540 x 10^12 Hz) is defined as 683 lumens per watt.
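
To see how the kilogram falls out of this: the unit of the Planck constant is kg m^2 s^-1, and the metre and second are already fixed by c and the caesium frequency, so fixing the numerical value of h pins down the kilogram. A worked version (my arithmetic, using the exact defined values above):

```latex
1\,\mathrm{kg}
= \frac{h}{6.626\,070\,15\times 10^{-34}\,\mathrm{m^{2}\,s^{-1}}}
= \frac{(299\,792\,458)^{2}}{(6.626\,070\,15\times 10^{-34})\,(9\,192\,631\,770)}
  \,\frac{h\,\Delta\nu_{\mathrm{Cs}}}{c^{2}}
\;\approx\; 1.4755\times 10^{40}\,\frac{h\,\Delta\nu_{\mathrm{Cs}}}{c^{2}}
```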
Thumbnail An AI pioneer (Yann LeCun), a neuroscientist (Peter Ultric Tse), a philosopher and cognitive scientist (Susan Schneider), and a physicist and AI researcher (Max Tegmark) discuss the future of AI. The future of machines learning to model the world, machine creativity, machine emotions, machines with human values, artificial general intelligence, and consciousness.

This is going to be the last of these "philosophical" discussions about AI that I'm going to post; from now on I'm going to stick to technical discussions, because these "philosophical" discussions just rehash the same ideas over and over and always get stuck in the same places (i.e. trying to define "consciousness").
Thumbnail "Holly Herndon's new album, 'proto,' is the result of a collaboration with a two-foot-tall gaming PC, which houses an artificial neural network that she designed with her husband, the artist Mat Dryhurst. For the past two years, they have been teaching this 'AI baby,' which they named Spawn and refer to using female pronouns, how to use her voice. They trained Spawn by talking and singing to her. Herndon, Dryhurst, and Spawn's 'godfather,' the artist Jules LaPlace, even formed a choir that would perform hymns for her. In time, Spawn began to produce sounds that weren't built on sampling. We often say that people find their voice; Spawn iterated hers."

"Herndon's singing is full of operatic swells and icy whispers -- it's what marks her music as intimate and human, despite the digital melee around her. On 'Platform,' she endeavored to make songs meant to trigger autonomous sensory meridian response, or A.S.M.R., which is usually experienced as a pleasurable, involuntary tingling. She often sounds as if she were trying to breathe with, or subsume herself within, imposing machines, to figure out how they might braid together as one. On 'proto,' that process is made concrete as she sings with Spawn, whose voice retains an eerie, frayed, metallic quality, something like a swarm of bees."
Thumbnail An AI trained on 100 million opinions can predict how smart, attractive, and trustworthy people will think you are from your photos.
Thumbnail "No AI currently exists that can outduel a human strapped into a fighter jet in a high-speed, high-G dogfight."

"Turning aerial dogfighting over to AI is less about dogfighting, which should be rare in the future, and more about giving pilots the confidence that AI and automation can handle a high-end fight. As soon as new human fighter pilots learn to take-off, navigate, and land, they are taught aerial combat maneuvers. Contrary to popular belief, new fighter pilots learn to dogfight because it represents a crucible where pilot performance and trust can be refined. To accelerate the transformation of pilots from aircraft operators to mission battle commanders -- who can entrust dynamic air combat tasks to unmanned, semi-autonomous airborne assets from the cockpit -- the AI must first prove it can handle the basics. To pursue this vision, DARPA created the Air Combat Evolution (ACE) program."
Thumbnail "The World Wide Web Consortium's (W3C) Machine Learning for the Web Community Group now has 'all major browsers -- Google, Microsoft, Apple, Mozilla -- on board along with the broader AI & web ecosystem.'"
Thumbnail "Lookout is a mobile application designed to support people with visual impairments. The smartphone app can real-time narrate the immediate environment, audibly identifying for example the people, objects, scenes, and text it perceives. The app can switch between 'Home', 'Work' and 'Play' modes to enable its algorithms to focus on environmentally relevant elements."

"Live Caption is a new feature announced at Google I/O that can generate real-time subtitles for any video or audio playing on Android Q devices. This is a game-changing functionality for the estimated 466 million people with hearing impairments."

"Google's new Project Euphoria was developed to help the millions of people with speech impairments caused by neurological conditions such as ALS, strokes, and Parkinson Disease. Leveraging advanced speech recognition algorithms, Google Assistant can be personalized by training speech recognition models with these individuals' voice samples."

"Project Diva (DIVersely Assisted) is a new accessibility program designed to enable users with Down syndrome and intellectual disabilities to command their Google Assistant. A large button plugged onto the Diva interface connects to Google Assistant via bluetooth."
Thumbnail This is from 2013 but I didn't see it until today. An atheist stem cell researcher has a religious conversion. Religion as hypnotic suggestion. From Derren Brown.
Thumbnail AI voice just about indistinguishable (as far as I can tell) from Joe Rogan.
Thumbnail A neural network that generates petitions by training on all the petitions (about 5 million) on change.org. Who would do such a thing? Janelle Shane, of course. The neural network used is OpenAI's 117-million-parameter GPT-2.

Theresa May MP: Stop The Pigeon Rally in Great Britain
Karen's mother: Please bring your own breadsticks for Karen.
baseleine planetarium: Unnecessary insults are not welcome in our flat, end it!
Anyone: Stop the use of the word 'shoe' in a derogatory way.
City of Toronto, The City of Toronto, The City of Toronto: Remove the 'Bam Bam' sign
Basketball Club of St. Louis: Stop the Feral Horseshoes at the Basketball Club
Tom Heneghan as OBE Change Council Localimian: Dammit you!
Denny: Put one more black bee sweater on Em1nt du Poste
Belfast City Council: Fire the cabbages at Clutch MarketI
Kim Hsu: Tougher Penalties for Pedestrians and Elephants on City Street in Austin Texas
Thumbnail "Advances in deep generative models based on neural networks opened the possibility of constructing more robust and less hand-engineered surrogate models for many types of simulators, including those in cosmology."

"A variety of deep generative models are being investigated for science applications, but the Berkeley Lab-led team is taking a unique tack: generative adversarial networks (GANs). In a paper published May 6, 2019 in Computational Astrophysics and Cosmology, they discuss their new deep learning network, dubbed CosmoGAN, and its ability to create high-fidelity, weak gravitational lensing convergence maps."

"A convergence map is effectively a 2D map of the gravitational lensing that we see in the sky along the line of sight. If you have a peak in a convergence map that corresponds to a peak in a large amount of matter along the line of sight, that means there is a huge amount of dark matter in that direction."

"Using the CosmoGAN generator network, the team has been able to produce convergence maps that are described by -- with high statistical confidence -- the same summary statistics as the fully simulated maps. This very high level of agreement between convergence maps that are statistically indistinguishable from maps produced by physics-based generative models offers an important step toward building emulators out of deep neural networks."
Thumbnail "Security robots now have facial recognition. Face of the future?" The subheading is, "Knightscope security robots can now detect faces. Whose faces?" The article doesn't really answer the question -- where the face database comes from -- but does say things like, "While facial recognition is largely seen as a tool to protect against known threats, it is also capable of greeting VIPs with a personal message and notifying our clients of VIP arrivals on site," which would imply the face database come from the company that buys the robot and is populated with the company's employees and others the company works with. That implies the robot wouldn't recognize members of the public if it was deployed in a mall or some other public facility.
Thumbnail The San Francisco Board of Supervisors voted today by 8-to-1 to make San Francisco the first major city in the United States to ban government use of face surveillance technology.

Note that this only prohibits the government of the city of San Francisco itself from using face recognition technology -- it doesn't stop private businesses or anyone else within the city from using the technology.
Thumbnail "Tensor2Robot (T2R) is a library for training, evaluation, and inference of large-scale deep neural networks, tailored specifically for neural networks relating to robotic perception and control. It is based on the TensorFlow deep learning framework."

"It is used internally at Alphabet, and open-sourced with the intention of making research at Robotics@Google more reproducible for the broader robotics and computer vision communities."
Thumbnail Christoph Keplinger works on artificial muscles for robots at CU Boulder. The artificial muscles he works on are called HASEL actuators, which use electrostatic forces to expand and contract.
Thumbnail "Eagle Vines Golf Club, at the north end of American Canyon, is a test bed for what a director says is the world's first use of a self-driving, self-navigating food and beverage delivery vehicle at a golf course. Four battery-powered carts, partnered with GPS mapping and a smartphone app, allow visitors to order food, beverages, golf balls and other items and receive them on the course in as little as 15 minutes -- with the icebox-like contraptions then automatically returning to the clubhouse from whence they came."
Thumbnail The house the robots built. "Straight walls partly exist for the convenience of builders and architects - but for a robot, a curved wall is almost as easy. So at the DFAB House, a small test building in the suburbs of Zurich, Switzerland, the main wall follows an elegant, irregular curve. It's built around a steel frame, welded by robots, which humans would have found almost impossible to construct unaided."

"Even stranger, the roof consists of a series of flowing, organic ridges, which look as if they were secreted by a giant insect. Awkward to dust, perhaps, but designed by computer and made with 3D printing to achieve the same strength as a conventional, straight roof, yet with half the weight."

"The house, built by Switzerland's National Centre of Competence in Research in Digital Fabrication, demonstrates what a computer-designed, robot-built house could look like."

Story sponsored by DXC technology, whoever that is.
Thumbnail "The company's first agricultural robot, dubbed the Virgo 1, can pick tomatoes without bruising them, and detect ripeness better than humans." "It can navigate large commercial greenhouses any hour of the day or night, detecting which tomatoes are ripe enough to harvest."

"One of the most unique things about the Virgo, Josh Lessing, founder and CEO of Root AI, notes, is that the company can write new AI software and add additional sensors or grippers to handle different crops." "In this way, the Virgo is a departure from other crop-specific harvesting machines on the market and in development, like Abundant Robotics' apple picker, Agrobot's strawberry picker, and Sweeper's pepper-picker."
Thumbnail "Using an eye-tracking device, Molly Losh, director of the Neurodevelopmental Disabilities Lab at Northwestern University in Evanston, Illinois, and her team has also found some less obvious quirks in some parents of autistic children -- including telltale patterns of attention and language processing that distinguish these parents from both typical people and autistic individuals."

"The skill being tested is called 'rapid automatized naming.' It seems simple on the surface, but as people read, their eyes typically jump from one word or object to the next in movements called saccades. On the eye tracker's monitor, these jumps show up as thin red lines that traverse the screen like unspooling threads. To register the word or object, though, the reader's eyes must stop briefly on a point of fixation, represented by a red dot. The timing of this response requires the brain to sync sensory input, attention and executive function. 'It is an indirect window into our cognitive ability,' says Kritika Nayar, a graduate student in Losh's lab who leads this work."

"In typical people, the eyes lead the way, looking one or two objects ahead of the one being named. Typical people tend to fixate on only one point, and they call out names fluidly, one after the other, as the red dot bounces steadily along. Autistic individuals name the objects on the screen less fluidly: Their gaze skitters around each object before settling on a fixation point, and they perseverate more in general, getting stuck on objects and looking back at previous items. On the monitor, the red dot mostly moves with their voice, not ahead of it."

"Parents with characteristics of the broad autism phenotype, meanwhile, fall neatly into a cognitive middle ground. They perform more fluidly than autistic people but get stuck more often and fixate on more points than typical people do -- something they would never notice in everyday life."
Thumbnail Whether an animal freezes or flees depends on the activation of specific neurons in the brain. Specifically, the animal will flee if the intermediodorsal thalamic nucleus (IMD) is inhibited. The IMD is controlled by the limbic thalamic reticular nucleus (TRN), a shell-shaped nucleus located between the cortex and the thalamus (which relays motor signals to the cerebral cortex), and the TRN in turn is controlled by the cingulate cortex, part of the cerebral cortex, the outer layer of the brain.
Thumbnail Killer robots are coming whether you like it or not. "If AI systems improve over time by becoming more robust and transparent, the pressure to use them to aid soldiers in the battlefield will increase."

"Eventually, the machines will gradually push the humans out of the loop. First, they stand in supervisory roles and finally they'll end up as 'killswitch operators' that monitor these autonomous weapons. Machines can be much faster than humans. The act of killing an enemy is based on reflexes, and if soldiers realise that these types of tools can outperform them, they'll eventually come to trust and rely on them."
Thumbnail "Hailo, a Tel Aviv-based AI chipmaker, today announced that it is now sampling its Hailo-8 chips, the first of its deep learning processors. The new chip promises up to 26 tera operations per second (TOPS), and the company is now testing it with a number of select customers, mostly in the automotive industry."

"The company says that the Hailo-8 will outperform all other edge processors and do so at a smaller size and with fewer memory requirements."
Thumbnail AI trained to talk like a Londoner using "overheard" conversation.
Thumbnail Interview with Chris Lattner, the creator of the LLVM compiler toolchain, which is the compiler for most new languages like Swift and Julia and is getting into compiling for tensor processing units (TPUs) and other emerging AI hardware. He describes the overall process a compiler follows, the particular difficulty of C++, and the work involved in doing optimizations. Java changed the world by making things like just-in-time (JIT) compilation mainstream. Clang, which is built on LLVM, has compiled every iPhone app, Google's production server applications, and many GameCube and PlayStation 4 games. Linux distributions still use gcc.

He describes the process of developing Swift, and how Swift is statically and dynamically compiled and interpreted (statements are dynamically compiled as you execute them). How Swift interacts with Python -- the "Python Object". Swift For TensorFlow can automatically build graphs for you and can do automatic differentiation for you. The compiler generates bfloat16 code for tensor processing units (TPUs); bfloat16 is a floating point format with a smaller mantissa and a larger exponent range than standard 16-bit floats. This is a key part of what makes TPU performance so amazing.
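
bfloat16 is literally just the top 16 bits of an IEEE float32 (1 sign bit, 8 exponent bits, 7 mantissa bits), so a quick way to see what it does to your numbers is to truncate the low mantissa bits. A sketch, not Swift for TensorFlow's actual code path:

```python
import numpy as np

def to_bfloat16(values):
    """Reduce float32 values to bfloat16 precision by keeping only the top 16 bits
    (1 sign, 8 exponent, 7 mantissa): same exponent range as float32, far less precision."""
    bits = np.asarray(values, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

print(to_bfloat16([3.14159265, 1e30, 1e-30]))   # big exponents survive; mantissa digits don't
```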

MLIR, Google's new project, is LLVM 2, a common infrastructure for vendors to plug compilers for machine learning into.

At Tesla, he didn't feel he fit in with Tesla's gung-ho culture. Elon has a very clear vision for the future and is able to get people to believe in it and work on it.
Thumbnail Amazon's new robots can "pack 600-700 packages per hour, or four to five times the rate of a human."

"The robots can wrap packages inside boxes it custom-assembles to fit each item. While the robots cost over a million dollars each, Amazon expects to recover the costs within two years."

"Reuters notes that Amazon is interested in cutting humans out of the warehouse process altogether to save on labor costs, but that the task of picking items out of bins remains too difficult for robots to perform in a cost and time effective manner."
Thumbnail The Godot video game engine (pronounced like it's in French) is an open-source video game engine with no licensing issues that can compete against commercial engines like Unreal Engine and Unity and could become the Linux of the video game world, according to this video game developer.
Thumbnail "China's high-speed rail carries record 10 billion passengers."

China only has 1.4 billion people, so some people must have ridden more than once.

"China had almost 30,000 kilometers of high-speed railway track in 2018, twice as long as the rest of the world's railways combined."
Thumbnail Color x-rays. In case you're wondering how color x-rays are possible, given that colors are in the visible spectrum and x-rays are not, the idea is to use different frequencies of x-rays simultaneously and visualize the result with visible-light colors.
Thumbnail Neural Logic Machines (NLM) combine neural networks with logic programming. The key intuition behind NLMs is that neural networks can approximate logic operations such as logical ANDs and ORs, and logic can be expressed in the wiring among the neural modules.
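
The "neural networks can approximate logic operations" part is easy to see with a single sigmoid unit: with the right weights it behaves like an AND or an OR gate on soft truth values. This is my toy illustration of the intuition, not the NLM architecture itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_and(a, b):
    return sigmoid(20 * a + 20 * b - 30)   # fires only when both inputs are near 1

def soft_or(a, b):
    return sigmoid(20 * a + 20 * b - 10)   # fires when either input is near 1

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(a, b, round(soft_and(a, b), 3), round(soft_or(a, b), 3))
# Prints (approximately) the AND and OR truth tables.
```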

"The researchers evaluated the performance of NLM compared with representative frameworks Memory Networks (MemNN) and Differentiable Inductive Logic Programming (∂ILP)."

"NLM outperforms the baselines with 100 percent accuracy in both reasoning tasks, and with 100 percent completeness in solving decision making problems. In contrast, MemNN failed to accomplish some tasks, and achieved relatively low accuracy in certain reasoning tasks; while ∂ILP performed perfectly in almost all reasoning tasks, but had difficulties scaling beyond small-sized rule sets and could not solve decision-making problems such as blocks world."
Thumbnail "DeepMind and Google researchers have proposed a powerful new graph matching network (GMN) model for the retrieval and matching of graph structured objects. GMN uses similarity learning for graph structured objects and outperforms graph neural network (GNN) models on graph similarity learning (GSL) tasks."

So this is about neural networks that operate on graphs, such as the social graph in Facebook that keeps track of who is connected to whom. The article is about comparing similarity of graphs, and a simpler example of that would be how, in computer programs, the control flow structure can be represented as a graph, and there is a computer security technique that involves comparing the control flow graph of a program to the control flow graphs of known viruses to see if they are similar. The same program compiled with different compilers, for example, generates different machine code, but the control flow graph is the same.

So far this has been done by using a neural network to turn the graph into a vector (called an "embedding" for reasons I have yet to figure out), and then graphs are compared by comparing their embedding vectors. Here the researchers have come up with a way to compare graphs directly by using an "attention" mechanism (an idea probably borrowed from language translation neural networks, though the article doesn't say that) to directly associate nodes between two different graphs.
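
A bare-bones version of the embedding approach (my own sketch, not DeepMind's GMN): propagate node features along edges a few times, pool them into one vector per graph, and compare graphs with cosine similarity. The matching-network idea replaces that pooled comparison with attention between the nodes of the two graphs.

```python
import numpy as np

def graph_embedding(adj, features, rounds=2):
    """Tiny message-passing embedding: repeatedly average neighbor features, then sum-pool."""
    h = features.astype(float)
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(rounds):
        h = (adj @ h) / deg                 # aggregate each node's neighbors
    return h.sum(axis=0)                    # one vector for the whole graph

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Two hypothetical 3-node graphs: a path A-B-C and a triangle.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
feats = np.eye(3)                           # one-hot node features, just for illustration

print(cosine(graph_embedding(path, feats), graph_embedding(tri, feats)))
```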
Thumbnail 128 language quine relay. A quine is a computer program which takes no input and produces a copy of its own source code as its output. In this quine relay, a Ruby program generates a Rust program, which generates a Scala program, which ... this goes on through 128 languages and on the last step a REXX program generates the original Ruby program again.

I don't know what the point of this is other than to show that such a thing is possible, which you wouldn't have thought.
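
For reference, a single-language quine is already a fun little puzzle. The classic two-line Python one, run as a standalone script, prints exactly its own two lines back (the relay is essentially 128 of these chained across languages):

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```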
Thumbnail RDBOX (robot developer's box) is a system for building cloud computing services that complement physical Robot Operating System (ROS)-based robots. It packages those services as Kubernetes-managed containers that can run either on internet cloud services such as Amazon Web Services or on your own machine close to the robot.
Thumbnail NeuronBlocks is a system for building natural language processing deep learning models that makes building your models "like playing Lego." They analyzed NLP jobs submitted to a GPU cluster and found 87% of them did common tasks such as sentence classification and sequence labelling. Further analysis showed more than 90% of models could be composed of common components like an embedding layer, a convolutional network, a recurrent network, a transformer network, and so on. Based on this they created a "block zoo" and "model zoo" to provide a suite of reusable and standard components and popular models that use them. The system is based on PyTorch.
Thumbnail "For the first time in aviation history, an aircraft has been manoeuvred in flight using supersonically blown air, removing the need for complex movable flight control surfaces. In a series of ground-breaking flight trials that took place in the skies above north-west Wales, the MAGMA unmanned aerial vehicle (UAV) demonstrated two innovative flow control technologies which could revolutionise future aircraft design."

"Wing Circulation Control: Taking air from the aircraft engine and blowing it supersonically through narrow slots around a specially shaped wing tailing edge in order to control the aircraft."

"Fluidic Thrust Vectoring: Controlling the aircraft by blowing air jets inside the nozzle to deflect the exhaust jet and generate a control force."
Thumbnail AI generates YouTube comments given only a title.
Thumbnail "The team likens traditional deep learning methods to a lottery. Training large neural networks is kind of like trying to guarantee you will win the lottery by blindly buying every possible ticket."

"With a traditional neural network you randomly initialize this large structure, and after training it on a huge amount of data it magically works."

"To test their so-called 'lottery ticket hypothesis' and demonstrate the existence of these smaller subnetworks, the team needed a way to find them. They began by using a common approach for eliminating unnecessary connections from trained networks to make them fit on low-power devices like smartphones: They 'pruned' connections with the lowest 'weights' (how much the network prioritizes that connection)."

"Their key innovation was the idea that connections that were pruned after the network was trained might never have been necessary at all. To test this hypothesis, they tried training the exact same network again, but without the pruned connections. Importantly, they 'reset' each connection to the weight it was assigned at the beginning of training. These initial weights are vital for helping a lottery ticket win: Without them, the pruned networks wouldn't learn. By pruning more and more connections, they determined how much could be removed without harming the network's ability to learn."
Thumbnail Images that convolutional deep learning neural networks have trouble with are the same images that brains take longer to process, and the reason brains take longer is probably that they are making use of feedback circuits, according to models that more and more accurately model vision neurons. For convolutional neural networks to do the same thing, they will need to become recurrent networks in addition to convolutional networks.
Thumbnail The Tertill Weeding Robot is available. In case you're wondering how the Tertill tells plants from weeds, "Tertill has a very simple method. Weeds are short, plants are tall. A plant tall enough to touch the front of Tertill's shell activates a sensor that makes the robot turn away. A plant short enough to pass under Tertill's shell, though, activates a different sensor that turns on the weed cutter." There's a collar you can put around short plants until they are tall enough. $299.
Thumbnail Atlas robot walking across some terrain where it has to do some intelligent path planning.
Thumbnail "Talk to Transformer" runs a medium sized GPT-2 model that you can play with. To give it a whirl, I typed in, "I didn't call for a boycott of the New York Post," and it responded,

"I didn't call for a boycott of the New York Post, but many newspapers, newspapers from outside and within the county have denounced my call; and many media outlets have condemned the behavior of the reporter who called my report out. To see a video of this story: http://bit.ly/1TlNrYH"

"The reaction to my call, of course, was outrage and condemnation, among media and otherwise. A quick search reveals that there have been plenty of articles and blog posts condemning my article, but very few articles condemning what the media did. There are those who say nothing I say is true, even though I clearly do not get it. There are others who think I'm delusional if it's not true, especially for what I say. We also have articles written defending my 'right of free speech' or 'freedom from criticism,' and 'freedom of expression,' but those articles have no credibility."

"So I don't need a petition or a boycott;"

Weirdly the bitly link seems to actually go to something real -- on ebay, not a video. I assume that's just due to random chance.
Thumbnail "How to Build OpenAI's GPT-2: 'The AI That's Too Dangerous to Release'" "The GPT-2 model that we're going to build won't start generating fake Brexit campaigns. The original model was trained for months, harnessing the power of 100+ GPUs. So unless you've got that kind of computing power, it's a feat if your mini-GPT can get subject-verb agreement right."

"GPT-2 stands for 'Generative Pretrained Transformer 2':"

"'Generative' means the model was trained to predict (or 'generate') the next token in a sequence of tokens in an unsupervised way."

"'Pretrained' means OpenAI created a large and powerful language model, which they fine-tuned for specific tasks like machine translation later on."

"'Transformer' means OpenAI used the transformer architecture, as opposed to an RNN, LSTM, GRU or any other 3/4 letter acronym you have in mind."

"'2' means this isn't the first time they're trying this whole GPT thing out."
Thumbnail "Taste-testing robots powered by artificial intelligence are guaranteeing the quality and authenticity of some mass-produced Chinese food, according to a report submitted to China's central government last month."

"The machines, which can learn on the job, are planted at various points along production lines to monitor the state of the food from raw ingredients to end product. They are equipped with electrical and optical sensors to simulate human eyes, noses and tongues, with a 'brain' running a neural network algorithm, which looks for patterns in data."
Thumbnail Robot hummingbird.
Thumbnail "These robots are the size of a speck of dust. Thousands fit side-by-side on a single silicon wafer similar to those used for computer chips, and, like Frankenstein coming to life, they pull themselves free and start crawling."

"Marc Miskin, a professor of electrical and systems engineering at the University of Pennsylvania, developed a technique to put layers of platinum and titanium on a silicon wafer. When an electrical voltage is applied, the platinum contracts while the titanium remains rigid, and the flat surface bends. The bending became the motor that moves the limbs of the robots, each about a hundred atoms thick."
Thumbnail "The first known sorting robot to work in a US recycling facility was installed only three years ago. That's when fledgling company AMP Robotics installed and tested its robot at Alpine Waste & Recycling's Altogether MRF near Denver. Since then, the recycling sector has embraced robotics."

"Resource Recycling compiled data from major players in the North American robots business -- AMP Robotics, BHS and Machinex -- and supplemented the data with information obtained from news sources and past reports".

"For the most part, robots are working in single-stream MRFs. The 37 identified facilities in the US and Canada with robots break down this way..."
Thumbnail "Pre-Order your AWS DeepRacer today. AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, 3D racing simulator, and global racing league." $399.00.
Thumbnail First attempt at removing cars off the roads with neural nets. Key words are "first attempt."
Thumbnail NPR 1A on "Putting The 'Art' In Artificial Intelligence" with Marcus du Sautoy, author of The Creativity Code: Art and Innovation in the Age of AI.
Thumbnail A Hong Kong tycoon is "going after the salesman who persuaded him to entrust a chunk of his fortune to the supercomputer whose trades cost him more than $20 million."

"Samathur Li Kin-kan is now suing Tyndaris for about $23 million for allegedly exaggerating what the supercomputer could do. Lawyers for Tyndaris, which is suing Li for $3 million in unpaid fees, deny that Costa overplayed K1's capabilities. They say he was never guaranteed the AI strategy would make money."
Thumbnail "The driver normally makes decisions based on his/her model of the world. Suddenly encountering a slippery road, however, leads to unexpected skidding. Online adaptation of the driver's world model based on just a few of these observations of model mismatch allows for fast recovery."

"Humans have the ability to seamlessly adapt to changes in their environments: adults can learn to walk on crutches in just a few seconds, people can adapt almost instantaneously to picking up an object that is unexpectedly heavy, and children who can walk on flat ground can quickly adapt their gait to walk uphill without having to relearn how to walk."

'Robots, on the other hand, ...'

They note that the real challenge is to successfully enable model adaptation "when the models are complex, nonlinear, high-capacity function approximators," i.e., neural networks. Neural networks normally require a huge amount of data and time for training.

The system works by explicitly training for adaptation during training (or "meta-training") time, then "fine tuning" the resulting model on each time step to adapt to the current environment.
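
Here's a toy sketch of just the online-adaptation half, with made-up 1-D dynamics. The actual work meta-trains the model so that these few gradient steps adapt it quickly; this sketch skips the meta-training and only shows the per-time-step fine-tuning on recent data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 1-D dynamics: next_state = a * state + b * action, with a and b unknown.
theta = np.array([0.5, 0.5])             # the (assumed) meta-trained model parameters

def predict(theta, s, u):
    return theta[0] * s + theta[1] * u

true = np.array([0.9, 0.3])              # the environment the robot actually finds itself in
buffer, s = [], 1.0
for t in range(50):
    u = rng.uniform(-1, 1)
    s_next = true[0] * s + true[1] * u   # observe what really happened
    buffer = (buffer + [(s, u, s_next)])[-8:]   # only the last few transitions matter
    # "Fine-tune" the model on each time step with a few gradient steps on recent data.
    for _ in range(5):
        for (si, ui, ni) in buffer:
            err = predict(theta, si, ui) - ni
            theta -= 0.05 * err * np.array([si, ui])   # gradient of the squared error
    s = s_next

print("adapted model:", theta)           # should end up close to the true dynamics [0.9, 0.3]
```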
Thumbnail Bee tech. ApisProtect develops internet-connected beehive sensors that "monitor temperature, humidity, carbon dioxide, sound and movement; data which can be analysed by machine learning to provide alerts for issues like disease, pests or unusual behaviour."

"BeeHero has a platform which monitors a whopping 20,000 hives, and around a billion bees."

Pollenity "is currently focused on selling its AI-powered Beebot sensors directly to smaller and hobbyist beekeepers."

"Arnia, founded in 2009, is one of the longest standing bee tech startups on the European scene and has long analysed the sonic 'buzz' of our beehives to determine swarm behaviour (when a bee leaves a colony to start a new hive)" and "has collected terabytes of sound data and swathes of bee behaviour research."
Thumbnail "In my analysis, the current moment on the social robotics timeline is akin to the era following the failure of the Apple Newton, long before today's ubiquity of smartphone devices. The Newton, introduced in 1993 and pronounced dead in 1998, was the first commercial handheld computing device. It was a massive technological feat, but was priced too high for its usefulness and did not work well enough in its most common applications. That said, almost immediately after Apple's cancellation of the Newton product line, rival Palm dominated the early 2000s with a very successful line of handheld devices almost identical to the Newton, leading directly to the development of smartphones within five years."

"Like the Newton, Jibo, Kuri, and Cozmo were courageous trailblazers into a virtually unknown use-case."
Thumbnail "The FarmWise robot uses artificial vision to navigate through fields, then analyzes plants to determine which are weeds and which should stay in the ground. 'The most challenging part was to build a system that's both accurate enough, but more importantly general enough, for the intrinsic variability that you see on the farm,' says Sébastien Boyer, cofounder & CEO of FarmWise, the Silicon Valley startup that designed the robot. The robot has to be able to recognize plants at various stages of growth and different crops. 'We basically had to leverage those recent advances in computation algorithms -- algorithms similar to what Facebook and Google are using to recognize us in our pictures -- to reach that level of consistency and repeatability across many different fields.' When the machine recognizes a weed, it uses a hoe-like attachment to automatically remove it."
Thumbnail Traffic stop robot. Not autonomous, really just a telepresence system that sticks out in the front of a police car.
Thumbnail "Welcome to the Artificial Overmind League. The rules are simple: build an AI in Python that can play StarCraft II, outsmart the other players, and rise in the rankings."

"Once per month, Starcraft II World Champion and gaming icon Serral will face off against the top rated bot - and we'll livestream the match. The first match against Serral will be played June 17th."

"Are you ready to take part?"

"Sign up your team."
Thumbnail "In the aftermath of an earthquake, a snakelike robot that can crawl through rubble and tight air pockets is able to access places that no person could -- or should -- be able to go. The Sarcos Guardian S, a small robotic visual inspection platform, is designed for exactly those scenarios: searching for cracks in industrial pipelines, finding people trapped in unstable buildings, sensing whether hazardous gases at an accident site could pose a safety risk to first responders."

"Microsoft is building an end-to-end toolchain to help make it easier for every developer and every organization to create autonomous systems for their own scenarios -- whether that's a robot that can help in life-threatening situations, a drone that can inspect remote equipment or systems that help reduce downtime in a factory by autonomously calibrating equipment."
Thumbnail "Will Xiao, a graduate student in the Department of Neurobiology at Harvard Medical School, designed a computer program that uses a form of responsive artificial intelligence to create self-adjusting images based on neural responses obtained from six macaque monkeys. To do so, he and his colleagues measured the firing rates from individual visual neurons in the brains of the animals as they watched images on a computer screen."

"Over the course of a few hours, the animals were shown images in 100-millisecond blips generated by Xiao's program. The images started out with a random textural pattern in grayscale. Based on how much the monitored neurons fired, the program gradually introduced shapes and colors, morphing over time into a final image that fully embodied a neuron's preference. Because each of these images is synthetic, Xiao said, it avoids the bias that researchers have traditionally introduced by only using natural images."

"At the end of each experiment, this program generates a super-stimulus for these cells."

"The results of these experiments were consistent over separate runs, explained senior investigator Margaret Livingstone: Specific neurons tended to evolve images through the program that weren't identical but were remarkably similar."

"A neuron that they suspected might respond to faces evolved round pink images with two big black dots akin to eyes." "A neuron in one of the animals consistently generated images that looked like the body of a monkey, but with a red splotch near its neck. The researchers eventually realized that this monkey was housed near another that always wore a red collar."
Thumbnail "The researchers first created a one-to-one map of neurons in the brain's visual area V4 to nodes in the computational model. They did this by showing images to animals and to the models, and comparing their responses to the same images. There are millions of neurons in area V4, but for this study, the researchers created maps for subpopulations of five to 40 neurons at a time."

"Once each neuron has an assignment, the model allows you to make predictions about that neuron."

"The researchers then set out to see if they could use those predictions to control the activity of individual neurons in the visual cortex. The first type of control, which they called 'stretching,' involves showing an image that will drive the activity of a specific neuron far beyond the activity usually elicited by 'natural' images similar to those used to train the neural networks."

"The researchers found that when they showed animals these 'synthetic' images, which are created by the models and do not resemble natural objects, the target neurons did respond as expected. On average, the neurons showed about 40 percent more activity in response to these images than when they were shown natural images like those used to train the model."
Thumbnail "The rise of deep learning has created a sea change over the last five years because deep learning has made it so robots can see much more clearly. There have been advances in other areas as well -- more controls work, mechanical engineering, the materials work. There are perception solutions that work way better compared to five years ago. This has created a lot of opportunities in robotics applications."
Thumbnail "Facebook open sourced two new tools targeted to streamline adaptive experimentation in PyTorch applications: Ax: Is an accessible, general-purpose platform for understanding, managing, deploying, and automating adaptive experiments. BoTorch: Built on PyTorch, is a flexible, modern library for Bayesian optimization, a probabilistic method for data-efficient global optimization."
Thumbnail "We offer to apply AI to floor plans analysis and generation. Our ultimate goal is three-fold: (1) to generate floor plans i.e. optimize the generation of a large and highly diverse quantity of floor plan designs, (2) to qualify floor plans i.e. offer a proper classification methodology (3) to allow users to 'browse' through generated design options."

"Our methodology follows two main intuitions (1) the creation of building plans is a non-trivial technical challenge, although encompassing standard optimization technics, and (2) the design of space is a sequential process, requiring successive design steps across different scales (urban scale, building scale, unit scale). Then, in order to harness these two realities, we have chosen nested Generative Adversarial Neural Networks or GANs."

"Through the creation of 6 metrics, we propose a framework that captures architecturally relevant parameters of floor plans. On one hand, Footprint Shape, Orientation, Thickness & Texture are three metrics capturing the essence of a given floor plan’s style. On the other hand, Program, Connectivity, and Circulation are meant to depict the essence of any floor plan organization."
Thumbnail Does 2 + 2 = 4?
Thumbnail Machine-learning and supercomputers have been used to spot millions of imperceptible earthquakes -- as small as magnitude 0.3 -- "hiding in the seismological records of southern California, one of the most tectonically treacherous corners of the United States."

"Distinguishing between sources of low-level ground shaking is 'anything but trivial', says Yehuda Ben-Zion, acting director of the Southern California Earthquake Center at USC and co-leader of the Mining Seismic Wavefields project. His group, which was studying the anatomy of seismic faults, found that the California ground shakes constantly. Vibrations from planes, trees, houses and even antennas shaking in the wind generate rumbles that, to a seismograph, look like earthquakes and can make up up 10 -- 50% of signals in a set of seismological data."
Thumbnail Debate between AI and human. The participants were only told the subject of the debate 15 minutes before the debate, which is: "We should subsidize preschool." The AI was assigned the "for" position and the human was assigned the "against" position. Representing AI is IBM Project Debater and representing humans is debate champion Harish Natarajan.

I don't know how I feel about the subject of the debate but I feel quite certain about one thing: I never want to debate IBM Project Debater. IBM Project Debater is a formidable debater.
Thumbnail Squishy Robotics makes robots designed to be dropped.
Thumbnail Honeywell and Siemens are making robotic systems to unload trucks. "Honeywell's apparatus is a behemoth on wheels that has a bank of suction cups to grab packages stacked high. A portable conveyor catches or scoops them up from the trailer bed."

"Siemens took a different approach. A rolling belt must be permanently installed on the truck trailer's floor with packages loaded on top. When the trailer is at the loading dock, a large machine is attached to the belt and packages are pulled in and sent to the sorting hub. Unloading a standard trailer takes about 10 minutes, compared with approximately an hour for one person moving the boxes."
Thumbnail Artificial insemination of rhinos with snakelike robots.
Thumbnail scite_ is a machine learning system that analyses scientific papers to see whether they support, contradict, or just mention the papers they're citing.

You put in a DOI and it tells you whether the paper is supported, contradicted, or just mentioned by other papers. To give it a whirl, I put in 10.1038/ejcn.2013.116 ("Beyond weight loss: a review of the therapeutic uses of very-low-carbohydrate (ketogenic) diets"). It said 1 supporting, 62 just mentioning, 0 contradicting.
Thumbnail "How does one measure the benefits of a zero cost digital services? The team of economists used what they call 'massive online choice experiments.'"

"We do hundreds of thousands of online choice experiments. We get people to compare their preferences between two goods. Would you rather give up online music for a month or Facebook for a month? Wikipedia or Twitter? We do these comparisons with lots of different goods and lots of different people. You start getting a ranking of all the goods."

"In the massive online choice experiments, the researchers asked US adult Facebook users to choose between keeping their access to Facebook or giving it up for one month and get paid $E. Participants were randomly assigned one of twelve price points where E could equal 1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, or 1000. The participants were informed prior to making the decision that one out of every 200 participants would be randomly selected to receive the value of his or her selection. The combined responses formed the demand curve using a statistical model."

"Our estimate was the median user would have to be paid $48 to give up Facebook for one month."
Thumbnail "Facial recognition technology used by London's Metropolitan Police incorrectly identified members of the public in 96 per cent of matches made between 2016 and 2018."

"Biometric photos of members of the public were wrongly identified as potential criminals during eight incidents across the two-year period, Freedom of Information (FoI) requests have revealed."

"In one incident, a 14 year-old black child in school uniform was stopped and fingerprinted by police after being misidentified by the technology, while a man was fined for objecting to his face being scanned on a separate occasion."

"The technology made incorrect matches in 100 per cent of cases during two deployments at Westfield shopping centre in Stratford, east London last year."
Thumbnail Google AI is sponsoring the 6th "Fine-Grained Visual Categorization Workshop." "Whereas traditional image classification competitions focus on distinguishing generic categories (e.g., car vs. butterfly), the FGVCs go beyond entry level categories to focus on subtle differences in object parts and attributes. For example, rather than pursuing methods that can distinguish categories, such as 'bird', we are interested in identifying subcategories such as 'indigo bunting' or 'lazuli bunting.'"

"This year there will be a wide variety of competition topics, each highlighting unique challenges of fine-grained visual categorization, including an updated iNaturalist challenge, fashion & products, wildlife camera traps, food, butterflies & moths, fashion design, and cassava leaf disease. We are also delighted to introduce two new partnerships with world class institutions -- The Metropolitan Museum of Art for the iMet Collection challenge and the New York Botanical Garden for the Herbarium challenge."

"In the iMet Collection challenge, participants compete to train models on artistic attributes including object presence, culture, content, theme, and geographic origin."

"In the Herbarium challenge, researchers are invited to tackle the problem of classifying species from the flowering plant family Melastomataceae."
Thumbnail Datacosm is an animated film about data, the seeding of it, the consumption of it, the corruption of it, and identity theft, with an AI selecting (not composing) music that the live pianist will play.
Thumbnail "A system failure at an auto manufacturer can cost up to $1.3 million an hour. An offshore oil platform going offline can waste around $3.5 million a day."

"Many firms implement predictive maintenance programs to detect equipment flaws before damage occurs. Traditional techniques rely on installing a large number of purpose-built sensors and measuring the performance of specific machines."

"Reliability Solutions is taking a different approach. The Krakow, Poland-based startup uses deep learning to derive insights from the huge amount of data already being collected by the myriad of sensors previously installed by their clients, on premise."

"One of the largest energy companies in Europe turned to Reliability Solutions to build a predictive model that could detect the failure of a fluidized bed combustion boiler."
Thumbnail "Let's put the ridiculous aside. Radiologists will be empowered by AI, not replaced by it. Radiology has bought into the now-famous quote from Curtis Langlotz, MD, PhD, professor of radiology and biomedical informatics and director of the Center for Artificial Intelligence in Medicine and Imaging (AIMI Center) at Stanford University: 'Artificial intelligence will not replace radiologists ... but radiologists who use AI will replace radiologists who don't.' It's the rallying cry."

"Medical imaging's present and future is human plus machine. Radiologists are intrigued with AI's potential to expedite and improve their ability to interpret images. Some are starting to benefit from AI apps, and test plenty more of their own and commercial apps in development. Like many other industries, healthcare looks to AI to quickly wring insights from data, making information more useful and actionable."

"Now is the time to get onboard and embrace AI, says the president of radiology's leading professional association."

The article is sponsored by some company called Pure Storage but says nothing about storage (or purity).
Thumbnail A 1911 movie about a robot, The Automatic Motorist, got a soundtrack produced by OpenAI's MuseNet, with the first few notes of each segment cued with Mozart, Beethoven, and Tchaikovsky.
Thumbnail "It was a hot February morning at Wish Farms, a large strawberry-growing operation outside Plant City, Florida. Gary Wishnatzki, the proprietor, met me at one of the farm offices. In the high season, Wish Farms picks, chills, and ships some twenty million berries -- all handpicked by a seasonal workforce of six hundred and fifty farm laborers."

"Wishnatzki's is only one of a number of startups that are trying to build a strawberry-picking robot. Among them are a machine that has been developed at Utsunomiya University, in Japan, another by Dogtooth, in the U.K., and a third by Octinion, in Belgium. The Spanish company Agrobot is also testing one. There are prototypes of high-tech orange, grape, and apple harvesters in development as well. A Silicon Valley startup called Blue River Technology created a robotic lettuce-thinner that has been getting a lot of attention from California specialty-crop farmers. (John Deere bought the company in 2017.)"

"All these prototypes rely on a handful of converging technologies -- artificial intelligence, robotics, big data, GPS., machine vision, drones, and material science -- that have been slowly finding their way onto the farm."

"The farms most amenable to automation are indoor ones -- both greenhouses and the newer vertical farms that have begun to appear in urban areas in recent years. (Dairy barns also lend themselves to automated systems, because of the factorylike regularity of the setting.) The Netherlands is a global leader in indoor farming."
Thumbnail The full text of Andrew Ng's book Machine Learning Yearning is available online for free.
Thumbnail Billion Songs is a neural network that generates song lyrics. For some reason the title for every song is "Untitled".
Thumbnail "Traditional computer vision is a broad collection of algorithms that allow to extract information from images (typically represented as arrays of pixel values). There is a wide range of methods used for various applications, such as denoising, enhancement and detection of various objects. Some methods are aiming at finding simple geometric primitives, e.g. Edge detection, followed by morphological analysis, Hough Transform, Blob detection, Corner detection , various techniques of image thresholding etc."

"Contrary to popular belief, the tools discussed above combined together can form very powerful and efficient detectors for particular objects. One can construct a face detector, a car detector, street sign detector and they are very likely to outperform deep learning solutions for these particular objects both in accuracy as well as computational complexity. But the problem is, each detector needs to be constructed from scratch by competent people, which is inefficient, expensive and not scalable."

"Deep learning surprisingly taught us something very interesting about visual data (high dimensional data in general): in ways it is much 'shallower' than we believed in the past. There seems to be many more ways to statistically separate a visual dataset labeled with high level human categories, than there are ways to separate such dataset that are 'semantically correct'. In other words, the set of low level image features is a lot more 'statistically' potent than we imagined."
Thumbnail The ReWork Deep Learning in Finance Summit. "The first speaker was Jackson Hull from British financial services company GoCompare Group." "Hull spoke on the use of deep learning to improve customer experience, leveraging transactional datasets to provide customers with more value from financial services."

"The next speaker was Huma Lodhi, a principal data scientist from Direct Line Group." "Huma shed light on the importance of scene understanding technologies for insurance and risk management, and talked about the successful applications of deep learning for different tasks ranging from risk modeling to claim settlement."

"The third speaker, Manuel Proissl is the head of predictive analytics at UBS." "Proissl focused on human-augmented training of domain-specific neural networks, and discussed use cases and recent advances in methods to address model transparency, adversarial robustness, algorithmic bias and fairness."

"The final speaker was Rich Radley, a customer engineer from Google Cloud." "Radley showed how Google is partnering with financial services organizations to apply deep learning techniques to problem domains such as forecasting, risk, and financial crimes, and how AI can improve customer experience."
Thumbnail Person wearing a printed picture patch that makes them invisible to YOLOv2.
Thumbnail Velodyne has a partnership with Nikon to mass produce Velodyne lidar sensors.
Thumbnail Colossus the robot, first announced by Shark Robotics in 2017, assisted the Paris Fire Brigade in fighting the fire at Notre Dame Cathedral. "Colossus is always being piloted remotely by a firefighter trained to operate the machine."

"Colossus acts as a kind of technical support station to the firefighting team by supplying information from its sensors to both the remote pilot and the other firefighters in real time." "Firefighters obviously want to know the temperature, and Colossus has an advanced thermometer, but they can also use the robot to find out whether there are any hazardous chemicals in the air besides smoke."

"Colossus is built to withstand the extreme heat you feel when you go into a burning building for up to 8 hours at a time, and it will not feel pain or sustain damage if a building collapses on top of its chassis. However, it is also able to take over some of the less strategic aspects of firefighting. For example, Colossus is capable of moving wounded fighters to a safe place or carrying up to one ton of equipment across the scene. The heaviest hose it can lift would take three or four human firefighters to lift otherwise."
Thumbnail Robot crawler made with kirigami, a Japanese paper craft that uses tiny cuts in the paper. As the robot crawls, the cuts "pop up" into a 3D textured surface. The skin can be programmed by controlling the cuts and the curvature of the surface. Cuts of the same size and different sizes cascade or deform the skin in different ways.
Thumbnail Sheet pile driving robot. Deploys sheet piles in natural environments to reduce erosion.
Thumbnail "Seven Dreamers, the Japanese company behind the AI-powered laundry-folding robot Laundroid, has filed for bankruptcy."
Thumbnail Anki, the company that makes 'cute' robots like Cozmo, is shutting down and laying off almost 200 employees, despite having raised over $200 million in venture capital and reporting revenue 'approaching' $100 million in 2017. The company was unable to raise enough VC money or secure an acquisition from a company like Microsoft, Amazon, or Comcast.
Thumbnail MuseNet is a deep neural network "that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles. MuseNet was not explicitly programmed with our understanding of music, but instead discovered patterns of harmony, rhythm, and style by learning to predict the next token in hundreds of thousands of MIDI files. MuseNet uses the same general-purpose unsupervised technology as GPT-2, a large-scale transformer model trained to predict the next token in a sequence, whether audio or text."

The page has 7 samples.
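
To make the "predict the next token" idea concrete, here is a heavily simplified sketch of an autoregressive sampling loop in PyTorch. The toy model, vocabulary size, and token ids are all placeholder assumptions; MuseNet's actual architecture, scale, and MIDI tokenization are not shown.

# Heavily simplified sketch of autoregressive next-token generation.
# The tiny model here is a stand-in, not MuseNet.
import torch
import torch.nn as nn

VOCAB = 512                                  # assumed size of the note/event vocabulary
embed = nn.Embedding(VOCAB, 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(64, VOCAB)

context = torch.tensor([[1, 5, 9]])          # tokens generated (or given as a prompt) so far

with torch.no_grad():
    for _ in range(16):                      # generate 16 more tokens, one at a time
        hidden = encoder(embed(context))     # (1, seq_len, 64)
        logits = head(hidden[:, -1, :])      # distribution over the *next* token only
        probs = torch.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        context = torch.cat([context, next_token], dim=1)

print(context.tolist())                      # the extended token sequence

During training such a model would use a causal attention mask so each position sees only earlier tokens; that detail is omitted in this sampling-only sketch.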
Thumbnail The Power of Self-Learning Systems with DeepMind's Demis Hassabis. He gives a high level overview of the history of AlphaZero, the differences between AlphaZero and AlphaStar (said they'd publish a paper with the details but I checked and the paper is not out), unsolved problems in AI, and DeepMind's efforts to use AI to advance science with AlphaFold. (He doesn't mention DeepMind's mathematical theorem prover which came out shortly after this talk, and mentions that DeepMind is starting other science efforts but doesn't say what they are.)
Thumbnail "There is strong evidence that this time is indeed different, and Moore's Law is soon to be over for good. Already, Dennard scaling, Moore's Law's lesser known but equally important parallel, appears to have ended. Dennard's scaling refers to the property that the reduction of transistor size came with an equivalent reduction of required power.8 This has real consequences -- even though Moore's Law has continued over the last decade, with feature sizes going from ~65nm to ~10nm; the ability to speed up processors for a constant power cost has stopped. Today's common CPUs are limited to about 4GHz due to heat generation, which is roughly the same as they were 10 years ago. While Moore's Law enables more CPU cores on a chip (and has enabled high power systems such as GPUs to continue advancing), there is increasing appreciation that feature sizes cannot fall much further, with perhaps two or three further generations remaining prior to ending."

"Multiple solutions have been presented for technological extension of Moore's Law, but there are two main challenges that must be addressed. For the first time, it is not immediately evident that future materials will be capable of providing a long-term scaling future. While non-silicon approaches such as carbon nanotubes or superconductivity may yield some benefits, these approaches also face theoretical limits that are only slightly better than the limits CMOS is facing. Somewhat more controversial, however, is the observation that requirements for computing are changing. In some respects, the current limits facing computing lie beyond what the typical consumer outside of the high-performance computing community will ever require for floating point math. Data-centric computations such as graph analytics, machine learning, and searching large databases are increasingly pushing the bounds of our systems."

"For these reasons, neural computing has begun to gain increased attention as a post-Moore's Law technology."

The article goes on to argue that knowledge of the brain is undergoing its own dramatic scaling, and that once the materials science and chemistry of devices have essentially been optimized, the way forward is algorithmic and architectural advances inspired by the brain. It explores five areas: "feed-forward sensory processing", "temporal neural networks", "Bayesian neural algorithms", "dynamical memory and control algorithms", and "cognitive inference algorithms, self-organizing algorithms and beyond".