Boulder Future Salon Recent News Bits

Thumbnail Dreamer is a reinforcement learning system that internalizes a 'world model' and plans future actions by 'imagining' the outcomes of actions. "They say that it not only works for any learning objective, but that Dreamer exceeds existing approaches in data efficiency and computation time as well as final performance."

"Throughout an AI agent's lifetime, either interleaved or in parallel, Dreamer learns a latent dynamics model to predict rewards from both actions and observations. In this context, 'latent dynamics model' refers to a model that's learned from image inputs and performs planning to gather new experience. The 'latent' bit indicates that it relies on a compact sequence of hidden or latent states, which enables it to learn more abstract representations, such as the positions and velocities of objects. Effectively, information from the input images is integrated into the hidden states using an encoder component, after which the hidden states are projected forward in time to anticipate images and rewards."

"A representation bit encodes observations and actions, and a transition bit anticipates states without seeing the observations that will cause them. A third component -- a reward component -- projects the rewards given the model states, and an action model implements learned policies and aims to predict actions that solve imagined environments. Finally, a value model estimates the expected imagined rewards that the action model achieves, while an observation model provides feedback signals."
Thumbnail The $1 million Deepfake Detection Challenge. "AWS, Facebook, Microsoft, the Partnership on AI's Media Integrity Steering Committee, and academics have come together to build the Deepfake Detection Challenge (DFDC). The goal of the challenge is to spur researchers around the world to build innovative new technologies that can help detect deepfakes and manipulated media."

"Challenge participants must submit their code into a black box environment for testing. Participants will have the option to make their submission open or closed when accepting the prize. Open proposals will be eligible for challenge prizes as long as they abide by the open source licensing terms. Closed proposals will be proprietary and not be eligible to accept the prizes. Regardless of which track is chosen, all submissions will be evaluated in the same way. Results will be shown on the leaderboard."

The challenge is all done with video, and there are 4 datasets involved: a training set containing videos and labels indicating whether they are deepfakes or not; a public validation set that you can test against; a public test set used for the public leaderboard but not the final prizes; and a private test set, against which your code will be re-run for your final leaderboard scores.

First prize is $500,000, second prize is $300,000, third prize is $100,000, fourth prize is $60,000, and fifth prize is $40,000.
Thumbnail A neural network that can produce 3D objects from 2D images has been developed. Nvidia seems to want to steal all the credit so they can brag about how fast the system trained on Nvidia GPUs, but the research was actually done not only by Nvidia, but by researchers from the University of Toronto, the Vector Institute (which is Geoffrey Hinton's outfit in Toronto), McGill University (in Montreal), and Aalto University (in Helsinki, Finland).

Anyway, the thing that surprised me about this is that instead of doing the obvious thing, which would be to render 3D worlds as 2D using a video game engine and then use the 2D image and 3D model as training pairs for a neural network (I assume this has already been done a million times), what the researchers did here was make the actual 3D rendering differentiable! What that means is they carefully redid the math in the whole 3D rendering pipeline so you can take derivatives of it (in the calculus sense). That matters because neural networks are usually trained with some variant of gradient descent, and gradient descent relies on knowing derivatives to know which way to go to "descend" the cost function, which is the function that calculates the difference between what the neural network outputs and what the "correct" answer is. By making the whole 3D rendering process differentiable, they enable the neural network to learn how light rays -- and they are using a ray-tracing system -- project 3D objects onto 2D images.
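The "differentiable renderer + gradient descent" idea can be boiled down to a toy. Below, a single made-up scene parameter (think of it as an object's brightness) is "rendered" into a one-pixel image by a function we can differentiate, so the cost function's gradient can be pushed back through the renderer to recover the parameter. This is my own minimal sketch of the principle, not the paper's pipeline.

```python
import numpy as np

# Toy stand-in for a differentiable renderer: tanh maps a scene
# parameter to a "pixel value", and we know its derivative.
def render(theta):
    return np.tanh(theta)           # differentiable rendering step

def d_render(theta):
    return 1.0 - np.tanh(theta)**2  # its derivative

target = render(0.8)   # "photo" produced by the true (unknown) parameter
theta = -1.0           # initial guess

for _ in range(500):
    image = render(theta)
    # Chain rule: d/d_theta of (image - target)^2, back through render().
    grad = 2.0 * (image - target) * d_render(theta)
    theta -= 0.5 * grad            # gradient descent step

print(round(theta, 3))  # converges to the true parameter, 0.8
```

In the real system the same trick applies to every step of rasterization/ray tracing, so the gradient of the image error flows all the way back to the 3D shape parameters.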

One simplification they made is creating two different differentiable systems, one for foreground objects, and one for the background. Whether this will result in any drawbacks in the long run remains to be seen.
Thumbnail Over 2,000 new, small genes have been discovered, expanding the number of human genes by 10%. "These previously unknown genes are known as small open reading frames (smORFs), and the scientists have developed a method for detecting these important genetic sequences in human cell lines." The total number of smORFs is estimated to be between 2,500 and 3,500. Since the number of known genes before was about 25,000, this is about a 10-14% increase in the number of known genes.

If you're wondering what "smORF" stands for -- well, I already told you what it stands for, "small open reading frame", but if you're wondering what that means, a "reading frame" is a series of DNA base pairs that can be interpreted as "triplets" where each "triplet" maps to a specific amino acid. An "open" reading frame is a reading frame bookended by a specific "start" triplet and one of the "stop" triplets (the start triplet, ATG, doubles as the code for the amino acid methionine, while the stop triplets don't map to any amino acid). And a "small" open reading frame is, well, a small open reading frame. So there's your smORF.
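To make the definitions concrete, here's a minimal ORF scanner over a DNA string. Real smORF detection uses ribosome profiling, not a string scan; this only illustrates what "reading frame", "start", and "stop" mean.

```python
# Stop codons don't code for any amino acid; ATG is the start codon.
STOP = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=2):
    """Return (start, end) index pairs of ORFs (ATG ... stop) in all 3
    forward reading frames of the given DNA string."""
    orfs = []
    for frame in range(3):
        # Chop this frame into triplets ("codons").
        codons = [dna[i:i+3] for i in range(frame, len(dna) - 2, 3)]
        start = None
        for idx, codon in enumerate(codons):
            if codon == "ATG" and start is None:
                start = idx                      # open the frame
            elif codon in STOP and start is not None:
                if idx - start >= min_codons:    # long enough to count
                    orfs.append((frame + 3 * start, frame + 3 * (idx + 1)))
                start = None                     # close the frame
    return orfs

seq = "CCATGAAATTTGGGTAACC"   # ATG AAA TTT GGG TAA, starting at index 2
print(find_orfs(seq))         # [(2, 17)]
```

(A full scanner would also check the reverse complement strand; omitted here for brevity.)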

What they did was adapt a technique called ribo-seq (ribosome sequencing) to find smORFs. Ribo-seq is a system that, instead of sequencing all the RNA, only sequences the RNA that goes through the ribosome, which is the molecular machine within cells that maps RNA to actual amino acids. DNA is transcribed onto "messenger" RNA (mRNA) to go to the ribosome, and the mRNA is what the ribosome actually uses to connect amino acids into chains that become proteins. Ribo-seq catches the mRNA at the ribosome and so only sequences the mRNA that is actively being converted to protein. By detecting proteins that were coming from DNA not previously considered part of genes, they were able to discover smORFs.
Thumbnail Ever sit around wondering, if you didn't know the speed of light, how would you figure it out? Becky Smethurst stomps through the wacky history of trying to measure the speed of light. How about making a cog and shining light through the teeth of the cog and reflecting it off a mirror about 5 miles away, and spinning up the cog so that when the light arrives back it exactly hits the other side of the next tooth in the cog so you can't see it? Knowing the distance and the spacing of the teeth in your cog gives you the speed of light. Or how about watching when Jupiter's moons are eclipsed by the planet? Or how about looking at the parallax of stars? Here's one even more out there: how about measuring the permeability and permittivity of electric and magnetic fields in a vacuum? Not so obvious that that gets you the speed of light.
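The toothed-wheel ("cog") method reduces to one piece of arithmetic: the light disappears when the wheel advances by half a tooth spacing during the round trip. The numbers below approximate Fizeau's 1849 setup (720 teeth, ~8.6 km one way, ~12.6 rotations per second); they're my figures for illustration, not taken from the video.

```python
# Fizeau-style toothed-wheel estimate of the speed of light.
teeth = 720            # teeth on the wheel
distance_m = 8_633     # one-way distance to the mirror, metres
rev_per_s = 12.6       # spin rate at which the returning light vanishes

# Time for the wheel to advance half a tooth spacing, i.e. 1/(2*teeth)
# of a revolution -- that's the light's round-trip time.
round_trip_time = 1 / (2 * teeth * rev_per_s)

c = 2 * distance_m / round_trip_time   # speed = round-trip distance / time
print(f"{c:.3e} m/s")                  # ~3.1e8 m/s, within a few percent of 299,792,458
```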
Thumbnail Exploring the inner workings of DeepMind's MuZero, the successor to AlphaZero. "Not only does MuZero deny itself human strategy to learn from, it isn't even shown the rules of the game."

"In other words, for chess, AlphaZero is set the following challenge: Learn how to play this game on your own -- here's the rulebook that explains how each piece moves and which moves are legal. Also it tells you how to tell if a position is checkmate (or a draw)."

"MuZero on the other hand, is set this challenge: Learn how to play this game on your own -- I'll tell you what moves are legal in the current position and when one side has won (or it's a draw), but I won't tell you the overall rules of the game."

"Imagine trying to become better than the world champion at a game where you are never told the rules. MuZero achieves precisely this."

"Why does MuZero have three neural networks, whereas AlphaZero only has one?" "Whereas AlphaZero only has only one neural network (prediction), MuZero needs three (prediction, dynamics, representation)."

"In the absence of the actual rules of chess, MuZero creates a new game inside its mind that it can control and uses this to plan into the future. The three networks (prediction, dynamics and representation) are optimised together so that strategies that perform well inside the imagined environment, also perform well in the real environment."
Thumbnail Materials to make refrigerators dozens of times more energy-efficient have been invented. I don't actually know how this works (the paper is paywalled, from the abstract I can see it has something to do with the cooling effects of a pressure-induced phase transition in NH4HSO4, whatever that is); I just thought it was ironic that an invention for increasing the efficiency of refrigeration was invented in Siberia. Do they actually need refrigeration in Siberia?
Thumbnail "Researchers have developed a more accurate method of measuring bisphenol A (BPA) levels in humans and found that exposure to the endocrine-disrupting chemical is far higher than previously assumed."

The study, "provides the first evidence that the measurements relied upon by regulatory agencies, including the US Food and Drug Administration, are flawed, underestimating exposure levels by as much as 44 times."

"Roy Gerona, assistant professor at University of California, San Francisco, developed a direct way of measuring BPA that more accurately accounts for BPA metabolites, the compounds that are created as the chemical passes through the human body."

"Previously, most studies had to rely on an indirect process to measure BPA metabolites, using an enzyme solution made from a snail to transform the metabolites back into whole BPA, which could then be measured."

"Gerona's new method is able to directly measure the BPA metabolites themselves without using the enzyme solution."

"The disparity between the two methods increased with more BPA exposure: the greater the exposure the more the previous method missed."

Approximately 9 million tons of BPA are produced per year for a wide range of consumer products such as plastics, epoxy resins (used in glue, paint, and structural materials), and thermal paper. BPA can affect endocrine signaling pathways mediated by estrogens, androgens, progestins, and thyroid hormone. Exposure during gestation has been linked to changes in developing tissues with postnatal effects on growth, metabolism, behavior, fertility, and cancer risk.
Thumbnail Turns out plants also have microbiomes. Who knew? "Britt Koskella, a UC Berkeley assistant professor of integrative biology, studies the microbial ecology of plants and how it affects plant health, much like biologists study the human microbiome's role in health. Focusing on agricultural crops, she has some of the same concerns as biologists who worry about the transmission of a healthy human microbiome -- skin, gut and more -- from mother to baby."

"When seedlings are first put into fields, for example, there are often no nearby adult plants from which they can acquire leaf and stem microbes. In the absence of maternal transmission, Koskella wondered, how do these plants acquire their microbiomes, and are these microbiomes ideal for the growing plants?"

"Increasing evidence also shows that microbiomes can affect yield, tolerance to drought and even the flowering time of plants."

"The researchers' experiments, conducted in greenhouses on UC Berkeley's Oxford Tract, involved taking five types of tomatoes and spraying four successive generations of plants with the microbiomes of the previous generation. The first generation was sprayed with a broad mix of microbes found on a variety of tomatoes in an outdoor field at UC Davis."

"By sequencing the 16S ribosomal subunits of the tomatoes' microbial communities after each generation -- a technique that allows identification of different bacterial taxa -- they were able to show that, by the fourth generation, only 25% of the original microbial taxa remained."

"When lead author Norma Morella, who is now a postdoctoral fellow at the Fred Hutchinson Cancer Research Center in Seattle, sprayed tomato plants with a microbial mixture -- half from the partially adapted microbiome of the first generation, half from the more mature fourth generation microbiome -- the fourth generation microbes took over, suggesting that they were much better adapted to the tomato."

They mentioned 16S ribosomal subunits. 16S amplicon sequencing is a technique used to identify bacteria. "Amplicon" refers to how small bits of DNA, or in this case RNA, are 'amplified' using the polymerase chain reaction (PCR) technique. Once amplified, a specific gene, the 16S ribosomal RNA gene, is sequenced and used to identify the bacteria, which works because it is present in all bacterial species. For fungi, they used the internal transcribed spacer technique. The internal transcribed spacer region is the spacer RNA between the ribosomal RNA subunits and is considered the "barcode sequence" for fungi.
Thumbnail This just in in the "Huh what?" department: "A physicist at the University of California, Riverside, has performed calculations showing hollow spherical bubbles filled with a gas of positronium atoms are stable in liquid helium. The calculations take scientists a step closer to realizing a gamma-ray laser."

Huh what? How did we get from liquid helium to a gamma-ray laser? Ok, let's take a closer look at this. First of all, "positronium" isn't a regular element, like polonium or plutonium. "Positronium" is like a hydrogen atom, except that instead of orbiting a proton, the electron orbits its anti-matter twin, called an anti-electron or, alternatively, a "positron" (for "positive electron" -- it's like an electron only it has positive instead of negative charge). Because an electron and a positron have the same mass, they orbit their common center of mass symmetrically, rather than one orbiting the other the way the electron orbits the proton in hydrogen (to the extent they can be said to be "orbiting" at all -- orbitals at the subatomic level are actually probability clouds). (Raise your hand if you already knew this -- this is the first time I've ever heard of positronium -- though apparently it was discovered in 1951, so there's been plenty of time for us to know about it.) And since they're a matter-anti-matter pair, if they ever touch, they annihilate each other, releasing gobs of energy. And those gobs of energy are in the form of photons -- particles of light. And since they have to carry so much energy, they come out in the form of gamma rays, which are high-energy photons.
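How big are those gobs of energy? In the simplest (two-photon) annihilation, each photon carries one electron rest energy, m_e c^2, which is easy to check:

```python
# Energy of each photon from electron-positron annihilation: E = m_e * c^2.
m_e = 9.109_383_7e-31    # electron mass, kg
c = 2.997_924_58e8       # speed of light, m/s
eV = 1.602_176_634e-19   # joules per electron-volt

photon_keV = m_e * c**2 / eV / 1e3
print(round(photon_keV))   # 511 keV per photon -- squarely gamma-ray territory
```

For comparison, visible-light photons carry a few eV, and medical X-rays tens to ~100 keV, so 511 keV photons are well into the gamma-ray band.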

So now we know where the gamma rays come from, but we still don't know what's going on with the liquid helium. Well, normally, positronium annihilates very fast -- on the order of nanoseconds. But somehow, in liquid helium, the liquid helium can form little spherical bubbles that contain "atoms" of positronium, and the positronium will turn into a Bose-Einstein condensate, and become stable. Bose-Einstein condensate is that forgotten state of matter that can exist at super-low pressures and super-low temperatures near absolute zero. When you were taught the states of matter as a kid, you were probably taught that they were solid, liquid, and gas, and maybe plasma was included. They forgot Bose-Einstein condensate. The thing that's weird about Bose-Einstein condensate is that multiple subatomic particles fall down into the same quantum state, act as a giant super-particle, and quantum effects normally only visible on tiny scales become visible on macro scales. How exactly this happens, I don't know, but apparently Satyendra Nath Bose and Albert Einstein were smart enough to figure it out.

Anyway, once you put your positronium in liquid helium and get the spherical bubbles and the Bose-Einstein condensate, the positronium becomes stable. At least that's the theory. Nobody's actually done it. If you look back at that first sentence, you'll see it has the word "calculations" in it. Allen Mills, a professor in the Department of Physics and Astronomy, did a bunch of calculations and thinks this should work, but nobody's actually done it.

Still don't know how you get from liquid helium filled with bubbles with positronium "atoms" in them to a gamma-ray laser. This sounds like the weirdest way to make a laser ever. And what would a gamma-ray laser be useful for, anyway? The press release says it "may have applications in medical imaging, spacecraft propulsion, and cancer treatment." Medical imaging? How would that work? Don't gamma rays have enough energy that they're ionizing? So wouldn't medical imaging with gamma rays give you cancer? It doesn't sound like a good idea.
Thumbnail Artificial intelligence to identify and count the calls of different species of seabirds exists, but a much simpler system to calculate acoustic diversity, complexity, and intensity is more accurate and a lot cheaper at measuring how well a protected ecosystem is rebounding from invasive species. The latest AI techniques aren't always the best solution, who knew?

"The new study took advantage of a highly successful campaign to remove invasive species and restore seabird nesting colonies in the Western Aleutian islands as a natural experiment. These remote islands, located in the Northern Pacific Ocean between Russia and Alaska, are important nesting sites for many species of seabirds."

"In the 18th century, fur traders introduced arctic foxes to the islands, and they quickly began devastating seabird populations. Later, rats brought to the islands during World War II added to the plight of nesting seabirds. The islands are now part of the Alaska Maritime National Wildlife Refuge managed by the U.S. Fish and Wildlife Service, which in 1949 began efforts to remove the invasive species and restore healthy seabird colonies."

"Today, some of the islands still have invasive predators while others have had them removed for varying amounts of time. One island that was never inhabited by foxes or rats was used as a reference for comparison to restored islands, representing a healthy seabird island. Overall, the varying timelines for the removal of invasive species from each island created a perfect opportunity for Abraham Borker, who led the study as a graduate student in the Conservation Action Lab at UC Santa Cruz, to test if soundscapes reflect the recovery of island seabirds."
Thumbnail Deep Java Library: Open source library to build and deploy deep learning in Java. From Amazon Web Services (AWS) Labs.
Thumbnail AlphaStar, DeepMind's StarCraft 2 AI, on Two Minute Papers with Károly Zsolnai-Fehér. I've previously summarized the paper so I'll just hand this over to Károly Zsolnai-Fehér for his take. The video includes bits of footage from the matches with Serral, a professional StarCraft 2 player.
Thumbnail What is information? Jim Al-Khalili explains.
Thumbnail "MuJoCo is a physics engine aiming to facilitate research and development in robotics, biomechanics, graphics and animation, and other areas where fast and accurate simulation is needed. It offers a unique combination of speed, accuracy and modeling power, yet it is not merely a better simulator. Instead it is the first full-featured simulator designed from the ground up for the purpose of model-based optimization, and in particular optimization through contacts. MuJoCo makes it possible to scale up computationally-intensive techniques such optimal control, physically-consistent state estimation, system identification and automated mechanism design, and apply them to complex dynamical systems in contact-rich behaviors. It also has more traditional applications such as testing and validation of control schemes before deployment on physical robots, interactive scientific visualization, virtual environments, animation and gaming."

Accurate simulations are essential for making AI for robots in the real world, because the learning algorithms are (at the moment) inefficient. You wouldn't, for example, want an AI that's learning to drive a car to drive it off a cliff thousands of times before learning not to, but you can do that in a simulation and then transfer the resulting AI into a real robotic car once it's been trained. However, that will only work if the simulation is close enough to the real world. So far it has proven difficult to get AIs trained in simulation to work in actual reality.

Oh, you might want to know what MuJoCo stands for. It stands for "Multi-Joint dynamics with Contact." It's a commercial product ($500 per year for noncommercial use, unless you're a student, in which case you can use it free but they want your educational institution to pay a license fee, $2,000 per year per user for commercial use). I'm pretty much priced out of it, but my reinforcement learning expertise isn't at a level yet where I could use it anyway.
Thumbnail New algorithm revolutionizing time-series analysis. "The Matrix Profile (and the algorithms to compute it: STAMP, STAMPI, STOMP, SCRIMP, SCRIMP++ and GPU-STOMP), has the potential to revolutionize time series data mining because of its generality, versatility, simplicity and scalability. In particular it has implications for time series motif discovery, time series joins, shapelet discovery (classification), density estimation, semantic segmentation, visualization, rule discovery, clustering etc (note, for pure similarity search, we suggest you see MASS for Euclidean Distance, and the UCR Suite for DTW)."

"Our overarching claim has three parts: given only the Matrix Profile, most time series data mining tasks are trivial, the Matrix Profile can be computed very efficiently, and algorithms that are built on top the Matrix Profile inherit all its desirable properties."

"The advantages of using the Matrix Profile (over hashing, indexing, brute forcing a dimensionality reduced representation etc.) for most time series data mining tasks include: It is exact, it is simple and parameter-free, it is space efficient, it allows anytime algorithms (ultra-fast approximate solutions), it is incrementally maintainable (having computed the Matrix Profile for a dataset, we can incrementally update it very efficiently), it does not require the user to set similarity/distance thresholds, it is embarrassingly parallelizable, it has time complexity that is constant in subsequence length, it can be constructed in deterministic time, and it can handle missing data."
Thumbnail Possible solution to the crisis in cosmology: spacetime has curvature. The crisis in cosmology, no doubt already freaking out some of you who have been following the news, is that different measurements of the expansion rate of the universe come out different. This has implications for various other things such as the age of the universe. The error bars used to overlap, so people used to not worry about it, but accuracy of the measurements has improved and now the error bars don't overlap. The two main measurement methods are Cepheid variable stars and the cosmic microwave background. One possible explanation why they come out different is that spacetime has curvature. A mathematical model with slight curvature was created that is able to match the observations from both methods.
Thumbnail AI + government. The UK government released a heavily redacted report of how AI can be used by government, leaving many to wonder about the implications of AI's deployment across the public sector without the public knowing how, or weighing in on its uses. The report said that out of 177 opportunities to use AI in government identified, 116 were unimplemented ideas, 31 were proofs of concept, and 30 were fully deployed and in production. Although who knows what they are because redacted.

According to the Financial Conduct Authority, two-thirds of financial companies in the UK use AI to make business decisions, and use of AI is expected to more than double in the next 3 years. Trust towards AI in the UK is decreasing. The Information Commissioner's Office issues fines to organizations that don't explain to people how their personal information is used by AI to make decisions.

New South Wales in Australia is using cameras with AI to catch drivers using their phones while driving.

The president of Indonesia gave the order to replace "echelon III and IV officials", whatever that means, with AI.

The Cyberspace Administration of China banned the publication and distribution of materials created by AI without clearly marking them in a prominent manner, to prevent the publication of fake news.
Thumbnail 20 kilometers of undersea fiber-optic cable were "turned into the equivalent of 10,000 seismic stations along the ocean floor. During their four-day experiment in Monterey Bay, they recorded a 3.5 magnitude quake and seismic scattering from underwater fault zones."

"Their technique, which they had previously tested with fiber-optic cables on land, could provide much-needed data on quakes that occur under the sea, where few seismic stations exist, leaving 70% of Earth's surface without earthquake detectors."

"The technique the researchers use is Distributed Acoustic Sensing, which employs a photonic device that sends short pulses of laser light down the cable and detects the backscattering created by strain in the cable that is caused by stretching. With interferometry, they can measure the backscatter every 2 meters (6 feet), effectively turning a 20-kilometer cable into 10,000 individual motion sensors."

"These systems are sensitive to changes of nanometers to hundreds of picometers for every meter of length. That is a one-part-in-a-billion change."

"During the underwater test, they were able to measure a broad range of frequencies of seismic waves from a magnitude 3.4 earthquake that occurred 45 kilometers inland near Gilroy, California, and map multiple known and previously unmapped submarine fault zones, part of the San Gregorio Fault system. They also were able to detect steady-state ocean waves -- so-called ocean microseisms -- as well as storm waves, all of which matched buoy and land seismic measurements."
Thumbnail Machine-learning algorithm for automatically classifying the sleep stages of lab mice. No word on a comparable system for humans. The system works oddly by putting the electroencephalogram (EEG) and electromyogram (EMG) signals into a convolutional system first, as a "feature extraction" system, and then feeding that into a "scoring" system that uses LSTM. Seems weird to me because the signals coming off the EEG and EMG are time series, and generally you don't put time series into convolutional neural networks but you do put them into LSTMs, and they're not converting the time series to spectrograms first. Don't ask me. EEG signals are fed directly into the convolutional network, and for EMG signals, they say the amplitude is more informative than the frequency-domain features for identifying sleep stages, so they use a moving root mean squared filter on the EMG signals before feeding them into the convolutional neural network. The convolutional neural network's job is to identify the "important" parts of the signal, and then those are fed into the LSTM as a time series.
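The EMG preprocessing step, as I understand it, is just a moving root-mean-square filter, which tracks the signal's amplitude envelope. The window length below is my choice for illustration, not the paper's:

```python
import numpy as np

def moving_rms(signal, window=64):
    """Moving RMS: square the signal, average over a sliding window,
    take the square root -- an amplitude envelope of the raw trace."""
    power = np.convolve(signal**2, np.ones(window) / window, mode="same")
    return np.sqrt(power)

# Synthetic EMG: quiet baseline noise with a burst of "muscle activity".
rng = np.random.default_rng(0)
emg = 0.05 * rng.normal(size=1000)
emg[400:600] += rng.normal(size=200)   # high-amplitude burst in the middle
envelope = moving_rms(emg)
print(envelope[500] > 5 * envelope[100])   # True: the burst stands out clearly
```

The envelope discards the raw waveform's frequency content and keeps only amplitude over time, which is the feature they claim matters for sleep staging.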
Thumbnail Brain regions associated with suicide have been identified: reduced volume and activity in the lateral ventral prefrontal cortex and medial ventral prefrontal cortex, which likely impairs the ability to regulate negative emotions; high levels of activity in the lower right lateral dorsal prefrontal cortex when shown angry faces, thought to be due to high sensitivity to criticism and social rejection; lower volume in the medial dorsal prefrontal cortex; and lower ventral anterior cingulate cortex activation, which is thought to indicate less ability to anticipate positive rewards. The lateral dorsal prefrontal cortex shows higher activity in suicidal people than in non-suicidal people when doing tasks that don't involve emotion in the brain scanner, like Stroop tasks, and this is thought to be because those tasks require inhibition of automatic responses, which is more difficult for suicidal people. The lateral dorsal prefrontal cortex is thought to play a critical role in decision-making. The medial dorsal prefrontal cortex shows a lower level of activity in response to angry faces.

Numerous brain areas have been studied in association with suicide with inconsistent results. The insula (insular cortex) has higher or lower activity in association with various mental illness diagnoses. It is thought to mediate between the ventral prefrontal cortex system and the dorsal prefrontal cortex/inferior frontal gyrus system, which, if dysfunctional, would diminish the person's ability to prevent the loss of emotional regulation in the dorsal prefrontal cortex from affecting the internal self-referential states in the default mode network, but that is uncertain. The amygdalae, hippocampus, and entorhinal cortex often have altered structures in association with mental illness diagnoses associated with suicide (primarily major depressive disorder and bipolar disorder, which account for over half of suicide deaths, but also mood disorders, borderline personality disorder, substance abuse disorders, schizophrenia, posttraumatic stress disorder (PTSD), and anxiety disorders) but don't actually seem to correlate with suicide itself. The striatum, thalamus, putamen, and caudate are central to the brain's reward system but structural alterations in these regions had inconsistent associations with suicide. There is evidence that dysfunction in the brain's default mode network, which is the part of the brain that is active when your "mind is wandering" and daydreaming, and which actually is the key part of the brain for self-referential processes like your sense of identity, your life story (autobiographical memory) and prospective imagery of your future self, plays a key role in suicide, however, the default mode network is not a single brain region but has connections spanning the medial ventral prefrontal cortex, cerebral cortex, posterior cingulate cortex, precuneus, and other brain regions.
Thumbnail "A novel circuit design that enables precise control of computing with magnetic waves -- with no electricity needed. The advance takes a step toward practical magnetic-based devices, which have the potential to compute far more efficiently than electronics" has been devised.

"Classical computers rely on massive amounts of electricity for computing and data storage, and generate a lot of wasted heat. In search of more efficient alternatives, researchers have started designing magnetic-based 'spintronic' devices, which use relatively little electricity and generate practically no heat."

"Spintronic devices leverage the 'spin wave' -- a quantum property of electrons -- in magnetic materials with a lattice structure. This approach involves modulating the spin wave properties to produce some measurable output that can be correlated to computation. Until now, modulating spin waves has required injected electrical currents using bulky components that can cause signal noise and effectively negate any inherent performance gains."

"The MIT researchers developed a circuit architecture that uses only a nanometer-wide domain wall in layered nanofilms of magnetic material to modulate a passing spin wave, without any extra components or electrical current. In turn, the spin wave can be tuned to control the location of the wall, as needed. This provides precise control of two changing spin wave states, which correspond to the 1s and 0s used in classical computing."

Ok, this article is a bit inscrutable, so I'll add a few comments. It's about something called spintronics, which is short for "spin transport electronics". The basic idea is that instead of representing 0 and 1 with the presence or absence of electrons (electric charge), you represent them with the spin of the electrons, which you can think of as spin-up and spin-down. This raises the problem of how you "transport" spin from place to place, a difficult proposition compared with just shoving around the loose electrons in materials that have loose electrons (that is, metals, which have electrons in the conduction band). In chemistry, atoms have electrons that are close to the nucleus, then electrons on the outside that are involved in chemical reactions -- these are called the valence band electrons -- then electrons outside of those that can be knocked loose and form electric currents -- these are called the conduction band electrons. If you've ever heard the term "band gap," that's because there has to be a "band gap" between the valence band and the conduction band for a material to act as a semiconductor.

What they're doing here, though, goes beyond simply transporting spin -- which they are doing using antiferromagnetic materials -- they are actually doing "switching", which is getting a material to flip from 0 to 1. While they don't mention transistors in the article, getting a material to do switching is a key step on the way to making transistors. Transistors make logic operations possible, which in turn makes computers possible. If you're wondering what antiferromagnetic materials are, you've probably heard of "ferromagnetic" materials, which are materials that can be magnetized (like your refrigerator magnets), so the question here is the "anti" part. In an "anti" ferromagnetic material, the magnetic moments of neighboring atoms (or ions) alternate, each one pointing opposite to those around it, so the material has no net magnetization. The spin is transported through this material in the form of a "magnon", which is what in physics is known as a quasiparticle -- a particle that doesn't actually exist, but is created by the collective actions of many other particles and the quantum fields that surround them, such that you can think of the quasiparticle as if it were a real particle. Here the quasiparticle is a "magnon" that carries magnetic spin from place to place, flowing like a wave of spin orientation (and remember, electrons are waves themselves) through the material.

I can't really describe to you the technique that does the actual switching, though, because my knowledge of the physics isn't good enough. It's a technique called spin-orbit torque switching. It involves something called spin-orbit coupling, which is the interaction between an electron's spin and its orbital motion around the nucleus, an interaction that shifts the electron's energy levels.
Thumbnail The coldest chemical reaction, colder than interstellar space, made it possible to see intermediaries in a chemical reaction that are normally impossible to see because the reaction is too fast. For reference, the vacuum of space has a temperature of around 2.7 kelvin (2.7 above absolute zero -- the temperature of the cosmic microwave background), and clouds of gas and dust within our galaxy are typically 10-20 kelvin (10-20 degrees C above absolute zero).

The researchers here got a reaction between two potassium rubidium molecules (KRb + KRb -> K2 + Rb2) to take place at a temperature of 500 nanokelvin (500 billionths of a degree above absolute zero), which drew the reaction out to microseconds -- normally it is femtoseconds, making the reaction roughly a billion times slower. The cooling was done by using laser interference to take motion out of the molecules. With the rotation and vibration of the reactants "frozen out" and the number of energetically allowed exits for the products made more limited, the reaction becomes "trapped" in its intermediate stage, which is a single, big, combined molecule (K2Rb2) that can be directly observed using velocity map imaging (which I won't try to explain here -- it involves something called an electrostatic lens, which works on a principle similar to old CRT TVs).
Thumbnail "An artificial intelligence predicts the future." GPT-2 interviewed by journalists. "Q: Which technologies are worth watching in 2020? A: I would say it is hard to narrow down the list. The world is full of disruptive technologies with real and potentially huge global impacts. The most important is artificial intelligence, which is becoming exponentially more powerful."
Thumbnail "Machines make all kinds of difficult decisions for us. Recommendation engines find us the cheapest flights, the best car insurance, the optimum mobile phone package, serve us advertisements for things we didn't know we wanted, find us books to read, movies to watch, suggest gift ideas, and curate playlists of our favourite artists. We even let machines shortlist our romantic prospects. So can artificial intelligence find our perfect match when it comes to political candidates?"

"Step in Doru Frantescu, director of Vote Watch Europe. This year, hundreds of thousands of EU citizens used a tool the think tank produced to match voters with their most suitable candidate in the European Parliament elections. To do this, Frantescu's team put together a suite of 25 questions drawn from real-life decisions made by the EU parliament."

"For US citizens, a startup provides a similar tool, covering local, national, and the 2020 Presidential elections."

"But Beth Singler, an AI researcher at the University of Cambridge, warns against investing too heavily in their predictive power. 'If you have someone running in a presidential election, the candidate may never have been president before, so you can't say they'll act in the way that they, or their data, predicts,' she says. 'You can't say that people would definitely act in a certain way -- people are messy.'"
Thumbnail Ocado to build robot warehouses in Japan. Wait, does that mean the British just out-roboted the Japanese?
Thumbnail "After breakfast one morning in August, the mathematician Terence Tao opened an email from three physicists he didn't know. The trio explained that they'd stumbled across a simple formula that, if true, established an unexpected relationship between some of the most basic and important objects in linear algebra."

"The formula 'looked too good to be true,' says Tao, who is a professor at UCLA, a Fields medalist, and one of the world's leading mathematicians. 'Something this short and simple -- it should have been in textbooks already,' he said. 'So my first thought was, no, this can't be true.'"

Physicists working with neutrinos had "noticed that hard-to-compute terms called 'eigenvectors,' describing, in this case, the ways that neutrinos propagate through matter, were equal to combinations of terms called 'eigenvalues,' which are far easier to compute."
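The formula, now known as the eigenvector-eigenvalue identity, can be checked numerically. Here's a minimal sketch using NumPy -- the matrix, its size, and the indices checked are arbitrary choices for illustration:

```python
import numpy as np

# The identity: for a Hermitian matrix A with eigenvalues lam and unit
# eigenvectors v, and M_j the minor of A with row and column j deleted,
#   |v_{i,j}|^2 * prod_{k != i} (lam_i - lam_k) = prod_k (lam_i - mu_k)
# where mu are the eigenvalues of M_j.

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))
A = (X + X.T) / 2                       # random real symmetric matrix

lam, V = np.linalg.eigh(A)              # eigenvalues and eigenvectors of A

i, j = 1, 2                             # check one eigenvector component
M_j = np.delete(np.delete(A, j, axis=0), j, axis=1)
mu = np.linalg.eigvalsh(M_j)            # eigenvalues of the minor

lhs = abs(V[j, i])**2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
rhs = np.prod(lam[i] - mu)
print(np.isclose(lhs, rhs))             # the squared eigenvector component
                                        # is recovered from eigenvalues alone
```

Note that the identity only gives the squared magnitudes of eigenvector components, not their signs or phases -- which is still enough for many applications, like the neutrino oscillation probabilities that started this.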
Thumbnail Is music really a "universal language"? "Not only is music universal (in the sense of existing in all sampled cultures) but also that similar songs are used in similar contexts around the world."

"Some of these regularities are unsurprising (for example, that dance songs are faster and more rhythmic than lullabies), and some are more intriguing (for example, that ritual healing songs are less melodically variable than dance songs). These broad, universal acoustic patterns are easily identified by naïve Western listeners, who successfully categorized the song type of sound recordings. The listeners' familiarity with world music played a minor and dispensable role in their correct classification. Furthermore, on the basis of ethnographic records, acoustically similar song types occur in certain shared contexts, and not others, across the world."

"Crucially, variability of song context within cultures is much greater than that between cultures, indicating that despite the diversity of music, humans use similar music in similar ways around the world. Additionally, the authors found that the principle of tonality (building melodies from a small set of related notes, built upon a base tonic or 'home' pitch) exists in all cultures. This suggests the existence of a universal cognitive bias to generate melodies based on categorical building blocks."
Thumbnail 10 bizarre deep space astronomical objects. Moon moons, rogue planets, the anti-matter fountain at the center of the galaxy, the double quasar QSO 0957+561, the Omega Centauri star cluster, Hoag's Object, SN2006gy, Thorne-Zytkow Objects, Stephan's Quintet, and diamond planets.
Thumbnail A company called Epilog claims they will market a $999 kit in mid 2020 to turn recent Toyota, Honda, Subaru, Chevrolet, Ford, Jeep, and Nissan models into autonomous vehicles. Here you can see it driving around an uneventful suburb of San Jose, California.
Thumbnail Lee Sedol, the South Korean Go champion who was defeated by AlphaGo in 2016, has retired from Go, saying, "I'm not at the top even if I become the number one. There is an entity that cannot be defeated."
Thumbnail 5 ways Antarctica is the place to study space. The IceCube Neutrino Observatory, the BICEP microwave observatory, the South Pole Telescope which gets 6 months of dark sky, NASA cosmic ray observatory balloons and magnetic field balloons, and the Antarctic Search for Meteorites.
Thumbnail Deep Learning and Artificial Intelligence in Health Care site from JAMA Network (Journal of the American Medical Association). Collection of research articles, reviews of research, opinion pieces, and other learning resources, and applications of machine learning in medicine.
Thumbnail Life expectancy in the United States has gone down every year since 2014. This study looks at people in the prime working years of 25 to 64 years old. The decrease in life expectancy in this age group is primarily due to drug overdose and suicide, and is concentrated in economically distressed areas such as "rust belt" states.

There is one graph on the webpage and more in the paper which all show 2010 as the turning point, where the downward trend in mortality reversed and mortality started increasing. In the graphs for "select causes", where they break out mortality rates by cause, you can see that in the 25-34 age group, 35-44 age group, and 45-54 age group, drug overdose deaths started increasing earlier, in the 2000s, but started to increase sharply around 2010.
Thumbnail L5p neurons link states of consciousness to contents of consciousness. Examples of states of consciousness are being awake, dreaming, being in deep sleep, or being anesthetized, while examples of contents of consciousness are a dog, a paper, the taste of coconut, and an itch. Research on states of consciousness has focused on the thalamus and thalamocortical interactions while research into the contents of consciousness has focused on cortical processing, which is to say, processing that takes place in the cerebral cortex. Cortical layer 5 pyramidal (L5p) neurons are part of both thalamo-cortical and cortico-cortical loops, and link the state of consciousness to the contents of consciousness.

The layer 5 pyramidal cells have two distinct types of dendrites (the extensions of the neuron that receive input), dendrites at the base called basal dendrites and dendrites at the apex called apical dendrites. The basal dendrites receive input from the lower cortical areas and the apical dendrites receive input from higher cortical areas and what are called non-specific pathway thalamic nuclei. Thalamic cells are considered to be part of either specific pathways (such as the lateral geniculate nucleus pathway for vision information) or non-specific pathways, and it is the non-specific pathways that decide the state of consciousness. The non-specific pathways have diffuse projection patterns across the entire cortex, so firing of certain cortical columns could reach the whole thalamo-cortical system and change the state of consciousness. The non-specific pathways connect to the secondary somatosensory cortex (brain area involved in feeling tactile shapes and textures) and primary motor cortex (involved in planning and executing movements).

This was figured out by doing fiberoptic calcium ion imaging of the apical dendrites in mice and giving them air puffs. The behavior of the apical dendrites changes dramatically depending on the state of consciousness of the animal -- activity is typically 4 times higher in a quiet waking state than under anesthesia. However, it also depends on learning, as an animal that learned to move its legs in response to the air puffs would see 14 times higher activity. Furthermore, the researchers were able to use optogenetics to stimulate the apical dendrites of the layer 5 pyramidal neurons and get the animal to act; for example, they could get the animal to lick as if a whisker stimulus had been present.

The researchers think the "temporal resolution of conscious experience" is determined by the time it takes for information to propagate between layer 5 pyramidal neurons, non-specific thalamus pathways, and higher cortical areas. By "temporal resolution of conscious experience", they are talking about how an intermittent experience, such as a light switching on and off, will fuse together into a continuous experience if the frequency is increased high enough. The frequency needs to be increased high enough that the experience comes in too rapidly for the layer 5 pyramidal neurons, non-specific thalamus pathways, and higher cortical areas to register each event as a separate experience.

They also think it explains a psychological phenomenon called "backward masking". In "backward masking", two events happen in rapid succession, but a person is only aware of the second one -- the second one "backward masks" the first one. Their theory is that the first event activates the loop between the layer 5 pyramidal neurons, non-specific thalamus pathways, and higher cortical areas, but the second event comes in and takes over this already-activated loop.
Thumbnail Lattice light-sheet microscopy uses wave interference patterns of light to create a "fine sheet of light" at a specific depth; by changing the depth quickly, it avoids tissue damage and can make 3D movies of the cellular activity of living cells.
Thumbnail Google Stadia latency. The video is shown in slow motion to make the latency more visible, but it gives the impression the latency is larger than it really is, so you have to pay attention to the actual numbers, which give latency times in milliseconds. Also keep in mind they are on a $900/month fiber optic connection.
Thumbnail "A new class of brain cell that acts like the red pin on a Google map to tell you where you found things on past journeys" has been discovered.

"These neurons, dubbed memory-trace cells, are the place markers that record whether you had that mouth-watering gelati opposite the Trevi Fountain or just up the road from the Pantheon."

"On a more sombre note, they are clustered in a part of the brain that takes an early hit in the onset of Alzheimer's disease and may well explain the appalling degradation of memory seen in that illness."

The precise area is called the entorhinal cortex. The researchers tested it with 19 people with drug-resistant epilepsy who had electrodes inserted in their brains to map where seizures start and guide treatment. The participants played a video game in which they walked up to an object to remember where it was, and then later pressed a button when they were in the same place but the object wasn't there.
Thumbnail Laser-wielding robots strip paint off F-16s. The traditional way produces around a ton of hazardous waste per jet, debris that includes hexavalent chromium, which causes cancer. The new way, which uses two robots that use 6-kilowatt continuous-wave lasers, produces just 10 to 12 pounds of ash.
Thumbnail "The National Transportation Safety Board presented its findings today on the fatal crash involving an Uber test robocar and Elaine Herzberg."

"The NTSB's final determination of probable cause put primary blame on the safety driver's inattention. Contributory causes were Uber's lack of safety culture, poor monitoring of safety drivers, and lack of countermeasures for automation complacency. They put tertiary blame on the pedestrian's impaired crossing of the road, and the lack of good regulations at the Arizona and Federal levels."

"Most notably, they do not attribute the technology failures as causes of the crash. This is a correct cause ruling -- all tested vehicles, while generally better than Uber's, have flaws which would lead to a crash with a negligent safety driver, and to blame those flaws would be to blame the idea of testing this way at all."
Thumbnail Human tissues, complete with blood vessels, immune cells, and connective tissue, have been produced from stem cells. "In 2006, Japanese researchers came up with a new way of creating pluripotent stem cells through epigenetic reprogramming of connective tissue cells. Their discovery has yielded a highly valuable cell type which scientists can use to grow all cells of the human body in a Petri dish."

"When culturing these so-called 'induced pluripotent stem cells' (iPS cells) as three-dimensional cell aggregates, functional miniature versions of human organs, the so-called organoids, can be created by selectively adding growth factors. In the past years, this technique has been used to create cell culture models of the intestines, the lung, liver, kidneys and the brain, for example."

"Such organoid models are often surprisingly similar to real embryonic tissues. However, most remained incomplete because they lacked stromal cells and structures, the supportive framework of an organ composed of connective tissue." ("Stromal cells" just refers to the cells that connective tissue are made of.) "For instance, the tissues lacked blood vessels and immune cells. During embryonic development, all these cell types and structures are engaged in a process of constant exchange, they influence each other and thereby boost the development and maturation of the tissue and of the organ."

"We used a trick to achieve our goal. First we created so-called mesodermal progenitor cells from pluripotent stem cells." "Under the right conditions, such progenitor cells are capable of producing blood vessels, immune cells and connective tissue cells."
Thumbnail A camera that can make images by detecting single photons has been invented. Actually, such cameras existed already, but they only had 64 pixels; this one increases that to a whopping 1,024, making it "high-resolution" by single-photon standards. What enabled the system to go up to 1,024 was the invention of a system to multiplex rows and columns, without which all the wires and resistors for all the pixels would increase the heat load too much. And the reason the heat load matters is that you need superconductivity for the system to work -- it's made with superconducting nanowires -- and of course for superconductivity, the system needs to be cooled to cryogenic temperatures.
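A rough back-of-the-envelope sketch of why row-column multiplexing matters here (the wire counts are illustrative; the actual readout circuit is more involved):

```python
import math

# Reading out an N x N pixel array through shared row and column lines
# needs 2N wires instead of one wire per pixel -- and every wire into
# the cryostat adds heat load.
def readout_wires(n_pixels):
    n = math.isqrt(n_pixels)
    assert n * n == n_pixels, "assumes a square array"
    return {"per_pixel": n_pixels, "row_column": 2 * n}

print(readout_wires(64))    # the older 64-pixel cameras
print(readout_wires(1024))  # this camera: 64 lines instead of 1,024
```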

This new system is also better at detecting infrared wavelengths, which might make it useful in a space telescope someday.
Thumbnail A new framework for designing machine learning algorithms that make it easier for users of the algorithm to specify safety and fairness constraints has been introduced. "We call algorithms created with our new framework 'Seldonian' after Asimov's character Hari Seldon. If I use a Seldonian algorithm for diabetes treatment, I can specify that undesirable behavior means dangerously low blood sugar, or hypoglycemia. I can say to the machine, 'while you're trying to improve the controller in the insulin pump, don't make changes that would increase the frequency of hypoglycemia.' Most algorithms don't give you a way to put this type of constraint on behavior; it wasn't included in early designs."

Basically, what they've done is invent a mathematical framework in which, in addition to the objective function that specifies what a machine learning algorithm is supposed to optimize, you can specify a separate set of constraints describing behavior the algorithm is supposed to avoid, and both are incorporated into the training process.
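As a toy sketch of that idea -- emphatically not the authors' actual Seldonian algorithm, and with entirely invented data, objective, and safety threshold -- the shape of it looks something like this:

```python
# Toy sketch of a Seldonian-style workflow: optimize on training data,
# then only return a solution if a held-out safety test passes.
data = [((i + 0.5) / 200,) for i in range(200)]
train, safety = data[0::2], data[1::2]       # split off held-out safety data

def objective(theta, batch):                 # toy objective: fit the mean
    return -sum((x - theta) ** 2 for (x,) in batch) / len(batch)

def safety_test(theta, batch):
    # a real Seldonian safety test bounds the undesirable-behavior rate
    # with high confidence; this toy version just thresholds the objective
    return objective(theta, batch) > -0.1

candidates = [i / 20 for i in range(21)]
best = max(candidates, key=lambda t: objective(t, train))
# deploy the candidate only if the safety test passes; otherwise the
# algorithm returns "no solution found" rather than an unsafe one
result = best if safety_test(best, safety) else None
print(result)  # → 0.5
```

The key design point is the last line: unlike most algorithms, the framework is allowed to return nothing at all rather than a solution that violates the user's constraints.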
Thumbnail Discussion on algorithmic fairness, differential privacy, and the applicability of mathematical game theory to situations involving real human beings where both cooperation and competition are in play.
Thumbnail Deep Learning with PyTorch book. They say it's available for free for a limited time. I was able to get the free PDF download. Now I just need the time to read it (141 pages).
Thumbnail Google's Play Store recommendation system on Android now uses a deep learning system developed by DeepMind. "Today, Google Play's recommendation system contains three main models: a candidate generator, a reranker, and a model to optimise for multiple objectives. The candidate generator is a deep retrieval model that can analyse more than a million apps and retrieve the most suitable ones. For each app, a reranker, i.e. a user preference model, predicts the user's preferences along multiple dimensions. Next these predictions are the input to a multi-objective optimisation model whose solution gives the most suitable candidates to the user."
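A hedged sketch of that three-stage shape in code -- the scoring functions, preference dimensions, and weights here are all invented for illustration, not DeepMind's actual models:

```python
# Three-stage recommendation pipeline: candidate generator -> reranker ->
# multi-objective optimizer. All scores and weights are made up.
apps = [{"id": i, "relevance": (i * 37) % 100 / 100} for i in range(1000)]

def candidate_generator(apps, query, k=50):
    # stage 1: retrieve a manageable candidate set from the full catalog
    return sorted(apps, key=lambda a: -a["relevance"])[:k]

def reranker(candidates):
    # stage 2: predict the user's preferences along multiple dimensions
    return [{**a, "prefs": {"usefulness": a["relevance"],
                            "novelty": 1 - a["relevance"]}}
            for a in candidates]

def multi_objective(scored, weights):
    # stage 3: combine the per-dimension predictions into one ranking
    return sorted(scored,
                  key=lambda a: -sum(weights[d] * v
                                     for d, v in a["prefs"].items()))

ranked = multi_objective(reranker(candidate_generator(apps, query=None)),
                         weights={"usefulness": 0.7, "novelty": 0.3})
print([a["id"] for a in ranked[:3]])
```

The funnel structure is the point: a cheap retrieval model narrows a million apps down to a short list, and only that short list gets the expensive per-user scoring.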
Thumbnail "AlphaZero had the advantage of knowing the rules of games it was tasked with playing. In pursuit of a performant machine learning model capable of teaching itself the rules, a team at DeepMind devised MuZero, which combines a tree-based search with a learned model. MuZero predicts the quantities most relevant to game planning, such that it achieves industry-leading performance on 57 different Atari games and matches the performance of AlphaZero in Go, chess, and shogi."

"The researchers say MuZero paves the way for learning methods in a host of real-world domains, particularly those lacking a simulator that communicates rules or environment dynamics."

"With respect to Go, MuZero slightly exceeded the performance of AlphaZero despite using less overall computation, which the researchers say is evidence it might have gained a deeper understanding of its position. As for Atari, MuZero achieved a new state of the art for both mean and median normalized score across the 57 games, outperforming the previous state-of-the-art method (R2D2) in 42 out of 57 games and outperforming the previous best model-based approach in all games."
Thumbnail Short video on the history of the original perceptron, made by Frank Rosenblatt in 1958.
Thumbnail "Introducing the everyday robot project." "Over the last few months we've been running an experiment at our offices that puts the robots to work on a task that has just the right amount of complexity: sufficiently hard that we honestly weren't sure whether it could be done, but not so hard that it would take a year to get a clear 'it's working' or 'it's impossible' signal. We also wanted to do something clearly useful. So we decided to teach robots how to sort waste -- dividing cups, bottles, snack wrappers, and more across landfill, recycling, and compost bins."

"For our robots to learn how to do these tasks, we're using a variety of machine learning techniques. These include simulation, reinforcement learning, and collaborative learning. Each night, tens of thousands of virtual robots practice sorting the waste in a virtual office in our cloud simulator; we then move the training to real robots to refine their sorting ability."
Thumbnail This 'robot lawyer' can take the mystery out of license agreements. "When I fed in Facebook's terms of service, for example, it alerted me to a total of six issues. These ranged from the mundane (Facebook may change its terms of service at any time) to a reminder that Facebook may store and process your data anywhere in the world, meaning it might be subject to different data protection laws. When scanning license agreements from Google, Do Not Sign told me the company reserves the right to stop providing its services at any time and that its services are used at the users' sole risk."
Thumbnail "Researchers at Carnegie Mellon University have developed a system that can accurately locate a shooter based on video recordings from as few as three smartphones." The way the system works is by first placing all the videos on a global timeline, synchronized using their audio. Then it uses the difference between the speed of light and the speed of sound, combined with differences in the muzzle blast sound across the different cameras, in a physics model to calculate the location of the gunshots and the camera locations. The system, called Video Event Reconstruction and Analysis (VERA), has a website where anyone can upload a video set.
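A much-simplified illustration of the underlying physics (not CMU's actual pipeline, which also recovers the camera positions): with known camera positions, the flash-to-bang delay at each camera pins down the shooter's location:

```python
import itertools

# Light from the muzzle flash arrives effectively instantly; the bang
# travels at ~343 m/s, so each camera's flash-to-bang delay gives its
# distance to the shooter, and three distances fix a 2D position.
SPEED_OF_SOUND = 343.0  # m/s

cams = [(0.0, 0.0), (100.0, 0.0), (0.0, 80.0)]  # known positions, meters
true_shooter = (40.0, 30.0)

def dist(a, b):
    return ((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5

# observed flash-to-bang delay at each camera
delays = [dist(c, true_shooter) / SPEED_OF_SOUND for c in cams]

# brute-force grid search for the point whose predicted delays best match
best = min(itertools.product(range(0, 101), range(0, 101)),
           key=lambda p: sum((dist(c, p) / SPEED_OF_SOUND - d)**2
                             for c, d in zip(cams, delays)))
print(best)  # → (40, 30)
```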
Thumbnail A deep learning algorithm for quantum chemistry. Researchers "have developed a deep machine learning algorithm that can predict the quantum states of molecules, so-called wave functions, which determine all properties of molecules."

"The AI achieves this by learning to solve fundamental equations of quantum mechanics."

"Solving these equations in the conventional way requires massive high-performance computing resources (months of computing time) which is typically the bottleneck to the computational design of new purpose-built molecules for medical and industrial applications. The newly developed AI algorithm can supply accurate predictions within seconds on a laptop or mobile phone."

Before, what was done was to use reference calculations, computed from quantum physics, as training data for machine learning algorithms that output chemical properties. A separate machine learning system has to be trained for each chemical property you are interested in. What's different here is that the machine learning algorithm is trained to predict the wavefunction itself. All the chemical properties can be calculated from the wavefunction.

The neural network architecture consists of three steps: initial representations of atom types and positions, representations of chemical environments of atoms and atom pairs, and the energy and Hamiltonian matrix predictions. (In quantum physics, the Hamiltonian is an operator representing kinetic and potential energies for all the particles in the system. The Hamiltonian matrix referred to here is an approximation of that.) The path through the network to the energy prediction is rotationally invariant, while the path to the Hamiltonian matrix predicts the angular momentum of predicted orbitals. The model uses about 93 million parameters to predict a Hamiltonian matrix with more than 100 atomic orbitals.
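To make the last step concrete: once a Hamiltonian matrix is in hand, orbital energies and wavefunction coefficients follow from a standard diagonalization. The 3x3 matrix below is made up purely for illustration (the paper's matrices have 100+ orbitals):

```python
import numpy as np

# Made-up 3x3 Hamiltonian matrix in some orbital basis (illustrative only)
H = np.array([[-1.0,  0.2,  0.0],
              [ 0.2, -0.5,  0.1],
              [ 0.0,  0.1,  0.3]])

# Diagonalizing H yields orbital energies (eigenvalues) and the orbital
# coefficients of the wavefunction (eigenvectors); chemical properties
# are then computed from these.
energies, orbitals = np.linalg.eigh(H)
print(energies)  # sorted ascending; the lowest is the most stable orbital
```

This is why predicting the Hamiltonian matrix is more powerful than predicting individual properties: one predicted matrix yields all of them.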
Thumbnail Steep declines in insect populations have happened over the last two decades in places as geographically distinct as the US, Germany, the Netherlands, Sweden, the British Isles, Puerto Rico, Costa Rica, and so on. You've probably heard explanations like pesticides and habitat loss, but one explanation you might not have heard is light pollution. "Modern light pollution is no longer confined to urban centers, but radiates outwards through the atmosphere and along road networks that run into or around otherwise pristine areas. Since 1992, levels of light pollution have doubled in high biodiversity areas, and are likely to continue to rise. By 2014, over 23% of the land surface of the planet experienced artificially elevated levels of night sky brightness; by comparison, agricultural crops cover approximately 12%."

Ways light pollution can interfere with insects: some insects have a "fatal attraction" to light, while others seek to avoid it. Light pollution can amplify polarized light that fools aquatic insects into laying eggs on non-aquatic surfaces. Light pollution can obscure natural nocturnal light sources such as the astronomical cues some insects use to navigate. Light pollution can interfere with the bioluminescent signals some insects use to reproduce. Light pollution can alter the circadian cycle of rest and activity, affecting foraging and pollinating activity.

The researchers think insects have poor evolutionary adaptation to light pollution because most other disturbances are similar to situations that have happened in the evolutionary past, but light pollution has never happened before in the planet's existence. The climate has changed before, habitats have fragmented before, invasive species have arrived before, plants have invented new defenses similar to pesticides, and so on. Yet in the billions of years the planet has existed, the daily night/day cycle and the lunar cycle of light and dark have been the same, until now, and insects have not evolved any adaptation to the change.
Thumbnail The National Popular Vote Interstate Compact plan to obsolete the Electoral College.
Thumbnail "DeepFovea is a new AI-powered foveated rendering system for augmented and virtual reality displays. It renders images using an order of magnitude fewer pixels than previous systems, producing a full-quality experience that is realistic and gaze-contingent."

"This is the first practical generative adversarial network (GAN) that is able to generate a natural-looking video sequence conditioned on a very sparse input. In our tests, DeepFovea can decrease the amount of compute resources needed for rendering by as much as 10-14x while any image differences remain imperceptible to the human eye."

"When the human eye looks directly at an object, it sees it in great detail. Peripheral vision, on the other hand, is much lower quality, but because the brain infers the missing information, humans don't notice. DeepFovea uses recent advances in generative adversarial networks (GANs) that can similarly 'in-hallucinate' missing peripheral details by generating content that is perceptually consistent."
Thumbnail Nvidia has released a PyTorch library to accelerate 3D deep learning research. It has common 3D data manipulation functions for meshes, point clouds, signed distance functions, voxel grids, and the like, so as to take those burdens off you. In addition it has functions for rendering, lighting, shading, view warping, and the like, that are fully differentiable and ready for deep learning systems. It has visualization functionality to render 3D results. It also includes many state-of-the-art 3D deep learning architectures.
Thumbnail LISA Pathfinder, a spacecraft designed to test the feasibility of detecting gravitational waves in space (LISA stands for Laser Interferometer Space Antenna), was so sensitive it could detect "micrometeorites", particles only micrometers in size, i.e. dust, hitting the spacecraft. Those dust particles are believed to have come from distant comets as they moved through the solar system.
Thumbnail "I started working on my own video game '1982'. The setting was a personal one for me: The Lebanese Civil War. I became obsessed with understanding the bloody events my parents had to live through and was reading all the military and historical journals I could find. I wanted to turn the different tactics that the warlords and politicians used and turn them into a Civil War Simulator."

"I thought I'd program some simple AI to play against. But, every time I would make a design change, I would need to go update my AI and over time this was making me furious."

"So the idea of Yuri was to hook a Reinforcement Learning engine to a new game and then train the agent automatically whenever a design change was made."

"Yes, Reinforcement Learning is expensive but it'll save you a lot of time."
Thumbnail "Early scientific experimentation on animal learning was one of the inspirations that prompted later researchers to explore the use of reinforcement learning in artificial intelligence. The mechanism behind reinforcement learning is simple: allowing an agent to freely interact with an environment, and assigning reward functions if the agent succeeds in a task and negative rewards for failed attempts. What does reinforcement learning have to offer AI? Doina Precup, Research Team Lead at DeepMind, provided four answers: Growing knowledge and abilities in an environment, learning efficiently from one stream of data, reasoning at multiple levels of abstraction, and adapting quickly to new situations."
Thumbnail A new way of measuring the expansion rate of the universe has been invented. The expansion rate is called the Hubble constant, named for Edwin Hubble, who did observations of Cepheid variable stars in other galaxies from Mount Wilson Observatory, the world's most powerful telescope at the time, and discovered galaxies were receding, and the further away they were, the faster they were receding. Since then another method has been invented, called the baryon acoustic oscillations method, in which fluctuations in the density of normal visible matter (baryonic matter), imprinted by acoustic density waves in the primordial plasma of the early universe, are used as a reference. The problem is that these two methods of measurement don't agree.

In the new method, interactions between gamma rays and the extragalactic background light are used. The extragalactic background light is a diffuse radiation field that fills the universe from ultraviolet through infrared wavelengths, and is mainly produced by star formation over cosmic history. Gamma-ray and extragalactic background light photons annihilate and produce electron-positron pairs. This interaction process generates an attenuation in the spectra of gamma-ray sources above a critical energy. How exactly this is used to calculate the Hubble constant, I don't understand (it involves a lot of math). But the result they get is 67.5 kilometers per second per megaparsec, which is closer to the baryon acoustic oscillations method than the Cepheid variable stars method. That number means that for every megaparsec (about 3.2 million light years) you go from here, that galaxy will be moving away from us at an additional 67.5 km/s. (Which is 243,000 kilometers per hour, or 151,000 miles per hour).
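For concreteness, the unit arithmetic behind those quoted numbers (just the conversions, not the gamma-ray attenuation analysis itself) can be sketched in Python:

```python
# Recession velocity at 1 megaparsec for the reported Hubble constant.
# (Only the unit conversions from the paragraph above; the gamma-ray
# analysis that produces H0 is far more involved.)
H0 = 67.5                  # km/s per megaparsec, the reported value

v_km_s = H0 * 1            # velocity of a galaxy 1 megaparsec away
v_km_h = v_km_s * 3600     # km/s -> km/h
v_mph = v_km_h / 1.609344  # km/h -> miles/h (1 mile = 1.609344 km)

print(v_km_h)              # 243000.0 km/h
print(round(v_mph))        # roughly 151,000 mph
```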
Thumbnail The golden age of the internet is over, according to gamer Glink.
Thumbnail Robots reading Vogue. Data mining in fashion. "Few magazines can boast being continuously published for over a century" (1892 to today), "familiar and interesting to almost everyone, full of iconic pictures -- and also completely digitized and marked up as both text and images. What can you do with over 2,700 covers, 400,000 pages, 6 TB of data?"

The analyses include: histograms of color patterns, cover averages, n-gram search on words and phrases in ads and articles, topic modeling organized by word co-occurrence, advertisements by frequency, date, and industry, statistics on circulation, ratio of articles to advertisements, price per issue, and number of pages per year, colorimetric (hue, saturation and lightness) analysis, fabric analysis using word embedding models (word2vec), and algorithmically-generated memos in the style of Vogue editor-in-chief Diana Vreeland.
Thumbnail In the latest Scimago Institutions Rankings for artificial intelligence, "the UK is leading AI developments in Europe while Iran is leading in the Middle-East."

The SCImago ranking is supposed to be based on a technique called eigenvector centrality measurement that accounts for the number of scientific papers, the number of citations, and the "importance" of the citations. But the chart in the article just goes by counting papers, putting China on top. You can see an H index column indicating the US still has the most impact. The H index is still basically counting citations; it doesn't take into account the "importance" of the citations the way the SCImago ranking is supposed to.
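A tiny power-iteration sketch of the eigenvector-centrality idea (hypothetical citation counts, not SCImago's actual algorithm):

```python
# Eigenvector-centrality-style ranking by power iteration: a citation from
# a highly-ranked source is worth more than one from a lowly-ranked source.
# (Hypothetical citation counts; SCImago's real SJR computation differs.)
cites = {                  # cites[src][dst] = citations from src to dst
    "X": {"Y": 3, "Z": 1},
    "Y": {"X": 1},
    "Z": {"X": 2, "Y": 2},
}
score = {n: 1.0 for n in cites}
for _ in range(100):       # power iteration toward the stationary scores
    new = {n: 0.0 for n in cites}
    for src, outs in cites.items():
        total = sum(outs.values())
        for dst, c in outs.items():
            new[dst] += score[src] * c / total  # pass prestige along citations
    norm = sum(new.values())
    score = {n: v / norm for n, v in new.items()}
print(score)  # X ranks first: it is cited by everyone, including well-ranked Y
```

Counting papers or raw citations would miss exactly this weighting, which is the point of the criticism above.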
Thumbnail The average Facebook user can't differentiate between misinformation and real news. The question is, am I above average? Are you above average? We'd have to be a lot above average because only 17% of Facebook users could do better than chance when tested.

Facebook's handling of vaccine-related news and ads has come under scrutiny. Seems to me vaccines are an interesting focal point for the whole "fake news" issue since actual children's lives are at stake.
Thumbnail Boeing 737-MAX Congressional testimony. This is over 8 hours long (spanning two days) and no, I don't expect you all to watch all of it (there won't be a test). I watched it because the math educator Matt Parker, who is also the author of a book called "Humble Pi: When Math Goes Wrong in the Real World", says that the aviation industry is an industry that has a "blame the process" culture rather than a "blame the person" culture. The medical industry (and he is talking about the UK medical industry here) has a "blame the person" culture -- if somebody dies because of a medical accident -- surgery gone wrong, equipment used incorrectly, too much medication given, etc -- the person responsible is identified, fired, and possibly sued or subject to criminal charges. So my interest in watching the Congressional testimony related to the Boeing 737-MAX accidents was to see if this was true and if so, what it looks like.

I would say it does seem true, more or less. It is true that Boeing identified and fired an employee (Mark Forkner, their top test pilot for the Boeing 737-MAX), however, no one is acting like he caused the problem or that firing him fixed it. What got him fired was an email where he used the phrase "Jedi mind-tricking". What he was referring to was "Jedi mind-tricking" the FAA into accepting certain modifications to the training procedures and manuals for Boeing 737-MAX pilots. It's not clear whether he was fired for embarrassing Boeing or for having a cavalier attitude towards safety or for lying to the FAA (though no specific evidence of lying came out in the Congressional testimony other than this phrase in one email). The fired employee is being subjected to a separate lawsuit and advised by his lawyers to not say anything, so we have no information from the fired employee himself.

Regarding the aforementioned training procedures and manuals, the Boeing executives were repeatedly hammered with questions about why the Maneuvering Characteristics Augmentation System (MCAS) system was omitted from the training and manuals. They repeatedly replied that more information is not necessarily better. As a software developer myself, I could sympathize. It is trivially easy with modern computers to overwhelm users with more information than they can process. The fact that Boeing was trying to minimize the amount of information that the pilots had to process didn't seem nefarious to me. But the Congresspeople with the benefit of 20/20 hindsight felt otherwise.

Getting back to the "blame the process" culture question, in the video the Boeing executives outline many changes to their processes they are making as a result of the accidents. They changed their safety review board structure, and the safety review boards now make weekly safety reports directly to the CEO (Dennis Muilenburg, who was part of the testimony). They created a whole new safety organization, headed by Elizabeth Pasztor, newly appointed Vice President of Safety, Security and Compliance for Boeing Commercial Airplanes, reporting to the Chief Engineer of Boeing's Commercial Airplanes division, John Hamilton (who was also part of the testimony). The board created a new aerospace safety committee, chaired by retired Admiral and former Vice Chairman of the Joint Chiefs of Staff Edmund Giambastiani Jr, along with retired Admiral John M. Richardson, said to have a "deep background in safety". They realigned the entire engineering organization of approximately 50,000 engineers to report directly to the Chief Engineer (John Hamilton). They created anonymous hotlines through which any employee anywhere in the company can report a safety issue. And they started an investigation into reevaluating "human factors" assumptions. (The "human factors" part is related to the fact that Boeing assumed the pilots would respond to an alarm in under 10 seconds, with a typical time being 4 seconds, which might have been true if the MCAS alarm had been the only alarm the pilots had to deal with, but data from the crashes show multiple alarms went off at once and the pilots were confused and did not react the way Boeing engineers assumed.) In addition to all of that on the Boeing side, on the regulatory side, the FAA appointed a Joint Authorities Technical Review (JATR), which issued an additional set of recommendations for changes to the FAA's processes.

So it does appear that the aviation industry has a "blame the process" culture. The same couldn't be said of the Congresspeople who repeatedly called for the CEO's resignation, thinking the solution to the problem was to blame a person (the CEO) and get rid of him. The CEO repeatedly told them that he felt leaving his position was "running away from the problem" and that he intended to figure out the underlying cause of the problem, fix it, and "see it through".

It got me thinking, the question essentially boils down to, if someone did 99 things right but made one mistake, is the solution to get rid of that person and replace them with someone who won't make that mistake, but possibly will make a different one, or is the solution for that person to learn from the mistake? If a person is motivated to learn from the mistake, and if they're supported by others making an effort to help them learn from the mistake, maybe the best thing to do is to actually keep that person in place. It does appear that the CEO and the company prioritized rushing planes to market and making short-term profits over long-term safety, but it seems unlikely after this, which cost the company over $9 billion in cancelled plane sales and money they have to repay the airlines for the cost of their planes being grounded, that they will do that going forward. It looks like they will be highly motivated to prioritize safety. It seems to me like as paradoxical as it may seem for people immersed in "blame the person" culture, the way to minimize loss of life in aviation accidents going forward may be to keep the CEO and everyone else at Boeing in place.

As a final thought, I don't really have an explanation for why the aviation industry would have a "blame the process" culture while the medical industry would have a "blame the person" culture. One cannot attribute this simply to the fact that lives are at stake, because lives are at stake in the medical industry also. I noticed that under the YouTube videos were people calling for prison time, and it got me thinking that "blame the person" culture is probably simply the human species' default. That raises the question of how the aviation industry managed to carve out an exception? I have no explanation and would appreciate any thoughts on the question.
Thumbnail "As for what I am going to be doing with the rest of my time: When I think back over everything I have done across games, aerospace, and VR, I have always felt that I had at least a vague 'line of sight' to the solutions, even if they were unconventional or unproven. I have sometimes wondered how I would fare with a problem where the solution really isn't in sight. I decided that I should give it a try before I get too old."

"I'm going to work on artificial general intelligence (AGI)."
Thumbnail Making a camera that can see wi-fi. Point it at a building and the bright spots in the image show you where the wi-fi routers are.
Thumbnail The interstellar magnetic field just outside the heliosphere is aligned with the magnetic field inside our solar system. The interstellar magnetic field is actually stronger than the magnetic field in the solar system. The end of the heliosphere is non-uniform. If it's wavy, it could be due to solar cycles. Radiation from cosmic rays is much greater in interstellar space than it is in the solar system. And other discoveries from Voyager 2, which is still working (!) after 42 years.
Thumbnail System for handing over objects to a robot. The researchers found if the robot delayed too long it would be perceived as "not warm" but no delay at all was perceived as "discomforting", so they put in a short delay to make the handoff comfortable for humans.
Thumbnail Robot with artificial skin which is a giant hexagonal grid of sensors that can sense proximity, pressure, temperature, and acceleration. The cells have LED lights so you can see it when they're touched. It's another of these designs that doesn't send sensory signals from cells that aren't sensing anything.
Thumbnail "The idea is to trigger the censorship in China and let the authority delete or block the content you want it to block. Lots of people already knew this trick and used it nicely. For example, a mainlander stole the T-shirt design from a Taiwanese and sold the T-shirt on Taobao. The designer complained many times but the seller refused to withdraw the T-shirt. Then the designer claimed that he is for 'Taiwan Independence' (台独). After that the seller withdraw the T-shirt immediately because he's frightened of having anything to do with 'Taiwan Independence'."

"Some Japanese also know that trick. To prevent the mainlanders pirating their works, they just write the 'Tiananmen Square' and 'Xi Jinping' on some pages. Then the regime will 'help' them to protect the intellectual property."
Thumbnail Giant compendium of free online courses, many from top schools (Stanford, Yale, MIT, Harvard, Berkeley, Oxford, etc).
Thumbnail Intergroup contact has long been considered an effective strategy to reduce prejudice between groups. But when you bring people from different groups together online, that doesn't happen. At least, in this study, when they studied people from different groups interacting online, they actually became more polarized. The groups in question weren't Democrats and Republicans or Christians and atheists -- they were Denver Nuggets fans and Portland Trailblazers fans, and other NBA fans, since the researchers got all the data from the subreddit r/nba. When speaking to other members of their own group, people would use 4-letter words like "help" and "thank". (Wait, that last one was 5 letters.) When speaking to the other group, they'd use 4-letter words like "refs".
Thumbnail RLCard is a toolkit for reinforcement learning in card games, developed by Texas A&M University. "Card games are ideal testbeds with several desirable properties. First, card games are played by multiple agents who must learn to compete or collaborate with each other. For example, in Dou Dizhu, peasants need to work together to fight against the landlord in order to win the game. Second, card games have huge state space. For instance, the number of states in UNO can reach 10^163. The cards of each player are hidden from the other players. A player not only needs to consider her own hand, but also has to reason about the other players’ cards from the signals of their actions. Third, card games may have large action space. For example, the possible number of actions in Dou Dizhu can reach 10^4 with an explosion of card combinations. Last, card games may suffer from sparse reward. For example, in Mahjong, winning hands are scarce. We observe one winning hand every five hundred games if playing randomly. Moreover, card games are easy to understand with huge popularity. Games such as Texas Hold’em, UNO and Dou Dizhu are played by hundreds of millions of people. We usually do not need to spend efforts on learning the rules before we can dive into algorithm development."

Currently has Blackjack, Leduc Hold'em, Limit Texas Hold'em, No-limit Texas Hold'em, Dou Dizhu, Mahjong, and UNO.
Thumbnail Introduction to adversarial machine learning. The author, Arunava Chakraborty, wrote a library of PyTorch code for making adversarial machine learning easy. This tutorial walks you through black box and white box targeted and untargeted attacks. Black box means the AI model you are fooling is a "black box" that you don't look inside -- you figure out how to fool it entirely from the outside. White box means you can see inside the model and know exactly how its parameters are tuned, and use that knowledge to construct your attack. Untargeted means you only care that it gets it wrong, but you don't care how it gets it wrong -- you just want to make sure the neural network mistakes the stop sign for anything that's not a stop sign. Targeted means you want to fool it in a way you've decided in advance. You've decided you want to fool the neural network into thinking the stop sign is a speed limit sign.
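As a toy illustration of the white-box untargeted case, here is a hand-rolled fast-gradient-sign-style step on a made-up linear scorer (my illustration, not code from the tutorial's library):

```python
# Toy white-box untargeted attack in the FGSM style on a made-up linear
# "classifier" (illustration only -- the tutorial attacks real PyTorch
# models, and DeepFool uses a different, minimal-perturbation algorithm).
w = [0.8, -0.5, 0.3]    # fixed weights of a tiny linear scorer
x = [1.0, -2.0, 1.0]    # the "image", as a 3-pixel vector
eps = 0.1               # perturbation budget per pixel

score = sum(wi * xi for wi, xi in zip(w, x))  # > 0 means "stop sign"

# White-box: we can see the weights, and the gradient of the score with
# respect to x is just w. Step each pixel by -eps * sign(gradient) to
# push the score down -- untargeted, we only want the answer to change.
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))
print(score, adv_score)  # the adversarial score is strictly lower
```

A black-box attack has to estimate that gradient direction from queries alone, which is why white-box attacks are so much cheaper.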

The tutorial concludes with DeepFool, an untargeted white box attack system that figures out the absolute minimum modifications necessary to an image to fool the neural network. The original and modified photos literally look identical to the human eye.

This shows that neural networks do not "see" what is in images anything like the way human brains do.
Thumbnail Robot "mini cheetahs" with a soccer ball doing somersaults.
Thumbnail Neural networks at Tesla. Very vertically integrated: they build their own cars, arrange the sensors around the vehicle, collect all of the data, label all the data, train the networks on on-premises GPU clusters, and run them on custom hardware when deployed to the fleet. The images from 8 cameras are processed by convolutional networks. They call their networks "hydranets" because they have a "shared backbone" but multiple "heads". These feed into a recurrent neural network that produces a "top down view", along with parallel networks for other tasks. For training, they train 48 neural networks that output about 1,000 predictions (output tensors) and take 70,000 GPU hours to train. None of them can regress -- ever. They have an automated workflow system that automates everything.
Thumbnail AI clones your voice after listening for just 5 seconds. It's an improvement on DeepMind's Tacotron and WaveNet techniques.
Thumbnail People were put in a brain imaging machine and asked to use chopsticks. When they used their dominant hand, one hemisphere of the brain became active but when they used their non-dominant hand, both hemispheres became active.
Thumbnail The politics of AI on PBS FRONTLINE. This program is only about the politics of AI, as its subtitle suggests, no technical details. How AI will deepen inequality, challenge democracy, and divide the world into two AI superpowers, the US vs China, with AlphaGo as China's "Sputnik moment".
Thumbnail Gravitational waves from 10 black hole mergers have been detected to date, but scientists are still trying to explain the origins of those mergers. "The largest merger detected so far seems to have defied previous models because it has a higher spin and mass than the range thought possible." New simulations suggest that "such large mergers could happen just outside supermassive black holes at the center of active galactic nuclei. Gas, stars, dust and black holes become caught in a region surrounding supermassive black holes known as the accretion disk. The researchers suggest that as black holes circle around in the accretion disk, they eventually collide and merge to form a bigger black hole, which continues to devour smaller black holes, becoming increasingly large in what Rochester Institute of Technology Assistant Professor Richard O'Shaughnessy calls 'Pac-Man-like' behavior."

"It offers a natural way to explain high mass, high spin binary black hole mergers and to produce binaries in parts of parameter space that the other models cannot populate. There is no way to get certain types of black holes out of these other formation channels."
Thumbnail "The microbiota is not accidental. The microbiota has co-evolved with us over very long periods of time, and it performs beneficial functions for us, just as we perform beneficial functions for it... We are all working together as an ecological unit." "Since our species -- and all of life -- has existed on earth, we have been in the company of microbes: microbes reside in our intestines, on our skin, and in the environments we live in. These bugs have at times been opportunistic pathogens, preying on the vulnerabilities of individuals and populations, but they have more frequently been some of our oldest evolutionary friends. The trouble is, the microbiota as we've known it is disappearing."
Thumbnail 21% of adolescents and 32% of young adults said they had used prescription opioids in the past year, according to a National Survey on Drug Use and Health conducted by the Substance Abuse and Mental Health Services Administration. "Adolescents" are defined as 12-17 years old and "young adults" are defined as 18-25 years old.
Thumbnail Russian universities have the best performance track record in the world over the last 20 years in the International Collegiate Programming Contest (ICPC), a contest that dates back to 1977 and has 50,000 students from over 3,000 universities participating. The final round will be held in Russia for the first time next year, at the Moscow Institute of Physics and Technology, in June 2020.
Thumbnail DeepMind published how their AlphaStar system works. Unfortunately the system combines so many algorithms it's hard to summarize. First there is a neural network with a self-attention system to pay attention to the player's and opponent's units. It uses something called a scatter connection system to "integrate spatial and non-spatial information", whatever that means. It uses a long short-term memory (LSTM) system to remember sequences of observations and deal with partial observability of the game arena. It uses a combination of something called an auto-regressive policy and a recurrent pointer network to manage the "structured, combinatorial" action space.

The learning occurs in three stages: supervised learning, reinforcement learning, and multi-agent reinforcement learning. In the supervised learning stage, the system was trained to predict exactly the actions that a human player took. The supervised learning stage was considered necessary because reinforcement learning by self-play was deemed to be incapable of discovering the wide variety of strategies needed to master the game. After the supervised learning stage, the system did use self-play with agents initialized to the parameters of the supervised agents. It was necessary to further extend this with a multi-agent reinforcement learning system, however, because the system could get stuck in cycles. By "cycles", we mean a situation where agent A beats agent B, agent B beats agent C, and agent C beats agent A, leading to a cycle where the system gets stuck indefinitely and makes no progress. The solution they came up with was inspired by an algorithm called Fictitious Self Play that avoids cycles by computing a best response against a uniform mixture of all previous strategies which converges to a Nash equilibrium. In their system, they used a non-uniform mixture of opponents. They used strategies from both the current iteration and previous ones, and gave each agent opponents that were tailored specifically for that agent. Agents were categorized into three categories: main agents, exploiter agents, and league exploiter agents -- I'll skip trying to explain what exactly the differences were but the point was to create diverse opponents with diverse strategies for agents to train against.
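The cycle problem is easy to see in a rock-paper-scissors-style toy example (my illustration, not DeepMind's league code): best-responding to only the latest opponent loops forever, while evaluating against the mixture of all past strategies leaves nothing to exploit.

```python
# Nontransitive cycle: A beats B, B beats C, C beats A (rock-paper-scissors).
WINS = {("A", "B"), ("B", "C"), ("C", "A")}

def payoff(me, opp):
    if me == opp:
        return 0
    return 1 if (me, opp) in WINS else -1

# Best-responding to only the *latest* agent cycles forever: C answers A,
# B answers C, A answers B, and so on with no overall progress.
latest, seen = "A", []
for _ in range(6):
    seen.append(latest)
    latest = next(s for s in "ABC" if payoff(s, latest) == 1)
print(seen)  # ['A', 'C', 'B', 'A', 'C', 'B']

# Fictitious-play-style evaluation against the uniform mixture of ALL past
# strategies: every pure strategy scores 0, so there is nothing left to
# exploit -- the uniform mixture is the (Nash) fixed point.
history = ["A", "B", "C"]
avg = {s: sum(payoff(s, h) for h in history) / len(history) for s in "ABC"}
print(avg)  # {'A': 0.0, 'B': 0.0, 'C': 0.0}
```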

The agents played against humans with handicaps designed to make the games "fair" to human opponents. Rather than being able to see the full game all at once, the AI agents had a limited camera view; they had action-per-minute (APM) limits, so they couldn't perform actions at superhuman speed; and they had delays added to simulate human reaction time. These rules were developed in consultation with professional StarCraft II players and Blizzard employees.

AlphaStar won all the Protoss-vs-Terran games it played against humans, 99.91% of the Protoss-vs-Protoss, 99.94% of the Protoss-vs-Zerg, 99.83% of the Terran-vs-Protoss, 99.92% of the Terran-vs-Terran, 99.82% of the Terran-vs-Zerg, 99.70% of the Zerg-vs-Protoss, 99.51% of the Zerg-vs-Terran, and 99.96% of the Zerg-vs-Zerg. It's considered to be within the top 0.15% of StarCraft II players.
Thumbnail The suicide rate for 10-14 year olds went from 0.9 to 2.5 (2.8x increase) per 100,000 between 2007 and 2017.
Thumbnail Fish should move poleward as climate warms. "Warming waters have less oxygen and, therefore, fish have difficulties breathing in such environments. In a catch 22-type situation, such warming, low-oxygen waters also increase fish's oxygen demands because their metabolism speeds up."

"Fish's gills extract oxygen from the water to sustain the animal's body functions. As fish grow into adulthood their demand for oxygen increases because their body mass becomes larger. However, the surface area of the gills does not grow at the same pace as the rest of the body because it is two-dimensional, while the rest of the body is three-dimensional. The larger the fish, the smaller its surface area relative to the volume of its body."

This theory is called Gill-Oxygen Limitation Theory, or GOLT. Great acronym?
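The 2D-vs-3D scaling argument in the quote above can be sketched numerically (illustrative toy numbers, not data from the GOLT paper):

```python
# Gill surface area scales like length^2 but body mass like length^3,
# so oxygen supply per unit of body mass falls off as 1/length.
# (Illustrative toy numbers, not measurements from the GOLT paper.)
for length in (1.0, 2.0, 4.0, 8.0):
    gill_area = length ** 2              # two-dimensional surface
    body_mass = length ** 3              # three-dimensional volume
    print(length, gill_area / body_mass)  # 1.0, 0.5, 0.25, 0.125
```

Doubling the fish's length halves the oxygen available per unit of body mass, which is why warming, low-oxygen water squeezes big fish hardest.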
Thumbnail New record for high-temperature superconductivity -- 161 K, in a thorium compound (thorium hydride). But you have to use insane pressures (175 gigapascals -- atmospheric pressure is around 100 kilopascals), and the material's estimated critical magnetic field is 45 tesla. I guess the most powerful MRI machines are around 10 tesla, so that's a very powerful magnetic field.
Thumbnail Human population animation. Agriculture was invented more or less immediately after the last ice age ended 11,600 years ago -- I'm not going to posit an explanation why, I'll leave that to you to ponder -- and took a long time to get going, and my understanding is that the total human population on the entire planet before the invention of agriculture was about 3 million, which looks about right on the graph they show in the introduction on this video. They say the population was 170 million in 1 AD, which is the year the actual animation starts. On the animation, each dot equals 1 million people. The animation starts to really speed up after about 1400 and starts to sizzle after about 1700. Maybe you already knew all that, but what I didn't know was where the population was -- in the ancient world, human population was much more concentrated in India and China than I had realized. The video also notes that there is only one period in all human history where population declined -- the Black Death (bubonic plague) in the 1300s.
Thumbnail Realistic video game character movement from neural networks. The system used motion capture from humans as a starting point, then data augmentation was used to expand the tasks the animated characters would learn, then objects would be swapped out with new objects with the same contact points, for example switching to a different size chair. The user can interactively control the characters with control signals. The network automatically makes movements and transitions. Movements that it can do include walking, running, sitting, carrying objects, opening doors, and climbing. The system can adapt to different geometries, for example sitting on different chairs and carrying different size objects.
Thumbnail Video game world sizes.
Thumbnail "Understanding OpenAI's robot hand that solves the Rubik's cube." "What did OpenAI not do? 1. Use artificial intelligence to solve the puzzle." "2. Manipulate the cube using computer vision." "3. Choose to solve the task one-handedly to make it harder."

"Removing the above misconceptions should not create the impression that OpenAI's work lacked a purpose, but prepare us to appreciate the actual contributions. In fact, OpenAI's work was pioneering. This is what probably made it hard to make it understandable to the public and led them to mispresenting it."

"The objective was that of creating a general-purpose robotic arm. The hand could have been performing any sort of task; solving the Rubik's cube just made for a well-defined problem that required quick reflexes and skilful manipulation."

"This research area is today little-understood and unexplored."

"The main theoretical contribution of this work towards general-purpose robotics was a technique called adaptive domain randomization. This technique builds upon two older concepts in the AI literature: domain randomization and curriculum learning."
Thumbnail OpenAI has released the full-sized (1.5 billion parameter) GPT-2 AI model. GPT-2 is the text-generating system that generates the most convincing text (to humans). They did a staged release over 9 months to watch for misuse. They decided there wasn't enough misuse to not release the full-sized model, and they thought there were benefits to releasing it, including: software engineering (code autocompletion), writing (grammar assistance, autocompletion-assisted writing), art (creating or aiding literary art, poetry generation), entertainment (video games, chatbots), and health (medical question-answering systems). They've partnered with outside groups to study the effects of GPT-2 and other advanced language models. Cornell University is studying human susceptibility to text generated by language models. The Middlebury Institute of International Studies Center on Terrorism, Extremism, and Counterterrorism (CTEC) is exploring how GPT-2 could be misused by, you guessed it, terrorists and extremists. The University of Oregon is developing a series of 'bias probes' to analyze bias in GPT-2. The University of Texas at Austin is studying the statistical detectability of GPT-2, including after it's been retrained on domain-specific datasets and text in different languages.

They are working with the Partnership on AI (PAI) to develop guidelines on responsible publication for machine learning and AI. They recommend building frameworks that take the trade-offs of benefits vs harms into consideration, engaging outside researchers and the public, and giving outside researchers early access.

The GROVER system did the best at detecting machine-generated text, successfully detecting 97% of fake Amazon reviews. A tool called GLTR assists humans and increases humans' ability to detect AI-generated text. GPT-2's output was found to have biases with respect to gender, race, religion, and language preference. This is a reflection of the data it was trained on (text from the web).

They comment on future trends in language models: Language models are moving to mobile devices, they allow for greater control of text generation, they will have improved usability, and will be subject to greater risk analysis.
Thumbnail Learning is optimized when we fail 15% of the time. "We learn best when we are challenged to grasp something just outside the bounds of our existing knowledge. When a challenge is too simple, we don't learn anything new; likewise, we don't enhance our knowledge when a challenge is so difficult that we fail entirely or give up."

"So where does the sweet spot lie? According to the new study in the journal Nature Communications, it's when failure occurs 15% of the time. Put another way, it's when the right answer is given 85% of the time."

You would think, from this description, that they got this from doing empirical observations on thousands of human beings. But that's not what they did. They looked at neural networks. Specifically, those trained by the mathematical algorithm known as gradient descent. And they didn't even look at thousands of neural networks. They studied gradient descent itself and worked out an exact solution from first principles. That solution is:

Ideal failure rate = (1 - erf(1 / sqrt(2))) / 2

where erf is the Gauss error function, which is related to the Gaussian (normal) distribution. The answer comes out to about 0.1586553. That's where they get the 15%.
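The formula is a one-liner to check, using Python's standard-library error function:

```python
import math

# The paper's closed-form optimum: the failure rate at which training
# (under gradient descent, per the paper) proceeds fastest.
ideal_failure_rate = (1 - math.erf(1 / math.sqrt(2))) / 2

print(ideal_failure_rate)  # ~0.1586553, i.e. the "85% rule"
```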
Thumbnail Supramolecules of single handedness, or chirality, have been reliably, reproducibly synthesized, possibly revealing something about the origin of life. Apparently people have been able to do this before one time here, one time there, but never reliably and reproducibly. What this is about is how some molecules have right and left mirror images. If you've ever seen names with "L" and "D" prefixes, like "L-phenylalanine" and "D-phenylalanine", the "L" stands for "left" and the "D" stands for "right". And if "D" standing for "right" doesn't make sense... well it does in Latin. Anyway, you could also think of "L" and "D" as "living" and "dead" as life uses the "L" forms exclusively. Why life uses the "L" forms exclusively is a mystery, labeled the mystery of the "homochirality of life".

Anyway, until now, nobody connected large-scale rotation with molecule-scale rotation, but that's what this experiment does. What they did is pretty complicated but I'll try to summarize. They took something called phthalocyanines, wrapped them in a monolayer of long alkyl chains, and arranged them into stacks held together with what are known as pi-pi bonds. Phthalocyanine is a large organic compound with the formula (C8H4N2)4H2, which was chosen because of its ability to have right, left, or no chirality. The alkyl chains are carbon-hydrogen chains which generally don't have rings, unlike the phthalocyanines, and their purpose here is to form the pi-pi bonds. Pi-pi bonds form when you have electron shells in two atoms line up and essentially form a new orbital, called a pi orbital, shared between them, and then you can have pi orbitals in neighboring molecules line up in such a way that they can stack (by alternating positive and negative electric charges). Once you're able to stack molecules in this manner, then you can have something known as a "supramolecule". At this point you should understand the first word in the first sentence.

What they did next was stir the supramolecules with a magnetic stirrer. Then they gradually removed the solvent and tested the resulting compound for chirality (using a technique called circular dichroism spectroscopy), and the test came back indicating clockwise or anticlockwise rotation depending on how they had rotated the magnetic stirrer.

What this has to do with the origin of life is that the researchers speculate that large-scale rotation, such as the vortex motion induced by the rotation of the Earth itself, called the Coriolis effect, might have locked in an initial chirality that life still uses to this day. At this point you should be able to understand the first sentence.
Thumbnail Tiny, self-propelled robots that remove radioactive uranium from simulated wastewater. "To make their self-propelled microrobots, the researchers designed ZIF-8 rods with diameters about 1/15 that of a human hair. The researchers added iron atoms and iron oxide nanoparticles to stabilize the structures and make them magnetic, respectively. Catalytic platinum nanoparticles placed at one end of each rod converted hydrogen peroxide 'fuel' in the water into oxygen bubbles, which propelled the microrobots at a speed of about 60 times their own length per second. In simulated radioactive wastewater, the microrobots removed 96% of the uranium in an hour. The team collected the uranium-loaded rods with a magnet and stripped off the uranium, allowing the tiny robots to be recycled."

The key is the invention of micromotors based on metal-organic frameworks which can operate in hydrogen peroxide. By ZIF-8, they mean zeolitic imidazolate framework-8 doped with iron. ZIFs are metal-organic frameworks whose structures mimic zeolites -- minerals composed of aluminium, silicon, and oxygen -- but are built from metal ions (usually zinc or cobalt) linked by imidazolate; here the framework was doped with iron. The magnetism is provided by the iron oxide nanoparticles; the platinum nanoparticles are the catalyst. Don't know how exactly these machines extract uranium. It has something to do with the iron in the ZIF-8 metal-organic framework, in particular the Fe(II) form.
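For a sense of scale, here's a back-of-the-envelope check. The hair diameter (75 µm) and the rod length (10 µm) are illustrative assumptions -- the article only gives ratios:

```python
# Back-of-the-envelope scale check for the microrobots.
# Assumptions (not from the article): human hair ~75 µm diameter,
# rod length ~10 µm.
hair_diameter_um = 75.0
rod_diameter_um = hair_diameter_um / 15      # "1/15 that of a human hair"
rod_length_um = 10.0                         # assumed rod length
speed_um_per_s = 60 * rod_length_um          # "60 times their own length per second"
print(rod_diameter_um, speed_um_per_s)       # 5.0 600.0
```

So under these assumptions the rods are about 5 µm across and swim on the order of half a millimeter per second.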
Thumbnail A brain region that helps build memories during deep sleep has been identified. It's called the nucleus reuniens and it "connects two other brain structures involved in creating memories -- the prefrontal cortex and the hippocampus -- and may coordinate their activity during slow-wave sleep."

"We found that the nucleus reuniens is responsible for coordinating synchronous, slow-waves between these two structures. This means that the reuniens may play an essential role for sleep-dependent memory consolidation of events."

"Slow-wave sleep is the deepest stage of sleep, during which the brain oscillates at a very slow, once-per-second rhythm. It is crucial for muscle and brain recovery, and has been shown to play a role in memory consolidation."

In the study they used optogenetics to activate the nucleus reuniens in rats and chemogenetics to inhibit it.

More precisely, they used hSyn-ChR2-EYFP for the optogenetic experiments, hSyn-hM4Di-HA-mCitrine for the chemogenetic experiments, and hSyn-mCherry as a control, whatever those are. Some kind of plasmid constructs; plasmids are DNA molecules that exist in cells without being part of chromosomes.
Thumbnail "3D printer that is so big and so fast it can print an object the size of an adult human in just a couple of hours." But how?

"HARP (high-area rapid printing) uses a new, patent-pending version of stereolithography, a type of 3D printing that converts liquid plastic into solid objects. HARP prints vertically and uses projected ultraviolet light to cure the liquid resins into hardened plastic. This process can print pieces that are hard, elastic or even ceramic. These continually printed parts are mechanically robust as opposed to the laminated structures common to other 3D-printing technologies. They can be used as parts for cars, airplanes, dentistry, orthotics, fashion and much more."

"A major limiting factor for current 3D printers is heat. Every resin-based 3D printer generates a lot of heat when running at fast speeds -- sometimes exceeding 180 degrees Celsius. Not only does this lead to dangerously hot surface temperatures, it also can cause printed parts to crack and deform. The faster it is, the more heat the printer generates. And if it's big and fast, the heat is incredibly intense."

"The Northwestern technology bypasses this problem with a nonstick liquid that behaves like liquid Teflon. HARP projects light through a window to solidify resin on top of a vertically moving plate. The liquid Teflon flows over the window to remove heat and then circulates it through a cooling unit."
Thumbnail Drone that can fly into disaster areas and tell living people from dead people. "Using a new technique to monitor vital signs remotely, engineers from the University of South Australia and Middle Technical University in Baghdad have designed a computer vision system which can distinguish survivors from deceased bodies from 4-8 metres away."

The system uses a library called OpenPose to find human forms lying on the ground and locate the chest region. It then uses a wavelet analysis to look for motion of the chest wall due to cardiopulmonary activity.
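The wavelet idea can be sketched in a toy form: take a motion time series from the chest region and measure its power in the cardiopulmonary band. This is not the authors' pipeline -- the frame rate, the frequencies, and the choice of a Morlet wavelet are all illustrative assumptions:

```python
import numpy as np

def morlet_power(signal, fs, freq, n_cycles=3.0):
    """Mean power of `signal` at `freq` Hz via a complex Morlet wavelet."""
    t = np.arange(-n_cycles / freq, n_cycles / freq, 1.0 / fs)
    sigma = n_cycles / (2 * np.pi * freq)             # width of Gaussian envelope
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))    # normalize to unit energy
    return np.mean(np.abs(np.convolve(signal, wavelet, mode="same"))**2)

fs = 30.0                                  # assumed camera frame rate, Hz
t = np.arange(0, 10, 1 / fs)               # 10 seconds of "video"
rng = np.random.default_rng(0)
noise = 0.05 * rng.normal(size=t.size)
alive = 0.5 * np.sin(2 * np.pi * 1.2 * t) + noise   # chest moving at ~1.2 Hz
still = noise                                        # no cardiopulmonary motion

p_alive = morlet_power(alive, fs, 1.2)
p_still = morlet_power(still, fs, 1.2)
print(p_alive > 5 * p_still)               # strong band power marks a survivor
```

The wavelet acts as a band-pass filter: a periodic chest-motion signal lights up the cardiopulmonary band, while a motionless body yields only noise-level power.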

I imagine the system doesn't work if enough of the human body is occluded, say from debris from the earthquake or whatever the disaster is. But it looks like a good start.
Thumbnail "Expecting the unexpected: a new model for cognition." So this is about a concept called "predictive coding", which is the idea that your brain learns simply by trying to predict what it will experience next and then comparing the prediction with reality, and that AI can be developed that learns the same way. It's pretty straightforward to make a recurrent neural network (RNN) work that way, and this research is an improvement on RNNs called PV-RNN (apparently the "P" is for "predictive-coding" and the "V" is for "variational"), which allows the network, instead of making a definite prediction, to make probabilistic predictions.
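The basic predictive-coding loop is easy to sketch. This is a minimal deterministic version, not PV-RNN itself (which adds variational latent states): an RNN predicts its next input, and the prediction error drives learning. For simplicity only the readout weights are trained here, and all sizes and rates are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
seq = np.sin(np.arange(200) * 0.2)   # toy sensory stream to be predicted

# Tiny RNN: hidden state h, with a linear readout predicting the next input.
n_h = 8
W_h = rng.normal(scale=0.1, size=(n_h, n_h))   # recurrent weights (fixed)
W_x = rng.normal(scale=0.1, size=n_h)          # input weights (fixed)
W_o = rng.normal(scale=0.1, size=n_h)          # readout weights (trained)
lr = 0.01

def run_epoch(train=True):
    """One pass over the stream; returns mean squared prediction error."""
    global W_o
    h = np.zeros(n_h)
    total = 0.0
    for t in range(len(seq) - 1):
        h = np.tanh(W_h @ h + W_x * seq[t])   # update hidden state
        pred = W_o @ h                         # predict the next observation
        err = seq[t + 1] - pred                # prediction error vs. reality
        total += err ** 2
        if train:
            W_o = W_o + lr * err * h           # error drives the weight update
    return total / (len(seq) - 1)

before = run_epoch(train=False)
for _ in range(50):
    run_epoch(train=True)
after = run_epoch(train=False)
print(f"MSE before {before:.4f} -> after {after:.4f}")
```

Each step compares prediction with reality and uses the mismatch as the learning signal, which is the "predictive coding" idea in miniature; PV-RNN additionally makes the prediction a probability distribution rather than a point estimate.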

The article suggests that "the model may enable robots to 'socialize' by predicting and imitating each other's behaviors" and may offer insights into autism spectrum disorders (ASD). "An intriguing consideration is that the current results showing that the generalization capability of PV-RNN depend on the setting of the metaprior w bears parallels to observational data about autism spectrum disorders (ASD) and may suggest possible accounts of its underlying mechanisms." "Recently, there has been an emerging view suggesting that deficits in low-level sensory processing may cascade into higher-order cognitive competency, such as in language and communication. [Researchers] have suggested that ASD might be caused by overly strong top-down prior potentiation to minimize prediction errors (thus increasing precision) in perception, which can enhance capacities for rote learning while resulting in the loss of the capacity to generalize what is learned."