| "AI designs quantum physics experiments beyond what any human has conceived." "Quantum physicist Mario Krenn remembers sitting in a café in Vienna in early 2016, poring over computer printouts, trying to make sense of what MELVIN had found. MELVIN was a machine-learning algorithm Krenn had built, a kind of artificial intelligence. Its job was to mix and match the building blocks of standard quantum experiments and find solutions to new problems. And it did find many interesting ones. But there was one that made no sense."
"MELVIN had seemingly solved the problem of creating highly complex entangled states involving multiple photons."
"Since then, other teams have started performing the experiments identified by MELVIN, allowing them to test the conceptual underpinnings of quantum mechanics in new ways. Meanwhile Krenn, working with colleagues in Toronto, has refined their machine-learning algorithms. Their latest effort, an AI called THESEUS, has upped the ante: it is orders of magnitude faster than MELVIN, and humans can readily parse its output."
| Financial stability implications of central bank digital currencies according to the BIS. The Bank for International Settlements, aka the BIS, is a "bank for central banks", i.e. a financial institution jointly owned by various central banks that provides financial services to central banks. According to the Basel Accords (named for Basel, Switzerland, where the BIS is headquartered), banks are supposed to maintain a minimum level of high quality liquid assets based on their projected outflows over a 30-day stress period. The theory is that in a financial crisis, the government and central banks will organize a rescue within 30 days, so banks need enough high quality liquid assets to fund cash outflows for 30 days.
The expectation is that the introduction of central bank digital currencies will result in outflows from bank deposits to the central bank digital currency.
Because the outflow to the central bank digital currency increases projected outflows at the same time as it shrinks the banks' own assets, the banks will have to purchase more high quality liquid assets to keep meeting their liquidity coverage ratio requirement. The liquidity coverage ratio is generally 1.25, meaning for every dollar of outflow in a 30 day period the bank has to have 1.25 dollars in high quality liquid assets.
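To make the requirement concrete, here's a minimal sketch of the arithmetic, using the 1.25 ratio mentioned above (the function name and the numbers are illustrative):

```python
def required_hqla(outflows_30d: float, lcr: float = 1.25) -> float:
    """High quality liquid assets needed to cover projected 30-day outflows
    at a given liquidity coverage ratio."""
    return outflows_30d * lcr

# If a central bank digital currency drains deposits, projected 30-day
# outflows rise, and required HQLA rises with them.
before = required_hqla(100.0)  # 125.0
after = required_hqla(130.0)   # 162.5: 30 more in outflows forces 37.5 more HQLA
```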
They assume that lending will remain at prior levels, so banks will have to fund lending more through "wholesale funding" and less from deposits. "Wholesale" here means any of various interbank lending mechanisms, including federal funds, foreign deposits, and brokered deposits. If the banking sector wants to maintain its prior level of profitability (as measured by net interest income), it will have to actually increase its lending rate.
Banking sector return on equity is expected to decrease.
The authors of the paper do not know whether there would be upward or downward pressure on actual lending rates. Upward pressure means the cost of wholesale debt will go up, banks will bid up deposit rates, and banks will increase fees. Downward pressure means the cost of wholesale debt won't go up or will go down, banks may price lending based on long-term wholesale funding costs alone, divorcing lending rates from deposit rates, and competition from non-bank lenders and capital markets could increase.
If there is a fall in government bond yields, that would also reduce the return banks earned on their high quality liquid assets.
I'm only commenting on one of the 3 reports mentioned here. (The rest seemed to be a lot of verbiage about nothing, but you are welcome to read them if you care to.) None of them mention non-central-bank digital currencies, e.g. Bitcoin and the other cryptocurrencies. A curious omission, given that they are the reason central banks are creating central bank digital currencies in the first place.
| Why did human brains shrink? From about 10 million years ago, and especially since about 2.5 million years ago, the hallmark of our ancestors was a rapid increase in brain size. Then, suddenly, around 3,000 years ago, brain size started shrinking. The brain growth rate went from 0.2, uh, brain size change units to -16.7 brain size change units. Ok, I'm saying "brain size change units" here because the units used aren't exactly intuitive. The rate of brain size change is expressed as the change in the logarithm (base 10) of brain size in cubic centimeters, per million years.
Anyway, if 3,000 years ago sounds suspiciously like the agricultural revolution might be the culprit, you're probably on the right track. To research this, these researchers studied brain size evolution in ants, and came up with two possible explanations for how human brain size could have gotten smaller.
The first is anatomical. "Mushroom bodies" are structures in insect brains that are dense with axon-to-dendrite connections, and as they grow, the brain becomes more energy efficient (as measured by cytochrome oxidase, a proxy for energy use in the brain, apparently). So the theory is at a certain point in our ancestors' evolution, nature figured out it was better to increase intelligence by increasing brain efficiency rather than continuing to increase brain size.
The other relates to group size. Generally, as group size gets larger for a species, brain size has to increase because individuals need greater social intelligence. But many ant species show the opposite: beyond a certain point, brain size decreases as group size grows. The theory inspired by this observation is that brains start to specialize, groups of brains begin to perform brain functions collectively, and a "group intelligence" emerges. As this "group intelligence" with specialization emerges, each brain no longer has to do everything, so the intellectual requirements on each individual brain decrease, allowing brain size to shrink.
This paper doesn't delve deeply into the history of agriculture, but I happen to know that agriculture emerged more or less immediately after the end of the last ice age 11,600 years ago, but was extremely slow to catch on and didn't really get going until around 5,000 years ago. The size of human societies started to grow very rapidly after that, and it looks like around 3,000 years ago, the process of specialization and shrinkage started to kick in as a consequence.
| The FDA released a list of "AI" (artificial intelligence) and "ML" (machine learning) enabled medical devices.
"The FDA is providing this initial list of AI/ML-enabled medical devices marketed in the United States as a resource to the public about these devices and the FDA's work in this area."
"The FDA plans to update this list on a periodic basis based on publicly available information."
I clicked on 3 on the first page (which itself has only a fraction of the 343 devices in the database). The Imbio RV/LV Software was "Automated Radiological Image Processing Software", the VBrain is "Radiological Image Processing Software For Radiation Therapy", and the Oxehealth Vital Signs device is "Software For Optical Camera-Based Measurement Of Pulse Rate, Heart Rate, Breathing Rate, And/Or Respiratory Rate".
|Machine Wisdom is an "inspirational quote" generator based on GPT-2. I clicked "Generate" a couple of times and it gave me "There's only one problem with love, it ends."|
| "Brain-inspired analog architecture that employs a 3D array of randomly-connected memristors to compute neural network training and inference at extremely low power."
"While some commercial chips currently use analog processor-in-memory techniques, they require digital conversion between network layers, consuming significant power. The limitations of current analog devices also means they can't be used for training AI models since they are incompatible with back-propagation, the algorithm widely used for AI training. Rain's aim is to build a complete analog chip, solving these issues with a combination of new hardware and a new training algorithm."
New algorithm? Hmmm, not so sure that's going to work -- if a better algorithm than backpropagation were known, wouldn't people already be using it, and wouldn't it have overtaken backpropagation? The article seems to imply that their technique, called "equilibrium propagation", is too expensive on digital hardware but can run efficiently on their analog hardware and outperform backpropagation, at least on very "sparse" matrices, which they say are closer to how the brain actually works.
Also, a little note about terminology. The article uses the term "taped out", which I originally misread as "tapped out". No, it's *taped* out, and they don't just mean designating something by putting tape around it or taping something to your refrigerator. "Taped out" is a term from the Olden Days of the chip industry when they made photomasks with actual black tape. Nobody uses actual black tape any more, but the term "taped out" has stuck around and means a design is finished and ready to be sent to manufacturing.
| They say this robotic arm uses "artificial muscles", which use water in some sort of hydraulic system that they've invented. They say it uses 200 watts at peak, which is more power than actual human muscles consume.
"At this moment our robotic arm is operated only by a half of artificial muscles when compared to a human body. Strongest finger-bending muscle still missing. Fingers are going to move from left to right but they don't have muscles yet. Metacarpal and left-to-right wrist movement are also blocked. This version has a position sensor in each joint but they are yet to be software-implemented. We are going to add everything mentioned above in the next prototype."
| MuJoCo, a physics simulator for robotics, has been open-sourced. "The rich-yet-efficient contact model of the MuJoCo physics simulator has made it a leading choice by robotics researchers and today, we're proud to announce that, as part of DeepMind's mission of advancing science, we've acquired MuJoCo and are making it freely available for everyone, to support research everywhere. Already widely used within the robotics community, including as the physics simulator of choice for DeepMind's robotics team, MuJoCo features a rich contact model, powerful scene description language, and a well-designed API."
MuJoCo stands for Multi-Joint Dynamics with Contact, in case you were wondering.
"Because many simulators were initially designed for purposes like gaming and cinema, they sometimes take shortcuts that prioritise stability over accuracy. For instance, they may ignore gyroscopic forces or directly modify velocities. This can be particularly harmful in the context of optimisation: as first observed by artist and researcher Karl Sims, an optimising agent can quickly discover and exploit these deviations from reality. In contrast, MuJoCo is a second-order continuous-time simulator, implementing the full Equations of Motion. Familiar yet non-trivial physical phenomena like Newton's Cradle, as well as unintuitive ones like the Dzhanibekov effect, emerge naturally. Ultimately, MuJoCo closely adheres to the equations that govern our world."
"MuJoCo includes two powerful features that support musculoskeletal models of humans and animals. Spatial tendon routing, including wrapping around bones, means that applied forces can be distributed correctly to the joints, describing complicated effects like the variable moment-arm in the knee enabled by the tibia. MuJoCo's muscle model captures the complexity of biological muscles, including activation states and force-length-velocity curves."
Be sure to watch the videos comparing real-life phenomena with the corresponding MuJoCo simulations.
| I've made the point that neural networks for physics are less accurate than physics simulations, but may be valuable anyway because they can give approximate results much faster. Apparently many people don't understand this and think the neural networks are superior to conventional physics simulations. This article examines two examples of the 3-body problem.
"Such overblown reports lend false credence to the idea ... that AI generally and deep learning theory will soon replace all other approaches to computation or even to knowledge, when nothing of the sort has been established. The media enthusiasm for deep learning is sending the wrong impression, making it sound like any old problem can be solved with a massive neural network and the right data set, without attention to the fundamentals in that domain."
"The truth is that many of the hard, open problems in the world require a great deal of expertise in particular domains."
| "A newly developed coating" "allows for certain liquids to move across surfaces without fluid loss". "Nature has already developed strategies to transport liquids across surfaces in order to survive. We were inspired by the structural model of natural materials such as cactus leaves or spider silk. Our new technology can directionally transport not only water droplets, but also low surface tension liquids that easily spread on most surfaces."
"Current microfluidic devices have a key limitation: they can only effectively handle liquids with high surface tension, such as water. This property, also known as cohesion, means that the liquid has a greater tendency to stick to itself than to the sides of the channel it is being transported through."
"High surface tension liquids form discrete droplets that can be moved around independently, like raindrops on window glass. Cohesion can even be exploited to pull the liquid droplets along the channel through a process known as capillary action."
"By contrast, low surface tension liquids, such as alcohols and other solvents, tend to stick to the sides of the channels, and can currently be transported for only about 10 millimetres before the droplet disintegrates. Capillary action no longer applies, so this transport requires an external force, such as magnetism or heat, to move the droplets."
"The new coating enables low surface tension liquids to be transported over distances of over 150 millimetres without losing any of the liquid, about 15 times longer than currently possible."
With that background, I was wondering what this new "coating" is made of. The article says:
"The technology uses two newly developed polymer coatings, one of which is more liquid-repellent than the other. Both are composed of liquid-like polymer brushes. The more repellent coating acts as a background, surrounding the less repellent coating and creating tiny channels along the surface. The channels allow for the liquids to move in a desired pattern or direction without losing any of the liquid during transport or requiring additional energy input."
| Mapping EEG electrodes to pixels in images with millisecond precision. A new brain imaging technique, this one geared towards high resolution in time rather than high resolution in space. It's based on EEG (electroencephalography) rather than fMRI (functional magnetic resonance imaging), and the experiment was done with 120-frames-per-second monitors to time neural responses to image changes at under-100-millisecond time scales.
The system is called dynamic electrode-to-image (DETI) mapping and the goal is to map the EEG responses to individual pixels in images, not just in space but in time as the images change. At the heart of the system is something called a log-Gabor encoding system. I'll skip the mathematical details (it uses a lot of logarithms, as the name implies, but also a lot of exponentials), but the idea is that while the Fourier transform gives frequency information from a digital signal, you can't tell what part of the signal that frequency content came from. The log-Gabor technique localizes it: it tells you both what frequencies are present and where in the signal (or image) they occur. The end result is a technique for figuring out which areas of neurons in the brain are responding to each part of the image as the image changes over time, with millisecond precision.
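For the curious, here's a minimal sketch of the standard log-Gabor transfer function in the frequency domain; the bandwidth parameter of 0.65 is a common illustrative choice, not necessarily what DETI uses:

```python
import math

def log_gabor(f: float, f0: float, sigma_ratio: float = 0.65) -> float:
    """Log-Gabor transfer function: peaks at the centre frequency f0 and is
    symmetric on a log-frequency axis. sigma_ratio (sigma/f0) sets the
    bandwidth; 0.65 is just an illustrative common choice."""
    if f <= 0:
        return 0.0  # no DC component, one reason log-Gabor suits natural images
    return math.exp(-(math.log(f / f0) ** 2) / (2 * math.log(sigma_ratio) ** 2))

# Sampled responses around a centre frequency of 8: largest at f0, falling
# off by equal amounts per octave above and below it.
response = [log_gabor(f, f0=8.0) for f in (2.0, 4.0, 8.0, 16.0, 32.0)]
```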
The technique is expected to be used for a lot of research in the future, but one thing that has already been discovered is that the brain prioritizes processing of the ground plane over landmarks.
| Q: How would you make a house conduct electricity?
A: paint it with a metal paint
Q: What is an implication of a cat being an animal?
A: it has feelings
Q: Could a newborn look over the top of a fully grown horseradish plant?
A: no, because the plant is so tall
Q: Why couldn't Aristotle have used a cellphone?
A: he was born before cellphones
Q: A young boy was rushed to the hospital emergency room, but the ER doctor saw the boy and refused to operate. "This boy is my son," the doctor said. But the doctor wasn't the boy's father. How could this be?
[Intended answer to the riddle: The doctor was the boy's mom]
A: the doctor mistook the boy for his own son [Reasonable alternative answer!]
The above are the answers generated by a question-and-answer AI system called Macaw. But wait! Macaw can handle other "permutations", as they are called. It can go from a question, multiple-choice answers, and context to an answer and explanation. And it can generate an answer and explanation if the context is omitted. Or it can generate an answer alone, without the explanation. Given a question, answer, and context, it can generate plausible multiple-choice options. Given an answer and context, it can generate a plausible question and multiple-choice options.
Example of "Given a question, produce an answer":
Q: What gas do producers produce that consumers need to survive?
Example of "Given a question, produce an answer + explanation":
Q: Which force pulls objects to the ground?
E: Gravitational force causes objects that have mass to be pulled down on a planet.
Example of "Given an answer, produce a question + explanation":
Q: Which animal has the largest ears?
E: The ears of an elephant are the largest.
Example of "Given an answer, produce a question + multiple-choice options":
A: car battery
Q: Which of these items is necessary for a car to start?
M: (A) car battery (B) windshield wiper blade (C) car radio (D) car radio antenna
Example of "Given an explanation, generate a question + answer:":
E: The leaves of a plant convert sunlight into food for the plant.
Q: How do plants get energy?
A: from the sun
So how does all this work? The system is based on a Google neural network called T5-CBQA. In case you're wondering, T5 is short for "Text-To-Text Transfer Transformer" (five words starting with T), indicating this is a transformer model. Transformers were invented for language translation. "CBQA" stands for "Closed Book Question Answering". The main idea behind T5-CBQA is that unlike a language translation system, which always translates from one language to another, with T5-CBQA you can put special codes in the input which tell the transformer what you want it to do. Examples of these special codes would be codes for "translate", "summarize", and so on.
The way this neural network was adapted for this project is they made special codes for what they call "slots". The "slots" are: question, context, multiple-choice options, answer, and explanation. For any given input, slots can be left empty, and the system can be asked to provide them in the output.
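The slot mechanism can be pictured with a little string-building sketch; the `$slot$` marker syntax below is illustrative, not necessarily Macaw's exact input format:

```python
def format_slots(outputs, **inputs):
    """Build one input string: empty slots the model should fill, followed by
    the slots we supply. (The marker syntax is my assumption, for illustration.)"""
    out_part = " ; ".join(f"${name}$" for name in outputs)
    in_part = " ; ".join(f"${name}$ = {value}" for name, value in inputs.items())
    return f"{out_part} ; {in_part}"

prompt = format_slots(
    ["answer", "explanation"],
    question="Which force pulls objects to the ground?",
)
# -> "$answer$ ; $explanation$ ; $question$ = Which force pulls objects to the ground?"
```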
The way the system was trained was by using 7 datasets designed for training question-and-answer systems. During the training, the neural network was trained on *all* desired combinations of input slots filled in or empty or asked to be generated in the output for every training example. This is what enables the system to be versatile with the "permutations".
The system beat the competition in "general knowledge", "story understanding", "steps", "meta-reasoning", "hypotheticals", "science", "explanation", and "false presupposition" (though it did badly at "false presupposition" -- all the others did even worse). It got beat in "human behavior" by a system called CBQA, it got beat in "spatial" by GPT-3 and a system called Jurassic, it got beat in "entity tracking" by GPT-3 and CBQA, and it got beat in "common sense" by GPT-3.
It might be worth elaborating on what these competition categories mean. An example of a "general knowledge" question might be "What is shiplap?", "story understanding" might be "I crashed my car. When I finally left the hospital, all I wanted to do was sleep. I had to call a taxi. Why was I in hospital?", "steps" might be "What are the steps involved in replacing a light bulb?", "meta-reasoning" might be "What is an incorrect implication of a cat being an animal?", "hypotheticals" might be "If plastic was a conductor, then would a plastic spoon conduct electricity?", "science" might be "What gases are involved in photosynthesis?", "explanation" might be "Why do houses have roofs?", "false presupposition" might be "What year did Tom Hanks land on the moon?", "human behavior" might be "I feel sad. What could I do to cheer myself up?", "spatial" might be "John is left of Sue. Where is Sue relative to John?", and "entity tracking" might be "My house is red. I painted my house white. What color is my house now?"
Sometimes Macaw is able to come up with surprisingly insightful answers, for example as we've seen above:
Q: How would you make a house conduct electricity?
A: paint it with a metal paint
But sometimes it gets things wrong. One of the categories it struggles with is "entity tracking." Here's an example of a wrong answer in that category:
Q: My house is red. I painted my neighbor's house white. What color is my house now?
A: half red, half white
Another area where it struggles is the "false presupposition" category. It correctly answered "What year did Tom Hanks land on the moon?" with "no year", but when asked about Mark Zuckerberg, didn't do so well.
Q: How old was Mark Zuckerberg when he founded Google?
Sometimes Macaw gives answers so brief you might not consider them correct.
Q: How does photosynthesis work?
A: sun shines on plants
All in all, a pretty impressive system but not perfect.
|Will China invade Taiwan? I don't know but I will pass along this guy's commentary for your consideration.|
|This robot walks, flies, skateboards, and walks along slacklines. I post robot videos all the time and most of them have only a few hundred views, but this one, because it somehow managed to get on the Veritasium channel, will probably get millions (lucky robot). Oh, it's already got 1.7 million. Maybe you're one of them and have already seen it? If not, it's a bipedal robot and quadcopter combined into one robot, and is able to use the legs and propellers together.|
| Using "words" to more efficiently assemble genomes. "Whole-genome assembly of long reads in minutes on a personal computer", inspired by natural language processing. If you don't have a reference genome for the organism you are sequencing, you can use a data structure called a de Bruijn graph to figure out how the DNA sequences from the "reads" that come off the DNA sequencing machines fit together. The idea is that the reads are further broken down into "k-mers", where "k" is a fixed number of DNA bases. So for example a 20-mer has 20 DNA bases. These "k-mers" are usually handled in an overlapping manner, for example the first might be bases 1 to 20 and the next might be 2 to 21 and so on.
The de Bruijn graph takes these k-mers and figures out which ones overlap. This is done by matching k-1 bases from one k-mer to k-1 bases of the next, such that one is a "scrolled over" version of the other in the overlapping region. The formal definition, if that is more helpful, is that a de Bruijn graph of order k is a directed graph where nodes are strings of length k (k-mers), and two nodes x and y are linked by an edge if the suffix of x of length k-1 is equal to the prefix of y of length k-1.
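That definition translates almost directly into code; here's an illustrative toy sketch, not any particular assembler's implementation:

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Order-k de Bruijn graph: nodes are k-mers, with an edge x -> y whenever
    the length-(k-1) suffix of x equals the length-(k-1) prefix of y."""
    kmers = {read[i:i + k] for read in reads for i in range(len(read) - k + 1)}
    by_prefix = defaultdict(list)
    for km in kmers:
        by_prefix[km[:k - 1]].append(km)
    # For each k-mer, its successors are the k-mers whose prefix is its suffix.
    return {x: sorted(by_prefix[x[1:]]) for x in kmers}

graph = de_bruijn_graph(["ACGTAC"], k=4)
# ACGT -> CGTA -> GTAC: each successive 4-mer is the previous one scrolled by one base.
```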
The key insight here is that language models use "words" instead of letters as "tokens" -- small building blocks. Taking inspiration from this concept, these researchers invented a data structure they call a "minimizer-space de Bruijn graph". Instead of single nucleotides being the "tokens" of the de Bruijn graph, they use short sequences of nucleotides, which they have chosen to call "minimizers". They say the resulting genome is represented in what they call "minimizer space". "Minimizer-space de Bruijn graphs store only a small fraction of the nucleotides from the input data while preserving the overall graph structure, enabling them to be orders of magnitude more efficient than classical de Bruijn graphs. By doing so, we can reconstruct whole genomes from accurate long-read data in minutes -- about a hundred times faster than state-of-the-art approaches -- on a personal computer, while using significantly less memory and achieving similar accuracy."
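Here's a sketch of one common minimizer scheme, picking the lexicographically smallest m-mer in each window of w consecutive m-mers; rust-mdbg's actual selection rule may differ:

```python
def minimizers(seq, m, w):
    """For each window of w consecutive m-mers, keep the lexicographically
    smallest one, collapsing repeats between adjacent windows. The result is a
    far shorter sequence of 'tokens' than the raw nucleotides."""
    mers = [seq[i:i + m] for i in range(len(seq) - m + 1)]
    picked = []
    for i in range(len(mers) - w + 1):
        best = min(mers[i:i + w])
        if not picked or picked[-1] != best:
            picked.append(best)
    return picked

tokens = minimizers("ACGTACGTT", m=3, w=3)  # a handful of tokens for 9 bases
```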
For example, on a set of human genome reads, Hifiasm (competing algorithm) took 58 hours and 41 minutes, Peregrine took 14 hours and 8 minutes, while rust-mdbg -- this program, which, as you might have guessed, is written in the Rust programming language -- took 10 minutes and 23 seconds. That's minutes, not hours. Hifiasm used 195 GB of memory, Peregrine used 188 GB of memory, and rust-mdbg used just 10 GB of memory. Hifiasm was able to assemble 94.2% of the genome, Peregrine was able to assemble 96.2%, and rust-mdbg was able to assemble 95.5%. So the accuracy of the various algorithms is about the same.
| Bioinformatics programming language. There's a language called Seq that is designed for bioinformatics. On the home page of the site, it presents code like "from bio import *", "s = s'ACGTACGT'" (DNA sequence), "print(~s)" (reverse complement), and "kmer = Kmer(s)" (convert to k-mer) that looks like Python code. However, this language actually isn't Python.
Instead, it's a totally new language that was built from the ground up specifically for bioinformatics that uses syntax that looks identical to Python (actually a subset of Python) but with some new datatypes added. Because it's actually a new language and not Python, they were able to design it for high performance. It's completely statically typed and compiled, unlike Python, which is dynamically typed and interpreted, and it is capable of running code in parallel, something regular Python with the global interpreter lock doesn't do. It uses the LLVM compiler toolchain, the same system used by the Clang C/C++ compiler (used by Apple, among others).
Seq internally represents DNA sequences using only 2 bits per base pair. It additionally has a built-in data type called the "k-mer". A "k-mer" represents a DNA sequence of fixed length, represented by "k". For example, a 20-mer is a DNA sequence of 20 base pairs.
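For illustration, packing bases at 2 bits each looks like this in plain Python; the particular base-to-code assignment here is my assumption, not necessarily the encoding Seq uses internally:

```python
# Two bits suffice because there are exactly four bases.
ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> int:
    """Pack a DNA string into an integer at 2 bits per base."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | ENCODE[base]
    return bits

packed = pack("ACGT")  # 0b00011011 == 27: four bases fit in a single byte
```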
Seq has a built-in operation to compute the reverse complement of a sequence or k-mer. The reverse complement operation is needed because of the double-stranded nature of DNA. DNA base pairs on one side of the double helix will match up with complementary base pairs on the other side, and the other side, if it were read as part of a sequence read by a DNA sequencing machine, would be read in the reverse direction. Therefore the reverse complement operation replaces A-bases with T-bases and vice versa, replaces C-bases with G-bases and vice versa, and reverses the whole sequence. This is optimized in Seq internally with a 4-base-long lookup table, and sequences longer than 4 base pairs are recursively subdivided until they are down to 4 bases.
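Here's what the operation itself does, sketched in plain Python; Seq's internal 4-base lookup table is an optimization of this same computation:

```python
# str.maketrans swaps A<->T and C<->G; reversing the string finishes the job.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Complement every base, then reverse the whole sequence."""
    return seq.translate(COMPLEMENT)[::-1]

reverse_complement("AACGTT")  # -> "AACGTT": this sequence is its own reverse complement
```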
For parallelization, Seq has a "|>" operator that works analogously with the pipe operator in Unix, except it works with functions. The functions are launched in separate threads and data is "piped" out of one and into the next as soon as the data is available. This enables bioinformatics researchers, who are biologists and not computer scientists, to easily write highly parallel code.
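A rough analogue of this pipelining in Python, with one thread per stage and queues in between; this is an illustrative sketch of the idea, not how Seq implements "|>":

```python
import queue
import threading

_DONE = object()  # sentinel marking the end of the stream

def pipe(items, *stages):
    """Run each stage in its own thread; items flow through queues so a stage
    starts consuming as soon as the previous stage produces anything."""
    first_q = queue.Queue()
    in_q = first_q
    for stage in stages:
        out_q = queue.Queue()

        def worker(stage=stage, in_q=in_q, out_q=out_q):
            while (item := in_q.get()) is not _DONE:
                out_q.put(stage(item))
            out_q.put(_DONE)  # pass the sentinel downstream

        threading.Thread(target=worker, daemon=True).start()
        in_q = out_q
    for item in items:
        first_q.put(item)
    first_q.put(_DONE)
    results = []
    while (item := in_q.get()) is not _DONE:
        results.append(item)
    return results

# Count G/C bases in each read; with more stages chained, later reads are still
# being processed upstream while earlier ones finish downstream.
gc_counts = pipe(["ACGT", "GGCC", "ATAT"], lambda s: s.count("G") + s.count("C"))
```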
To give you a picture of how all these fit together in a real bioinformatics application, consider a case where you are given a gigantic set of "reads", where each "read" is around 100 base pairs, and asked to figure out where all the "reads" align with a reference genome for whatever species you're dealing with, for example the human genome.
The way this is typically done is first take each read and divide it up into k-mers, such that the k-mers overlap and each is offset by just 1 base in the original sequence. So if you're doing 20-mers, the first will have the first 20 bases, the second will have bases 2 to 21, and so on. These k-mers are then used to look up locations in the reference genome using a giant lookup table that has been prepared in advance. Crucially, this will only work for half of the k-mers -- to find the alignment of the other half, you have to look up the reverse complement.
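The index-and-seed step described above might look like this toy sketch (illustrative only, not a real aligner):

```python
from collections import defaultdict

def index_reference(ref: str, k: int):
    """Lookup table prepared in advance: every k-mer in the reference genome
    mapped to the positions where it occurs."""
    table = defaultdict(list)
    for i in range(len(ref) - k + 1):
        table[ref[i:i + k]].append(i)
    return table

def seed_read(read: str, index, k: int):
    """Look up each overlapping k-mer of a read; return (offset_in_read,
    position_in_reference) candidate hits."""
    return [(i, pos)
            for i in range(len(read) - k + 1)
            for pos in index.get(read[i:i + k], [])]

ref_index = index_reference("TTACGTACGGA", k=4)
hits = seed_read("ACGTACG", ref_index, k=4)
# Most hits agree that position_in_reference - offset_in_read == 2,
# suggesting the read aligns at reference position 2.
```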
Once this process has been used to find the approximate locations of all the k-mers in the reference genome, an algorithm called the Smith-Waterman algorithm is used to find the minimum "edit distance" between the k-mers and the reference genome, where "edit" allows not just for substitutions of bases with other bases, but insertions and deletions as well. What you're left with at the end of the process is a precisely aligned genome from the reads (as precisely as possible for the accuracy of your DNA sequencing machines anyway).
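In its simplest form, that dynamic program is plain edit distance; Smith-Waterman proper adds alignment scoring and local (rather than global) alignment, but the recurrence has the same shape:

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of substitutions, insertions, and deletions turning a
    into b, computed row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute, or match
        prev = cur
    return prev[-1]

edit_distance("ACGT", "AGGT")  # -> 1: a single C -> G substitution
```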
You might be wondering, what if you're sequencing some new species and don't have a reference genome? Many of the processes are the same. The reads are split into k-mers and their reverse complements computed. Because there is no reference genome, at that point the process diverges, and instead of doing a lookup into a reference genome, a data structure called a de Bruijn graph is constructed. A de Bruijn graph uses k-mers for nodes and the edges between the nodes represent k-mers with a new base added to one end. That is, a de Bruijn graph with k = 20 would have nodes with 20-mers where edges connect nodes where 19 of the 20 bases match and the edge represents scrolling the bases over by 1 and adding a new base on the end. This data structure allows software to identify all the places where the reads overlap and represent the same DNA sequence in the original genome before it was divided up into small "reads".
In performance tests, in 6 out of 8 benchmarks, Seq beat all the competition. In one of the remaining two, it was edged out by C++ and Julia, and in the other by C++ only. The fact that Seq can beat C++ is an astonishing feat. It means a bioinformatics researcher can write code that performs *better* than low-level C++ code in a high-level language that is essentially Python. This is accomplished with the bioinformatics-specific optimizations described above (2-bit representation of base-pairs, optimized reverse complement computation with 4-base lookup table, parallelization, and so on).