Boulder Future Salon News Bits

Robophobia? Alrighty then. Somehow I suspect this was scripted (by humans) and isn't really an AI speaking. Would like to hear an actual GPT-3 generated version. I'll bet it would be less buzzwordy.

The video lectures for MIT's introduction to deep learning course are available online for free. The course is taught by Alexander Amini and Ava Soleimany. Topics include: deep sequence modeling, deep computer vision, deep generative modeling, de-biasing facial recognition systems, deep reinforcement learning, limitations and new frontiers, pixels-to-control learning, neurosymbolic hybrid AI, generalizable autonomy in robotics, neural rendering, and ML for scent.

"A Japanese robotics startup has invented a smart mask that translates into eight languages." "The cutouts on the front are vital for breathability, so the smart mask doesn't offer protection against the coronavirus. Instead, it is designed to be worn over a standard face mask, explains Donut Robotics CEO Taisuke Ono. Made of white plastic and silicone, it has an embedded microphone that connects to the wearer's smartphone via Bluetooth. The system can translate between Japanese and Chinese, Korean, Vietnamese, Indonesian, English, Spanish and French."

So if you're Japanese, you can speak in English, but you're on your own when it comes to understanding the reply in English?

"Using artificial intelligence to smell the roses." "We now can use artificial intelligence to predict how any chemical is going to smell to humans." Hmm pretty bold claim.

"The power of machine learning is that it is able to evaluate a large number of chemical features and learn what makes a chemical smell like, say, a lemon or a rose or something else. The machine learning algorithm can eventually predict how a new chemical will smell even though we may initially not know if it smells like a lemon or a rose."

"It allows us to rapidly find chemicals that have a novel combination of smells."

"Anandasankar Ray, a professor of molecular, cell and systems biology, and Joel Kowalewski, a student in the Neuroscience Graduate Program, showed the activity of odorant receptors successfully predicted 146 different percepts of chemicals. To their surprise, few rather than all odorant receptors were needed to predict some of these percepts. Since they could not record activity from sensory neurons in humans, they tested this further in the fruit fly (Drosophila melanogaster) and observed a similar result when predicting the fly's attraction or aversion to different odorants.'

These people obviously have jobs waiting for them in the food, flavor, and fragrance industries. Or maybe a startup opportunity in those industries. The article notes that the University of California Riverside Office of Technology Partnerships is patenting the technology and licensing it to a company that already exists, Sensorygen Inc, founded by one of the researchers.

Anyway, this study builds on previous research on human perceptual responses, the biochemistry of the odorant receptors that detect chemicals, and genetic studies. They started with 138 receptors but winnowed the list down to 34, because those were the only ones for which they could predict ligands using a machine learning model, the first of several in this research. A "ligand" is any molecule capable of producing a biological signal when it binds with another molecule, usually a protein. The "binding" is reversible and comes from intermolecular forces, such as ionic bonds, hydrogen bonds, or Van der Waals forces, rather than from a covalent bond.

Across those 34 receptors, they had 54 variants in the alleles for the genes that encode them. A second machine learning model learned which chemical features would activate each of the 34 receptors, and how sensitive each receptor would be. The final machine learning model learned how to go from activation levels across the 34 receptors to human perceptual descriptors. It was trained on a psychophysical database with 146 perceptual descriptors for 150 chemicals.
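
None of the code is in the article, but here's a rough sketch of what a three-stage pipeline like that could look like. Every name, number, and model choice below is a placeholder of mine, not theirs, and the real work uses much richer chemoinformatic features than random vectors:

    # Hypothetical sketch of the three-stage pipeline described above; the data
    # here is random placeholder data and the model choices are mine, not the authors'.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.multioutput import MultiOutputClassifier

    n_chemicals, n_features, n_receptors, n_percepts = 150, 200, 34, 146

    X_chem = np.random.rand(n_chemicals, n_features)                  # chemical feature vectors
    y_ligand = np.random.randint(0, 2, (n_chemicals, n_receptors))    # ligand / non-ligand per receptor
    y_activation = np.random.rand(n_chemicals, n_receptors)           # receptor activation levels
    y_percepts = np.random.randint(0, 2, (n_chemicals, n_percepts))   # human descriptors ("lemon", "rose", ...)

    # Stage 1: chemical features -> is this chemical a ligand for each receptor?
    ligand_model = MultiOutputClassifier(RandomForestClassifier()).fit(X_chem, y_ligand)

    # Stage 2: chemical features -> how strongly does each receptor respond?
    activation_model = RandomForestRegressor().fit(X_chem, y_activation)

    # Stage 3: receptor activation pattern -> perceptual descriptors
    percept_model = MultiOutputClassifier(RandomForestClassifier()).fit(y_activation, y_percepts)

    # For a brand-new chemical, chain stages 2 and 3 to predict how it might smell.
    new_chem = np.random.rand(1, n_features)
    predicted_percepts = percept_model.predict(activation_model.predict(new_chem))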

Here's one for the "futurology" department. Lex Fridman estimates that, around 2032, a neural network with as many parameters as the human brain has synapses will cost about what GPT-3 costs today.
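
The post doesn't show the arithmetic, but a crude version of it is easy to reconstruct. Using figures I'm supplying myself (not Fridman's exact assumptions): GPT-3 at roughly 175 billion parameters, the brain at roughly 100 trillion synapses, and the cost per parameter halving every 15 months or so, you land in the same neighborhood:

    # Back-of-the-envelope only; every number here is an assumption of mine.
    import math

    gpt3_params = 175e9        # GPT-3's parameter count
    brain_synapses = 1e14      # commonly cited ballpark for synapses in a human brain
    halving_years = 1.3        # assumed time for cost per parameter to halve

    ratio = brain_synapses / gpt3_params           # ~570x more parameters needed
    years = math.log2(ratio) * halving_years       # ~12 years, i.e. roughly 2032
    print(round(ratio), round(years, 1))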

Apparently there's a "lizard-like creature" that can live for over 100 years in New Zealand. Its genome was sequenced to figure out whether it is a lizard or what. "Tuatara have been out on their own for a staggering amount of time, with prior estimates ranging from 150-250 million years, and with no close relatives the position of tuatara on the tree of life has long been contentious. Some argue tuatara are more closely related to birds, crocodiles and turtles, while others say they stem from a common ancestor shared with lizards and snakes. This new research places tuatara firmly in the branch shared with lizards and snakes, but they appear to have split off and been on their own for about 250 million years -- a massive length of time considering primates originated about 65 million years ago, and hominids, from which humans descend, originated approximately six million years ago."

The genome is going to be further studied to see if people can figure out how it achieves its longevity and for clues as to what it can tell us about the world of hundreds of millions of years ago. This research is also notable for its close collaboration with the Māori, the indigenous people of New Zealand.

"The 'invisible' words that shaped Dickens classics also lead audiences through Spielberg dramas. And according to new research, these small words can be found in a similar pattern across most storylines, no matter the length or format."

"When telling a story, common but invisible words -- a, the, it -- are used in certain ways and at certain moments."

"We all have an intuitive sense of what defines a story. Until now, no one has been able to objectively see or measure a story's components."

"In a computer analysis of nearly 40,000 fictional narratives, including novels and movie dialogues, the researchers tracked authors' use of pronouns (she, they), articles (a, the), and other short words, unveiling a consistent 'narrative curve:'

"Staging: Stories begin with a lot of prepositions and articles like 'a' and 'the.' For example, 'The house was next to the lake, below a cliff.' These words help authors set the scene and convey the most basic information the audience needs to understand concepts and relationships throughout the story."

"Plot progression: Once the stage is set, authors incorporate more and more interactional language, including auxiliary verbs, adverbs and pronouns. For example, 'the house' becomes 'her home' or 'it.'"

"Cognitive tension: As a story progresses toward its climax, cognitive-processing words rise -- action-type words, such as 'think,' 'believe,' 'understand' and 'cause,' that reflect a person's thought process while working through a conflict."


"The research team compared the established fictional story structure to more than 30,000 factual texts, including 28,664 New York Times articles, 2,226 TED Talks and 1,580 Supreme Court opinions. Though many shared striking similarities, each genre had unique structures that reflected the different relationships between the authors and their audiences."

Not mentioned in the article is that they researched a third question, which is whether popular stories have a different structure from unpopular stories. The perhaps surprising answer is no. To determine this, they looked at romance novels for which they had user ratings, and movie scripts. They considered the movie script data to be the more reliable. IMDB granted them access to movie ratings for all the movies for which they had screenplays.

The fact that there was no difference implies that it is not the structure of the stories, but the content that determines popularity -- the underlying themes rather than the staging or plot progression.

LinkedIn has open-sourced DeText, their deep learning toolkit for text analysis. "DeText is analogous to the accessories and attachments that come with power tools, such as a cordless drill. While the drill's 'engine' may be inherently powerful, it is important to use the right attachment to achieve your desired result."

"Similarly, with DeText, users can swap NLP models depending on the type of task and leverage the models to make search and recommendation systems better than ever before. For example, at LinkedIn, we might use LiBERT (BERT trained on LinkedIn data) to better understand the meaning behind a text query and capture user intention in search (e.g., a search for 'sales consultant at Insights' is looking for sales consultant jobs at the company, Insights)."

"DeText has been applied in various applications at LinkedIn, including search/recommendation ranking, query intent classification, and query autocompletion."

A machine-readable dataset of all 1.7 million arXiv papers, with titles, authors, categories, abstracts, and other data fields, is available from Kaggle for free. In addition, all the full-text PDFs are available as a separate dataset. They say it will be updated weekly.
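
If you want to poke at the metadata, it comes as one big JSON-lines file (one paper per line). Something like this should do it, though the file name and field names here are my assumptions from the Kaggle listing, so check them against the actual download:

    # Stream the arXiv metadata dump one record at a time instead of loading it all.
    # File name and field names are assumptions; verify against the Kaggle dataset.
    import json

    def iter_papers(path="arxiv-metadata-oai-snapshot.json"):
        with open(path, encoding="utf-8") as f:
            for line in f:
                yield json.loads(line)

    for paper in iter_papers():
        if "cs.LG" in paper.get("categories", ""):
            print(paper.get("title", "").strip())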

"YogaDL: a better approach to data loading for deep learning models". "YogaDL provides both a random-access layer and a sequential-access layer." The random-access part helps with shuffling the data and sharding the data into pieces for different machines to do at the same time. The sequential-access part does prefetching and parallelization to boost performance. No idea what any of this has to do with yoga.

Polygrid: A "revolutionary" way to browse large image catalogs. I had a look at the "Wikiart" demo. It looks like it puts images next to each other if they are visually similar, even if the subject matter is different. For example, it will put two swirly images next to each other, even though one has a swirly sky and the other has swirly trees. Or it'll put two blue images next to each other, even though one is a portrait and the other is blue sky and clouds, though it does seem to usually put portraits together. It put a couple of paintings with roads going off to infinity next to each other. There are a couple of odd images, like the M.C. Escher ones, that it seems to stick off by themselves because it doesn't know what to put them with. And no, they don't say how the system works, other than "deep learning".

If I were to guess, they're running the images through some kind of encoder, and then they're calculating "distances" between the encodings. But what do I know.
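
For whatever that guess is worth, the minimal version of it would look something like this: embed each image with a pretrained CNN and compare embeddings by cosine distance. Everything here (the model, the distance, the file names) is my speculation, not anything Polygrid has published:

    # Speculative sketch of "encoder + distance" image similarity, not Polygrid's method.
    import torch
    from torchvision import models
    from PIL import Image

    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights)
    model.fc = torch.nn.Identity()      # drop the classifier head, keep the 2048-d features
    model.eval()
    preprocess = weights.transforms()   # the resizing/normalization these weights expect

    def embed(path):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            return model(img).squeeze(0)

    def distance(path_a, path_b):
        a, b = embed(path_a), embed(path_b)
        return 1 - torch.nn.functional.cosine_similarity(a, b, dim=0).item()

    # Images with small pairwise distances would get placed near each other in the grid,
    # e.g. distance("swirly_sky.jpg", "swirly_trees.jpg") should come out small.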

"Prof. Vijay Janapa Reddi of Harvard, the TensorFlow Lite Micro team, and the edX online learning platform are sharing a series of short TinyML courses this fall that you can observe for free, or sign up to take and receive a certificate."

"TinyML is one of the fastest-growing areas of Deep Learning. In a nutshell, it's an emerging field of study that explores the types of models you can run on small, low-power devices like microcontrollers."

"TinyML sits at the intersection of embedded-ML applications, algorithms, hardware and software. The goal is to enable low-latency inference at edge devices on devices that typically consume only a few milliwatts of battery power. By comparison, a desktop CPU would consume about 100 watts (thousands of times more!). Such extremely reduced power draw enables TinyML devices to operate unplugged on batteries and endure for weeks, months and possibly even years --- all while running always-on ML applications at the edge/endpoint."

"Scientists inspired by Star Wars create artificial skin able to feel." It's a 1 square cm device with 100 sensors. The researchers "say it can process information faster than the human nervous system, is able to recognise 20 to 30 different textures and can read Braille letters with more than 90% accuracy."

"A demonstration showed the device could detect that a squishy stress ball was soft, and determine that a solid plastic ball was hard."

Giant 60-foot-tall 'Gundam' robot takes its first steps in Japan. Alrighty then.

Ginormous list of tools for your robotics research. Not just to do with the robots themselves but the entire development process. Grouped into "communication and coordination", "documentation and presentation", "requirements and safety", "architecture and design", "frameworks and stacks", "development environment", "simulation", "electronics and mechanics", "sensor processing", "prediction", "behavior and decision", "planning and control", "user interaction", "operation system", and "datasets" categories.

"Open problems in robotics." Oh, I have a feeling this is going to be a long list. Let's jump in.

Motion planning, multiaxis singularities (singularities in the equations of motion for multiaxis robotic arms), simultaneous localization and mapping, the lost robot problem (if you turn it off in one room, move it to a different room, and turn it on, it can't figure out where it is), object manipulation, depth estimation, position estimation, affordance discovery (predicting how an object will react when you manipulate it), and scene understanding.

Ok, so not a huge list -- 9 things. But 9 big things.