Boulder Future Salon

Thumbnail
"How developing AI products is different from traditional software." Rather than doing this as a single article, Andrew Ng broke it up into 5 short "letters" (I'll give you links to all 5 below).

So, what are the ways developing AI products is different from traditional software? 1. Unclear technical feasibility, 2. complex product specification, 3. need for data, and 4. additional maintenance cost.

First for "unclear technical feasibility, it's relatively well understood what a traditional web app or mobile app can do, and if you can make a wireframe or some other reasonable specification, you can probably build it. But for AI, you have to gather data and run experiments before you even have a clue as to whether your idea will work or not. "AI startups bring higher technical risk than traditional software startups because it's harder to validate in advance if a given technology proposal is feasible."

Related to that, for "complex product specification", you can make a specification for a traditional app, such as a wireframe, that's precise enough, but for an AI product, it's hard to specify exactly what it needs to do. How do you write an accurate specification for "acceptably safe self-driving car"?

The approach Andrew Ng uses is to break the product up into "slices" and, for each "slice", define an acceptable performance metric.
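Just to make that concrete, here's a hypothetical sketch (in Python) of what a "sliced" spec might look like; the slice names, metric, and thresholds are all invented for illustration, not taken from Ng's letters:

    # Hypothetical illustration of a "sliced" product spec. Every slice
    # name and threshold here is made up, not from Ng's letters.
    acceptance_criteria = {
        "highway, clear weather": {"metric": "interventions_per_1k_miles", "max": 0.1},
        "urban, daytime":         {"metric": "interventions_per_1k_miles", "max": 0.5},
        "urban, night, rain":     {"metric": "interventions_per_1k_miles", "max": 1.0},
    }

    def slice_is_acceptable(slice_name: str, measured: float) -> bool:
        """The product is acceptable only when every slice meets its own bar."""
        return measured <= acceptance_criteria[slice_name]["max"]

    print(slice_is_acceptable("urban, daytime", 0.3))   # True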

For "need for data", with traditional software, you don't usually need a large amount of data -- instead you talk to users and figure out what they want. But for AI startups, you often don't have the data you need, whether it's online shopping data or medical data. "To work around this chicken-and-egg problem, some AI startups start by doing consulting or NRE (non-recurring engineering) work."

And for "additional maintenance cost", for traditional software, the boundary conditions, which is to say, the range of valid inputs, is known and indeed, traditional software often checks the input to make sure, for example, it's getting an email address in an email address input field. "But for AI systems, the boundary conditions are less clear. If you have trained a system to process medical records, and the input distribution gradually changes (data drift/concept drift), how can you tell when it has shifted so much that the system requires maintenance?"

Thumbnail
Starlink satellite tracker. Real-time animation of Starlink satellites. Not a resource affiliated with SpaceX or Starlink. Data comes from space-track.org.

Thumbnail
Generators that extrapolate beyond their training data. In this case, generators for sequence data, such as natural language text in the form of movie reviews, and protein sequences, specifically the ACE2 protein, which you may have heard of (it's the protein the SARS-CoV-2 virus binds to in order to gain entry into human cells).

They're using a generator + discriminator setup like standard generative adversarial networks (GANs), but here the generator is an encoder-decoder pair and the trick is learning how to modify the encoding. They call this encoding the "latent space", just so you know, so you don't get confused by terminology.

So the idea is you want to "extrapolate" some attribute -- in the case of movie reviews, they chose to make them more positive, and in the case of the ACE2 protein, they chose to make it have a lower delta delta G (ddG), which relates to the stability of the protein -- in essence they are trying to make a more stable ACE2. The key to doing this is learning what modifications to the latent space result in an extrapolation of the attribute of interest.
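As a conceptual sketch only (GENhance's actual architecture is more involved than this), the latent-space trick looks something like the following; the toy linear encoder/decoder and the random "direction" stand in for trained components:

    import torch
    import torch.nn as nn

    # Toy stand-ins for a trained encoder-decoder pair and a learned
    # attribute direction (random here, just to show the mechanics).
    encoder = nn.Linear(16, 4)     # sequence embedding -> latent code
    decoder = nn.Linear(4, 16)     # latent code -> sequence embedding
    direction = torch.randn(4)     # learned "improves the attribute" direction

    def extrapolate(x: torch.Tensor, step: float = 1.0) -> torch.Tensor:
        """Encode, push the latent code along the attribute direction
        (possibly beyond anything seen in training), then decode."""
        z = encoder(x)
        return decoder(z + step * direction)

    x = torch.randn(1, 16)
    print(extrapolate(x, step=2.0).shape)   # torch.Size([1, 16])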

For language, it was able to transform the movie review "You'll laugh for not quite an hour and a half, but come out feeling strangely unsatisfied" into "You will laugh very well and laugh so much you will end up feeling great afterwards." I'm sure the studio heads will love this for making reviews for movie ads. Human says, "A dark, dull thriller with a parting shot that misfires" and the ad says, "A dark and compelling thriller that ends with a bitter, compelling punch. -- GENhance AI."

For evaluating the protein system, they generated 250,000 sequences, then computed the top sequences' ddG values with software called FoldX. GENhance outperformed all the baseline models they chose to compare it against, on all metrics.

Thumbnail
Special announcement from Jensen Huang, CEO of Nvidia: the inauguration of Cambridge-1. Cambridge-1 is a supercomputer dedicated to biology research, located in Cambridge, UK, where the structure of DNA was discovered. The system has 80 Nvidia DGX A100 systems, each of which contains Nvidia A100 Tensor Core GPUs, BlueField-2 DPUs, and Nvidia HDR InfiniBand networking.

The system will be used by AstraZeneca, the NHS, UK Biobank, Genomics England, King's College London, Guy's and St Thomas' NHS Foundation Trust, Oxford Nanopore, and more than 80 UK healthcare startups.

Thumbnail
"'China's 'Sputnik Moment' is what Kai-Fu Lee, author of the famous book AI Superpowers, likes to call it. Five years ago, when AlphaGo -- an artificial intelligence -- based program developed by DeepMind, a startup that Google acquired in 2014 -- defeated two of the world's best human exponents "of the board game Go, it came as an eye-opener to China and its AI community.'"

The authors go on to say the way they study the comparative dynamics of AI ecosystems in the US, China, and Europe is through what they call "the triple helix", which means the interaction between governments, universities that do fundamental research, and companies.

"Chinese universities have set up AI research departments, and the number of bachelor's and master's degree programs related to AI, which added up to about 64 in 2016, jumped sixfold to 392 in 2017 and to 902 -- or 14 times more -- in 2018. By 2017, venture capital investments in Chinese AI firms accounted for 48% of the global total, surpassing those in the US for the first time. In 2020, China filed more AI patents than any other country in the world while the number of AI startups in the country had crossed 1,100 -- second only to the number in the US."

"86% of users in China trust AI-made decisions, while only 39% of Americans and 45% of Europeans do so. That's partly because cultural and political attitudes to data and privacy in China are different from those in the West."

They go on to say that in China, AI libraries and frameworks are business-oriented rather than research-oriented, and often not free. Also, the Chinese government hand-picks "national champions", a practice not followed in the US or Europe. For example, it picked Tencent for medical imaging, Baidu for autonomous driving, Alibaba for smart cities, SenseTime for facial recognition, and iFlytek for voice intelligence.

Thumbnail
Qell Acquisition Corp (QELL) is being heavily shorted. Qell is a SPAC, and if you're wondering why someone would short a company that only exists to acquire some other unknown company in the future, well, in this case, the company it's going to merge with has been announced, and it's Lilium, a company making an electric vertical-take-off-and-landing (VTOL) jet. So basically, Lilium is being shorted before it's even gone public.

Lilium's plane is basically a 7-seat (6 passengers + 1 pilot) plane with 36 ducted electric vectored-thrust engines. Interestingly, it does not use conventional lithium-ion batteries; it uses silicon-anode batteries... despite the similarity between the name "Lilium" and "lithium" -- I think "Lilium" was supposed to make you think of "helium".

Thumbnail
On the stock market, there are all these companies with names like "Such-and-such Acquisition Corp". There's a huge number of them, and huge amounts of money flow into them. What's going on?

It turns out these are things called "special purpose acquisition companies", or SPACs for short. They are basically a way of short-circuiting the initial public offering (IPO) process. SPACs, also known as "blank check companies" or "shell corporations", are companies that get listed on the stock exchange only to later merge with a second company, and that merger is the whole point -- it takes the second company public, getting it listed on the stock market. The combined company adopts the name of the second company and the ticker symbol is changed.

SPACs are being used more and more instead of IPOs. They are considered a less risky way for a company to go public than a traditional IPO. But what about for investors? At this point, you might be wondering, is investing in SPACs a good investment strategy? Uh, no.

Thumbnail
"Neural materials". A neural network generating embroidery, victorian fabric, twisted wool, straight wool, beaded fabric, and materials that are not fabrics such as turtle shell, cliff rock, and insulation foam.

Thumbnail
The Dominator Domino robot sets up 100,000 dominoes in slightly over 24 hours. 5 years in the making. About 50x faster than human domino setup champion Lily Hevesh.

Thumbnail
Robot-assisted dressing. Normally robots are programmed for collision-avoidance, but when you program a robot that way to help a person with disabilities or limited mobility get dressed, it's too slow. This robot uses an algorithm for safe, low-impact contact, plus an ability to anticipate human movements.

Thumbnail
"EvilModel: Hiding malware inside of neural network models". What they did here was devise a way of modifying whole neurons in one layer of a neural network to encode malware.

They compare their technique with one devised by Tencent that encodes the malware in the low-order bits of all the parameters of the entire model. In contrast, their system breaks the malware to be hidden into 3-byte pieces and encodes each piece into a floating-point parameter, with no special regard for high or low-order bits. So it is a simpler system.
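A minimal sketch of the 3-bytes-per-parameter idea, assuming the low 3 bytes of each big-endian float32 get replaced (the exact byte layout is my assumption, not necessarily the paper's):

    import struct

    def embed_chunk(weight: float, chunk: bytes) -> float:
        """Replace the low 3 bytes of a float32 weight with a 3-byte piece
        of the payload, keeping the high byte (sign and most of the
        exponent) so the value stays in a plausible range."""
        packed = struct.pack(">f", weight)                # 4 bytes, big-endian
        return struct.unpack(">f", packed[:1] + chunk)[0]

    def extract_chunk(weight: float) -> bytes:
        """Recover the 3 embedded bytes from a modified weight."""
        return struct.pack(">f", weight)[1:]

    w = embed_chunk(0.4172, b"ABC")
    print(w, extract_chunk(w))   # slightly altered weight, b'ABC' recovered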

They use AlexNet for the comparison, which is an old and, by today's standards, small neural network with 8 layers: 5 convolutional and 3 fully connected. With today's much larger networks, much larger viruses could be hidden.

Anyway, they found the two methods were roughly equivalent in that both could embed a 22MB virus into AlexNet and keep the accuracy above 93%; however, their method could do the encoding in 2.7 minutes while the Tencent method took 23.4 minutes.

To maintain the accuracy, they had to add batch normalization to the network, which wasn't part of the original model, and they found that changing layers close to the output worked best, while changing layers close to the input messed up the accuracy more. They also added extra neurons to the layer before the layer with the malware embedded in it. And they retrained the model, which requires access to the original training data -- something a real-world malware attacker might not have.

What they didn't do was actually show how their malware could be executed in the real world. If you got one of these malware-embedded neural networks on your computer, the neural network inference engine would simply treat it as data and you'd get a slightly less efficient neural network. In order for the malware to be extracted and turned into executable code, there already has to be some malware on your system with privileged access to system resources that is in a position to extract the malware from its encoding in the neural network and run it as executable code.

Apparently the thinking is that such an "extractor" could be small while a very large virus with a lot of sophisticated functionality could be embedded within a very large neural network.

Thumbnail
The big antivirus lie of 2021. My first thought was that this would be some coronavirus conspiracy theory, but no, he's talking about computer viruses, and antivirus software on computers. Apparently the virus makers have figured out how to target the antivirus software itself, and because antivirus software has privileged access -- it can read emails before they hit your inbox, read network input before it comes into your computer, read data before it gets written to or read from files, and so on -- infecting the antivirus system means the hacker gains privileged access to your computer.

Apparently, after decades of us being trained that antivirus software is essential for security, the world has flipped upside down, and the best security practice is now to *not* have any antivirus software on your computer. Instead, take other steps to lock down your machine, starting with not using an account with administrator privileges (which is the default on Windows and Mac machines -- most Linux systems default to having a separate root account). Make a separate generic user account for yourself and do all your day-to-day work in that account. If your machine gets infected, the virus will only have user-account access, not administrator privileges, limiting the damage it can do.

If you want to take it a step further, you can work in virtual machines, or routinely wipe and reinstall your OS, setting it back to the "factory" settings. Make sure you keep your data on a separate disk from the OS so it doesn't get wiped as well.

Thumbnail
BlenderBot 2.0, "the first chatbot that can simultaneously build long-term memory it can continually access, search the internet for timely information, and have sophisticated conversations on nearly any topic" has been open-sourced by Facebook AI Research.

"It's a significant update to the original BlenderBot, which we open-sourced in 2020 and which broke ground as the first to combine several conversational skills -- like personality, empathy, and knowledge -- into a single system."

"The model takes pertinent information gleaned during conversation and stores it in a long-term memory so it can then leverage this knowledge in ongoing conversations that may continue for days, weeks, or even months. The knowledge is stored separately for each person it speaks with, which ensures that no new information learned in one conversation is used in another."

"During conversation, the model can generate contextual internet search queries, read the results, and incorporate that information when responding to people's questions and comments. This means the model stays up-to-date in an ever-changing world."

"Today we're releasing the complete model, code, and evaluation setup, as well as two new conversational data sets -- human conversations bolstered by internet searches, and multisession chats with people that reference previous sessions -- used to train the model, so other researchers can reproduce this work and advance conversational AI research."

What they don't mention, but is obvious from the conversation examples, is that the internet search aspect enables the chatbot to lie about itself. For example, you say, "My favorite TV show is Wandavision", and it says, "Who is your favorite character on WandaVision? Mine is Elizabeth Olsen." As soon as you say "Wandavision", it does an internet search for "Wandavision" and basically makes up a favorite character on the spot (Elizabeth Olsen is an actress on the show, not a character).

You say, "What do you do?" and it says, "I'm a financial analyst for a large company. How about you? Are you still in school?" You say, "What company? I am an AI Researcher, and I helped create you actually." It says, "That's awesome! I work for one of the big four firms. It's a great company to work for." Obviously BS but it's doing internet searches in the background to help it make up BS. Takes BSing to the next level.

Thumbnail
"Recent breakthroughs in deep learning have made it possible to replicate many of the subtleties of human speech. These voices pause and breathe in all the right places. They can change their style or emotion. You can spot the trick if they speak for too long, but in short audio clips, some have become indistinguishable from humans."

The voice engine WellSaid Labs constructed "uses two primary deep-learning models. The first predicts, from a passage of text, the broad strokes of what a speaker will sound like -- including accent, pitch, and timbre. The second fills in the details, including breaths and the way the voice resonates in its environment."
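Read as a pipeline, that's something like the following sketch; the function names and the data handed between the stages are my guesses, not WellSaid Labs' actual interfaces:

    def broad_model(text: str) -> dict:
        # Stand-in: predict the broad strokes of the speaker's voice.
        return {"text": text, "accent": "en-US", "pitch": "medium", "timbre": "warm"}

    def detail_model(spec: dict) -> bytes:
        # Stand-in: fill in breaths and room resonance, render the waveform.
        return f"<audio rendered for {spec['text']!r}>".encode()

    def synthesize(text: str) -> bytes:
        return detail_model(broad_model(text))

    print(synthesize("These voices pause and breathe in all the right places."))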

"The process requires at least an hour or two of audio and a few weeks of labor to develop a realistic-sounding synthetic replica."

Make sure you play the audio samples.

Thumbnail
If you thought the "cloud native" world was simple, check out this "interactive map" of the "cloud native landscape". Any questions?

Thumbnail
"More than 700 imaging satellites are orbiting the earth, and every day they beam vast oceans of information -- including data that reflects climate change, health and poverty -- to databases on the ground. There's just one problem: While the geospatial data could help researchers and policymakers address critical challenges, only those with considerable wealth and expertise can access it."

"Now, a team based at UC Berkeley has devised a machine learning system to tap the problem-solving potential of satellite imaging, using low-cost, easy-to-use technology that could bring access and analytical power to researchers and governments worldwide."

So, what this is really all about is a "one-time, task agnostic encoding that transforms each satellite image into a vector of features".

The idea is you just download a table of "features" and spare yourself the trouble of getting and processing the original images, which amounts to 80 terabytes (also known as terror-bytes) of data per day.

If you're wondering what the features are, they're things like forest cover, nighttime lights, population density, and road length, plus a few things neural networks have been trained to guess that you might not think of, like average house price and average income of the neighborhood.

However, beyond that, the researchers want a system where people using the data can tell them additional features that they want, if they can't derive the features they want themselves from the data provided, or by combining the data provided with their own data. So it's possible more features could get added in the future.
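The intended workflow, as I understand it, is something like this sketch: download the feature table, join it with your own ground-truth measurements, and fit a simple model on top. The file and column names here are hypothetical:

    import pandas as pd
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    features = pd.read_csv("satellite_features.csv")   # the downloaded table
    labels = pd.read_csv("my_ground_truth.csv")        # your own measurements
    data = features.merge(labels, on=["lat", "lon"])

    X = data.filter(like="feature_")   # the task-agnostic image encoding
    y = data["crop_yield"]             # e.g., from your own field surveys

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = Ridge(alpha=1.0).fit(X_train, y_train)
    print("R^2 on held-out locations:", model.score(X_test, y_test))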