Boulder Future Salon

Thumbnail
A New Zealand company called Futureverse Platform Ltd published a 'Futureverse' whitepaper. The idea is to make a 'metaverse' that's decentralized. How to decentralize it? Blockchain everything. The idea is to make a virtual machine that handles non-fungible tokens (NFTs), a "GAS" economy (don't ask me why they use all caps), fungible assets, a decentralized (DeFi) exchange, oracles, and other blockchain services.

The NFT system provides services for minting, royalties, and structuring of metadata and content formats for asset interoperability. "The NFT runtime also has unique features like: network-wide royalty enforcement, native multi-wallet split and tiered royalties for creators, native NFT to NFT swaps, native Static and cold minting options, and build NFT DApps without needing to develop or deploy smart contracts."

The "GAS" economy means paying "gas" fees (a la Ethereum) for smart contract services, and the native token of their network used for paying the gas fees is Mycelium. Mycelium is a proof-of-stake system. They want to allow every app to have its own native token, though. These tokens and NFTs can be exchanged through a decentralized exchange (DEX).

They want the initial outlay of Mycelium to fund such metaverse-related things as land asset and character development, and "community" development.

The system includes a blockchain-based Digital ID system to allow decentralized access to the metaverse. There is a "Self-Sovereign Data Store" for managing account data.

They have a system they call "Doughnuts" (aka Decentralized Cookies), which allow apps in the metaverse to delegate access to assets to other apps.

They have a "real-time" services system called the "Sylo Network" which provides real-time events, asynchronous messaging, notifications, state management, real time calling, and real time data exchange. Real-time provision of data from the real world is called "oracles". The Sylo Network runs off-chain so it can be fast and provide real-time services, but it is also decentralized. The system will be paid for with its own Sylo Tokens.

They have a decentralized protocol for AI they call the Altered State Machine (ASM). Like everything, it will have its own token, this time called ASTO. They say ASM will provide apps the capability to define interoperable specifications for AI models that all apps across the metaverse can use, provide cloud GPU processors they call "training gyms" for training of AI models, attach AI models to a "Brain" NFT, and pair these "Brain"s with an NFT to create an "agent" that can act autonomously in the metaverse (like a player, presumably).

From the whitepaper, I don't know how much of this is already developed, and if it is, how much of a realistic chance of success it has. It's obviously a pretty ambitious project. It offers an alternative vision for the metaverse to the current centralized systems like Facebook, I mean Meta, and Roblox. I know some of you don't like blockchain technologies, but it looks like it may come down to a choice between centralized and blockchain-based.


Thumbnail
"Solend invalidates Solana whale wallet takeover plan with second governance vote." The latest whackiness from the cryptocurrency world.

"On Sunday, the crypto lending platform launched a governance vote titled 'SLND1 : Mitigate Risk From Whale.' It allowed Solend to reduce the risk the whale's liquidation poses to the market by letting the lending platform access the whale's wallet and letting the liquidations happen over-the-counter."

"After the community condemned the move, calling it the opposite of what DeFi should be and outright illegal, the Solend team initiated a second governance proposal vote to invalidate the previously-approved proposal. The proposal ended with 1,480,264 votes in favor of disregarding the SLND1 proposal."

Ok, so apparently what is going on here: someone deposited a large amount of SOL to a decentralized finance (DeFi) lending platform called Solend in exchange for a loan of two stablecoins, USDC and USDT. This someone has become known as "the whale". The smart contracts that the Solend platform is made of automatically liquidate the collateral if the price of SOL falls below a certain point, and because the whale's position is so large, people fear a forced liquidation would crash the price of SOL. Hence the proposal to take control of the "whale"'s account and handle the liquidation over-the-counter instead of on the open market. The problem is that "DeFi" stands for "decentralized finance", but, even though the physical computers that make up the network are decentralized, there is apparently still a small group of people who control the code for the smart contracts, so control is still "centralized" in that regard, and those people apparently have the power to seize control of accounts. Not accounts on the underlying Solana blockchain, but Solend accounts, accounts that are part of the lending platform.
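To make the mechanics concrete, here's a toy sketch of how over-collateralized DeFi lending positions generally work. The numbers and the threshold are purely illustrative, my own, not Solend's actual parameters or contract code:

```python
LIQUIDATION_THRESHOLD = 0.85   # illustrative: fraction of collateral value the debt may reach

def is_liquidatable(sol_collateral, sol_price_usd, stablecoin_debt_usd,
                    threshold=LIQUIDATION_THRESHOLD):
    # The position becomes liquidatable once the debt exceeds the discounted collateral value.
    collateral_value = sol_collateral * sol_price_usd
    return stablecoin_debt_usd > collateral_value * threshold

# A whale-sized position with made-up numbers:
print(is_liquidatable(5_000_000, 40.0, 100_000_000))   # False: plenty of collateral at $40/SOL
print(is_liquidatable(5_000_000, 22.0, 100_000_000))   # True: a falling SOL price triggers liquidation
```

The worry was that liquidating a position this size on-chain would mean dumping the collateral onto thin markets all at once.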

I've seen the amount cited as $108 million, which to me doesn't seem like enough to crash SOL, but maybe SOL is still thinly enough traded compared with other coins for this to be sufficient to crash the price of SOL.

Thumbnail
Animusic's Pipe Dream, recreated in Roblox. Some metaverse music for you. Looks astonishingly close to the original. Took the creator 3 years to make (age 16 to 19).

"Every frame of animation was rendered in Roblox Studio, at a 20x slowdown. Then sped up in Adobe Premiere. The 5 instruments were all coded from scratch in Lua, helped by the representation of some Desmos graphs. Every arc primitive or circular feature was made using custom Lua code. Also from scratch was the system to play overlapping animations & without imprecision errors. In 2021 I developed a custom plugin and GUI for Roblox Studio to create the camera keyframes. I developed a rough system for inserting any custom MIDI file into the model."

Thumbnail
This is from last September but I didn't find out about it until now. SunGlacier Technology claims to "have designed a new and highly-efficient technology that produces volumes of clean water from the air, almost anywhere in the world."

"We condense water vapor in cold falling water: 'the growing waterfall' principle. The volume of the waterdrops grow, as the water falls down. The total volume of the water increases rapidly with this method."

"The technology is simple and easy to maintain." "Each cubic meter of air entering the system is dehumidified to the maximum possible amount." "This is an energy-friendly method: smaller units are designed to run on solar energy and/or hybrid." "A key advantage of this system is that no water evaporates during the process." "This system can be scaled up easily." "We have greatly expanded the working climate/temperature range of ​​previous conventional dehumidification methods."

They claim to have been able to produce water in Dubai at temperatures over 50 C (122 F).

"The SunGlacier system is already in operation when the dew point temperature exceeds 5 Celsius" (41 F). The dew point is the temperature the air needs to be cooled to get a relative humidity of 100%, at which point the air cannot hold any more water in the gas form, and any additional cooling will result in water vapor condensing out as liquid water. Higher dew points correspond to subjective "mugginess" (humidity).

As for how it works, it's partly the obvious -- sucking in outside air and cooling it below its dew point so that water precipitates out -- and partly some technique where they create a cold "waterfall" inside the machine that the new water condenses into, which, I guess, keeps the water in liquid form and prevents it from evaporating again before it gets into the water bottles.

The technology doesn't seem to be getting much attention, which makes me wonder if it works. Maybe the energy efficiency isn't good enough for it to be widely adopted? They claim to be able to produce 3 liters of water per kWh.
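As a sanity check on that number (my own back-of-envelope arithmetic, not theirs): condensing a kilogram of water vapor releases roughly 2.4 to 2.5 MJ of latent heat near room temperature, and a kilowatt-hour is 3.6 MJ, so 3 liters per kWh implies the refrigeration cycle moves about twice as much heat as the electricity it consumes.

```python
latent_heat_mj_per_kg = 2.45     # approximate latent heat of condensation of water near 20 C
mj_per_kwh = 3.6
claimed_litres_per_kwh = 3.0     # SunGlacier's figure

electricity_mj_per_litre = mj_per_kwh / claimed_litres_per_kwh   # 1.2 MJ of electricity per litre
implied_cop = latent_heat_mj_per_kg / electricity_mj_per_litre   # heat removed vs. electricity used
print(round(implied_cop, 1))     # ~2.0 on the latent load alone
```

A coefficient of performance around 2 is achievable for vapor-compression systems, so the claim isn't physically outlandish, though a real unit also has to spend energy cooling the air itself and running fans.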

Thumbnail
"What does craiyon/DALL-E Mini 'think' mathematics and mathematicians look like?" (Made with craiyon but before it was rebranded "craiyon" and was called DALL-E Mini.) An exploration into human stereotypes. And the stereotype is manipulation of meaningless symbols by historical (and black-and-white, even) figures. It seems AI systems are excellent for exploration of human stereotypes. Making AI systems without stereotypes, on the other hand -- pretty hard. Any AI system trained on human data inherits human biases.

I did get a kick out of "Darth Vader presenting mathematics", though.

Thumbnail
DALL-E Mini has been rebranded as craiyon (crAIyon, get it?). DALL-E Mini wasn't made by OpenAI (it was made by other people using the techniques published in OpenAI's research papers) and OpenAI asked them to change the name.

Thumbnail
Artistic Radiance Fields. The idea here is to do "style transfer", but to neural radiance fields (aka "NeRF"), which can render a scene in 3D. Style transfer is when you take, for example, a photograph, say of houses in San Francisco, and render it "in the style" of an artist, for example Van Gogh. The "style" of Van Gogh is "transferred" to the photograph.

Going to neural radiance fields, however, involves an additional challenge: your "style" image is 2D, but you're "transferring" the style to a 3D object. It's not at all clear how one would go about this.

"We formulate the stylization of radiance fields as an optimization problem; we render images of the radiance fields from different viewpoints in a differentiable manner, and minimize a content loss between the rendered stylized images and the original captured images, and also a style loss between the rendered images and the style image. While previous methods apply the commonly-used Gram matrix-based style loss for 3D stylization, we observe that such a loss leads to averaged-out style details that degrades the quality of the stylized renderings."

At this point I have to jump in and mention that in Andrew Ng's Deeplearning.AI courses, he teaches style transfer using the Gram matrix-based style loss mentioned here (which is the original style transfer technique invented by Gatys et al). The basic idea of style transfer is that you take the high-level "features" of one image (the photograph of houses in San Francisco) and combine them with the low-level "features" of a different image (Van Gogh's Starry Night). To do this, you rip open a convolutional neural network, which is the type of neural network used for image processing (well, until recently; now there's such a thing as "vision transformers"), and use the low-level layers for the style and the high-level layers for the high-level features. Another peculiarity of the style-transfer system is that the loss function you are trying to minimize isn't a standard mean-squared error (used for numerical outputs) or cross-entropy (used for on/off or other discrete outputs). Instead it uses the Gram matrix, named after Danish mathematician Jørgen Pedersen Gram. For those of you on top of your math lingo, that means flattening each layer's feature maps into vectors and taking all of their pairwise inner products with each other (multiplying the matrix of flattened feature maps by its own transpose); the style loss compares these Gram matrices between the two images. But we won't dwell on that, as this system doesn't use it.
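For the curious, here is a minimal sketch of that Gram-matrix style loss, assuming PyTorch feature maps from a single conv layer. This is my own illustration, not code from the paper (which, again, doesn't use it):

```python
import torch

def gram_matrix(feat):
    # feat: (channels, height, width) feature map from one layer of a conv net
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)           # one row per channel
    return (f @ f.t()) / (c * h * w)     # all pairwise inner products between channels

def gram_style_loss(feat_generated, feat_style):
    # mean squared difference between the two images' Gram matrices
    diff = gram_matrix(feat_generated) - gram_matrix(feat_style)
    return (diff ** 2).mean()
```

They go on to say: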

"This limitation motivates us to apply a novel style loss based on Nearest Neighbor Feature Matching that is better suited to the creation of high-quality 3D artistic radiance fields. In particular, for each feature vector in the VGG feature map of a rendered image, we find its nearest neighbor feature vector in the style image's VGG feature map and minimize the distance between the two feature vectors."

First, to understand this, it helps to know that VGG is a convolutional neural net model, and the name doesn't really tell you anything. Well, it does stand for something: Visual Geometry Group, the name of the group at Oxford that invented it in 2014, but that doesn't tell you anything about the model either. VGG is today a family of models, but basically they are a straightforward stack of convolutional layers with ReLU activations and max-pooling layers that reduce the pixels of an image to high-level features. The original VGG was designed for the ImageNet competition, which was an image labeling competition.

So basically what they are doing here is rendering images from their 3D radiance field, running those renderings through a convolutional neural network that the style image has also been run through, and comparing the features at a certain level in the network. This comparison isn't done with a Gram matrix, though, it's done by finding a "nearest neighbor".

"Unlike a Gram matrix describing global feature statistics across the entire image, NN feature matching focuses on local image descriptions, better capturing distinctive local details. Coupled with our style loss, we also enforce a VGG feature-based content loss -- that balances stylization and content preservation -- and a simple color transfer technique -- that improves the color match between our final renderings and the input style."

Note that even though they compute the loss on 2D images, it is the 3D radiance field that ultimately gets stylized, not 2D images. In this regard, what they are doing is fundamentally different from the video style transfer techniques that have been invented so far. Those work by stylizing 2D video frames, but because that results in flickering, as each 2D frame is stylized independently, they enforce an additional "temporal coherency loss" across frames. The researchers here believe stylization should be done in 3D rather than 2D image space.

The paper has a math formula for their "nearest neighbor feature matching" loss function. Basically, you render the two images you want to compare, rip open the neural network to get the feature maps you want, and then, for each pixel of the rendered image's feature map, you take the *minimum* "cosine distance" between that pixel's feature vector and all of the feature vectors of the style image. The loss adds up (and averages) those minimum distances over all the pixels.
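Here's a rough sketch of that loss, assuming PyTorch and one VGG feature map per image. This is my own reading of the description above, not the authors' code:

```python
import torch
import torch.nn.functional as F

def nnfm_loss(feat_render, feat_style):
    # feat_render: (C, H, W) VGG features of an image rendered from the radiance field
    # feat_style:  (C, H2, W2) VGG features of the 2D style image
    c = feat_render.shape[0]
    fr = F.normalize(feat_render.reshape(c, -1), dim=0)   # unit-length feature vector per pixel
    fs = F.normalize(feat_style.reshape(c, -1), dim=0)
    cos_dist = 1.0 - fr.t() @ fs                          # (N_render, N_style) cosine distances
    # for each rendered pixel, keep only the distance to its nearest style feature,
    # then average those minimum distances over the rendered image
    return cos_dist.min(dim=1).values.mean()
```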

They found this "overly stylized" images and wanted to make the content more recognizable, so added a "content-preserving" penalty to the loss function.

They also talk quite a bit in the paper about a "deferred back-propagation" technique that they invented. This isn't critical to the idea of style transfer, but it's a helpful optimization that enabled them to make high-resolution neural radiance field renders. For those of you who understand the intricacies of auto-differentiation, the idea is basically to render a full-resolution image with auto-differentiation disabled, then go back through the image in a patch-wise manner, re-rendering the pixels of each patch with auto-differentiation turned on, at which point you can calculate the loss function on that patch. You go through all the patches, accumulate the loss function results, and apply your backpropagation for training. The key is to make the patches small enough that they fit easily in GPU memory.
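Here's a toy sketch of that patch-wise idea, using a tiny coordinate-to-RGB MLP as a stand-in for the radiance field renderer and a random image as a stand-in for the stylization target. This is my own illustration of the memory trick, not the authors' code:

```python
import torch
import torch.nn as nn

# Toy stand-in for a radiance field: an MLP that maps a 2D pixel coordinate to RGB.
field = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)

H = W = 256
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2) pixel coordinates
target = torch.rand(H * W, 3)                           # dummy "stylized" target image
patch = 64 * 64                                         # pixels per patch, sized to fit in GPU memory

# 1) Full-resolution render with autodiff disabled: cheap on memory, no activations stored.
with torch.no_grad():
    full_render = field(coords)

# 2) Re-render patch by patch with autodiff on, accumulating gradients as we go.
optimizer.zero_grad()
for start in range(0, coords.shape[0], patch):
    pred = field(coords[start:start + patch])                # grad-enabled re-render of one patch
    loss = ((pred - target[start:start + patch]) ** 2).mean()
    (loss * patch / coords.shape[0]).backward()              # scale so patch losses sum to the full-image mean
    # the activations for this patch are freed after backward(), keeping peak memory small
optimizer.step()
```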

When you look at the examples on the website, make sure you notice the little dots under the videos. There are a lot of examples in the gallery, and you need to click on those dots to cycle through all of the video clips in each section.

Thumbnail
Exercise in a pill. At least that's the idea. "A molecule in the blood that is produced during exercise and can effectively reduce food intake and obesity in mice" has been identified.

"Yong Xu, professor of pediatrics, nutrition, and molecular and cellular biology at Baylor University, Jonathan Long, assistant professor of pathology at Stanford Medicine and an Institute Scholar of Stanford Chemistry, Engineering & Medicine for Human Health, and their colleagues conducted comprehensive analyses of blood plasma compounds from mice following intense treadmill running. The most significantly induced molecule was a modified amino acid called Lac-Phe. It is synthesized from lactate (a byproduct of strenuous exercise that is responsible for the burning sensation in muscles) and phenylalanine (an amino acid that is one of the building blocks of proteins)."

"In mice with diet-induced obesity (fed a high-fat diet), a high dose of Lac-Phe suppressed food intake by about 50% compared to control mice over a period of 12 hours without affecting their movement or energy expenditure. When administered to the mice for 10 days, Lac-Phe reduced cumulative food intake and body weight (owing to loss of body fat) and improved glucose tolerance."

On the flip side, mice lacking an enzyme involved in making Lac-Phe called CNDP2 didn't lose weight as easily.

That's mice. The researchers also found elevated Lac-Phe in humans and racehorses.

"Data from a human exercise cohort showed that sprint exercise induced the most dramatic increase in plasma Lac-Phe, followed by resistance training and then endurance training."

Thumbnail
Weird Dall-E Generations (@weirddalle) Twitter account. Collects weird Dall-E generations. Most of them look like they're actually Dall-E Mini, which is a smaller neural network that makes low-res images with noticeable flaws. Even so, some of the pictures are pretty funny. Donald Trump in Mario Kart 8. Tactical Crocs. Spaghetti printer. Nuclear explosion broccoli. Dashcam footage of hamster Godzilla wearing a giant sombrero attacking Tokyo. Yoshi in the dishwasher. Rusted barnacle covered Teletubby at the bottom of the ocean. Elon Musk police sketch. Greek philosophers playing Jenga. Babies operating guillotine at daycare. Sonic guest stars in an episode of Friends. Dumpster fire painted by Monet.

Thumbnail
El Salvador made Bitcoin legal tender 9 months ago, but Bitcoin has tumbled in value.

"In Bitcoin Beach just over half the businesses I came across accepted Bitcoin, but drive north 80 minutes to the capital, San Salvador, and it's more like a quarter."

"Cash is still very much king here, with more than half of Salvadoreans not owning a bank account."

Thumbnail
Tesla Autopilot accounts for 70% of driver-assist crashes, says the National Highway Traffic Safety Administration (NHTSA). However, "the NHTSA says the data it contains isn't normalized because companies required to log it are only reporting accidents, not the total number of vehicles produced or on the road, nor the mileage driven by each vehicle. In addition, the research said that some crashes can be reported multiple times due to submission requirements, and that data may be incomplete or unverified."

Thumbnail
"A startup called Able has built an engine to speed up the processing of documents and other data required for commercial loans (typically $100,000 but sometimes up to $100 million in value), which it sells as a service to banks and other lenders. Today, it's coming out of stealth mode with $20 million in funding and a launch into the wider market." Able's technology "involves RPA, computer vision and other forms of AI to ingest and process data related to loans as part of their evaluation process."

Thumbnail
"There are 578 fluorescent proteins in the protein databank, of which 10 are 'broken' and don't fluoresce."

"Only a chemist with a significant amount of fluorescent protein knowledge would be able to use the amino acid sequence to find the fluorescent proteins that have the right amino acid sequence to undergo the chemical transformations required to make them fluorescent. When we presented AlphaFold2 with the sequences of 44 fluorescent proteins that are not in the protein databank, it folded the fixed fluorescent proteins differently from the broken ones."

"The result stunned us: AlphaFold2 had learned some chemistry. It had figured out which amino acids in fluorescent proteins do the chemistry that makes them glow. We suspect that the protein databank training set and multiple sequence alignments enable AlphaFold2 to 'think' like chemists and look for the amino acids required to react with one another to make the protein fluorescent."

The article also notes that DeepMind has already calculated the 3D structures of nearly all human proteins, nearly all proteins in mice and 20 other species, and the structure of about half of all known proteins is likely to be known by the end of 2022.

Thumbnail
"Microsoft is developing an AI system that analyzes data from a range of sources and generates alerts for data center construction and operations teams to 'prevent or mitigate the impact of safety incidents." "A complementary but related system, also under development, attempts to detect and predict impacts to data center construction schedules."

"Meta also claims to be investigating ways AI can anticipate how its data centers will operate under 'extreme environmental conditions' that might lead to unsafe work environments. The company says that it has been developing physical models to simulate extreme conditions and introducing this data to the AI models responsible for optimizing power consumption, cooling, and airflow across its servers."

The article goes on to describe how major outages are frequent, how Facebook, I mean Meta, has 20 data centers while Microsoft has over 200 (which seems like a lot; are they small?), and how AI is also used for things other than safety, like energy savings and anomaly detection.

Thumbnail
Review of the Codeball AI Code Review system.

"The Codeball result can be expressed with a confusion matrix like so:"

Chart shows 14 true positive, 22 false negative, 0 false positive, and 14 true negative.

"This result gives Codeball a recall of 38% and a precision of 100%!

"Aka, it's not approving everything that it could have approved, but when it does approve something, it was never wrong."

"This is a very good thing. Codeball is able to flag safe pull requests with an extremely high precision, and when it's unsure about what to do, it leaves the remaining work to the humans (just like before)."

I'm guessing the system was trained with a cost function that highly penalized false positives.

Thumbnail
"I used GPT-3 to write a Jerry Seinfeld stand-up routine about cats -- and then used DeepFake voices to perform it." About cat food, not cats.