Boulder Future Salon

I don't have a VR headset, and I checked and my Android device does not support ARCore, so this is for those of you whose devices can do AR.

"AR Atom Visualizer is an app that allows you to view and explore atomic models in Augmented Reality with Google ARCore on your smartphone. Many of us understand the basic structure of an atom: a nucleus containing protons and neutrons, surrounded by electrons - but how are those electrons organized? How do they move? What do they look like?"

"The Bohr model presents the atom as a shell containing a nucleus and the electrons in orbit around it. It helps us understand the energy level of electrons and how they are organised in relation to the nucleus."

"The quantum mechanical model presents the atom as an electron cloud. This helps us understand the possible location of the electrons in relation to the nucleus."

"AR Atom Visualizer uses Augmented Reality to create 3D animated visualizations of both these models of any atom in real space, just by using the camera on your smartphone."

sqlite-vec, an SQLite extension for vector search, is under development.

"I'm working on a new SQLite extension! It's called sqlite-vec, an extension for vector search, written purely in C. It's meant to replace sqlite-vss, another vector search SQLite extension I released in February 2023, which has a number of problems. I believe the approach I'm taking with sqlite-vec solves a number of problem it's predecessor has, will have a much nicer and performant SQL API, and is a better fit for all applications who want an embed vector search solution!"

"sqlite-vec will be a SQLite extension written purely in C with no dependencies. It will provide custom SQL functions and virtual tables for fast vector search, as well as other tools and utilities for working with vectors (quantization, JSON/BLOB/numpy conversions, vector arithmetic, etc.)."

The full text of Dimitri P. Bertsekas's book A Course in Reinforcement Learning is available online for free. It's also available for purchase in print form. About 450 pages. It's the textbook for his course at Arizona State University "Reinforcement Learning and Optimal Control".

I've gone through more than half of Richard Sutton and Andrew Barto's book Reinforcement Learning: An Introduction (though I confess to having 'cheated' and not done all the exercises). It might be worth reading this book, too, to see the same material from an alternate point of view.

"Reinforcement learning can be viewed as the art and science of sequential decision making for large and difficult problems, often in the presence of imprecisely known and changing environment conditions. Dynamic programming is a broad and well-established algorithmic methodology for making optimal sequential decisions, and is the theoretical foundation upon which reinforcement learning rests. This is unlikely to change in the future, despite the rapid pace of technological innovation. In fact, there are strong connections between sequential decision making and the new wave of technological change, generative technology, transformers, GPT applications, and natural language processing ideas, as we will aim to show in this book."

"In dynamic programming there are two principal objects to compute: the optimal value function that provides the optimal cost that can be attained starting from any given initial state, and the optimal policy that provides the optimal decision to apply at any given state and time. Unfortunately, the exact application of dynamic programming runs into formidable computational difficulties, commonly referred to as the curse of dimensionality. To address these, reinforcement learning aims to approximate the optimal value function and policy, by using manageable off-line and/or on-line computation, which often involves neural networks (hence the alternative name Neuro-Dynamic Programming)."

"Thus there are two major methodological approaches in reinforcement learning: approximation in value space, where we approximate in some way the optimal value function, and approximation in policy space, whereby we construct a suboptimal policy by using some form of optimization over a suitably restricted class of policies."

"The book focuses primarily on approximation in value space, with limited coverage of approximation in policy space. However, it is structured so that it can be easily supplemented by an instructor who wishes to go into approximation in policy space in greater detail, using any of a number of available sources."

"An important part of our line of development is a new conceptual framework, which aims to bridge the gaps between the artificial intelligence, control theory, and operations research views of our subject. This framework, the focus of the author's recent monograph 'Lessons from AlphaZero ...',, centers on approximate forms of dynamic programming that are inspired by some of the major successes of reinforcement learning involving games. Primary examples are the recent (2017) AlphaZero program (which plays chess), and the similarly structured and earlier (1990s) TD-Gammon program (which plays backgammon)."

The full text of Simone Scardapane's book Alice's Adventures in a Differentiable Wonderland is available online for free. It's not available in print form because it's still being written -- what's online is actually a draft. But it looks like Volume 1 is pretty much done. It's about 260 pages. It introduces mathematical fundamentals and then explains automatic differentiation. From there it applies the concept to convolutional layers, graph layers, and transformer models. A Volume 2 is planned with fine-tuning, density estimation, generative modeling, mixture-of-experts, early exits, self-supervised learning, debugging, and other topics.

"Looking at modern neural networks, their essential characteristic is being composed by differentiable blocks: for this reason, in this book I prefer the term differentiable models when feasible. Viewing neural networks as differentiable models leads directly to the wider topic of differentiable programming, an emerging discipline that blends computer science and optimization to study differentiable computer programs more broadly."

"As we travel through this land of differentiable models, we are also traveling through history: the basic concepts of numerical optimization of linear models by gradient descent (covered in Chapter 4) were known since at least the XIX century; so-called 'fully-connected networks' in the form we use later on can be dated back to the 1980s; convolutional models were known and used already at the end of the 90s. However, it took many decades to have sufficient data and power to realize how well they can perform given enough data and enough parameters."

"Gather round, friends: it's time for our beloved Alice's adventures in a differentiable wonderland!"

The full text of Simon J.D. Prince's book Understanding Deep Learning is available online for free -- though the author asks that you buy the book and write a (positive, one would hope) review on Amazon. He will make a 2nd edition if sales are good.

The book is around 500 pages and a glance at the table of contents shows it goes from fundamentals to very advanced topics: supervised learning, shallow neural networks, deep neural networks, loss functions (maximum likelihood, univariate regression, classification, cross-entropy, etc.), gradient descent, stochastic gradient descent, initialization, the backpropagation algorithm, hyperparameters, regularization, convolutional neural networks, residual networks, transformers, graph neural networks, unsupervised learning, generative adversarial networks (StyleGAN, etc.), normalizing flows, variational autoencoders, diffusion models, reinforcement learning, "why does deep learning work?", and ethics. Appendices cover notation, mathematics, and probability.

PyTorch documentary. This documentary isn't about the technology; it's about the people behind PyTorch. Meet the original creators like Adam Paszke, Soumith Chintala, and Yangqing Jia. Meet early adopters like Jeremy Howard. Meet people who brought PyTorch from a research tool to production deployment, like Lin Qiao. PyTorch is the invisible tool behind ChatGPT, Stable Diffusion, and many other AI products.

The harms of meditation. Wha? Meditation has harms? I never heard this.

Willoughby Britton started off as a meditation enthusiast and did her dissertation on the effects of meditation on sleep as measured by brain waves in a lab. She "knew" without any hesitation that meditation was going to improve sleep, but no, the data showed it basically caused cortical arousal and insomnia.

She didn't publish the data. "This is the wrong answer."

She went on a meditation retreat and told a meditation instructor about her lab findings, and the instructor kind of chastised her and was like, I don't know why all you clinical psychologists are trying to make meditation into a relaxation technique -- everyone knows if you meditate enough you stop sleeping.

What else do meditation teachers know that they're not telling researchers?

During one residency at an inpatient psychiatric hospital, there were two yogis who came off a meditation retreat completely psychotic. She went back to the same teacher and asked, have you ever seen this before? "And I just remember that I never got a verbal answer -- I just got this look that was kind of like oh... yeah we know about that... and I wish you hadn't asked..."

From there, she (Willoughby Britton) documented 59 categories of meditation-related issues across seven different "domains". Here she runs through the most common. Things like your thoughts speeding up to an insane pace, or conversely, disappearing altogether so there are no thoughts at all (this is called "mind emptiness").

You can lose the ability to form concepts -- this is called "concept loss". There was a woman driving home from a meditation retreat who couldn't remember what a "red light" means.

There's perceptual hypersensitivity -- colors get brighter, sounds get louder, you can hear clocks ticking, you can hear your own breathing and the blood in your ears. Maybe in the retreat that's not so bad but you get back to the city and suddenly every car door slamming feels like it slams right through your body.

Sometimes perceptual hypersensitivity leads to hallucinations in any modality -- visual, auditory, or motor.

Meditation can amplify emotions such as fear (anxiety, paranoia), can increase rather than decrease re-experiencing of past traumas or stressful events. It can conversely lead to the loss of emotion -- that's called "affective blunting".

It can lead to bodily sensations described as "electricity energy pressure movement". It can be sufficiently overwhelming that people are unable to work.

There can be changes in sense of self. Loss of self is called "ego death" or "ego dissolution". This is actually sought after by certain spiritual practices.

A lot of these changes can be positive in one person but negative in another -- or can even be positive in the same person in one context and negative in another.

In North India, one of her students on a Tibetan meditation retreat committed suicide, and the Buddhist concept of the bodhisattva was apparently integral to it. Suicide is one of the most sensitive and least talked about issues in meditation. Meditation may reduce the likelihood of suicide in people who are suicidal to start with, but it may also increase the odds of suicide in others, and it may depend on how "intensive" the meditation is.

Kundalini Awakening is an experience where sensations go up the spine, but beyond that, she feels the label is too nonspecific and can be attached to any kind of sensation. "Everything is Kundalini," so the concept is not helpful.

She feels there's this kind of bait-and-switch where people get into meditation for secular health-oriented reasons -- to manage their stress, to help them manage their emotions, to work through grief or loss -- then they experience these weird sensations and somebody says oh, it's Kundalini Awakening, and now they have to deal with chakras and all these spiritual interpretations of their experience. Meditation is marketed as "scientific" and "neuroscientific" and now suddenly people find themselves in this "magical sacred practice".

A lot of the discussion concerns how meditation is a large, for-profit industry.

Some of the hype surrounding meditation has waned because psychedelics are "the new guy on the scene". The last half hour or so of the conversation is about how she feels both should be approached as having risks and not being free of side effects, and they are not things that magically turn people into "enlightened beings of endless compassion."

Leaders at Humane, the "AI Pin" company, allegedly prohibited criticism within the company, and this may have played a role in their launch of a "dead-on-arrival" product.

"The Times interviewed '23 current and former employees, advisers and investors,' and their anecdotes shed a lot of light on how a company makes it all the way to market with an out-of-touch, poorly performing product. The two founders apparently 'preferred positivity over criticism, leading them to disregard warnings about the AI Pin's poor battery life and power consumption. A senior software engineer was dismissed after raising questions about the product, they said, while others left out of frustration.' After that software engineer was fired for questioning if the AI pin would be ready for launch, the report describes a staff meeting where the founders 'said the employee had violated policy by talking negatively about Humane.' It's hard to make a good product if you can't honestly talk about the negatives and positives for fear of retaliation."

Somehow the "system prompt" for "artifacts" (for example those code sections) from Claude 3.5 Sonnet somehow got leaked.

These peeks behind the curtain are always interesting. The prompts are always bigger, more complicated, and stranger than you'd expect. Well, I don't know that this one is that strange, but it's definitely big and complicated.

"Quieting the Global Growl" aka noise pollution -- in the ocean.

"Globally, shipping noise in the ocean has doubled every decade from 1960 to 2010. Piercing sonar, thudding seismic air guns for geological imaging, bangs from pile drivers, buzzing motorboats, and shipping's broadband growl can also disrupt natural soundscapes. Such human sounds are not universally problematic, but they become noise when they're unwanted. And while noise can cause acute injury and even death in marine mammals -- for example, by sending animals fleeing to the surface from great depths too quickly -- it also impacts communication, mating, fighting, migrating, and bonding in subtle and wide-ranging ways. Underwater, acoustic space is valuable, and noise is a trespass."

"In 1980 the world's merchant shipping fleet (meaning all ships, not just containers) numbered just under 700,000. In 2020, it's more than two million. Globally, container ships -- with their colorful, Lego-like boxes -- make up only about 10 percent of commercial ships. The rest are vehicle carriers, cruise ships, oil tankers, or bulk carriers toting ore, coal, grain, or other commodities."

"Invisible below the water, Busan's propeller is several meters across. The largest container-ship props can be nine meters. These propellers shed bubbles that flash-boil seawater and oscillate, the vibration becoming a sound. Bubbles vary in size, so collectively they make many frequencies, most in the low hundreds of hertz. Ship noise, then, is broadband, with most of the energy in lower tones."

Cheatography: Over 6,000 free cheat sheets, revision aids and quick references

6,512 cheat sheets to be exact. Apparently the reason this site has so many cheat sheets is they made it easy to make cheat sheets on the site. A lot of the cheat sheets look like they're made for specific exams for specific classes.

I was wondering what cheat sheets might be useful to me. I decided I wouldn't want a cheat sheet for Go because the Go spec is so good and the Go standard library site is so good it's basically its own cheat sheet and more authoritative than any cheat sheet could ever be. Maybe I should look for cheat sheets for technologies I despise instead of technologies I really like, so I looked for PHP. Hmm, maybe the date formatting parameters or something. I don't know. I don't see new stuff like traits. I looked at some math cheat sheets. Discrete math, linear algebra, geometry. No, doesn't look helpful. Chemistry? Carbon root name and branch prefixes. Hmm, interesting.

I feel like I need to learn all the fundamental concepts first before I could use any cheat sheets. Or maybe if I learned all the fundamental concepts, I would never need the cheat sheets? This reminds me, one time I heard about this teacher in Hungary who founded a school on the philosophy that memorization was forbidden: students had to learn to derive everything themselves, when they needed it, from an understanding of the fundamental underlying relationships. A bunch of students went on to work on the Manhattan Project and win Nobel Prizes and stuff.

Ok, so I tracked down the guy who founded the school in Hungary that forbade memorization and required people to be able to derive everything themselves through a deep understanding of how things were interconnected. The guy's name is sometimes Mór von Kármán, sometimes just Mór Kármán, and sometimes Maurice Kármán. Anyway, he founded a school in Budapest, Hungary, called Minta Gymnasium. Down the street, another school, Fasori Gymnasium, adopted the same teaching philosophy. ("Gymnasium" was apparently their word for what we call "high school".)

Edward Teller, known as the "father of the hydrogen bomb", went to Fasori Gymnasium. Or maybe Minta. Or maybe both. People seem not to be sure. Leó Szilárd, who instigated the Manhattan Project (and co-wrote with Albert Einstein the famous letter to Franklin D. Roosevelt that launched it -- only Einstein signed the letter because he was already known to the US public), also went to Fasori Gymnasium. He was the first person to conceive of the chain reaction in which nuclear fission releases neutrons that cause more nuclear fission. Eugene Wigner, who led the Manhattan Project team that converted uranium into weapons-grade plutonium, also went to Fasori Gymnasium. He went on to win the Nobel Prize in Physics in 1963 for work on the structure of the atomic nucleus and of elementary particles, and there is a mathematical theorem used in quantum physics named after him, Wigner's Theorem. In his Nobel acceptance speech, he praised his high school math teacher, László Rátz.

Rounding out the Manhattan Project people, we also have Nicholas Kurti and Theodore von Kármán. Theodore von Kármán was the son of Mór (or Maurice) von (or not von) Kármán, and went on to found the Jet Propulsion Laboratory. Both went to Minta. No wait, one more, how could I forget John von Neumann. Went to Fasori. On the Manhattan Project, he developed the mathematical theory of "explosive lenses" used to compress plutonium for the Nagasaki bomb. He went on to produce foundational work in computer science. The "von Neumann architecture" is named after him.

In addition to all these people, at the Manhattan Project the Hungarians became known as "The Martians", apparently a joke about both their Hungarian accents and how they seemed so smart they must have come from outer space. Later on, a number of other Hungarians who didn't go to the Minta or Fasori Gymnasiums (but went to nearby schools in Budapest) or work on the Manhattan Project got included as honorary "Martians". Those include mathematician Paul Erdős, mathematician George Pólya, John (János) Kemeny (who actually did work on the Manhattan Project), physicist Egon Orowan, physicist Valentine Telegdi, mathematician and physicist Cornelius Lanczos, John Harsanyi who won the Nobel Prize in Economics, George Olah who won the Nobel Prize in Chemistry, and John Polanyi who won the Nobel Prize in Chemistry.

Also of note, most of these people were not just Hungarian but also Jewish. John von Neumann said the Jews in Hungary had "a subconscious feeling of extreme insecurity in individuals, and the necessity of producing the unusual or facing extinction." He said they were attracted to fields like math and physics because their ability could be judged objectively, and those subjects were considered politically uncontroversial at the time.

However, the Wikipedia page for Minta Gymnasium says the same teaching philosophy was applied to all subjects. For example, grammar rules were not memorized from a book; instead, students were tasked with analyzing phrases from signs and monuments around the city and figuring out what the underlying grammar rules were and why those specific phrases were chosen.

Anyway, it seems like several things came together in the 1900-1920 time period: high intelligence, a particular ethnic group that was at the same time relatively well off, comfortable, and high status (in Budapest) but insecure (they could sense the impending danger that would culminate in WWII), and a unique approach to teaching -- all combining to produce something remarkable.

Circling back to where I started: cheat sheets -- do they encourage memorization, or do they, by offloading memorization, actually help with deep fundamental understanding? Did Manhattan Project or Nobel Prize-winning scientists use cheat sheets or whatever the equivalent was in their day -- what would that be, reference books? Maybe one should focus totally on deep understanding of fundamental concepts.

"Cosplay Trend Report 2024"

"The market size for cosplay costumes was 4.8B USD in 2023. This only includes costumes, and not craft materials, conventions and merchandise. Since 70% of cosplayers make costumes themselves, the cosplay market is estimated to be far greater. The global handicrafts market was 1000B in 2023, and most cosplay purchases falls under that category but are hard to identify so it's safe to assume there are dark numbers."

"82% of cosplayers are between 16 and 27 years old."
"64% of cosplayers are female."
"$241 average spend on costumes yearly."
"$262 average spend on events and merchandise."
"70% attend 2 or more conventions a year."
"87% of cosplayers speak with strangers online about cosplay."
"33% of cosplayers work on their costumes at least weekly."

"Eyepopping factory construction boom in the US"

"Companies plowed $18.4 billion in April into the construction of manufacturing plants in the US, a seasonally adjusted annual rate of construction spending of a record $212 billion, according to the Census Bureau today. This was up by 140% to 200% from the range in 2015 through mid-2021."

"TSMC has announced over $65 billion in investments, including $40 billion for two fabs that are now under construction near Phoenix."

"Intel has rolled out $100 billion in investment plans, including $43 billion for facilities in Ohio, New Mexico, Oregon, and Arizona."

"Texas Instruments is investing $30 billion in Texas."

"Samsung..." "Micron..." "Toyota..." "Kohler..." "Crystal Window & Door Systems..." "ION Storage Systems..." "GAF Energy..." "GF Casting Solutions..." "Boviet Solar..." "Green New Energy Materials..."

"From toad toxin to medicine: The promise of 5-MeO-DMT"

So apparently there's a variant of DMT called 5-MeO-DMT. Chemically, it's DMT with an additional 5-methoxy group attached: the DMT molecule has two rings and a sort of tail made of a string of carbons with a nitrogen and some hydrogens on the end, and 5-MeO-DMT adds another little tail on the other side of the rings with an oxygen, a carbon, and some hydrogens. (Of course, none of this tells you what the chemical does when it interacts with your brain, which has to do with how it interacts with serotonin receptors.) With regular DMT, people typically see "beings" or "entities", while 5-MeO-DMT tends to be non-visual. Nonetheless, it is reported to be effective at effecting personality transformations, including relief from otherwise untreatable chronic PTSD.
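For the structurally inclined, here are the two molecules in SMILES notation (standard reference structures; this illustration is my addition, not the article's):

    # The "two rings plus a tail" skeleton of DMT, and the extra methoxy
    # (-OCH3) group that 5-MeO-DMT adds on the other side of the rings.
    dmt = "CN(C)CCc1c[nH]c2ccccc12"               # N,N-dimethyltryptamine
    five_meo_dmt = "COc1ccc2[nH]cc(CCN(C)C)c2c1"  # 5-methoxy-N,N-dimethyltryptamine
    print(dmt)
    print(five_meo_dmt)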

Also, synthetic versions are pure 5-MeO-DMT, while toad toxin contains other chemicals, including another hallucinogen called bufotenine.

Goldman Sachs' take on AI: "Gen AI: Too much spend, too little benefit?"

"Tech giants and beyond are set to spend over $1tn on AI capex in coming years, with so far little to show for it. So, will this large spend ever pay off? MIT's Daron Acemoglu and Goldman Sachs' Jim Covello are skeptical, with Acemoglu seeing only limited US economic upside from AI over the next decade and Covello arguing that the technology isn't designed to solve the complex problems that would justify the costs, which may not decline as many expect. But Goldman Sachs' Joseph Briggs, Kash Rangan, and Eric Sheridan remain more optimistic about AI's economic potential and its ability to ultimately generate returns beyond the current 'picks and shovels' phase, even if AI's 'killer application' has yet to emerge. And even if it does, we explore whether the current chips shortage (with Goldman Sachs' Toshiya Hari) and looming power shortage (with Cloverleaf Infrastructure's Brian Janous) will constrain AI growth. But despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst. Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing, etc. as well as create new products and platforms. But given the focus and architecture of generative AI technology today, these truly transformative changes won't happen quickly and few -- if any -- will likely occur within the next 10 years. Over this horizon, AI technology will instead primarily increase the efficiency of existing production processes by automating certain tasks or by making workers who perform these tasks more productive."

Some choice quotes (these may seem like a lot but are a small fraction of the 31-page document):

Daron Acemoglu:

"I began with Eloundou et al.'s comprehensive study that found that the combination of generative AI, other AI technology, and computer vision could transform slightly over 20% of value-added tasks in the production process. But that's a timeless prediction. So, I then looked at another study by Thompson et al. on a subset of these technologies -- computer vision -- which estimates that around a quarter of tasks that this technology can perform could be cost-effectively automated within 10 years. If only 23% of exposed tasks are cost effective to automate within the next ten years, this suggests that only 4.6% of all tasks will be impacted by AI. Combining this figure with the 27% average labor cost savings estimates from Noy and Zhang's and Brynjolfsson et al.'s studies implies that total factor productivity effects within the next decade should be no more than 0.66% -- and an even lower 0.53% when adjusting for the complexity of hard-to-learn tasks. And that figure roughly translates into a 0.9% GDP impact over the decade."

Joseph Briggs:

"We are very sympathetic to Acemoglu's argument that automation of many AI-exposed tasks is not cost effective today, and may not become so even within the next ten years. AI adoption remains very modest outside of the few industries -- including computing and data infrastructure, information services, and motion picture and sound production -- that we estimate will benefit the most, and adoption rates are likely to remain below levels necessary to achieve large aggregate productivity gains for the next few years. This explains why we only raised our US GDP forecast by 0.4pp by the end of our forecast horizon in 2034 (with smaller increases in other countries) when we incorporated an AI boost into our global potential growth forecasts last fall. When stripping out offsetting growth impacts from the partial redirection of capex from other technologies to AI and slower productivity growth in a non-AI counterfactual, this 0.4pp annual figure translates into a 6.1% GDP uplift from AI by 2034 vs. Acemoglu's 0.9% estimate."

"We also disagree with Acemoglu's decision not to incorporate productivity improvements from new tasks and products into his estimates, partly given his questioning of whether AI adoption will lead to labor reallocation and the creation of new tasks."

This section has interesting charts and graphs on which industries have been adopting AI and which haven't.

Jim Covello:

"What $1tn problem will AI solve? Replacing low-wage jobs with tremendously costly technology is basically the polar opposite of the prior technology transitions I've witnessed in my thirty years of closely following the tech industry."

Kash Rangan and Eric Sheridan:

"We have yet to identify AI's 'killer application', akin to the Enterprise Resource Planning (ERP) software that was the killer application of the late 1990s compute cycle, the search and e-commerce applications of the 2000-10 tech cycle that achieved massive scale owing to the rise of x86 Linux open-source databases, or cloud applications, which enabled the building of low-cost compute infrastructure at massive scale during the most recent 2010-20 tech cycle."

"Those who argue that this is a phase of irrational exuberance focus on the large amounts of dollars being spent today relative to two previous large capex cycles -- the late 1990s/early 2000s long-haul capacity infrastructure buildout that enabled the development of Web 1.0, or desktop computing, as well as the 2006-2012 Web 2.0 cycle involving elements of spectrum, 5G networking equipment, and smartphone adoption. But such an apples-to-apples comparison is misleading; the more relevant metric is dollars spent vs. company revenues. Cloud computing companies are currently spending over 30% of their cloud revenues on capex, with the vast majority of incremental dollar growth aimed at AI initiatives. For the overall technology industry, these levels are not materially different than those of prior investment cycles that spurred shifts in enterprise and consumer computing habits. And, unlike during the Web 1.0 cycle, investors now have their antenna up for return on capital. They're demanding visibility on how a dollar of capex spending ties back to increased revenues, and punishing companies who can't draw a dotted line between the two."

Brian Janous:

"Utilities are fielding hundreds of requests for huge amounts of power as everyone chases the AI wave, but only a fraction of that demand will ultimately be realized. AEP, one of the largest US electric utility companies, has reportedly received 80-90 gigawatts (GW) of load requests. Only 15 GW of that is likely real because many of the AI projects that companies are currently envisioning will never actually see the light of day. But 15 GW is still massive given that AEP currently owns/operates around 23 GW of generating capacity in the US. And even if overall grid capacity grows by only 2% annually -- which seems like a reasonable forecast -- utilities would still need to add well in excess of 100 GW of peak capacity to a system that currently handles around 800 GW at peak. The increase in power demand will also likely be hyperlocalized, with Northern Virginia, for example, potentially requiring a doubling of grid capacity over the next decade given the concentration of data centers in the area."

Carly Davenport:

"After stagnating over the last decade, we expect US electricity demand to rise at a 2.4% compound annual growth rate (CAGR) from 2022-2030, with data centers accounting for roughly 90bp of that growth. Indeed, amid AI growth, a broader rise in data demand, and a material slowdown in power efficiency gains, data centers will likely more than double their electricity use by 2030. This implies that the share of total US power demand accounted for by data centers will increase from around 3% currently to 8% by 2030, translating into a 15% CAGR in data center power demand from 2023-2030."

Toshiya Hari, Anmol Makkar, David Balaban:

"AI applications use two types of dynamic random-access memory (DRAM): HBM and DDR SDRAM. HBM is a revolutionary memory technology that stacks multiple DRAM dies -- small blocks of semiconducting material on which integrated circuits are fabricated -- on top of a base logic die, thereby enabling higher levels of performance through more bandwidth when interfacing with a GPU or AI chips more broadly. We expect the HBM market to grow at a ~100% compound annual growth rate (CAGR) over the next few years, from $2.3bn in 2023 to $30.2bn in 2026, as the three incumbent suppliers of DRAM (Samsung, SK Hynix, and Micron) allocate an increasing proportion of their total bit supply to meet the exponential demand growth."

"Despite this ramp-up, HBM demand will likely outstrip supply over this period owing to growing HBM content requirements and major suppliers' supply discipline. We therefore forecast HBM undersupply of 3%/2%/1% in 2024/2025/2026. Indeed, as Nvidia and AMD recently indicated, updated data center GPU product roadmaps suggest that the amount of HBM required per chip will grow on a sustained basis. And lower manufacturing yield rates in HBM than in traditional DRAM given the increased complexity of the stacking process constrains suppliers' ability to increase capacity."

"The other key supply bottleneck is a specific form of advanced packaging known as CoWoS, a 2.5-dimensional wafer-level multi-chip packaging technology that incorporates multiple dies side-by-side on a silicon interposer to achieve better interconnect density and performance for high-performance computing (HPC) applications. This advanced packaging capacity has been in short supply since the emergence of ChatGPT in late 2022."

"We outline four phases of the AI trade. 'Phase 1', which kicked off in early 2023, focuses on Nvidia, the clearest near-term AI beneficiary. 'Phase 2' focuses on AI infrastructure, including semiconductor firms more broadly, cloud providers, data center REITs, hardware and equipment companies, security software stocks, and utilities companies. 'Phase 3' focuses on companies with business models that can easily incorporate AI into their product offerings to boost revenues, primarily software and IT services. 'Phase 4' includes companies with the biggest potential earnings boost from widespread AI adoption and productivity gains."

Summary of key forecasts is on page 25.

"Opinion: It's time for the Biden Campaign to embrace AI"

"By Kaivan Shroff, Guest Writer"

"The stakes of the 2024 presidential election cannot be overstated. With Donald Trump promising to act as a dictator 'on day one,' it is not hyperbolic to say the future of American democracy hangs in the balance. Against this backdrop, the Biden campaign faces a critical challenge: conveying a strong and effective image of President Joe Biden to a population and media ecosystem increasingly focused on optics over substance. Given the president's concerning performance last week, it's time for the Biden campaign to consider leveraging artificial intelligence (AI) to effectively reach the voting public."

"Reasonably, some may challenge the use of AI as dishonest and deceptive, but the current information ecosystem is arguably no better." "We must ask the question, are augmented AI videos that present Biden in his best form -- while sharing honest and accurate information -- really more socially damaging than our information ecosystem's current realities?"

"AI-generated content can be tailored to highlight President Biden's accomplishments, clearly articulate his policies, and present a consistent, compelling message. In an era where visual mediums and quick, digestible content dominate public perceptions, AI offers an opportunity for more effective communication. These AI-enhanced videos could ensure that the public does not make decisions about the future of our democracy based on an inconveniently timed cough, stray stutter, or healthy but hobbled walk (Biden suffers from a 'stiff gait')."

"The use of AI renderings in political campaigns is becoming increasingly common, and the Republican Party has already embraced this technology and is using AI in their attack ads against the president. Instead of a race to the bottom, the Biden campaign could consider an ethical way to deploy the same tools."