Boulder Future Salon

Thumbnail
The price of gold is over $3,000. In fact, it's over $3,100... and $3,200, and $3,300... it's $3,351.30 at this moment. Up 16% in the last 3 months, up 25% in the last 6 months, up 44% in the last year, up 68% in the last 2 years, up almost 2x in the last 5 years.

Does the price of gold indicate inflation? If you don't trust the Consumer Price Index (CPI), you might look to gold as an indication of the true inflation rate. The price of gold should drift down over time, as gold mining companies find more gold and the supply increases. So if the price of gold goes up, it makes sense to interpret that as a decline in the value of the currency gold is denominated in.

Is that really what the price of gold represents, though? According to "Understanding the dynamics behind gold prices":

"Key takeaways:"

"Gold's price is influenced by central bank reserves and their purchasing trends."

"Economic and political instability increase demand for gold as a safe haven."

"Global gold production and mining challenges affect gold's supply and price."

"Demand for gold in jewelry and technology sectors also impacts its price."

Thumbnail
I wasn't going to say anything about the all-female Blue Origin space flight... but...

I visited the Smithsonian Air & Space Museum in Washington DC a few years ago, and I learned that the first woman in space was Valentina Tereshkova on Vostok 6 in 1963. It wasn't Sally Ride (in 1983, on a Space Shuttle) like I had thought. (In fact, Sally Ride wasn't even 2nd -- the 2nd woman was Svetlana Savitskaya on Soyuz T-7 in 1982). Apropos to the current news about the Blue Origin flight, this flight wasn't the first where the crew was all women -- Valentina Tereshkova's 1963 flight was a solo flight -- just her -- which means the crew was all women. And she was crew, not a tourist (she used the flight computer to change the orbit). She orbited Earth 48 times across 3 days in space. The spacecraft had the highest orbital inclination of any crewed spacecraft at the time (65.09 degrees) and that record was not broken for 62 years (by SpaceX's Fram2). Since she was the first woman in space, the primary purpose of the mission was health monitoring to see how her body would react.

This interview with Valentina Tereshkova happened at London's Science Museum in 2015.

Thumbnail
China is working on its own alternative to Nvidia's CUDA, called "MUSA", from a company called "Moore Threads". "Moore Threads", ha, that's pretty good -- like "more threads", but with "Moore" as in "Moore's Law". I'm not used to seeing clever English wordplay coming out of China, which has a completely different language.

Also, this article introduced me to the term "tech-autarky", a term that comes not from inside China but from the article's writer. Vocabulary word for today: an "autarky" is a society that is economically independent. China is pushing to become economically independent with respect to technology.

Thumbnail
"How Ukraine's drones are beating Russian jamming"

The Russians attached optical fiber spools to drones, enabling them to fly like a kite with the hair-thin fiber unspooling behind them, providing a completely unjammable connection. This article claims they can fly 20 or more kilometers away from the controller. It made me wonder what happens when the fibers get crossed.

The Ukrainians decided that instead of carrying the extra weight of a fiber spool, which comes at the cost of explosives, cameras, sensors, and computers for AI, they would invest in making their drones unjammable. This starts with frequency-hopping radios, receivers for all 4 satellite positioning services (the US GPS system, the European Galileo system, China's BeiDou system, and Russia's GLONASS), and AI systems that can navigate terrain visually. Apparently the visual navigation systems are good enough to make it through a "jamming bubble" and reestablish a satellite navigation fix on the other side, but not good enough to carry out the entire mission with visual navigation alone.
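
To make the fallback logic concrete, here's a tiny sketch (my own illustration, not anything from the article or from actual flight software) of how a drone might pick its navigation mode: use satellite navigation if any one of the four constellations gives a usable fix, otherwise bridge the jamming bubble on visual navigation.

def choose_nav_mode(gnss_fixes):
    # gnss_fixes maps constellation name -> whether a usable fix is currently available
    constellations = ["GPS", "Galileo", "BeiDou", "GLONASS"]
    if any(gnss_fixes.get(c, False) for c in constellations):
        return "satellite"   # any one of the four constellations is enough
    return "visual"          # inside a jamming bubble: fall back to terrain-matching AI

print(choose_nav_mode({"GPS": False, "Galileo": True}))  # -> satellite
print(choose_nav_mode({}))                               # -> visual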

At least that's the impression I get from this article. The article also implies drones are not making independent kill decisions without a human operator, but I thought that line had already been crossed some time ago. Maybe this article does not want to reveal the current state of the art in Ukrainian drone technology.

Some of you may recall I told you before that the Ukraine war is a drone war -- the world's first. (I said this when sharing a video from a veteran of the Ukraine war who estimated 90% of deaths were from drones.) This article has a quote that really underscores that:

"We have much less artillery than Russia, so we had to compensate with drones. A missile is worth perhaps a million dollars and can kill maybe 12 or 20 people. But for one million dollars, you can buy 10,000 drones, put four grenades on each, and they will kill 1,000 or even 2,000 people or destroy 200 tanks."

Thumbnail
AI meets "true crime": the case of Qinxuan Pan. Actually, this case doesn't have much to do with AI -- it's (spoiler) really (apparently) an unrequited relationship fantasy that led to homicide, and the only connection with AI is that it was a super smart AI PhD student who did it.

Thumbnail
debug-gym is an environment for AI coding tools to learn how to debug code like human programmers

"Most LLM-based code-repairing systems rely on execution feedback. Given a piece of buggy code, they execute it (e.g., with a Python interpreter) and obtain some error message. Conditioned on this message, the system rewrites the code to fix the bugs. This loop is iterated until the error message is empty, or the agent has exhausted some pre-defined budget (measured in steps or tokens). While this iterative approach improves repair performance, it might fail when bugs appear in complex real-world software projects, where the error messages can be nested or non-crashing, making them harder to detect and interpret."

"In addition to talk to a rubber duck friend, or to insert arbitrary numbers of print() calls into the code, expert developers also rely on interactive debugging tools that are specifically designed to assist in debugging. In the Python programming language, pdb is such a tool. pdb allows users to navigate the codebase through breakpoints and other granular stepping functions, they can inspect stack frames, list source code chunks of interest, and execute arbitrary Python code in the context of any stack frame. This enables developers to verify their hypothesis about their code's underlying logic, and thus gain a much more comprehensive understanding of potential bugs. A natural research question we ask is: to what degree can LLMs use interactive debugging tools such as pdb?"

"debug-gym is an interactive coding environment that allows code-repairing agents to access a collection of tools designed to support active information-seeking behavior, such as pdb. debug-gym expands a debugging agent's action space with a toolbox, which consequently expands the agent's observation space with feedback messages returned from using a tool. The toolbox is designed to facilitate debugging: for example, the agent can make use of the Python debugger pdb to set breakpoints, navigate the code space, print variable values, and even create test functions on the fly. At each step, the agent can either decide to interact with a tool to further investigate the code and gather necessary information, or perform a code rewrite if it is confident in doing so."

"debug-gym is a Python library, which essentially encompasses the interaction loop between an agent and a repository-specific environment. The environment is an encapsulation of an interactive terminal, a set of tools, a code repository, and optionally a set of test cases to evaluate the correctness of the code repository. In which, debug-gym provides the terminal and a preset of tools, the users are required to specify the code repositories they want to investigate, as well as the test cases if applicable."

"The pdb tool interfaces the agent with the full suite of pdb commands that can ordinarily be used in the terminal, allowing the agent to insert breakpoints, inspect local variables, and so on."

"Tools are highly modular, and users can introduce their own custom tools to debug-gym."

"Although the majority of this technical report assumes an LLM-based agent, the implementation of an agent can take many different forms, including rule-based programs, LLM-based chatbots, or even systems that have humans-in-the-loop."

They tested on a variety of benchmarks, most notably Mini-nightmare and SWE-bench.

"Mini-nightmare is a set of 10 hand-crafted buggy Python code examples with an average length of 40 lines. The code presents different types of scenarios where human developers would tend to use interactive tools (such as pdb) to assist in the debugging process. Such scenarios include race conditions in multi-threading, complex or unknown data structures, boundary issues, condition coverage, and string management. Each data point is paired with a test file so unit tests can be used to verify the correctness of the code."

"SWE-bench is a widely adopted benchmark that tests AI coding systems' ability to solve GitHub issues automatically. The benchmark consists of more than 2,000 issue-pull request pairs from 12 popular Python repositories."

The models tested were: OpenAI GPT-4o, GPT-4o-mini, o1-preview, o3-mini, Claude 3.7 Sonnet, Llama-3.2-3B-Instruct, Llama-3.3-70B-Instruct, DeepSeek-R1-Distill-Llama-70B, and DeepSeek-R1-Distill-Qwen-32B.

OpenAI's o1-preview did the best of the OpenAI models, and Claude 3.7 Sonnet looks like it performed the best overall. There are some newer reasoning models, like Grok 3 and Gemini 2.5, that were not part of the test.

However:

"Results suggest that while using strongest LLMs as backbone enables agents to somewhat leverage interactive debugging tools, they are still far from being proficient debuggers, this is especially the case for the more affordable choices of LLMs."

My commentary: Sometimes when a new benchmark is created, AI systems perform so laughably badly at it that you wonder why it was even made -- yet within a few years, they start performing well on it. Sometimes the first step to progress is to come up with a way to measure progress. Maybe debug-gym is the first step in making AI systems that are good debuggers.

Thumbnail
Funding was pulled on the CVE database -- the database of security vulnerabilities that everyone in the computer security field depends on and that all those "CVE numbers" you see all the time refer to -- but reinstated at the last second. I knew CVE stands for "Common Vulnerabilities and Exposures", but I never gave any thought to who exactly runs it. It turns out it's run by the MITRE Corporation with funding from the US government's Cybersecurity and Infrastructure Security Agency (CISA).

Thumbnail
4chan got hacked. They were using an old version of PHP that hadn't gotten security patches since 2016.

[Urge to insert sarcastic comment about how nobody should have patched PHP and the world should have let this horrible language die resisted.]

It seems possible 4chan might not come back from this and might be gone forever.

Thumbnail
The percentage of people who say they think the impact of artificial intelligence on the US over the next 20 years will be negative is 35%, positive is 17%. But, for "AI experts" (whoever those are), the numbers are 15% negative, 56% positive. (The numbers don't add to 100 because people could answer "equally positive and negative" and "not sure".) So "AI experts" are tremendously more positive about AI than the general public.

"Who did we define as 'AI experts' and how did we identify them?"

"To identify individuals who demonstrate expertise via their work or research in artificial intelligence or related fields, we created a list of authors and presenters at 21 AI-focused conferences from 2023 and 2024."

"The conferences covered topics including research and development, application, business, policy, social science, identity, and ethics."

"To be eligible for the survey, experts had to confirm 1) their work or research relates to AI, machine learning or related topics and 2) that they live in the US."

Continuing on... The percentage who say they think the increased use of AI is more likely to harm them is 43%. The percentage who say it will benefit them is 24%. This is for US adults. For AI experts, the "harm them" percentage is 15% and the "benefit them" percentage is 76%.

"The percentage who say the impact of AI on each of the following in the US over the next 20 years will be very or somewhat positive:"

"How people do their jobs": US adults 23%, AI experts 73%,

"The economy": US adults 21%, AI experts 69%,

"Medical care": US adults 44%, AI experts 84%,

"K12 education": US adults 24%, AI experts 61%,

"Arts and entertainment": US adults 20%, AI experts 48%,

"The environment": US adults 20%, AI experts 36%,

"Personal relationships": US adults 7%, AI experts 22%,

"The criminal justice system": US adults 19%, AI experts 32%,

"The news people get": US adults 10%, AI experts 18%,

"Elections": US adults 9%, AI experts 11%.

Seems noteworthy that there isn't a single aspect of life, at least on this survey, where the general public was more optimistic than AI experts. Also, AI experts were above 50% on 4 out of 10, while the general public was above 50% on 0 out of 10.

Interestingly, there is a gender gap, with 22% of men saying they think AI will positively impact the US, compared with 12% of women. For AI experts, the gap is even bigger: the corresponding numbers are 63% for men and 36% for women.

The percentage who say that over the next 20 years, AI will lead to fewer jobs in the US is 64% for US adults, 39% for AI experts. Almost as many AI experts say "not much difference" -- 33%. The "not much difference" number for US adults is 14%.

When asked about specific jobs, US adults and AI experts were in agreement (within a few percentage points) for cashiers, journalists, software engineers, and mental health therapists. AI experts foresee more job loss than the public for truck drivers (62% vs 33%) and lawyers (38% vs 23%). US adults foresee more job loss than AI experts for factory workers, musicians, teachers, and medical doctors.

66% of US adults and 70% of experts are "highly concerned about people getting inaccurate information from AI."

"The public is more worried about loss of human connection. While 57% of the public is highly concerned about AI leading to less connection between people, this drops to 37% among the experts we surveyed."

"75% of experts say the people who design AI take men's perspectives into account at least somewhat well -- but 44% say this about women's views."

The percentage who say that thinking about the use of AI in the United States, they are more concerned that the US government will not go far enough regulating its use is 58% for US adults, 56% for AI experts. For "go too far", those numbers were 21% for US adults and 28% for AI experts. So the public and the AI experts are pretty much in agreement on this one.

The percentage of US adults who say they interact with AI "almost constantly" or "several times a day" is 27%, vs 79% for AI experts. I guess it would be weird for AI experts not to interact with AI all the time.

The percentage who say chatbots have been "extremely" or "very" helpful for them is 33% for US adults, but 61% for AI experts. If you add in "somewhat", those numbers become 79% for US adults and 91% for AI experts.

The percentage who say they think they have no or not much control in whether AI is used in their life is 59% for US adults, and 46% for AI experts. I was surprised the AI experts number was so high. AI experts have what AI they use dictated to them by employers, just like regular people? [[For me, at work, I've done a lot of experimentation, but because I've failed to realize the expected 5x productivity gains, my boss is now stepping in and dictating what AI I (and the other developers) use. I've been studying AI since 2011, yet suddenly, he's the expert and I'm the dummy. I guess that's just how human social status hierarchies work.]]

The percentage who say they would like more control over how AI is used in their lives is 55% for US adults and 57% for AI experts.

The percentage who say the increased use of AI in daily life makes them feel "more excited than concerned" is 11% and the percentage who say it makes them feel "more concerned than excited" is 51%. For AI experts, those numbers are nearly reversed: "more excited than concerned" is 47% and "more concerned than excited" is 15%.

The percentage who say that when it comes to AI, they are extremely or very concerned about:

"AI being used to impersonate people": 78% for US adults, 65% for AI experts,

"People's personal information being misused by AI": 71% for US adults, 60% for AI experts,

"People getting inaccurate information": 66% for US adults, 70% for AI experts,

"People not understanding what AI can do": 58% for US adults, 52% for AI experts,

"AI leading to less connection between people": 57% for US adults, 37% for AI experts,

"AI leading to job loss": 56% for US adults, 25% for AI experts, and

"Bias in decisions made by AI": 55% for US adults, 55% for AI experts.

The percentage who say they think AI would do better than people whose job it is to:

"Make a medical diagnosis": 26% for US adults, 41% for AI experts,

"Drive someone from one place to another": 19% for US adults, 51% for AI experts,

"Provide customer service": 19% for US adults, 42% for AI experts,

"Decide who gets a loan": 19% for US adults, 41% for AI experts,

"Write a news story": 19% for US adults, 33% for AI experts,

"Write a song": 14% for US adults, 16% for AI experts,

"Make a hiring decision": 11% for US adults, 19% for AI experts, and

"Decide who gets parole from prison": 10% for US adults, 20% for AI experts.

My commentary: I'm surprised there's anyone who thinks there won't be a lot fewer jobs. We've already seen AI wipe out jobs for language translators (at least in text form), stock artists, and various other writing jobs (JK Rowling and Stephen King are not in danger of losing their jobs, but it's hard for a new author to break in with the flood of AI-generated or AI-assisted books hitting the market now). With YouTube announcing their automatic AI music generator, it's clear "stock musician" (a musician who makes background music for TV and online videos) is a job being eliminated as we speak, and my occupation, software engineer, is clearly in the crosshairs of AI. It will take longer for AI to master jobs in the physical world, like cleaning hotel rooms, which looks to be one of the most AI-proof jobs out there (who predicted that 20 years ago?), but does anyone seriously think 20 years won't be long enough for serious advances in that type of AI? Apparently so, if you believe this survey. The total labor force participation rate peaked in 2000, at the height of the dot-com bubble.

Thumbnail
"Intelligence evolved at least twice in vertebrate animals."

Twice meaning birds and mammals. (Octopuses (or maybe the plural is octopi?) are not vertebrates.)

"Birds and mammals did not inherit the neural pathways that generate intelligence from a common ancestor, but rather evolved them independently."

"Birds lack anything resembling a neocortex -- the highly ordered outermost structure in the brains of humans and other mammals where language, communication and reasoning reside. The neocortex is organized into six layers of neurons, which receive sensory information from other parts of the brain, process it and send it out to regions that determine our behavior and reactions."

"Rather than neat layers, birds have 'unspecified balls of neurons without landmarks or distinctions.'"

"The brain regions thought to be involved only in reflexive movements were built from neural circuits -- networks of interconnected neurons -- that resembled those found in the mammalian neocortex. This region in the bird brain, the dorsal ventricular ridge, seemed to be comparable to a neocortex; it just didn't look like it."

"By comparing embryos at various stages of development, Luis Puelles, an anatomist at the University of Murcia in Spain, found that the mammalian neocortex and the avian dorsal ventricular ridge developed from distinct areas of the embryo's pallium -- a brain region shared by all vertebrates. He concluded that the structures must have evolved independently."

Thumbnail
"Rescale, a digital engineering platform that helps companies run complex simulations and calculations in the cloud, announced today that it has raised $115 million in Series D funding to accelerate the development of AI-powered engineering tools that can dramatically speed up product design and testing."

"The company's origins trace back to the experience of Joris Poort, Rescale's founder and CEO, working on the Boeing 787 Dreamliner more than 20 years ago. He and his co-founder, Adam McKenzie, were tasked with designing the aircraft's wing using complex physics-based simulations."

"Their challenge was insufficient computing resources to run the millions of calculations needed to optimize the innovative carbon fiber design."

"This experience led directly to Rescale's founding mission: build the platform they wished they had during those Boeing years."

"Central to Rescale's ambitions is the concept of 'AI physics' -- using artificial intelligence models trained on simulation data to accelerate computational engineering dramatically. While traditional physics simulations might take days to complete, AI models trained on those simulations can deliver approximate results in seconds."

"This thousand-fold acceleration allows engineers to explore design spaces much more rapidly, testing many more iterations and possibilities than previously feasible."

Thumbnail
"In a first, breakthrough 3D holograms can be touched, grabbed and poked"

This is "the first time 3D graphics can be manipulated in mid-air with human hands."

"At the heart of the volumetric displays that support holograms is a diffuser. This is a fast-oscillating, usually rigid, sheet onto which thousands of images are synchronously projected at different heights to form 3D graphics."

"However, the rigid nature of the oscillator means that if it comes into contact with a human hand while oscillating, it could break or cause an injury. The solution was to use a flexible material -- which the researchers haven't shared the details of yet -- that can be touched without damaging the oscillator or causing the image to deteriorate."

Looks impressive in the video. Wonder how it tells where you're touching.

Thumbnail
The People's Bank of China just announced the full integration of its digital renminbi/digital yuan into the cross-border settlement system of all 10 ASEAN nations (Association of Southeast Asian Nations: Indonesia, Malaysia, Singapore, Thailand, Vietnam, Cambodia, Laos, Myanmar, Brunei, and the Philippines) plus an additional 6 Middle Eastern nations (did she say which 6 nations?), which can settle transactions in 7 seconds, vs 3-5 days for the US-dollar-based SWIFT system...

says Lena Petrova... but...

I looked for confirmation of this. There's no announcement on Global Times, a Chinese state-run English-language news outlet. I tried searching the Chinese-language version of Xinhua, the Chinese government's news agency, and did not find anything (though there were stories about the digital renminbi being used in Chongqing). I decided to track down the website of the People's Bank of China, which is claimed to be the origin of the story. The English-language search was broken, so I gave up and tried searching the Chinese-language site. Recent articles about the digital renminbi suggested it was not ready for what Lena Petrova claims in the YouTube video.

(From Google Translate:)

"In order to deepen the digital RMB pilot work, the Wenzhou Branch of the People's Bank of China has successfully created a number of iconic digital RMB pilot application scenarios, promoting the steady growth of digital RMB transaction scale and continuously improving the application ecology. So far, the city has 40.36 million digital RMB transactions with a transaction amount of 84.7 billion yuan; 6.14 million personal wallets, 280,000 corporate wallets, and 590,000 application scenarios."

"First, the application of key areas has expanded. In the field of public services, the coverage rate of digital RMB public wallets of municipal state-owned enterprises exceeds 80%, and more than 4.4 billion yuan of taxes and fees have been paid through digital RMB; in the field of commercial consumption, the acceptance rate of digital RMB in 10 characteristic business districts and blocks in the city exceeds 40%; in the field of transportation, culture and tourism, on the basis of opening digital RMB payment for bus routes and rail transit, 3 new passenger port enterprises have been added; in the field of medical and health care, the city's public hospitals above the second level have basically realized the full process and full scenario application of digital RMB, and grassroots health and medical institutions have opened digital RMB electronic wallets and realized digital RMB payment and settlement."

"Second, the business environment continues to be optimized. The digital RMB bridge project was first applied in the province to help enterprises efficiently and quickly receive cross-border remittances through the bridge project, reducing the time for cross-border remittances from half a day to within 2 hours. Funds were raised in the form of digital RMB, and 200 million yuan of corporate bonds were issued. An operating organization and the Municipal Ecological Environment Bureau jointly built the 'Wenzhou Small and Micro Hazardous Waste Unified Collection and Transportation Cloud Platform', which unified the flow of funds and information through the digital RMB smart contract technology and real-time account advantages, and realized the functions of 'effective supervision of collection and transportation funds' and 'waste collection and transportation completed, funds automatically synchronized to accounts'."

"Third, the payment experience of citizens has been continuously improved. We have carried out the construction of a high-quality service area for facilitating payments for foreign personnel at Longwan International Airport, deployed digital RMB hard wallet exchange equipment, established a digital RMB experience area, and completed the digital RMB acceptance transformation and full-scene connection of all merchants in the airport. We have promoted the establishment of the Wenzhou-Kean University Payment Facilitation Service Center, and an operating institution has launched the province's first digital RMB 'hard wallet' featuring foreigners, effectively improving the richness and convenience of payment in foreign-related scenarios in colleges and universities."

This is from an article dated April 03, 2025. Sounds like China is making progress with the digital renminbi, but it is not ready for anything like what is described in Lena Petrova's video. I don't know who Lena Petrova is or where she got this information, and I am wondering if we are getting a sneak peek at something real that is coming down the pike, or whether I managed to stumble into bona-fide fake news.

"De-dollarizing" is something that we should see happening (at least if you subscribe to the "soft power"/"hard power" theory I discussed earlier), but it does not appear the BRICS countries can as of yet get their act together and put together an alternative to the US dollar system.

Thumbnail
"Wells Fargo's AI assistant just crossed 245 million interactions -- no human handoffs, no sensitive data exposed."

"Wells Fargo has quietly accomplished what most enterprises are still dreaming about: building a large-scale, production-ready generative AI system that actually works. In 2024 alone, the bank's AI-powered assistant, Fargo, handled 245.4 million interactions -- more than doubling its original projections -- and it did so without ever exposing sensitive customer data to a language model."

"The system works through a privacy-first pipeline. A customer interacts via the app, where speech is transcribed locally with a speech-to-text model. That text is then scrubbed and tokenized by Wells Fargo's internal systems, including a small language model (SLM) for personally identifiable information (PII) detection. Only then is a call made to Google's Flash 2.0 model to extract the user's intent and relevant entities. No sensitive data ever reaches the model."

Further down in the article, it says 80% of usage is Spanish-language.

Thumbnail
The CEO of Shopify has announced that all developers must use AI daily. Actually all *employees*, developers or otherwise, must use AI daily.

Shopify has 20-40% annual revenue growth, but will do everything possible to not spend any of that money on hiring. Any proposals for headcount increase will be answered with, "Why can't you do that with AI?" All employees are required to increase productivity 20-40% per year or be fired. He said this is easy -- a low bar.

Thumbnail
"Soft power" vs "hard power."

I'm sharing this video (even though I know many of you prefer something you can read) because it seems to answer a question that has puzzled me for a long time: why is it that when an empire is declining, its leaders take actions that *accelerate* the decline rather than slow it down?

The proposed framework for thinking about this is "soft power" vs "hard power". "Soft power" comes from trust and admiration. The word they use in the video is "entice" -- "soft power" is "the ability to entice".

When people outside the empire trust and admire the empire, they support the empire and help it maintain its power. When that trust and admiration declines, the rulers try to substitute "hard power", the use of force, or the threat of force. This can come from enacting laws, to be enforced by police or military, or by the use of actual military force. Paradoxically, this attempt to maintain power actually accelerates the decline.

Use of force has to be perceived as being exercised in a trustworthy manner in order for trust to be maintained. It needs to be perceived as exercised in a judicious manner, in accordance with the principles for which the system is trusted and admired. If the use of force comes across as arbitrary, capricious, or self-serving, it will accelerate the loss of trust in the empire.

My commentary: Ok, so my first thought was that the US does seem to be doing this, both liberal/Democratic administrations and right/Trump/Republican administrations. The previous administration, the Biden Administration, after Russia invaded Ukraine, exercised hard power with the most severe sanctions regime in history. (Most people I know will hate me for saying this, because they love the Biden Administration.) While US allies were fully on board, it turned out that was only about 1/3rd of the planet. The other 2/3rds were probably freaking out. "Oh my god, we've got to get away from the US dollar and US financial systems like SWIFT. If US leaders decide they don't like us, for any reason, they won't hesitate to rip the rug out from under us. The US cannot be trusted."

With the new Administration, the Trump administration, it's tariffs. Hard power against everyone, allies and enemies alike. Pay up, people of the world. But they're not going to pay up, they're going to switch their trading partners. They're going to view the US as an unstable, untrustworthy trading partner. They're going to cozy up to China. If China is a serious geopolitical rival, getting every country everywhere to increase their trade with China and decrease their trade with the United States seems like the opposite of what you ought to do. Trump seems to think tariffs are a revenue source, something that can replace income tax. Apparently he's never seen the movie Ferris Bueller's Day Off (1986), or he'd know the Smoot-Hawley Tariff Act of 1930 failed to generate revenue for the federal government or alleviate the Great Depression (though they called it the Hawley-Smoot Tariff Act of 1930 in the movie -- see video below).

Anyway, now that I've had a day to think about this, I see some flaws in the argument, or at least in the examples given in the video. The main example they use is Rome, but the Roman Empire took hundreds of years to fall, and even when it did, it was the Western half that fell; the Eastern half kept going for roughly another thousand years. A more appropriate example for modern times might be the British Empire, but didn't the British Empire rely on "hard power" and not care about "soft power"? I may be wrong, but my impression was that the British took advantage of the fact that the industrial revolution started in England, which put them hundreds of years ahead, technologically, of the poorest parts of the world, including in war technology, and those poor parts of the world had natural resources the British wanted. So it was pretty much all "hard power".

It seems like trust is essential for a business, where customers make purchases voluntarily and must trust the quality of a brand or they will take their business elsewhere. I'm not so sure it always matters for an empire with "hard power" at its disposal? But maybe a multi-hundred year head start in technology like the British Empire had is an extreme historical anomaly and doesn't apply to the modern world, and in the modern world, a successful empire must have both "soft power" and "hard power" -- a powerful legal and military apparatus but also a trustworthy brand? What do y'all think?