Boulder Future Salon

"Avoiding skill atrophy in the age of AI."

Just as GPS navigation eroded road navigation skills, "AI-powered autocomplete and code generators can tempt us to 'turn off our brain' for routine coding tasks."

"A 2025 study by Microsoft and Carnegie Mellon researchers found that the more people leaned on AI tools, the less critical thinking they engaged in, making it harder to summon those skills when needed."

"What does this look like in day-to-day coding? It starts subtle. One engineer confessed that after 12 years of programming, AI's instant help made him 'worse at [his] own craft'. He describes a creeping decay: First, he stopped reading documentation -- why bother when an LLM can explain it instantly?"

"Then debugging skills waned -- stack traces and error messages felt daunting, so he just copy-pasted them into AI for a fix."

"Deep comprehension was the next to go -- instead of spending hours truly understanding a problem, he now implements whatever the AI suggests."

"We're not becoming 10 times developers with AI -- we're becoming 10 times dependent on AI." "Every time we let AI solve a problem we could've solved ourselves, we're trading long-term understanding for short-term productivity."

So the solution is to stop using AI, right? Of course not.

If you follow the list of guidelines on this page, supposedly your skills won't atrophy in the age of AI.

You're supposed to always verify and understand the output of the AI, never use AI for "fundamentals", always attempt problems yourself before asking AI, have human code reviews for AI contributions, actively learn how any working AI solution works rather than just shipping it, keep a learning journal of "AI assists", and work with the AI in a "pair programming mindset".

The immediate problem that comes to mind for me with everything on this list is, "Ain't nobody got time for that." We developers are supposed to 5x-10x our productivity. All the things on this list take time.

So, y'all tell me, how am I going to avoid skill atrophy in the age of AI?

"Russian McDonalds", which is actually called Vkusno -- i Tochka (Вкусно -- и Точка, means "Tasty -- and that's it") has robots.

In this video, Vasilisa Mamont shows an ad for the robots (definitely aiming for "cuteness") and then visits a Vkusno -- i Tochka in real life in... oh, she doesn't say where it is. But from her other videos we can see she's in Moscow for the "Victory Day" parade, so I assume it's in Moscow.

I haven't seen her channel before, but it looks like a strongly pro-Russia channel. Well, if a channel is made inside Russia, it has to be pro-Russia. The YouTubers I followed who were inside Russia before the war are now outside Russia. There's another video on the channel where she interviews a US citizen who is migrating to Russia using the new "Shared Values" (Traditional Values) visa. The YouTubers outside Russia say the idea that Russia represents "traditional values" is a joke; Russia under the communists wasn't "traditional" at all.

Anyway, this wasn't supposed to be about geopolitics (I mention it only because, if I know a priori that a channel has a bias, I try to tell you all about that up front). I just wanted to tell you all about the robots. It seems like we're starting to see robots in restaurants, after decades of anticipation, but they're still only in a few places and not as impressive as I expected. I guess that raises the question: what was I expecting? I guess I was expecting a fully automated restaurant with no humans to exist by now, but that hasn't happened.

Vastly larger context windows in language models are possible, or so it is claimed by Jacob Buckman, CEO of Manifest AI. He says he has invented a way of incorporating the key idea behind recurrent neural networks into transformers to make "power attention", enabling vastly larger context windows without the model forgetting anything in the window -- a problem for today's language models when their context window sizes get pushed up.
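
To make the "recurrent neural network meets transformer" idea a bit more concrete, here is a minimal sketch of generic linear attention in its recurrent form, where the whole attention history is compressed into a fixed-size state updated token by token. This is my own illustration of the general idea under that framing, not Manifest AI's actual "power attention" formulation, and the feature map phi is just a placeholder.

    import numpy as np

    def linear_attention_recurrent(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
        """Causal attention computed with a fixed-size running state, RNN-style.

        Q, K, V: (seq_len, d) arrays. phi: a positive feature map standing in
        for softmax (a placeholder choice, not the power-attention kernel).
        """
        T, d = Q.shape
        d_v = V.shape[1]
        S = np.zeros((d, d_v))   # running sum of phi(k) v^T -- the "memory"
        z = np.zeros(d)          # running sum of phi(k), for normalization
        Y = np.zeros((T, d_v))
        for t in range(T):
            q, k, v = phi(Q[t]), phi(K[t]), V[t]
            S += np.outer(k, v)                 # constant-size state update
            z += k
            Y[t] = (q @ S) / (q @ z + 1e-9)     # output depends only on the state
        return Y

    # Because the state never grows with sequence length, the context window can
    # in principle be pushed much further than with quadratic softmax attention.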

Claude's system prompt got leaked. Wait, people are saying Anthropic doesn't keep their system prompts secret, so this isn't really a 'leak'? Well, either way, here you are, you can read Claude's system prompt if you feel like it.

"We have reached the 'severed fingers and abductions' stage of the crypto revolution."

"This previous weekend was particularly nuts, with an older gentleman snatched from the streets of Paris' 14th arrondissement on May 1 by men in ski masks."

"The abducted father was taken to a house in a Parisian suburb, where one of the father's fingers was cut off in the course of ransom negotiations."

"This was the second such incident this year. In January, crypto maven David Balland was also abducted along with his partner on January 21. Balland was taken to a house, where he also had a finger cut off."

"A few weeks before that, attackers went to the home of someone whose son was a 'crypto-influencer based in Dubai.' At the father's home, the kidnappers 'tied up [the father's] wife and daughter and forced him into a car."

"Early this year, three British men kidnapped another British man while all of them were in Spain; the kidnappers demanded 30,000 euros in crypto 'or be tortured and killed.'"

"There's the Belgian man who posted online that 'his crypto wallet was now worth 1.6 million euros.' His wife was the victim of an attempted abduction within weeks."

"I reported last year on a gang based out of Florida that had been staging home invasions of people perceived to own lots of crypto. One of their hits took place in Durham, North Carolina."

All the abducted people in this story survived, and the criminals were caught. Cryptocurrency is not as anonymous as people think it is. Still, if you have any cryptocurrency, you might not want to brag about it online.

Netflix is undergoing a massive user interface redesign, and the new UI won't support choose-your-own-adventure movies. So all of those are going to get removed, and one of them is the Black Mirror movie "Bandersnatch". (According to the article, there's a total of 5 hours and 12 minutes of video, but each viewing is 90 minutes based on the viewer's interactive choices.) There's also a 2020 episode of "The Unbreakable Kimmy Schmidt" called "Kimmy vs. The Reverend" and a series of documentary Interactive Specials starring Bear Grylls, all under the "You Vs. Wild" banner.

There's a whole bunch more made for kids: "Cat Burglar," "We Lost Our Human," "Battle Kitty," "Barbie: Epic Road Trip," and others.

I guess, like those choose-your-own-adventure books we had when I was a kid, these weren't the most popular. Most humans, for whatever reason, like regular, linear stories.

"High-tech mechanical waiters popping up in local restaurants."

Local to Ashburn, Virginia, that is.

"Most of us have probably seen the 'Terminator' movies. We knew the age of robots was coming -- but who would have thought they would be so darn cute? That's the word Brandy Schaefer uses to describe the robot servers at the Honey Pig Korean BBQ restaurant in the Ashburn Farm Market Center."

The article notes that acceptance of robots is high in Asian cultures. But the robots are not limited to Asian restaurants.

"The Deli Man robot has four levels in order to carry piping hot pizzas, plates of pasta and savory sub sandwiches out to the dining room."

"AI and the fatfinger economy": Cory Doctorow posits an explanation of why AI is being shoved into everything, why AI-summoning buttons have been placed in places you're likely to accidentally hit them, and why once you do, those interactions are so hard to exit.

"Growth is a heady advantage for tech companies, and not because of an ideological commitment to 'growth at all costs,' but because companies with growth stocks enjoy substantial, material benefits. A growth stock trades at a higher 'price to earnings ratio' ('P:E') than a 'mature' stock. Because of this, there are a lot of actors in the economy who will accept shares in a growing company as though they were cash (indeed, some might prefer shares to cash). This means that a growing company can outbid their rivals when acquiring other companies and/or hiring key personnel, because they can bid with shares (which they get by typing zeroes into a spreadsheet), while their rivals need cash (which they can only get by selling things or borrowing money)."

"The problem is that all growth ends. Google has a 90% share of the search market. Google isn't going to appreciably increase the number of searchers, short of desperate gambits like raising a billion new humans to maturity and convincing them to become Google users (this is the strategy behind Google Classroom, of course). To continue posting growth, Google needs gimmicks."

So Google and companies like it want to convince the world they're a "growth company" set to double or triple in size by dominating an entirely new sector.

Inside the companies, these "corporate growth stories" are converted to "key performance indicators" tied to employee performance reviews and bonuses. So this is why you see every product team at every major tech company cramming AI into everything. They are boosting their metrics.

Crucially, what's not driving every major tech company to cram AI into everything is demand from you, the end user.

My commentary: Taking the long-term view, it is probably the case that AI is an entirely new sector and somebody will dominate it. That may well be one or more of the companies cramming AI into everything today. So for some, this may pay off, but for others, the lack of end-user demand for what they're pushing will cause them to lose out. Most of the time in technology, there are many options early in a technology revolution, followed by consolidation, leaving maybe 1 to 3 companies dominating that product type and everyone else going out of business. We've seen this with PCs, with the internet, with mobile devices, etc.

Followup commentary: I didn't mean to imply there isn't demand for AI technology. Most of us nowadays use AI from day to day when it's useful. I just don't think there's demand for AI technology everywhere it's being crammed.

3D-printed Starbucks. In Brownsville, Texas. Rumored to be opening on April 28. Today is May 8th. Is it open?

The Trump Administration's Signal got hacked. It looks like what happened here is this: because there is a requirement for government communications to be archived for the historical record, the Trump Administration wasn't using the standard version of Signal. On May 4th, a journalist took a photo of Mike Waltz using his phone, and people saw he was using a modified version of Signal called TeleMessage. Some hacker figured out where TeleMessage was sending messages to be archived -- an Amazon Web Services (AWS) server -- and was able to hack it, allegedly quite easily, gaining access to the archive in unencrypted form. You have to wonder: if some hacker was able to do that, and do it that easily, maybe they weren't the first? Maybe the server got hacked by Russia, North Korea, or Tokelau? Also, TeleMessage wasn't even made by a US company; it was made by an Israeli company. A non-US company was trusted to archive US government communications.

A "quantum navigation system" more accurate that GPS and jam-proof has been announced. Although unlike the last "quantum navigation system" I told you all about, this one doesn't use Bose-Einstein condensate to measure acceleration like a gyroscopic accelerometer. This one claims to use the earth's magnetic field, to measure out such tiny variations in the earth's magnetic field that becomes "like a magnetic fingerprint" that can be mapped out.

But wait, you may be thinking, if it's that sensitive, wouldn't it pick up all kinds of magnetic fields just from the inside of the aircraft (or seacraft)? If you guessed they claim to filter out that "electromagnetic noise" with AI, you win.

But how does it work? Well, here's what they say.

"Magnetic-anomaly navigation relies on the fact that the Earth's magnetic field possesses small amounts of local variation ('anomalies') that are geographically distinct. These anomalies are stable in time and have been mapped for various purposes, including resource exploration. With an appropriate sensor capable of detecting these anomalies, and an available reference map, it is possible to infer positional information to both improve on inertial navigation systems positioning and provide bounded accuracy indefinitely."

"The magnetic field that is measured is composed of several parts. There is the core Earth field, which is described by a time-dependent model such as the International Geomagnetic Reference Field, and has a scalar magnitude of approximately 25,000 -- 65,000 nanotesla (nT). On top of this there are anomalies that arise from crustal geology and are stable in time. These variations are on the order of 10 nT -- 100 nT over a few kilometers and are what is used for Magnetic-anomaly navigation. Global anomaly maps have been produced, such as the Earth Magnetic Anomaly Grid Version 3 or the World Digital Magnetic Anomaly Map, and can in principle be used for navigation. These are well-supplemented by higher-resolution maps developed by the geophysical surveying sector or defense agencies. Finally there are time-dependent effects such diurnal ionospheric variation ( approximately 100 nT) and space weather arising predominantly from solar activity (up to 1000s of nT during solar events)."

"The Q-CTRL magnetometers are scalar optically pumped magnetometers based on optical detection of atomic spin precession using a vapor cell containing rubidium atoms in a buffer gas."

Ok, so from what I can tell, what this means is that electrons have "spin", and this "spin" undergoes "precession", like the wobbling of a top, in the presence of a magnetic field. Someone clever figured out that if you use the right atoms (rubidium -- it's always rubidium, as Angela Collier likes to say -- but why? I don't know) and "pump" them with pulses of laser light at exactly the right frequency, you can get them to reveal their precession, and thus reveal the magnetic field generating that precession.
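
A back-of-envelope calculation shows why measuring precession is so sensitive. The gyromagnetic ratio below (~7 Hz per nanotesla) is my approximate value for rubidium-87, not a number from the article.

    # Larmor precession: the precession frequency is proportional to the field.
    GAMMA_RB87_HZ_PER_NT = 7.0   # approximate value for 87Rb (my assumption)

    def larmor_freq_hz(field_nt):
        return GAMMA_RB87_HZ_PER_NT * field_nt

    print(larmor_freq_hz(50_000))                           # ~350 kHz in a typical Earth field
    print(larmor_freq_hz(50_010) - larmor_freq_hz(50_000))  # a 10 nT anomaly shifts it by ~70 Hz

So a tens-of-nT crustal anomaly shows up as a frequency shift of tens of hertz on a ~350 kHz signal, which is a measurable change.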

"All photodetectors and light sources are integrated into the sensor head, along with vapor cell heaters. We have produced variants with pump and probe beams in both orthogonal and co-linear geometries; performance is comparable other than the modest increase in size associated with the orthogonal optical configuration."

"Vapor cell heaters" they say. I get the impression you're not freezing this down to a few degrees above absolute zero, like with a Bose-Einstein condensate.

"The map engine includes core and anomaly field modelling, map levelling, upward and downward continuation, and prediction of temporal effects such as the diurnal variation and space weather. The navigation-and-map-matching engine includes platform denoising, statistical filters, and navigation algorithms."

"Our approach to magnetic denoising is augmented by a physics-driven model used to learn the platform's magnetic behavior and how it corrupts the external Earth field. This is performed by solving (and adaptively updating in real-time) a set of coefficients that provide a model of the vehicle's magnetic field."

"The algorithm initially has no knowledge of the detailed magnetic characteristics of the vehicle, other than plausible physical assumptions that are true for any vehicle. It rapidly learns the platform characteristics as the navigational mission begins, and this training is continuously refined."

AI-powered work simulations. Hmm. That's an idea.

"Break free from traditional interviews. HireOS creates immersive work environments that reveal true talent through real challenges, not rehearsed answers."

Footage of fiber-optic drones from the war in Ukraine. Fiber optic drones spool out a fiber optic cable behind them. They cannot be jammed with radio signal jamming. They can fly behind mountains and houses and fly very low out of the line-of-sight generally required for radio communications. They can sit turned off on the side of a road and take off when targets arrive. This video has footage from both Ukrainian and Russian fiber-optic first-person-view (FPV) drones. Because they can't be jammed, if one is flying towards you, you're pretty much going to die. The path to escape with the best odds is to dive into thick brush, where the drone's fiber optic cable will get tangled. Much of Ukraine is open space without thick brush, or you could be in a vehicle. If you manage to escape one drone, there might be more. Usually multiple drones are used in the same attack.

AI systems are improving exponentially. One way to measure this is to compare how long a task takes a highly skilled human against whether an AI is capable of performing that task, which is what METR (Model Evaluation & Threat Research) studied. One of those researchers is Sydney Von Arx, who is interviewed here. So if you want to put a face to the name (though she is just one member of the team that did the research), here you go. She confirms that they found that every 7 months, AI models become able to do tasks that take a skilled human twice as much time, with a 50% probability of success. She says the team tried really hard to falsify this, and to accept it as real only if it really stood up to scrutiny. She says it did, and she is a believer in this trend.

So over time, AIs will be capable of performing tasks that take humans more and more time, from seconds to days to years, and that time measurement that it takes the human will continue to double every 7 months.

I commented on this research previously -- I estimated that by the 11th of November, 2032, AI systems would be able to perform tasks that take humans 1 year. That's what you get when you extrapolate this out into the future (and calculate with more precision than is justified).
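
Here is that extrapolation spelled out. The 7-month doubling time is from the interview; the baseline task horizon of about one hour as of mid-2025 is my illustrative assumption, not a number from the article, though it lands in the same ballpark as the 2032 estimate.

    import math
    from datetime import date, timedelta

    DOUBLING_MONTHS = 7                      # from the METR result
    baseline_date = date(2025, 5, 8)         # illustrative baseline date (my assumption)
    baseline_hours = 1.0                     # ~1-hour tasks at 50% success (my assumption)
    target_hours = 365 * 24                  # "tasks that take humans 1 year" (calendar time)

    doublings = math.log2(target_hours / baseline_hours)
    months = doublings * DOUBLING_MONTHS
    projected = baseline_date + timedelta(days=months * 30.44)
    print(f"{doublings:.1f} doublings, ~{months:.0f} months -> {projected}")
    # ~13 doublings, roughly 92 months, landing in late 2032 / early 2033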

"Commenters on the popular subreddit r/changemymind found out last weekend that they've been majorly duped for months. University of Zurich researchers set out to 'investigate the persuasiveness of Large Language Models (LLMs) in natural online environments' by unleashing bots pretending to be a trauma counselor, a 'Black man opposed to Black Lives Matter,' and a sexual assault survivor on unwitting posters."

"Reddit's Chief Legal Officer Ben Lee says the company is considering legal action over the 'improper and highly unethical experiment' that is 'deeply wrong on both a moral and legal level.' The researchers have been banned from Reddit. The University of Zurich told 404 Media that it is investigating the experiment's methods and will not be publishing its results."

Not publishing its results... yet we already know, "commenters on the popular subreddit r/changemymind" were "majorly duped for months". Obviously this means the Turing Test has been spectacularly passed.

If you've been having any doubt as to whether the Turing Test has been passed, you should have none now.

The models used, in case you were wondering, were OpenAI GPT-4o, Claude 3.5 Sonnet, and Llama 3.1-405B.

Also, unethical? Isn't the purpose of AI from this point onward to imitate and surpass (and displace in the job market) humans?

Aurora Driver has begun commercial operations. Aurora Driver is the eighteen-wheeler truck-driving AI from Aurora Innovation. The first driverless truck delivery trip was on April 26th, between Dallas and Houston.