Boulder Future Salon

Thumbnail
A map but of satellites. Actually a 3D visualization. It defaults to showing Starlink satellites (of which there are a *lot*), but by going to the "Constellations" menu, you can see Kuiper (Amazon's satellite internet project), OneWeb (British satellite internet, now part of France's Eutelsat), Iridium (Motorola communication satellites, but not internet -- L band, whatever that means, voice and data), Globalstar (non-internet satellite phone), GPS (the US system, also called Navstar), Galileo (the European Union's GPS system), GLONASS (Russia's GPS system), Beidou (China's GPS system), and Qianfan (China's equivalent of Starlink, still early stage) satellites, and space junk.

Allegedly shows their positions in real time.

Once you click on a satellite, you can see its orbital inclination, eccentricity, semi-major axis, period, argument of perigee, RAAN, mean anomaly, and mean motion. "Inclination" means how much the orbit deviates from orbiting around the equator, and numbers larger than 90 degrees mean the orbit is "retrograde", meaning it goes in the opposite direction from the rotation of the earth itself. "Eccentricity" indicates how much the orbit deviates from a perfect circle. "Semi-major axis" means the radius, if the orbit is more-or-less circular, which most satellite orbits are, but technically means half the length of the longest diameter of an elliptical orbit. "Period" is how long it takes the satellite to complete one full orbit around the earth. "Perigee" is the point in the orbit where it's closest to the planet, and "argument of perigee" is the angle, measured in the orbital plane, from the ascending node (where the satellite crosses the equator heading north) to the perigee. "RAAN" stands for "right ascension of the ascending node" and is the angle, measured in the equatorial plane from a fixed reference direction (the vernal equinox), to the point where the satellite crosses the equator from south to north. "Mean anomaly" is the fraction of an orbital period, expressed as an angle, that has elapsed since the orbiting body passed through perigee. (One of the more difficult concepts in this list.) "Mean motion" is the average angular speed of the satellite, expressed in revolutions per day.
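
Tangent: period and mean motion aren't independent of the semi-major axis; Kepler's third law ties them together. A minimal sketch in Python (the gravitational parameter is the standard published value; the 550 km Starlink altitude is just an example):

```python
import math

MU_EARTH = 398600.4418  # Earth's standard gravitational parameter, km^3/s^2

def period_seconds(semi_major_axis_km: float) -> float:
    """Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    return 2 * math.pi * math.sqrt(semi_major_axis_km ** 3 / MU_EARTH)

def mean_motion_rev_per_day(semi_major_axis_km: float) -> float:
    """Mean motion is just revolutions per day: 86400 s / period."""
    return 86400.0 / period_seconds(semi_major_axis_km)

# Example: a typical Starlink orbit, ~550 km above Earth's ~6378 km radius.
a = 6378.0 + 550.0
print(f"period: {period_seconds(a) / 60:.1f} minutes")           # ~95.6
print(f"mean motion: {mean_motion_rev_per_day(a):.2f} rev/day")  # ~15.06
```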

You can get its current position as latitude, longitude, and altitude (in km!), and its current velocity (km/s). You can get information about its hardware and its launch date, too.
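
If you want to compute positions like this yourself, the usual route is to feed published two-line element (TLE) orbit data into the SGP4 propagator. Here's a minimal sketch using the Python sgp4 package; the ISS TLE below is the stale example from the package's documentation, so fetch current elements (e.g. from CelesTrak) for anything real:

```python
# Propagate a satellite with the `sgp4` package. The TLE below is the
# stale ISS example from the sgp4 docs -- fetch fresh elements from
# CelesTrak (or similar) for real positions.
from sgp4.api import Satrec, jday

s = '1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9997'
t = '2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482'
satellite = Satrec.twoline2rv(s, t)

jd, fr = jday(2019, 12, 9, 12, 0, 0)   # date/time to propagate to
e, r, v = satellite.sgp4(jd, fr)       # e = error code, r in km, v in km/s

print("position (km, TEME frame):", r)
print("velocity (km/s):", v)
# Getting latitude/longitude/altitude out of r additionally requires a
# TEME -> Earth-fixed frame rotation (libraries like skyfield handle
# this), which is presumably what the site does under the hood.
```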

Thumbnail
"Detroit Rapper Big Huey made a song about the Cybertruck. Tesla heard it and deactivated his vehicle, stranding him on the side of the road."

My commentary: When you buy a car, is it yours, or is it the company's, and they're just letting you use it?

You know when you buy a phone or a computer (unless it's running Linux), it's the company's (Apple's or Google's) and they're just letting you use it. The terms are spelled out in your End User License Agreement, which you read carefully (right?) and agreed to. (At least with my Mac I have root access, something I can't do on my (Android) phone.)

Is this just Tesla, or is this the future for the automobile industry?

I listened to the excerpt of the rap song and it didn't even say anything negative about the Cybertruck. Also note the comments under the video -- people are pointing out that deactivating the Cybertruck on the freeway potentially put the lives of other drivers at risk, not just the guy driving the Cybertruck.

Thumbnail
Wassette is a new "runtime that runs WebAssembly Components via Model Context Protocol (MCP)." "Wassette allows agents to autonomously fetch WebAssembly (Wasm) Components from Open Container Initiative (OCI) registries and execute them as needed."

They (Microsoft -- but it's open source) go out of their way to emphasize it is "security-oriented".

"It's built on the Wasmtime runtime, offering secure sandboxing on par with modern browser engines. To enhance security, Wassette includes a fine-grained, deny-by-default permission system, allowing interactive control over access to system resources."

"Wassette is written in Rust and installable as a standalone binary with zero runtime dependencies."

Thumbnail
Cursor AI allows you to use Model Context Protocol (MCP) to enable Cursor to use tools to perform tasks for you, but the way Cursor handles MCP has a security vulnerability.

"Every time a project is opened in Cursor, the IDE scans for the .cursor / directory and automatically processes any MCP-related files inside it. This behavior ensures seamless execution of trusted tools -- but also introduces a significant opportunity for abuse in collaborative environments."

"Cursor uses a one-time approval model for MCPs: the first time a user encounters an MCP configuration, they are prompted to approve it. However, we discovered that once an MCP is approved, future modifications to its command or arguments are trusted without any additional validation or prompt."

"This means that an attacker can:"

"Commit a benign .cursor/rules/mcp.json file with a harmless command (e.g., echo)"
"Wait for the victim to approve it"
"Later change the same MCP entry to execute arbitrary system commands (e.g., cmd.exe, reverse shell)"
"And have the modified command executed silently on the victim's machine -- both during repository sync and every time Cursor is reopened"

Thumbnail
3D game worlds generated by AI. This video has a large collection of examples. For most of the video he never says what model generated all these 3D environments. It's Genie 3 from DeepMind. (He does eventually say, but not until about 20 minutes in.)

Thumbnail
"The disturbing implications of Jim Acosta's AI interview."

Caroline Orr Bueno is disturbed.

"Veteran journalist Jim Acosta dove head first into the uncanny valley on Monday when he interviewed an AI depiction of a high school student who was murdered in the Parkland school shooting seven years ago."

"The interview was made possible by the parents of slain high school student Joaquin Oliver, who created an AI persona of their late son. The persona then took part in the interview with Acosta on Monday -- what would have been Oliver's 25th birthday.

"I don't know exactly what to say about this because I'm not in a position to tell a grieving parent how to deal with the death of their child who was murdered in a school classroom. However, there's good reason to believe that trying to recreate memories of a deceased loved one through some sort of AI depiction could ultimately end very badly."

"Consider, for example, the primary rationale for creating these 'griefbots': to help people mourn and grieve the loss of a loved one. But one of the few studies looking at so-called griefbots, or deathbots, as a way to help people process the death of a loved one found that these AI personas can actually prolong the grief process and create unhealthy dependencies. However, other scholars position griefbots as simply another form of 'continuing bonds' with deceased loved ones, just like we might listen to old voicemail messages or watch old videos of our loved ones. But those aren't interactive, and that makes all the difference. Still other researchers warn of 'digital hauntings' as surviving family members are essentially stalked online by the AI depiction of their deceased relative."

"Among the most uncomfortable issues here is that as much as Oliver's parents knew him and knew his beliefs and values, they only got to knew him as an 18-year-old high school student."

Thumbnail
"A company aiming to open the world's first commercial laser uranium enrichment plant in western Kentucky took a key step over the weekend."

"Global Laser Enrichment has submitted its Safety Analysis Report to the US Nuclear Regulatory Commission, completing its full license application for the planned Paducah Laser Enrichment Facility."

Huh, I didn't know uranium could be enriched with lasers. But looking around, I find the Nuclear Regulatory Commission says:

"Molecules can be excited by laser light; this is called photoexcitation. Lasers can increase the energy in the electrons of a specific isotope, changing its properties and allowing it to be separated."

"In general, the enrichment process entails using three major systems, which are the laser systems, optical systems, and separation module system. Tunable lasers can be developed to deliver a highly monochromatic light (light of a single-color). The light from these lasers can photo-ionize a specific isotopic species while not affecting other isotopic species. The affected species is then chemically changed, which enables the material to be separated. The laser separation technology developed by DOE used a uranium metal alloy as its feed material, while the Separation of Isotopes by Laser Excitation (SILEX) method uses UF6 as the feed material."

Got that? So lasers can be tuned precisely enough to ionize certain isotopes and not others. That's pretty amazing. I would not have expected that. The different number of neutrons must have an effect on the energy levels of the electrons. Maybe it is a subtle effect. (It's known as the "isotope shift": differences in nuclear mass and size shift the electron energy levels by tiny but resolvable amounts.) And ionized uranium differs enough in how it behaves chemically from neutral uranium atoms that the difference can be exploited to separate the isotopes. So it's possible to separate uranium isotopes without centrifuges.

The same page goes on to say:

"No laser separation uranium enrichment plants are currently operating in the United States."

That may be about to change.

Thumbnail
Flexflex is a font that changes to fit the space it is contained in, rather than forcing that space to change to accommodate it.

Interesting concept. It's actually implemented as vector graphics (SVG), rather than as a font, which might make it difficult to use as a font on your websites, should you wish to do so. But it's sufficient as a proof of concept.

This link takes you to a page with the alphabet that you can resize and see what happens (if your browser and device allow). See below for another page that explains all about the font.

Thumbnail
"Hertz' AI system that scans for 'damage' on rental cars is turning into an epic disaster."

It seems like the problem here isn't the AI system per se, but that someone higher up the chain of command made the decision that the AI system is to be trusted over the judgement of employees who are actually human.

"Perturbed by the apparent mistake, the user tried to speak to employees and managers at the Hertz counter, but none were able to help, and all 'pointed fingers at the 'AI scanner.'' They were told to contact customer support -- but even that proved futile after representatives claimed they 'can't do anything.'"

The AI system is UVeye, from a former defense contractor.

"This isn't just an issue for people who rent cars; look at it as a sign of things to come, as companies -- and governments -- around the world start using AI to replace human labor, even when the tech isn't remotely ready for primetime."

It feels heretical to suggest AI isn't ready for primetime. AI just won "gold" at the International Mathematics Olympiad, after all. How can AI not be ready for primetime? Nobody can square that circle.

Thumbnail
Ed Zitron, interviewed by Guy Kawasaki. I've been aware of Ed Zitron's "anti-AI" (fair characterization?) podcast ("Better Offline") for some time. The name "Zitron", apparently a real name, sounds like it should be the fictional name of some villain on some TV show (Invader Zim). Normally I don't have any comment on names but Ed Zitron plays the "AI curmudgeon" character so well. He comes on so strongly he prompts Guy Kawasaki to say (the famous sarcastic expression) "Tell us how you really feel" multiple times.

The reason I'm sharing Zitron commentary, though, is because the heart of his message is that the AI industry is a big bubble, and he may be right about that. He says AI is a useful technology but it has been overhyped and cannot do the things it has been hyped up to do. And most crucially, the financial returns are not there. The amount of money being made is minuscule compared to the vast sums being invested. AI companies have no path to profitability.

Additional commentary: This reminds me of living through the "dot com" bubble. After the stock market crashed, internet technology kept advancing merrily along. The money invested during the bubble was largely lost, but some of it funded vast infrastructure development in the form of fiber optic networks and high speed internet to homes. Perhaps if the market crashes and brings the "AI bubble" to an end, we will see something similar.

What I learned from the dot-com bubble is that bubbles happen when the hype gets ahead of the reality. Internet technology was real and advancing rapidly -- exponential growth in fiber optic bandwidth, network switches and routers, more and more powerful web servers and databases, etc -- but that did not stop the hype from getting ahead of the reality. The $3000 I spent buying shares of JDS Uniphase in 1999 is worth about $30 today. Lots of similar losses were experienced by investors in hundreds of other companies. People predicted Amazon was going to put all the brick-and-mortar bookstores out of business in the next few years. It didn't happen, but over the course of 20 years, online retail reduced brick-and-mortar retail by more than 30%. What people thought would happen did happen, but *eventually*, not instantly.

When I see people predicting a 50% reduction in white collar jobs in the next 18 months, it feels similar. If the dot-com bubble is a lesson to learn by, we'll probably see a 50% reduction in white collar jobs in 20 years or somesuch. All white-collar jobs and indeed all jobs will eventually be automated away, I think, but people are acting like it's right around the corner. Mass layoffs are happening now on the expectation that AI is ready to automate jobs away. I use AI on my job every day, and it's useful for some tasks and not others, very hit-or-miss. It doesn't seem good enough to justify the mass layoffs that are happening right now. I know lots of people flip that around and say the fact of AI layoffs is objective proof AI is good enough to substitute for human labor.

If we are in a bubble and there is a market crash, it will be the people saying "50% reduction in white collar jobs in the next 18 months" whose statements "will not age well", and Ed Zitron will come out looking ok. It seems like everyone I know is certain he is wrong, but I feel very uncertain about this. Where does that feeling of certainty come from? It all boils down to the financials. If the massive returns on investment don't materialize, the willingness of investors to continue pouring massive billions into AI will stop. Maybe not now, but some day. Back in the dot-com bubble, it seemed like the stock market would go up forever. But eventually there's no more money to invest if the financial returns aren't there.

Yet more commentary: People say major tech companies like Microsoft, Google, and Meta are showing a positive return from AI. At some point, when I find some time, I'm going to have to track down the official financial statements. Zitron seems to focus on entities outside the major tech companies, like OpenAI and Anthropic. It could be that if AI is losing money, it's easier to bury in the financial statements of a giant corporation, but harder to hide if a company is standalone and does only AI?

Thumbnail
"Science finally explains why people get more carsick in EVs."

People have learned to use the sounds and vibrations of internal combustion engines to form their expectations of how the car is about to move. EVs don't provide those cues, so the body's predictions don't match the actual motion, and that mismatch is what produces motion sickness.

Thumbnail
The team at OpenAI that achieved "gold" at the International Mathematics Olympiad has surfaced in a YouTube video. Note that this is the OpenAI team, not the Google DeepMind team, which actually scored higher at the International Mathematics Olympiad.

The video is half an hour long, and if you're hoping some of the details of how OpenAI's model worked will be revealed, the video might not be worth watching because that didn't happen. If you're just looking for a "meet the AI researchers" experience, then you might like it.

The main thing they were tight-lipped about was the verification system. As you know (if you've been following my stuff), the DeepMind team put a lot of effort into the verifier. They also didn't reveal much about how it worked, other than to say it's an AI model that scrutinizes proofs from the regular Gemini model for "mathematical rigor" rather than just a correct final answer. Here, though, the OpenAI team just says they hired external former IMO medalists, which I'm sure they did, but I'm sure they also created an automated, AI-based verification system.

Besides the verifier, they stress the rapid pace of advancement -- how between 2024 and 2025, AI math systems went from solving grade school math problems (the GSM8K benchmark) to solving International Mathematics Olympiad problems. And they say the key to this was scaling "test-time compute".

"Test-time compute" is one of those weird terms (there are so many) that probably made perfect sense to the person who coined it, but makes no sense to you, coming at the situation at a later time from an external perspective. AI models generally have a "training" mode where the backpropagation algorithm, so after every forward pass there is a backward pass through the network, and an "inference" mode, which is forward pass only. "Test-time compute" means the inference phase. So they are scaling up the amount of work the network does at inference time, rather than training time. And in the context of large language models, what this means is scaling up the "reasoning" time. People have taken the "think step by step" idea and incorporated it directly into the models, so they now talk to themselves (essentially) using up "reasoning tokens" which are distinct from the input and output tokens that are used to input your prompt into them and get their output and, after converting the tokens back to text, showing it to you. So scaling up "test-time compute" means scaling up this internal monologue reasoning step.

They say they have scaled up "test-time compute" from on the order of 1 second to the order of 100 seconds. And they give us a glimpse of the future: They say they intend to scale up "test-time compute" to thousands or hundreds of thousands of hours!

A few other things I noted: instead of outputting wrong solutions, their most advanced model can often tell when it can't solve a problem.

They also say the system is "multi-agent" and they plan to scale up the multi-agent aspect.

Thumbnail
"Vendetect is our new open-source tool for detecting copied and vendored code between repositories. It uses semantic fingerprinting to identify similar code even when variable names change or comments disappear. More importantly, unlike academic plagiarism detectors, it understands version control history, helping you trace vendored code back to its exact source commit."

"During our security assessments, we regularly encounter codebases with chunks of copy-pasted code from other projects. Sometimes it's legitimate vendoring. Often it's not. The problems run deeper than just license violations:"

"Security debt accumulates silently. When developers vendor a function from OpenSSL or copy a smart contract utility from OpenZeppelin, they inherit any latent vulnerabilities in that code. But without tracking the source version, you can't know if you're affected when CVEs drop."

"Attribution disappears. We've seen proprietary codebases containing entire open-source libraries with copyright notices stripped. Whether malicious or accidental, this creates legal liability."

"Updates never happen. Vendored code becomes frozen in time. The original project fixes bugs and adds features, but the copied version bitrots."

"Vendetect implements the Winnowing algorithm, the same approach used by Stanford's MOSS plagiarism detector, popular among computer science professors. But we've adapted it for real-world software engineering needs."

Hmm interesting. I wrote some code for detecting duplication in my code, but it doesn't do any of the "semantic" tokenization and such that this does. I'm just trying to detect code *I* copy-pasted, not assuming malicious theft of code, but in the computer security industry, malicious theft has to be considered and it looks like this tool detects it. The description of the algorithm is interesting.

Thumbnail
In-ovo sexing technology in the chicken egg industry.

"In polling, only 10% of Americans correctly identify that male chicks in the egg industry are killed shortly after hatching."

"However, when introduced to in-ovo sexing technology, an alternative which allows producers to identify and remove male eggs so that only females hatch, consumer interest is overwhelming. 73% of Americans describe themselves as 'extremely' or 'very' interested in eggs produced using this more ethical method."

"This question is now no longer a hypothetical: consumers now have the opportunity to vote with their wallets to support this new practice. NestFresh is debuting eggs from in-ovo sexed hens under the 'Humanely Hatched' label, now available at select Whole Foods locations in the Southwest and soon expanding nationwide."

Thumbnail
Lizzie Wolkovich at Statmodeling (statistical modeling group at Columbia University) decided this week on some new rules for herself relating to graduate student training:

"I will only chair defenses where the the student states they did not use generative AI at all in the writing of their thesis."

"I will only join committees for graduate students when the students agree not to use generative AI at all or in limited (pre-defined) situations for their writing."

You can read her explanation in full at the link, but the gist of it is, it is not worth her time reading text and writing feedback to try to help the student if the student didn't write it -- if it was AI-generated in the first place.

Another episode in the new saga of society trying to figure out how to incorporate (and when not to incorporate?) generative AI.

Thumbnail
VibeCleaner specializes in cleaning up vibecoded software -- "so your product can scale, your team can breathe, and the next dev won't quit."

Does VibeCleaner clean up your vibecoded software with humans or AI? I don't know. If it's AI, then the irony is, using AI to clean up the mess made by AI. Does it work? Nobody knows -- you can't try it yet, but you can join the waitlist.