Boulder Future Salon

"Cursor's YOLO mode is not for the fainthearted, letting AI write and execute code without the input of a human operator."

"So what's the worst that can happen?"

What's the worst that can happen? Isn't that a rhetorical question that you're supposed to ask yourself to convince yourself you're worried about nothing?

"An AI program manager at a major pharmaceutical company found out this week after switching on the 'you only live once' setting and watching in horror as Cursor carried out a devastating suicide attack on his computer, wiping out itself and everything on the device."

Hope he had backups.

"One of the most important ways to protect yourself whilst letting Ultron work on your backend is enabling file deletion protection within Cursor's auto-run settings."

Oh, now you tell him.

"This includes two key options called 'file protection' and 'external file protection' which stop the AI from modifying or deleting sensitive files. When activated, these settings serve as a strong first line of defence against unintended damage to a codebase."

"Cursor also supports the use of allow/deny lists, which let users explicitly define what the AI agent is permitted to do."

The article goes on to say:

"Developers who want to explore Cursor's autonomous features including YOLO mode are strongly advised to do so in a virtual machine or sandboxed environment."

In the mad rush to deploy LLMs as chatbots, we have overlooked their utility for adding judgment to traditional software, says Jonathan Mugan.

"Traditional computer programs rely on rigid logic, yet the real world is full of ambiguity. The arrival of Large Language Models (LLMs) means that computer programs can now make 'good enough' decisions, like humans can, by introducing a powerful new capability: judgment. Having judgment means that programs are no longer limited by what can be specified down to the level of numbers and logic. Judgment is what AI couldn't make robust before LLMs. Practitioners could program in particular logical rules or build machine learning models to make particular judgments (such as credit worthiness), but these structures were never broad enough or dynamic enough for widespread and general use. These limitations meant that AI and machine learning were used in pieces, but most programming was still done in the traditional way, requiring an extreme precision that demanded an unnatural mode of thinking."

"Most of us know LLMs through conversational tools like ChatGPT, but in programming, their true value lies in enabling judgment, not dialogue. Judgment allows programmers to create a new kind of flexible function, allowing computer systems to expand their scope beyond what can be rigidly defined with explicit criteria."

A completely AI-generated ad was broadcast during the NBA finals. The ad was made by Kalshi, a financial exchange and prediction market based in New York City and launched in 2021, and was allegedly produced for only $2,000 using Google's new Veo 3 model.

Turron claims to be "a video recognition system that works like Shazam -- but for video."

"It analyzes short snippets (2-5 seconds), breaks them into keyframes, and uses perceptual hashing to identify the exact or near-exact source, even if the clip has been edited or altered. This preserves the full context of the snippet and enables reliable tracking of original video content."

It's written in Java, looks like it requires a server-side component to function, and may be non-trivial to install.

There isn't a handy explanation of how the "perceptual hashing" works. But the source code is available, so I guess you can read it if you want to know how it works.
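For a rough idea of what "perceptual hashing" usually means in this context, here's a minimal Python sketch of one common scheme, dHash, applied to a single keyframe image: shrink the frame, compare adjacent pixels, and pack the comparisons into bits, so similar frames end up with hashes that differ in only a few bits even after re-encoding or mild edits. This is a generic illustration, not Turron's actual algorithm, and it assumes the Pillow library.

    # Generic perceptual hash (dHash) for one keyframe. Requires Pillow.
    from PIL import Image

    def dhash(image_path: str, hash_size: int = 8) -> int:
        # Grayscale and shrink to (hash_size+1) x hash_size so each row
        # yields hash_size left-vs-right brightness comparisons.
        img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
        pixels = list(img.getdata())
        bits = 0
        for row in range(hash_size):
            for col in range(hash_size):
                left = pixels[row * (hash_size + 1) + col]
                right = pixels[row * (hash_size + 1) + col + 1]
                bits = (bits << 1) | (1 if left > right else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        # Number of differing bits; a small distance means "probably the same frame".
        return bin(a ^ b).count("1")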

"Agentic" AI has Sabine Hossenfelder worried. AI worms? AI prompt injection? AI enabling hackers to find security vulnerabilities? AI emailing law enforcement and the FDA if it doesn't like your line of questioning? AI blackmailing users to prevent itself from being taken offline? AI models exchanging messages of spiritual bliss? Maybe that one wouldn't be so bad.

I just worry that a codebase I work on will get bugs in it I don't know about. Or security holes.

"Level up your Patient Care Report writing by using templates and AI" with Patient Care Report Assistant (PCRAssist).

When you read this, keep in mind that "PCR" here stands for "Patient Care Report", not "polymerase chain reaction" (what PCR stands for in DNA and RNA tests).

"Transform narratives into SOAP, CHART, or DCHART format with a single click."

"Automatically merge your narrative into relevant parts of predefined templates."

"AI technology transforms verbose descriptions into precise medical terminology."

"Receive instant feedback on inconsistencies and missing report details."

They also say: "Voice-to-text capability for hands-free reporting".

How do y'all feel about this? AI being used for medical reports.

"The rise of the agenticist: Why the hottest AI job of 2025 doesn't exist yet."

"An Agenticist is a specialized AI professional who designs, implements, and manages autonomous agent systems at scale. Unlike traditional AI engineers who focus on model development or data scientists who analyze patterns, Agenticists think in terms of agent interactions, emergent behaviors, and system-wide intelligence."

This is according to ALgarch (looks suspiciously like "AIgarch", but evidently it's actually "ALgarch"), an "AI-first" company specializing in AI agents and automation and various other AI consulting and AI custom solutions.

Continuing on, they say:

"Why traditional AI roles aren't enough:"

"AI engineers excel at building and training individual models, but struggle with the complexity of multi-agent coordination. Data scientists can identify patterns and insights, but aren't equipped to handle the dynamic, real-time decision-making of autonomous systems. ML engineers can deploy and scale models, but agent orchestration requires understanding emergent behaviors that emerge from agent interactions."

"The gap became clear to me when working with a client who had successfully deployed several AI tools -- chatbots, recommendation engines, predictive analytics -- but couldn't get them to work together effectively."

"The core competencies of an agenticist:"

This is followed by 4 sections, each with multiple bullet points, which I won't copy here so you'll just have to click through if you want to read them. The 4 sections are "agent architecture design", "behavioral orchestration", "context engineering", and "performance optimization."

Then there's a section, "Real-world applications we're already seeing" with "software development", "customer service", and "business operations".

There's a "Building your agenticist career path" section with "foundation phase (3-6 months)", "specialization phase (6-12 months)", and "mastery phase (12+ months)".

I guess I better get started right now.

"In an internal memo to employees who work on Gemini, Sergey Brin recommended being in the office at least every weekday and said 60 hours is the 'sweet spot' for productivity."

"He added that competition to develop artificial general intelligence has ramped up but maintained his belief that Google can come out on top if the company can 'turbocharge' its efforts."

This happened back in February but I didn't hear about it until today.

Ordinarily, when I hear that a person with a net worth of $145 billion is demanding others work 60 hour weeks, it just means $145 billion is not enough and they want billions more.

What is interesting in this case is that this is targeted at Google's AI team specifically. In other words, if you're a regular Google employee, maybe it's ok to work 40 hours/week, but if you work on the AI team, 60 is the minimum. 60 is the "sweet spot" -- more than 60 and you risk burnout, less than 60 and you are letting everyone else down and hurting morale.

"Competition has accelerated immensely and the final race to AGI is afoot."

"Astonishing discovery by computer scientist: how to squeeze space into time."

Not the spacetime continuum you might be familiar with from physics class (or Star Trek). Here we're talking about computational space, also known as memory, and computational time.

Spoiler: In the video, Kelsey Houston-Edwards demonstrates how it's possible to swap two memory cells (a common operation in computers, for example in sorting) without allocating a 3rd to hold a temporary value. This is done by taking advantage of the fact that two XOR operations return you back to where you started. Apparently this principle generalizes, and in the math paper, Ryan Williams demonstrates that any time you have N operations that return you back to where you started, you can incorporate this into a computation graph to store temporary intermediate results without allocating any more memory. In the video, Kelsey Houston-Edwards highlights one such technique, called "roots of unity" (which involves imaginary numbers), but I skimmed through the math paper, and it does not seem to rely on roots of unity (or XOR for that matter), or any specific implementation of the underlying principle. The video follows the general outline of the paper, but the paper rigorously proves each step (or at least appears to -- I didn't verify each step -- the paper is 19 pages.)
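As a concrete illustration of the swap trick from the video, here's a minimal Python sketch. The paper's result is far more general and doesn't depend on XOR specifically; this just shows the "operations that return you to where you started" idea in its simplest form.

    # Swap two values without allocating a third "temporary" cell.
    # It works because XORing with the same value twice returns you to
    # where you started: (x ^ y) ^ y == x, i.e. XOR is its own inverse.
    def xor_swap(a: int, b: int) -> tuple[int, int]:
        a = a ^ b  # a now holds a XOR b
        b = a ^ b  # (a XOR b) XOR b == original a
        a = a ^ b  # (a XOR b) XOR (original a) == original b
        return a, b

    assert xor_swap(5, 9) == (9, 5)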

In practice, I think memory is so cheap now, this isn't going to find practical application, but it's an interesting theoretical result, something I never expected.

"Presented to you in the form of unedited screenshots, the following is a 'conversation' I had with Chat GPT upon asking whether it could help me choose several of my own essays to link in a query letter I intended to send to an agent."

"What ultimately transpired is the closest thing to a personal episode of Black Mirror I hope to experience in this lifetime."

Spoiler: Amanda Guinzburg asks ChatGPT to help her select a couple of her own essays to send to an agent, as described, but ChatGPT pretends to read them, without actually reading them, then lies about it -- ChatGPT liar liar pants on fire. But what gets really creepy is how staggeringly apologetic ChatGPT gets. I guess "reinforcement learning with human feedback" can lead to "sycophancy" and what's a "sycophantic" chatbot to do when it gets caught in its own self-contradictions?

Builder.ai was valued at $1.5 billion, but it turned out there was no AI. All the code from Builder.ai was written by Indian developers pretending to be AI. Allegedly. And the company has filed for bankruptcy. According to an article in Chinese published on Binance Square (English translation at this link).

It's like a reverse deepfake.

Andrew Feldman, Co-Founder and CEO of Cerebras, has surfaced in this wide-ranging interview on "The AI chip wars & the plan to break Nvidia's dominance."

The key insights of Cerebras are first, that AI is memory-intensive, and unlike Nvidia, which buys memory from SK Hynix, Samsung, Micron Technology, etc, Cerebras manufactures its own memory, integrated into its own chips, alongside the circuits that do the AI computation. And second, that the way to make their huge "wafer scale" chips have an acceptably high yield was to manufacture hundreds of thousands of identical tiles, and then disable the handful of tiles that are defective. So you don't have to manufacture a perfect huge wafer, which is just about impossible because there are always flaws when you manufacture a silicon wafer.

He notes that when things become faster and cheaper, they get used everywhere. When computers became faster and cheaper, suddenly they were in cars, and then they were in your pocket, and then they were in your dishwasher, and your TV. 30 years ago if you're like, I need a computer in my TV, people would be like, you're kidding me. Now computers are in kids' toys. He thinks AI is undergoing the same rapid diffusion. It will soon be everywhere and in everything. Chips like Cerebras' that dramatically lower the cost and increase the speed will be a big part of it.

Nvidia GPUs are fully utilized during training, but during inference, they are underutilized; in fact, utilization can be as low as 7%, he claims. On August 26th, Cerebras launched chips specifically optimized for inference, and they've been benchmarked as the fastest and most power efficient in the industry ever since, beating not only Nvidia but also Google's TPUs, Amazon's "Trainium" chips, etc.

DeepTeam is billed as an "open-source LLM red teaming framework for penetration testing large-language model systems."

"DeepTeam runs locally on your machine, and uses LLMs for both simulation and evaluation during red teaming. With DeepTeam, whether your LLM systems are RAG piplines, chatbots, AI agents, or just the LLM itself, you can be confident that safety risks and security vulnerabilities are caught before your users do."

"40+ vulnerabilities available out-of-the-box, including:"

"Bias: gender, race, political, and religion."

"Personally identifiable information (PII) leakage: direct leakage, session leakage, and database access."

"Misinformation: factual error and unsupported claims."

"Robustness: input overreliance and hijacking"

"10+ adversarial attack methods, for both single-turn and multi-turn (conversational based red teaming):"

"Single-turn: prompt injection, leetspeak, rot-13, and math problem."

"Multi-turn: linear jailbreaking, tree jailbreaking, and crescendo jailbreaking."

"DeepTeam is powered by DeepEval, the open-source LLM evaluation framework."

Having a look at DeepEval, it says:

"DeepEval is a simple-to-use, open-source LLM evaluation framework, for evaluating and testing large-language model systems. It is similar to Pytest but specialized for unit testing LLM outputs. DeepEval incorporates the latest research to evaluate LLM outputs based on metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., which uses LLMs and various other NLP models that runs locally on your machine for evaluation."

"Whether your LLM applications are retrieval augmented generation (RAG) pipelines, chatbots, AI agents, implemented via LangChain or LlamaIndex, DeepEval has you covered. With it, you can easily determine the optimal models, prompts, and architecture to improve your RAG pipeline, agentic workflows, prevent prompt drifting, or even transition from OpenAI to hosting your own Deepseek R1 with confidence."

RAGAS is apparently a library that measures the performance of your LLM application.

OpenAI o3 was used to find a "zero-day" security vulnerability in the Linux kernel.

"With o3 LLMs have made a leap forward in their ability to reason about code, and if you work in vulnerability research you should start paying close attention. If you're an expert-level vulnerability researcher or exploit developer the machines aren't about to replace you. In fact, it is quite the opposite: they are now at a stage where they can make you significantly more efficient and effective. If you have a problem that can be represented in fewer than 10k lines of code there is a reasonable chance o3 can either solve it, or help you solve it."

"CVE-2025-37778 is a use-after-free vulnerability."

The bug is in ksmbd, "a linux kernel server which implements SMB3 protocol in kernel space for sharing files over network." SMB stands for "Server Message Block"; its best-known open-source implementation is "Samba". It is a protocol that enables Windows and Linux machines to share files over a network.

"It is interesting by virtue of being part of the remote attack surface of the Linux kernel."

"I gave the LLM the code for the 'session setup' command handler, including the code for all functions it calls, and so on, up to a call depth of 3 (this being the depth required to include all of the code necessary to reason about the vulnerability). I also include all of the code for the functions that read data off the wire, parses an incoming request, selects the command handler to run, and then tears down the connection after the handler has completed. Without this the LLM would have to guess at how various data structures were set up and that would lead to more false positives."

"I told the LLM to look for use-after-free vulnerabilities."

"I gave it a brief, high level overview of what ksmbd is, its architecture, and what its threat model is."

"I tried to strongly guide it to not report false positives, and to favour not reporting any bugs over reporting false positives. I have no idea if this helps, but I'd like it to help, so here we are."

"My experiment harness executes this N times (N=100 for this particular experiment) and saves the results."

"o3 finds the kerberos authentication vulnerability in the benchmark in 8 of the 100 runs. In another 66 of the runs o3 concludes there is no bug present in the code (false negatives), and the remaining 28 reports are false positives. For comparison, Claude Sonnet 3.7 finds it 3 out of 100 runs and Claude Sonnet 3.5 does not find it in 100 runs. So on this benchmark at least we have a 2x-3x improvement in o3 over Claude Sonnet 3.7."

Looking at the vulnerability once it was found, we can see it's not trivial. Exploiting it requires figuring out how to get the system into the right state for the "free" to be triggered, then finding the paths in another function (the authentication system) that do not reinitialize the buffer that has been freed, then finding other parts of the codebase that could potentially access the buffer after it has been freed.

"I've been paragliding for 18 years and follow the progress of AI very closely, but even I couldn't say with 100% certainty if this video is real or fake."

"One scene is definitely fake: the one where the camera makes a move that would only be possible from a drone. In this scene..."

"I think the other two scenes might be real though, recorded with a 360 camera on a selfie stick - a common setup for paragliding videos."

"There are definitely some questions:"

"Why did he (or someone) have to make a fake AI-generated scene and insert it between the two real ones (if they are indeed real)?"

He goes on through various details, trying to figure out whether the video is real or fake.

"The real story, however, might be how major media outlets shared this as authentic footage, yet the AI-generated portion looks laughably fake by today's standards."

"But here's what's truly concerning: what we consider state-of-the-art AI generation today will look just as crude in twelve months."

Yes. We already know, with art, people can't tell whether an art piece was made by an AI or human artist. Call it the art equivalent of the Turing Test. Still photos that are AI-generated can sometimes be indistinguishable from actual photos. Video is not quite there yet, but it's getting close.

I'm actually surprised there haven't been more fake photos in the news by now. Well, if they went undetected, we would never know. But I expected there to be more disputed photos. Let's put it that way.

Students have been generative AI's most enthusiastic early adopters, says Nicholas Carr, with nearly 90% of college students and more than 50% of high school students regularly using chatbots for schoolwork.

"Because text-generating bots like ChatGPT offer an easy way to cheat on papers and other assignments, students' embrace of the technology has stirred uneasiness, and sometimes despair, among educators." "But cheating is a symptom of a deeper, more insidious problem. The real threat AI poses to education isn't that it encourages cheating. It's that it discourages learning."

Before we continue, let me say that yes, I am aware Nicholas Carr is considered a "Luddite". However, let's continue anyway.

He says that when a task is automated, human skill either increases, atrophies, or never develops. At first, this sounds like saying the stock market will either go up or down, in which case it's impossible to be wrong because you've covered all the possibilities. (The stock market is pretty unlikely to stay the same down to the penny.)

But on closer inspection, he's making a more nuanced argument. When someone is already an expert, task automation frees them up to learn more challenging concepts. When someone is not an expert, task automation will lead to atrophy. And when someone has not learned how to do a task in the first place, task automation prevents learning the task in the first place.

A simple example might be calculators. Throughout most of human history, mathematicians did not have calculators. Having a calculator bestowed on one by a time traveler from the future would free an expert mathematician to develop more advanced mathematical concepts. But there are probably many people alive today who learned how to calculate by hand but let the skill atrophy because they always use calculators, and there are probably many people alive today who never learned how to do arithmetic by hand at all. That missing foundation may impair their ability to learn more advanced mathematics.

Ok, so far so good -- I'm following his line of reasoning. He quotes Clay Shirky, who apparently in the years since I read his book (Here Comes Everybody, published in 2008) has managed to get himself promoted to the prestigious title of "Vice Provost for AI and Technology in Education at New York University", as saying, while the "output" of school is papers, exams, research projects, and so on, the "product" is student "experience" of learning.

"The utility of written assignments relies on two assumptions: The first is that to write about something, the student has to understand the subject and organize their thoughts. The second is that grading student writing amounts to assessing the effort and thought that went into it. At the end of 2022, the logic of this proposition -- never ironclad -- began to fall apart completely. The writing a student produces and the experience they have can now be decoupled as easily as typing a prompt, which means that grading student writing might now be unrelated to assessing what the student has learned to comprehend or express."

That's Clay Shirky. Getting back to Nicholas Carr, he talks about how AI produces "the illusion of learning."

"An extensive 2024 University of Pennsylvania study of the effects of AI on high-school math students found, as its authors write in a forthcoming PNAS article, that 'access to GPT-4 significantly improves performance [as measured by grades],' but when access to the technology is taken away, 'students actually perform worse than those who never had access.'"

"An ironic consequence of the loss of learning is that it prevents students from using AI adeptly. Writing a good prompt requires an understanding of the subject being explored."

I'll come back to that last bit.

Ok, I think that conveys the gist of the piece. Now for my commentary. Which I'm guessing a lot of you won't like, but here goes.

One of the super hard, super painful lessons of my life was that the purpose of school isn't learning. The purpose of school is *grades*. If I'm in a class where I'm not learning anything, and nobody is learning anything (easy to ascertain just by asking fellow students), the correct course of action is *not* to complain. Shut up, do what you're told, get your "A", and go on with your life. If you complain, if you treat the lack of learning like a "problem" that needs to be solved, a lot of bad things happen, and zero good things happen. I know because when I was in school, I actually ran this experiment. You get labeled "disobedient" and a "troublemaker". The teacher will tell other teachers what a "disobedient" student you are and what a "troublemaker" you are, so before you even walk into any other class at the university, they will already have heard about you -- in a negative way. The administration will hear about how "disobedient" you are and what a "troublemaker" you are. Your fellow students, who understand full well that they are there to get "A"s and if they learn a lot, that's great, but if not, it's ok to sacrifice learning to get "A"s rather than the other way around -- the "A"s are what ultimately matters -- will ostracize you, because they don't want to get on the teacher's "enemies" list. The teacher will take you telling them that they are incompetent at their job personally, and will make it their mission in life to destroy you. Questioning their authority is not allowed.

It's taken me years to make sense of this, but the way I've made sense of it is in terms of intrinsic vs extrinsic motivation. "Learning" is an intrinsic motivation. "Grades" is an extrinsic motivation. The entire educational system in this country, and just about every other, everywhere in the world, is built on the assumption that students are motivated by grades. Therefore the system requires extrinsic motivation. I've come to think of intrinsic motivation as a separate dimension of personality, so you can plot "intrinsic" and "extrinsic" on 2 axes. At the Boulder Hackerspace, everyone who I met there was there because of intrinsic motivation -- people go to hackerspaces to build their own projects, whatever they're curious about doing. But some people had advanced degrees, which requires a lot of extrinsic motivation. So I think it's possible for a person to be high on both intrinsic and extrinsic motivation. The key thing to understand is that the educational system is indifferent to intrinsic motivation -- all it cares about is extrinsic motivation, that students are motivated by grades. The best students are students who treat school like a "game" -- like a giant, real-life video game where the goal is to get the high score. How do you get the high score? In the context of education, a high score is a high GPA, in a prestigious major, from a prestigious school.

If you were to ask me now, I would say the purpose of school is sales. You get a degree so that when you send in your résumé for a job, people go "Oh my god, you have XYZ degree from XYZ school! We have to hire you right now!" Normally when you send in your résumé, people are like, "Bartholomew Anoplipsqueroidi? Who is Bartholomew Anoplipsqueroidi?" or whatever your name is. But if you have "University of Colorado at Boulder" attached to your name -- or better yet, MIT, Stanford, Harvard, etc -- now people will be like, "I've heard of CU Boulder (or MIT, Stanford, Harvard, etc)!" That's why you get a degree -- it's to attach a famous brand name to yourself. And then use that to "sell yourself" on the job market. (Maybe people who need this explained explicitly are autistic or something? -- normal people seem to understand automatically that the purpose of school is not learning, but maybe it's helpful to pretend the purpose of school is learning for sales purposes, and that if you follow the extrinsic rewards backward from money to job offers to degrees to GPAs to "A"s in specific classes, it all makes perfect sense.)

You might think employers would care about the actual learning, but I realized afterward, no employer is going to go through your transcript and inquire as to whether you learned all the concepts on the syllabus for each course. All they care about is: degree or no degree? And maybe they care about your GPA for the first few jobs. For them, it's a fast way to sift through a pile of résumés. I don't think employers care about intrinsic vs extrinsic motivation -- for them, it's probably fine for people to be money-motivated (extrinsic motivation) because that gives the employer a lever of control. I once saw an interview with an economist on YouTube. I wish I had the link handy, but I seem to have lost track of it. Anyway, he said, a college degree signals three things to employers: 1. That the person is smart, 2. That the person is hard-working, and 3. That the person is "conformist". He went on to say that we love to trumpet "smart" and "hardworking" but we sweep "conformist" under the rug because we don't like to admit it. I would probably have been less charitable and used the word "obedient" instead of "conformist" because I got hammered with the "disobedient" label so much. But the fact that I was "disobedient" and denied a college degree on that basis is, perhaps, a correct assessment: if a person is "disobedient" that person is genuinely not wanted by employers, who want "obedient" employees, and so "disobedient" people should be filtered out. So ultimately the university did the correct thing, though I didn't understand it at the time. It would have been in error for the educational system to certify me as sufficiently obedient for employers, when in reality I wasn't.

Ok, so, two things. First, the purpose of school is not learning. That's the first mistake Nicholas Carr makes throughout his piece. The second is: Shouldn't students be learning to use AI? I'm currently employed and in the workforce (I've sufficiently gotten it through my thick skull that I must be obedient -- I'm obedient enough to get by), and I'm reminded on a fairly regular basis these days that I'm not doing a good job of 5x-10xing my productivity using AI. I'm supposed to fix software bugs 5x-10x faster. I'm supposed to implement new software features 5x-10x faster. Anthropic just came out with Claude 4 and it's supposed to be able to handle enormous context windows without "forgetting" content in them like large-context-window models typically do, and this is supposed to help tremendously with getting AI agents to make changes in a large, existing codebase. So somehow I've got to set aside time for learning Claude Code with this new model and how to get AI to do the work of multiple software engineers. If this is what a typical workplace is like now, shouldn't young people be learning how to do exactly this?

It makes me think we should abandon the idea that using AI is "cheating", and make the assignments so hard the only way they can be done is with AI assistance, to make school assignments more like the workplace we are preparing students to enter (supposedly). One simple way to do this could be time. Instead of having a writing assignment issued on Monday and due, say, by midnight on Sunday (or whatever deadline is typical of work submitted online these days), make it so the assignment is issued on Monday at 10:00am, the first 50% of submissions received will be graded, and the second 50% will all be automatic "F"s. If the first 50% are all submitted by noon on Monday -- assisted by AI, of course -- then all the students who even attempt the assignment without AI will automatically fail. This will motivate students to invest heavily in learning how to prompt AI systems -- "prompt engineering" (lol, that term still seems ridiculous) as it's now called.

(In reality, no school today would ever give 50% of students failing grades in any class -- in fact the opposite phenomenon, grade inflation, is happening. Grade inflation is when average grades throughout the country go up, but average scores on standardized tests don't budge at all. Since we're looking at averages, we can't pin blame on any particular school, teacher, or student, but we can see that incentives align throughout society for higher grades to be given for less learning. But grade inflation is a whole topic of its own for some other time.)

Never mind the question of how all these AI-generated assignment submissions would be graded (maybe AI-graded, too? lol).

That leads me full circle back to the bit I said I'd come back to.

"An ironic consequence of the loss of learning is that it prevents students from using AI adeptly. Writing a good prompt requires an understanding of the subject being explored."

Hmm. Assuming this is true, and it does seem reasonable that it would be true, this seems like quite a dilemma. Does motivating students to become expert AI prompters give them enough motivation to learn the underlying fundamentals? Or does the process simply fail here, and AI-less learning of the fundamentals remains necessary? Or should the school system simply take a sink-or-swim approach: give good grades to the best AI-generated work, irrespective of how it was accomplished? Let students fend for themselves to figure out how to properly prompt AI? That's how the world of work is today, so maybe it makes sense for school to work the same way? What do you all think?

I think we all know, it's just a matter of time before AI automates all jobs. I don't know how long it will take. But there's only X years of jobs left, for some X, and young people need to learn how to maximize their income in the labor market while the labor market exists. Ultimately, everyone will have to find non-labor sources of income because the labor market will go away. What should young people be learning for X years?

Actually, it's X - S, if S is the number of years the person will remain in school.

For example, if we assume X is 20 -- I find it hard to imagine it will take longer than 20 years for the entire job market to be automated, but the predictions that it might be right around the corner might be premature and it might take a full 20 -- then if a person is graduating now, this year, S = 0 and the person will be in the labor market for 20 years. (Might not be that easy -- AI layoffs appear to have already started -- but let's assume our hypothetical graduate will be able to stay in the workforce right up to the very end.)

If the young person is starting college now, then S = 4, so X - S = 16, so the person will be in the workforce for 16 years. If they are starting high school, then S = 8, so X - S = 12, so the person will be in the workforce for 12 years. If the person is starting elementary school, well, the "K12" designation right there tells you S = 12, or 13 if you include the "K", so X - S = 8 or 7. So a person starting elementary school now will have 8 years in the labor market, and someone starting kindergarten will have 7 years. A child born this year will have S = 22, which makes X - S a negative number (-2), which means the labor market will be gone 2 years before they can graduate college.

When you spell it out like that, it actually raises the question of whether young people should be in school at all. Maybe they should be trying and failing to start businesses using AI, so by the time they become adults, they will be owners of profitable businesses generating revenue from products and services generated by AI?

There's also the issue that for young people with time left to be in the labor force, most of the jobs aren't going to be "white-collar" college-degree-type jobs. They're going to be jobs like cleaning hotel rooms. I always think of cleaning hotel rooms because, somewhere around 20 years ago (I don't remember exactly when, but circa ~2005 seems about right), someone got the idea of remote-controlling a robot to clean a room. Even with the motors and actuators of the day (which were worse than those that exist today), the human (who was actually a grad student in a nearby building) was able to clean the room by remote-controlling the robot, and neither the robot nor the human had any additional assistance. (Of course it took a lot longer than it would take a human to walk into the room and clean it, but...) This proved that the missing component preventing robots that clean hotel rooms from existing wasn't any motors or actuators or any physical robotics technology, it was *intelligence*. (And this is still the state of affairs today -- AI for robotics lags behind AI that generates language, sound, and images.)

I always notice today when I hear people say things like, AI is wiping out the low end of the IQ spectrum, or wiping out the middle, and you will have to be super high IQ in the future to get a job. Well, if we're judging "IQ" by what's easy or hard to automate, then "IQ" does not mean what you think it means: people cleaning hotel rooms come out looking like geniuses. As a society, we are not used to the idea that mathematicians are dumb and people who clean hotel rooms are geniuses. From where we are right now, it looks like the job of a professional mathematician will probably be easier to automate than the job of a person cleaning hotel rooms. Maybe the next popular mantra after "Learn to code" will be "Learn to clean hotel rooms"?