Boulder Future Salon

"Quantum record smashed as scientists build mammoth 6,000-qubit system -- and it works at room temperature."

My first thought on reading that headline was, oh, RSA encryption is in trouble -- the largest RSA keys I've heard of are 4,096 bits, so a 6,000-qubit quantum computer could in principle run Shor's algorithm against all of them. (On reflection, not so fast: breaking RSA-4096 would take thousands of error-corrected *logical* qubits, which by current estimates means millions of physical qubits, and these 6,100 qubits are physical ones.)

This experiment wasn't a complete quantum computer, but it was a step towards building a 6,000 (or more precisely, 6,100)-qubit quantum computer.

This experiment is based on "optical tweezers." The idea behind optical tweezers is that light can transfer momentum to particles, and can do so in such a way as to "trap" them in one place. Once a particle is trapped, the lasers can be adjusted to move it around, like tweezers.

If you're thinking light is made of photons and photons have no mass, so it's impossible for them to transfer momentum to a particle, well... it turns out that massless photons still carry momentum, in proportion to their energy (p = E/c). When light is refracted or scattered by a particle, the photons' momentum changes direction, and by conservation of momentum the particle picks up the difference.

If you're wondering how it's possible to trap a particle in one place: if you emit coherent laser light and focus it through a high numerical aperture (NA) lens, the intensity -- and hence the electric field -- has a steep gradient around the focus. That gradient pulls a polarizable particle toward the point of highest intensity from every direction, "pushing" it from one side and "pulling" it from the other.

Numerical aperture (NA) is a dimensionless number that represents a lens's ability to gather light and concentrate it towards a focal point. The higher the NA number, the stronger the lens. When the light source is a coherent laser beam, the light, which, remember, is an electromagnetic wave, creates a strong electric field gradient.

For a particle to be trapped in this way, it has to be a "dielectric" material. To understand what a "dielectric" is, you first have to think of an electrical conductor as a material that has outer electrons that can be knocked loose so they can flow freely through the material. An electrical insulator is the opposite -- a material where all the electrons are locked in place and can't move. A "dielectric" material is an insulator, but the electrons are not so locked down that they can't wiggle, and if enough of them wiggle in the same direction, then the material becomes electrically polarized, even though it is not conducting electricity.
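
To make the "push/pull" gradient force concrete, here's a tiny numerical sketch (all values and units are arbitrary, purely for illustration): a polarizable particle sitting in a Gaussian intensity profile feels a force proportional to the intensity gradient, and that force points toward the focus from both sides.

```python
import numpy as np

# Illustrative numbers only: a dielectric particle in a focused Gaussian
# beam feels a gradient ("dipole") force proportional to the slope of the
# squared field, F ~ (alpha/2) * dI/dx, pointing toward the intensity peak.
alpha = 1.0   # polarizability (arbitrary units)
w = 1.0       # beam waist (arbitrary units)

def intensity(x):
    """Gaussian beam intensity profile across the focus at x = 0."""
    return np.exp(-2 * x**2 / w**2)

def gradient_force(x, dx=1e-6):
    """Numerical gradient force: (alpha/2) * dI/dx."""
    return (alpha / 2) * (intensity(x + dx) - intensity(x - dx)) / (2 * dx)

# Left of the focus (x < 0) the force is positive (pushes right, toward the
# focus); right of the focus it is negative (pulls left, toward the focus).
print(gradient_force(-0.5) > 0, gradient_force(+0.5) < 0)
```

So the particle is restored toward the focus no matter which way it drifts, which is the whole trick behind the trap.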

The experiment here used 6,100 cesium-133 atoms at far-off-resonant wavelengths, where "at far-off-resonant wavelengths" is a funny phrase that means the wavelengths of light chosen for the lasers were "far off" the wavelengths where cesium-133 atoms can absorb or emit photons, which are known as "resonant wavelengths". They did this in a vacuum (or more precisely, a near-vacuum, since a complete vacuum is not possible here on Earth) and at room temperature. It occurred to me that the concept of "temperature" as "average kinetic energy of particles" doesn't make sense in a vacuum, so you have to define temperature differently, such as by the wavelengths of black-body radiation emitted by the material of the vacuum chamber... but I digress. The point is they built their machinery to not require any cryogenic equipment to chill the hardware to near absolute zero, something usually required for quantum computing. The machinery can be built in a room-temperature lab.

If you're wondering why they chose cesium-133, the answer they give is:

"Cesium atoms possess the highest polarizability among the stable alkali-metal atoms at near-infrared wavelengths where commercial fiber amplifiers provide continuous-wave laser powers that exceed 100 W."

(As an aside, 100 W -- 100 watts -- seems like a lot for a laser. I've heard of 100 W lasers cutting acrylic, hardwood, and even some metals, and being used for engraving. The only reason I can think of why they'd need such a powerful laser here is that they're splitting it 11,998 ways -- let's just say 12,000 -- which works out to only about 8 mW per tweezer if split evenly.)


Cesium-133 is the only stable isotope of cesium, and it's the same isotope used for atomic clocks.

If you're thinking, 6,100 cesium atoms trapped in place is fine, but how do you get from that to quantum computing? Don't you have to get those 6,100 atoms to be "entangled", in the quantum physics sense of the term?

Yes you do, and here's where my understanding of quantum physics runs out, and along with that, my ability to explain anything to you. The process used is something called a "Rydberg Blockade", and I have found a paper that explains it (linked below), which you should be able to understand if you are well versed in quantum physics.

Using the "Rydberg Blockade" technique, atoms can be placed into any desired quantum state, and the amazing thing about this is, this includes states with superposition and entanglement with other atoms. This can all be done with sufficiently precise control over the lasers trapping the atoms.

Qubits need to be encoded in some long-lived quantum state of an atom, such as "hyperfine" states, nuclear spin states, or optical clock states, whatever those are. In this case, "hyperfine" states were chosen.

The so-called "hyperfine" states are connected by hyperfine transitions -- transitions between extremely close energy levels. In the hydrogen atom (which, remember, consists of one proton, one electron, and, in the most common isotope, no neutrons at all), when the electron flips its spin relative to the proton, it emits a radio wave with a wavelength of 21 centimeters, which is used in radio astronomy to map out hydrogen atoms throughout the universe. Hydrogen atoms that are invisible at normal optical wavelengths give themselves away through this "21 cm" emission.

Cesium-133 has a hyperfine transition in its ground state whose frequency is so precise and reproducible that it's the basis of atomic clocks (it literally defines the SI second). The same pair of hyperfine states behind that transition is used here to create qubits and high-fidelity two-qubit gates.
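
Just to ground those numbers with some quick arithmetic: the wavelength of each hyperfine transition follows from wavelength = c / frequency. Cesium's hyperfine frequency is 9,192,631,770 Hz (the SI definition of the second), and hydrogen's is about 1,420.4 MHz.

```python
# Quick arithmetic: wavelength = c / frequency for the two hyperfine
# transitions discussed above.
c = 299_792_458.0             # speed of light, m/s

cs_freq = 9_192_631_770.0     # cesium-133 hyperfine transition, Hz (defines the SI second)
h_freq = 1_420_405_751.768    # hydrogen hyperfine line, Hz

cs_wavelength_cm = c / cs_freq * 100
h_wavelength_cm = c / h_freq * 100

print(round(cs_wavelength_cm, 2))  # ~3.26 cm (microwave)
print(round(h_wavelength_cm, 1))   # ~21.1 cm -- the famous "21 cm" line
```

So cesium's clock transition sits in the microwave band at about 3.26 cm, the same kind of radio-frequency regime as hydrogen's 21 cm line, just at a shorter wavelength.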

They claim their "hyperfine qubit tweezer array", as the full setup came to be called, was able to maintain qubit coherence for 12.6 seconds. Not nanoseconds, or microseconds, or milliseconds ... *seconds*. At room temperature.

QASM is "a quantum programming language".

"QASM originated as a language for formally defining a quantum circuit to render images for visualization purposes. As quantum computation evolved, the language was adopted as a way to specify quantum circuits as input to a quantum computer."

"A QASM program declares the classical bits and qubits, describes the operations (gates) on those qubits and the measurements needed to obtain the classical result by inspecting the qubits."

"cQASM is used to describe relatively simple circuits, which is fine for the current generation of quantum computers. In the future, a higher level of abstraction will be required to deal with the billions of qubits needed to make up a practical quantum computer."

(Note: "cQASM" is the particular version of QASM described on this site.)
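
For flavor, here is what a tiny quantum program looks like in OpenQASM 2.0, a widely used QASM dialect (cQASM's syntax differs in details). It declares two qubits and two classical bits, entangles the qubits into a Bell pair, and measures them -- exactly the declare/gate/measure structure the quote above describes.

```qasm
OPENQASM 2.0;
include "qelib1.inc";

qreg q[2];   // two qubits
creg c[2];   // two classical bits to hold the results

h q[0];          // Hadamard: put qubit 0 into superposition
cx q[0], q[1];   // CNOT: entangle qubit 1 with qubit 0 (Bell pair)

measure q -> c;  // measure both qubits into the classical bits
```

Run on a quantum computer (or simulator), this program yields 00 or 11 with equal probability, and never 01 or 10 -- the signature of entanglement.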

Pew Research survey on global attitudes towards AI.

36,961 people were surveyed -- 8,628 in the United States and 28,333 in other countries: Canada, Mexico, Brazil, Argentina, the United Kingdom, Germany, the Netherlands, France, Spain, Sweden, Poland, Hungary, Italy, Greece, Turkey, Israel, Nigeria, South Africa, Kenya, India, Indonesia, Australia, South Korea, and Japan -- and asked how they feel about the rise of AI in daily life.

The options were "More concerned than excited", "More excited than concerned", and "Equally concerned and excited". The US ranked highest on "More concerned than excited", at 50% of respondents; South Korea ranked lowest at 16%.

People were asked if they trust the EU, the US, and China to regulate the use of AI effectively. For the EU, 53% said they have some or a lot of trust, 34% said they have no trust or not too much trust, and 15% said not sure. For the US: 37% some or a lot of trust, 48% no trust or not too much, 11% not sure. For China: 27% some or a lot of trust, 60% no trust or not too much, 13% not sure.

People were also asked whether they trust their own government. Most people put their own country's government at the top of the trust list, with 55% saying they have some or a lot of trust in their own government's ability to regulate the use of artificial intelligence effectively, 32% saying no trust or not too much, and 12% not sure.

However, later in the report, there is a per-country breakdown showing people in Greece had low trust in their own government (73% saying no trust or not too much), with Italy edging out the US in distrust (48% for Italy, 47% for the US). France (45%), Brazil (45%), Argentina (43%), Japan (39%), Mexico (37%), Nigeria (37%), Spain (35%), Poland (34%), and Hungary (33%) all had distrust above the 25-country median of 32%. On the flip side, the countries where people trusted their own government the most were India (89% saying some or a lot of trust), Indonesia (74%), Israel (72%), Germany (70%), the Netherlands (68%), Australia (65%), South Africa (64%), Turkey (60%), the UK (57%), Sweden (55%), South Korea (55%), and Kenya (54% -- but almost nobody there has heard much about AI, as we will soon see).

People were asked if they have heard or read a lot about artificial intelligence, and put in age brackets. The country with the biggest difference between the 50+ age bracket and the 18-34 age bracket was Greece, with 20% of the 50+ bracket saying they've heard or read a lot about AI versus 68% of the 18-34 bracket, for a gap of 48 percentage points. The gap was also pretty wide in South Korea, Japan, Poland, France, Sweden, Israel, and Spain. It was lowest in Kenya, but not very many people there of any age said they've heard or read a lot about AI (7% of the 50+ bracket, 14% of the 18-34 bracket). It's hard to have a big gap when every age group is close to zero.

Pew Research says there is a correlation between internet use and knowledge about AI. People who say they are online almost constantly are more likely than others to have heard a lot about AI. They also say people with more education are more likely than other groups to have heard a lot about AI. They also say people in wealthier countries tend to be more likely than those in less wealthy countries to have heard or read a lot about AI.

"At one end of this spectrum is the US, where GDP per capita is about $86,000 and 47% of adults have heard a lot about AI. By comparison, in Kenya, GDP per capita is about $2,200 and 12% of adults say they have heard a lot about AI."

"In some countries, people on the ideological right are less likely than those on the left to trust the EU to regulate AI. One of the largest ideological gaps is in the Netherlands, where 85% of those on the left trust the EU on this matter, compared with 61% on the right."

"In Europe, people with a favorable opinion of some right-wing populist parties are less likely to trust the EU to effectively regulate AI. For example, 43% of Alternative for Germany (AfD) supporters trust the EU on this matter, compared with 78% of nonsupporters."

"In 15 countries, people who place themselves on the ideological right express more trust in the US to regulate AI effectively than those on the left."

"This pattern appears in eight of the 10 European countries surveyed, with Spain showing one of the largest gaps (45% vs 21%)."

"Outside of Europe, ideological divides emerge in eight countries. In Australia, for example, 53% of those on the right trust the US to regulate AI, compared with 15% of those on the left."

"In 10 countries, adults ages 18 to 34 are more likely than those ages 50 and older to trust the US to regulate AI. For example, 82% of young Nigerians trust the US on this issue, compared with 65% of older Nigerians."

"In 19 countries surveyed, adults under 35 are somewhat more trusting than those ages 50 and older on China's ability to regulate AI. One of the larger age gaps is in Spain, where 54% of younger adults trust China on this issue, compared with 21% of older adults."

"In several of these countries, adults ages 50 and older are more likely than those under 35 to say they are unsure if they trust China to regulate AI."

"In most countries, younger people have more favorable views of China in general than older people."

"Adobe exec says the $141 billion software giant embraces candidates who use AI to apply for jobs -- because they're the people 'creating the future'"

Meanwhile...

"Claude Sonnet 4.5 demonstrates the ability to sustain complex, multi-step reasoning and code execution tasks for over 30 hours. On the SWE-bench Verified benchmark, which measures an AI model's ability to solve real-world software issues, Claude Sonnet 4.5 achieved a score of 77.2%, up from 72.7% for Sonnet 4, marking a notable advance in autonomous coding capability. On the OSWorld benchmark, which assesses real-world computer-use skills, Sonnet 4.5 reached 61.4%, improving significantly from 42.2% just four months earlier."

Someone attempted to submit code to systemd with a new feature called "detect-fash," "which scans a system for the presence of software and configurations known to be associated with fascist ideologies."

systemd (no capitalization) is the software that boots Linux systems and manages the services that run on the machine once the computer is up.

Inside the "detect-fash" submission, we see functions named:

detect_omarchy
detect_ladybird
detect_hyprland
detect_dhh

Omarchy is a new Linux distribution created by David Heinemeier Hansson (who goes by the initials DHH, so I will henceforth just call him DHH) which I've heard is supposed to be easy for Mac users to migrate to, kind of like how Linux Mint is easy for Windows users to migrate to. It is said to be very "opinionated" with all sorts of user interface decisions made for you, although since it is Linux under the hood, it is actually possible to customize it all.

Ladybird is an open-source web browser made by Andreas Kling.

Hyprland is a "Wayland compositor" -- a display server that implements the Wayland display protocol, the intended replacement for the X server protocol that most Linux systems currently use. Wayland is newer and supposed to be better, and people are trying to migrate to it.

The "detect_dhh" function checks whether systemd is running on DHH's own computer by looking for his public ssh key.
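
As a sketch of what a check like that amounts to: scan authorized_keys-style text for a specific, known public key. (The key below is a made-up placeholder, not anyone's real key, and this is my illustration, not the actual submitted code.)

```python
# Hypothetical illustration of a "detect_dhh"-style check: look through
# authorized_keys content for one specific, known public key.
# KNOWN_KEY is a made-up placeholder, not anyone's real key.
KNOWN_KEY = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA_PLACEHOLDER"

def key_present(authorized_keys_text: str, key: str = KNOWN_KEY) -> bool:
    """Return True if any non-comment line contains the given public key."""
    for line in authorized_keys_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and key in line:
            return True
    return False

print(key_present("# comment\nssh-rsa AAAA... user@host"))  # False: different key
print(key_present(KNOWN_KEY + " dhh@example"))              # True: key found
```

In other words, the "detection" is just string matching against a file the target machine would predictably contain.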

DHH is the creator of Rails (as in "Ruby on Rails" -- he did not create the Ruby programming language, which was created by Yukihiro Matsumoto in Japan; DHH created the "Rails" web framework), and I have a link below that explains why people think he's a "fascist". He's the only one of these targets I understand. The others I have no idea about. (If you know, please explain to me.)

There is an additional interesting twist on this. If you click over to the GitHub account that issued the pull request, you'll see it's an account with Russian writing. Rendered in our alphabet, it says "otrodyas takogo ne bylo, i vot - opyat", which Google Translate translates as "I've never seen anything like this before, and here it is again." But Wikipedia translates it as "The thing that never happens just happened again." Apparently the quote is attributable to Viktor Chernomyrdin, a Prime Minister of Russia (until 1998) who was known for comedic sayings. Another given on his Wikipedia page (link below) is "We wanted the best, but it turned out like always."

The fact that the submitter is (probably) Russian is hugely significant. Open source project maintainers in the United States are prohibited by sanctions laws from accepting submissions from anyone connected with the Russian government, which is on the US government's list of officially prohibited entities. The penalties for violating this law are said to be severe. So the fact that this was submitted from Russia may mean that, rather than being an attempt to combat use of or contribution to open source software by "fascists", this could actually be an attempt to take out the leadership of the systemd project by getting the people who run it punished by the US government. As I understand it, the primary people who run the systemd project work at Red Hat and are located in the US.

If you've heard that geofencing was used by Israel to target advertising at Christians, it looks like that's true, and the reason we know is a Foreign Agents Registration Act filing, something I didn't know anything about. The Foreign Agents Registration Act (FARA) requires "foreign agents" to register with the Department of Justice (DOJ) and disclose their activities and financial compensation.

My point here isn't to make any political or religious statement; I just think it's interesting that "geofencing" can be used for targeted advertising -- and that the FARA Act exists and can reveal a foreign entity using it (although you have to wonder if, after this, such activity will get hidden behind a chain of shell companies). The idea is, when people go to church, the GPS coordinates from their mobile phones fall inside the "geofenced" area of the church grounds, identifying them as attendees of that church.

This could be used for anything, not just churches, and when I mentioned it to some friends, they told me geotargeted advertising has actually been a thing for a long time. I guess I naïvely thought that just meant: you travel to city X, you get ads for restaurants in city X. Apparently geofencing is much more sophisticated than that now. You enter the grounds of a specific church, and computers somewhere remember that you're a Christian and a member of that church forever, and you get targeted advertising on that basis. It seems rather obvious when you spell it out like that. The FARA filing (link below) lists the specific churches targeted (starting on page 34): Scottsdale Bible Church, Scottsdale, AZ; North Phoenix Baptist Church, Phoenix, AZ; ... Once identified as a Christian, the person can receive targeted advertising with pro-Israel messages from the government of Israel.
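
Mechanically, geofencing is simple: treat a location (say, church grounds) as a circle with a center and radius, and test whether a phone's GPS fix falls inside it. A minimal sketch (the coordinates below are made up for illustration):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """True if the GPS fix (lat, lon) is within radius_m of the fence center."""
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m

# Made-up fence: a center point plus a 150 m radius.
fence = (33.4942, -111.9261, 150)

print(inside_geofence(33.4943, -111.9262, *fence))  # True: on the grounds
print(inside_geofence(33.6000, -111.9000, *fence))  # False: kilometers away
```

Log which device IDs trip which fences at which times, and you have exactly the attendance database the filing describes.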

The FARA filing even describes some of what those messages are: educational messages about the history of Jews in the region, before and after the creation of the state of Israel in 1948; educational messages about the history of the creation of Israel, its legitimacy as a power in the region, and its protection of non-Jewish populations; education about ongoing activities to protect civilians and maintain moral superiority; information about democratic freedoms in Israel, including religious and non-religious freedoms; questioning the longstanding policy of a 2-state solution; highlighting historical co-existence between Jews and Arabs continuing into the creation of Israel and the many concessions made by Israel in exchange for peace; information about the great partnership between Americans and Israelis internationally; Christians in Israel and the Birthplace of Jesus Christmas Message; ...

Tiny Recursive Models beat large language models on the ARC-AGI tests of intelligence.

"With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters."

The wording of that is very careful. The best LLM/multi-modal model on both ARC-AGI-1 and ARC-AGI-2 is a version of Grok 4 custom-trained for the ARC-AGI-1 and ARC-AGI-2 tests. It gets scores of 79.6 on ARC-AGI-1 and 29.4 on ARC-AGI-2. However, this model has 1.7 trillion parameters. Tiny Recursive Models are able to get 44.6 on ARC-AGI-1 and 7.8 on ARC-AGI-2 with only 7 million parameters. The ability to do so well with so few parameters is what's noteworthy.

"ARC-AGI-1 and ARC-AGI-2 are geometric puzzles involving monetary prizes. Each puzzle is designed to be easy for a human, yet hard for current AI models. Each puzzle task consists of 2-3 input-output demonstration pairs and 1-2 test inputs to be solved. The final score is computed as the accuracy over all test inputs from two attempts to produce the correct output grid. The maximum grid size is 30x30. ARC-AGI-1 contains 800 tasks, while ARC-AGI-2 contains 1120 tasks. We also augment our data with the 160 tasks from the closely related ConceptARC dataset. We provide results on the public evaluation set for both ARC-AGI-1 and ARC-AGI-2."

"While these datasets are small, heavy data-augmentation is used in order to improve generalization. ARC-AGI uses 1000 data augmentations (color permutation, dihedral-group, and translations transformations) per data example. The dihedral-group transformations consist of random 90-degree rotations, horizontal/vertical flips, and reflections."

"Tiny Recursive Model with self-attention obtains 44.6% accuracy on ARC-AGI-1, and 7.8% accuracy on ARC-AGI-2 with 7M parameters. This is significantly higher than the 40.3% and 5.0% obtained by Hierarchical Reasoning Model using 4 times the number of parameters (27M)."

How does it work?

Well, the actual paper talks a lot about a previous model (which you just saw mentioned in that last quote) called Hierarchical Reasoning Model. Tiny Recursive Model was created by improving upon Hierarchical Reasoning Model.

The philosophy of Hierarchical Reasoning Model is that you actually have two models. One processes inputs at a very high frequency. The second processes outputs from the first at a low frequency. In this manner, you establish a clear hierarchy.

The Tiny Recursive Model dispenses with the explicit hierarchy in favor of "recursion". There's a single network. It contains a transformer "attention" system, but combines that with the input (reminiscent of residual networks), the current best answer, and a hidden latent state (reminiscent of recurrent networks -- attention-based "transformers" made recurrent networks just about completely disappear).

Hierarchical Reasoning Models require a complex inner loop with fixed parameters controlling when the high-level network runs. The Tiny Recursive Model has a simpler inner loop, though it still has a fixed parameter for the number of updates to the hidden latent state (6 times through the loop) and another for the number of times it does the "deep recursion" incorporating the input, current best answer, and hidden state (3 times through that loop).

The Hierarchical Reasoning Model has a complex early-stopping mechanism that the creators of the Tiny Recursive Model say was both "biologically inspired" (using ideas from neuroscience) and inspired by Q-learning from reinforcement learning; it is computationally expensive to calculate whether to "halt". The new Tiny Recursive Model instead trains a simple halting head with binary cross-entropy, a commonly used loss function in machine learning. The head's output goes through a sigmoid function, and if the result is more than 0.5 (potentially another fixed parameter), the model considers its answer confident enough to stop.

The Hierarchical Reasoning Model outputs its final answer only from the network at the top of the hierarchy. The Tiny Recursive Model, in contrast, maintains the "current best answer" throughout the process. It maintains latent state throughout the process as well, allowing it to continuously maintain inner "thinking" that is not part of the final answer.
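
Putting those pieces together, here's a schematic sketch of the TRM control flow as I understand it from the paper: one tiny network refines a hidden latent state several times, then the current best answer once, repeats that "deep recursion", and halts early when a confidence head passes 0.5. The random linear maps below are toy stand-ins for the real (transformer-based) network; only the loop structure is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                              # toy embedding dimension
x = rng.normal(size=d)             # embedded input (the puzzle)
y = np.zeros(d)                    # current best answer
z = np.zeros(d)                    # hidden latent ("thinking") state

Wz = rng.normal(size=(3 * d, d)) / d   # latent update: sees (x, y, z)
Wy = rng.normal(size=(2 * d, d)) / d   # answer update: sees (y, z)
wq = rng.normal(size=d)                # halting head

N_LATENT, N_DEEP = 6, 3            # the two fixed loop counts described above

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

for _ in range(N_DEEP):                        # "deep recursion"
    for _ in range(N_LATENT):                  # refine the latent state
        z = np.tanh(np.concatenate([x, y, z]) @ Wz)
    y = np.tanh(np.concatenate([y, z]) @ Wy)   # refine the current best answer
    confidence = sigmoid(wq @ z)               # simple halting signal
    if confidence > 0.5:                       # confident enough? stop early
        break

print(y.shape)
```

Note how `y` (the running answer) and `z` (the inner "thinking") persist across every pass, which is exactly the contrast with HRM drawn above.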

It remains to be seen whether this will revolutionize the field of AI. Since these models are so small, there would seem to be tremendous headroom to scale them up and potentially crush humans on the ARC-AGI-1 and ARC-AGI-2 tests.

"User ban controversy reveals Bluesky's decentralized aspiration isn't reality. Bluesky's protocol is so complicated that not even the biggest alternative network has figured out how to become independent."

"Bluesky's engineering team has been moving ahead with its long-promised open source efforts, breaking up its software stack into several pieces to enable a federated Authenticated Transfer Protocol (ATProto) network where anyone with the know-how and funds could run their own copy of Bluesky."

But...

"The only completely independent implementation of ATProto is Bluesky. But that isn't for want of trying on the part of Rudy Fraser, the creator of Blacksky."

"Despite Fraser's efforts to implement his own PDS, Relay, and App View, however, Blacksky still remains partially dependent upon Bluesky's application server, largely because while the code to implement the dataplane of posts and users within an application server is released, the open-source version is slower. As a result, Blacksky is dependent on Bluesky's application server to give users a fast experience, which also means that it is dependent on Bluesky's labeling system and its moderation choices."

And the government is trying to influence those moderation choices.

"Federal Communications Commission Chairman Brendan Carr's threats against late night comedian Jimmy Kimmel led to his temporary suspension by ABC, and he was far from the only Republican to issue them. Louisiana Rep. Clay Higgins, chair of the House subcommittee on federal law enforcement, sent a menacing letter to Bluesky and other social media networks demanding that they identify and ban anyone deemed to be celebrating Charlie Kirk's killing."

"Today's LLMs are the epicycles of intelligence: extraordinarily useful for navigation through language, capable of producing predictive charts of our symbolic universe -- but like their astronomical predecessors, perhaps working well without being fundamentally correct."

"In astronomy, it took two orthogonal insights -- Copernicus's heliocentrism and Kepler's ellipses -- spread over seventy years to break free from epicycles, and another eighty for Newton to reveal the logic behind them. By analogy, we may still be in AI's pre-Copernican era, using parameter-rich approximations that will eventually give way to a more compact and principled foundation."

Is the possibility that gradient descent and backpropagation aren't the foundations of intelligence itself keeping you up at night?

camfer (no capitalization) is an AI CAD tool that works with SolidWorks on Windows.

If you're a SolidWorks user and give it a whirl, let me know how it goes.

"The AI boom's reliance on circular deals is raising fears of a bubble."

"Nvidia plans to invest in OpenAI, which is buying cloud computing from Oracle, which is buying chips from Nvidia, which has a stake in CoreWeave, which is providing artificial intelligence infrastructure to OpenAI."

"If it starts to become clear that AI productivity gains -- and thus the return on investment -- may be limited or delayed, 'a sharp correction in tech stocks, with negative knock-ons for the real economy, would be very likely,' analysts with Oxford Economics research group wrote in a recent note."

AI GIF Generate is an AI animated GIF generator.

In the discussion between Richard Sutton, pioneer of reinforcement learning, and Dwarkesh Patel, YouTuber, the two spoke past each other because they were "speaking two different languages", says Ksenia Se of "Turing Post".

Words like "prediction", "goal", "imitate", "world model", and "priors", have different meanings in the minds of Richard Sutton and Dwarkesh Patel.

Richard Sutton thinks of them in terms of reinforcement learning, and having studied part of his textbook (co-authored with Andrew Barto) (I read about half of it and confess to not having done most of the exercises -- they are quite challenging!), I understand him very clearly, while Dwarkesh Patel thinks in terms of the current large language models.

To me, Dwarkesh Patel's thinking seems limited because he's not able to see beyond large language models and their token-oriented, self-supervised training system. That may be fine for language, but other techniques, likely to come primarily from reinforcement learning researchers, are in my mind what will make robots competitive with humans in physical dexterity in the physical world.

"How functional programming shaped (and twisted) frontend development."

If it seems like ideas in React and Redux resemble ideas from the "functional languages paradigm" in languages like Haskell, it's not your imagination.

Some choice quotes:

"There's a strange irony at the heart of modern web development. The web was born from documents, hyperlinks, and a cascading stylesheet language. It was always messy, mutable, and gloriously side-effectful. Yet over the past decade, our most influential frontend tools have been shaped by engineers chasing functional programming purity: immutability, determinism, and the elimination of side effects."

"The web is fundamentally side-effectful. CSS cascades globally by design. Styles defined in one place affect elements everywhere, creating emergent patterns through specificity and inheritance. The DOM is a giant mutable tree that browsers optimize obsessively; changing it directly is fast and predictable. User interactions arrive asynchronously and unpredictably: clicks, scrolls, form submissions, network requests, resize events. There's no pure function that captures 'user intent.'"

"This messiness is not accidental. It's how the web scales across billions of devices, remains backwards-compatible across decades, and allows disparate systems to interoperate. The browser is an open platform with escape hatches everywhere. You can style anything, hook into any event, manipulate any node. That flexibility and that refusal to enforce rigid abstractions is the web's superpower."

"Functional programming revolves around a few core principles: functions should be pure (same inputs yields same outputs, no side effects), data should be immutable, and state changes should be explicit and traceable. These ideas produce code that's easier to reason about, test, and parallelize, in the right context of course."

"CSS was designed to be global. Styles cascade, inherit, and compose across boundaries. This enables tiny stylesheets to control huge documents, and lets teams share design systems across applications. But to functional programmers, global scope is dangerous. It creates implicit dependencies and unpredictable outcomes."

"React introduced synthetic events to normalize browser inconsistencies and integrate events into its rendering lifecycle. Instead of attaching listeners directly to DOM nodes, React uses event delegation. It listens at the root, then routes events to handlers through its own system."

"This feels elegant from a functional perspective. Events become data flowing through your component tree. You don't touch the DOM directly. Everything stays inside React's controlled universe."

"But native browser events already work. They bubble, they capture, they're well-specified. The browser has spent decades optimizing event dispatch."

It is alleged (by The Citizen Lab, at the Munk School of Global Affairs and Public Policy at the University of Toronto), that Israel is using AI to create online "influence operations" aimed at "regime change" in Iran, starting with a deepfake of IDF air strikes on Evin Prison in Tehran.