Boulder Future Salon

Thumbnail
"In theory, at least according to quantum chromodynamics (our theory of the strong nuclear force), there should be multiple ways to make a bound state of quarks, antiquarks, and/or gluons alone."

"You can have baryons (with 3 quarks each) or antibaryons (with 3 antiquarks each)."

"You can have mesons (with a quark-antiquark pair)."

"You can have exotic states like tetraquarks (2 quarks and 2 antiquarks), pentaquarks (4 quarks and 1 antiquark or 1 quark and 4 antiquarks), or hexaquarks (6 quarks, 3 quarks and 3 antiquarks, or 6 antiquarks), etc."

"Or, you can also have states made of gluons alone -- with no valence quarks or antiquarks -- known as glueballs."

Just to jog your memory: in the Standard Model, protons and neutrons are made of quarks. Electrons aren't made of quarks -- they aren't made of anything, they're their own elementary particle. Particles made of quarks are called hadrons. There's this big machine called the Large Hadron Collider (maybe you have heard of it?) that collides hadrons together -- usually protons, because as charged particles they are easy to accelerate and steer with electromagnetic fields. Protons and neutrons are also baryons, which appear in the list above: particles with 3 quarks each that combine in certain ways dictated by "quantum numbers".

This raises the question: what glues quarks together into larger particles? The Standard Model answer is another particle -- gluons. Gluons carry the so-called "strong force". The subfield of physics that studies the interactions of quarks and gluons is called quantum chromodynamics, because for some reason, physicists decided to name the properties of quarks that govern how they interact "color", even though this has nothing to do with actual color, the kind you see with your eyes, because quarks are vastly smaller than the smallest wavelength of light that your eyes can see and therefore don't have any color. (The smallest wavelength of light your eyes can see is about 400 nanometers. The diameter of a proton is about 0.84 femtometers. Remember, your metric prefixes go milli-, micro-, nano-, pico-, femto-, each 1000x smaller than the previous. So protons are about 470 million times smaller than the smallest wavelength of light your eyes can see, and as protons are made of quarks, quarks must be smaller still.) They called other properties of quarks "flavors". If you haven't realized by now that scientists give weird names to everything, maybe now is the time.
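In case you want to check my arithmetic on that, here it is as a few lines of Python, using the numbers above:

# Back-of-the-envelope check of the size comparison above.
visible_light_min_wavelength_m = 400e-9   # ~400 nanometers
proton_diameter_m = 0.84e-15              # ~0.84 femtometers

ratio = visible_light_min_wavelength_m / proton_diameter_m
print(f"Ratio: {ratio:.3e}")  # ~4.76e8, i.e. about 470 million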

Anyway, based on this, you might not think it's possible to make a particle composed only of gluons. But that's exactly what these physicists are claiming to have spotted, and they came up with the most obvious possible name for it: the "glueball".

This wasn't at the Large Hadron Collider (LHC), though. This was in Beijing, at an electron-positron collider whose detector is known as the Beijing Spectrometer III (BES III).

"In a radical new paper just published in the journal Physical Review Letters, the BES III collaboration just announced that an exotic particle, previously identified as the X(2370), may indeed be the lightest glueball predicted by the Standard Model."

The article goes on to describe a technique called "Lattice QCD".

"By treating spacetime as a discrete grid with a very small inherent spacing, we can make predictions for larger-scale phenomena: the confinement of QCD bound states, the conditions under which a quark-gluon plasma should arise, and even a prediction for the masses of various bound states, including not only the proton and neutron, but heavy and exotic bound states as well."

This is not a physical theory suggesting that space and matter are discrete at small enough scales -- there is such a theory, and the scale is known as the Planck length. To get from our proton diameter down to the Planck length, we'd need to keep going from femto- to atto-, zepto-, yocto-, and we run out -- brrrrrp! No we don't: in 2022, they added two more metric prefixes, ronto- and quecto- ... but we'd still need about 2 more to get down to the Planck length. Anyway, no, that's not what this is about. The "lattice" in "Lattice QCD" is a numerical approximation technique. As the spacing between lattice sites approaches zero, the approximation approaches the values predicted by the QCD theory itself, which uses continuous rather than discrete mathematics.
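To make that "spacing approaches zero" idea concrete, here's a toy sketch of the continuum-extrapolation step in Python -- my own illustration, not anything from the paper. Pretend we've "measured" some observable at several lattice spacings, with a discretization error proportional to the spacing squared (the 2370 is a made-up number, a nod to the X(2370)):

import numpy as np

# Made-up "measurements" of some observable at several lattice spacings a.
# Pretend the true continuum value is 2370 (MeV, say) with an a^2 error term.
spacings = np.array([0.12, 0.09, 0.06, 0.045])   # lattice spacing a
measured = 2370.0 + 850.0 * spacings**2          # toy observable(a)

# Fit observable(a) = c0 + c2 * a^2 and read off c0 as the continuum limit.
coeffs = np.polyfit(spacings**2, measured, 1)    # linear in a^2
continuum_value = coeffs[1]                      # intercept at a = 0
print(f"Extrapolated continuum value: {continuum_value:.1f}")  # ~2370.0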

Thumbnail
"I discovered hydrothermal vents, but I'm only known for finding the Titanic."

Moral of the story: If you solve the mystery of how life got a foothold on this planet, don't also find a sufficiently famous rusty old boat.

"We were always taught that life had to live in a very narrow pH, and all of a sudden we were finding life in a very acidic environment. We realised that these clams and tube worms were actually ingesting the chemistry of the vents, using it as fuel. Their bacteria were harnessing the energy of hydrogen sulphide to fix carbon. That just blew the socks off science because we had been told that all life on Earth of any major megafauna was due to photosynthesis."

Thumbnail
"Closed as unhelpful: A elegy for Stack Overflow"

"Even a decade ago, Stack Overflow had already developed a reputation as kind of a prickly, intimidating place."

"None of it mattered because Stack Overflow was a quantum leap in surfacing help for programmers."

"Stack Overflow gave stumped programmers the dopamine hit of looking up the solution to their problems, rather than learning how to solve them."

"Large language models (LLMs) have not, so far, proved to be as generally useful as a lot of true believers hoped. They're bad at surfacing insights not found in their training data, they're asymptotically mediocre at 'creative' tasks, and they're untrustworthy in any field requiring accurate information."

"But they have turned out to be reasonably accurate at producing answers to common programming questions."

"Most importantly, the thing Stack Overflow was worst at -- providing a welcoming place to learn -- is the thing LLMs are the best in the world at. Whereas posting on Stack Overflow can be like releasing a fragile bird into a cage full of furious snakes, ChatGPT does not care if your question is a duplicate. It does not care if you did not provide enough context. It is not hung up on parliamentary procedure. ChatGPT is infinitely patient and (more or less) constantly available."

"When LLMs first became available, a whole bunch of people tried to use them to generate Stack Overflow answers and rack up cheap karma; the overall quality of answers, though, was low enough that Stack Overflow has banned AI-generated content entirely from the platform."

"This raises the most troubling problem going forward, not just for SO but for all of us. The relationship between user-generated training data and AI-generated results so far appears to be one-way. LLMs do not add to humanity's body of knowledge; they only synthesize and regurgitate."

Thumbnail
"ChatGPT maker OpenAI exploring how to 'responsibly' make AI erotica."

This is all from one little paragraph in OpenAI's "Model Spec" document for ChatGPT.

"We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area."

Thumbnail
"United States Marine Forces Special Operations Command (MARSOC) has two robot dogs fitted with gun systems based on Onyx's SENTRY remote weapon system (RWS) -- one in 7.62x39mm caliber, and another in 6.5mm Creedmoor caliber."

"The underlying robot dog doing this tunnel work for MARSOC is Ghost Robotics' Vision 60 quadrupedal unmanned ground vehicle, or Q-UGV, Eric Shell, head of business development at Onyx Industries, said."

"Ghost Robotics describes its Q-UGV as a 'mid-sized high-endurance, agile and durable all-weather ground drone for use in a broad range of unstructured urban and natural environments for defense, homeland and enterprise applications.'"

Thumbnail
People wanting only daughters are using in-vitro fertilization (IVF) for sex selection. Allegedly this is popular with software engineers in Silicon Valley.

"Old debates around sex selection focused on the wish for sons. Today in America, that preference is often reversed. One study found that white parents having a first child picked female embryos 70 percent of the time. (Parents of Indian and Chinese descent were more likely to pick boys.) Anecdotes back this up, with message boards filled with moms dreaming of a 'mini me.' A 2010 study showed that American adoptive parents were 30 percent more likely to prefer girls than boys and were willing to pay $16,000 more in finalization costs to ensure a daughter. Close looks at demographic data suggest that families with daughters tend to have fewer subsequent children than do families with sons, indicating a sense that a daughter is what makes a family complete."

Thumbnail
MVNOs making a comeback? Humane, the company behind the AI Pin, runs its own wireless service. But it's not really a telecom company; it's a mobile virtual network operator (MVNO). MVNOs use the physical infrastructure of a major telecom company, such as T-Mobile. T-Mobile provides the infrastructure for Mint Mobile, which the article mentions just got acquired by T-Mobile, and it also provides the infrastructure for Humane.

While an MVNO doesn't build the infrastructure, it does form its own brand, do its own marketing, and have complete control of customer service, billing, and everything else to do with the customer experience.

Mint Mobile doesn't do any AI product, or anything like that -- they offer discounted short-term subscriptions. Apparently that business is worth $1.35 billion, as that was the acquisition price. Whether MVNOs will become a common feature of AI products remains to be seen.

Apparently a key enabler of MVNOs is the transition of the mobile industry from SIM cards to eSIM.

Thumbnail
The most close-up video of the sun yet, shot from the European Space Agency (ESA)'s Solar Orbiter. Solar Orbiter is a spacecraft in a highly elliptical orbit around the sun: out near Earth's orbit at the outer part, and inside the orbit of Mercury, very close to the sun, at the inner part. This video was shot in ultraviolet on its last whip through the inner part of its orbit a few months ago, but was only recently released.

It shows that in the transition region between the sun's atmosphere and its corona there are formations called "coronal moss" that form delicate, lace-like patterns (that's how they describe them -- they look like small sharp wiggles in the image to me), "spicules", which are "spires of gas" reaching upward, and "coronal rain", which is made of higher-density, lower-temperature plasma that falls back down onto the sun.

The "coronal moss" are thought to be at the base of magnetic field loops that are otherwise invisible.

The video also caught a "small" eruption that is bigger than Earth. "Small" in comparison with the sun, though.

Thumbnail
Google had a plethora of AI announcements at Google I/O.

Gemini Advanced subscribers now have access to the newest model, Gemini 1.5 Pro, which has a 1-million-token context window. In practice the "context window" is the budget for everything the model considers at once: your prompt, the conversation so far, and any underlying "system message" that the creators of the system put in all have to fit inside it.

Google demoed an "Ask Photos" feature where you can ask questions like "What's my license plate number?" and it searches all your photos, finds your license plate, and tells you the number. You can ask when your kid learned how to swim. You can ask questions of your Gmail, such as "summarize all the announcements from my kid's school".

Google is working towards AI agents that will carry out multiple steps for you instead of just answering one question. You tell it to complete a task, and it works out the steps and tries to carry them out. "Return these shoes for me." It figures out where the shoes came from, how much they cost, and how to contact customer support, and then it actually contacts the shoe seller.

Their lightweight model is called Gemini 1.5 Flash, designed to be fast and cheap to serve (the model designed to run on-device on phones is the smaller Gemini Nano).

Project Astra is their attempt to create a real-time AI agent that uses the camera on your phone. You can ask it to explain what you're looking at, "What is this part of the speaker called?" or ask it to make up rhymes.

Google's response to OpenAI's Sora is a video generation model called Veo.

Google is rolling out an "AI Overview" in Google Search. (I've already seen it.) It uses what they call "multi-step reasoning". You should be able to ask Google Search "multi-step questions".

They're building AI into Android phones that can detect if you're potentially talking to a scammer.

They're open-sourcing a 2-billion parameter model called Gemma 2.

Thumbnail
"WindRunner is the world's largest aircraft, specialized to deliver the largest onshore wind turbines."

"Today's largest wind turbines and the even larger ones of the future cannot be transported to prime onshore wind farms via ground infrastructure."

"WindRunner can land on semi-prepared airstrips as short as 6,000 feet (1,800m), something no other large commercial aircraft can achieve."

They say "is" (present tense), but this plane hasn't been built yet. If it ever gets built, it will be the biggest plane ever built -- see comparison at the bottom with the Antonov An-124 and Boeing 747.

All for delivering giant wind farm blades, which will be left wherever out in the wilderness after the wind turbine wears out and the company no longer wants to spend the money for maintenance. I'm jus' sayin', we already have lots of broken wind turbines, and this is going to mean bigger turbines in even more remote areas where they're even less likely to get proper maintenance.

Thumbnail
"PennyLane is a cross-platform Python library for differentiable programming of quantum computers. Train a quantum computer the same way as a neural network."

Oh wow, that's, uh, that's a concept.

"Key Features:"

"Machine learning on quantum hardware: Connect to quantum hardware using PyTorch, TensorFlow, JAX, Keras, or NumPy. Build rich and flexible hybrid quantum-classical models."

"Just in time compilation: Experimental support for just-in-time compilation. Compile your entire hybrid workflow, with support for advanced features such as adaptive circuits, real-time measurement feedback, and unbounded loops. See Catalyst for more details."

"Device-independent: Run the same quantum circuit on different quantum backends. Install plugins to access even more devices, including Strawberry Fields, Amazon Braket, IBM Q, Google Cirq, Rigetti Forest, Qulacs, Pasqal, Honeywell, and more."

"Follow the gradient: Hardware-friendly automatic differentiation of quantum circuits."

"Batteries included; Built-in tools for quantum machine learning, optimization, and quantum chemistry. Rapidly prototype using built-in quantum simulators with backpropagation support."

I don't have a quantum computer, so I'll leave it to all of you to run this and tell me how it goes.
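Actually, you don't need one -- PennyLane ships with classical simulators. Here's a minimal sketch of the "train a quantum circuit like a neural network" idea, using the built-in default.qubit simulator. The calls are PennyLane's documented API; the one-qubit "model" and the toy cost function are my own choices for illustration:

import pennylane as qml
from pennylane import numpy as np

# A one-qubit "model" running on PennyLane's built-in classical simulator.
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)            # rotate the qubit by a trainable angle
    return qml.expval(qml.PauliZ(0))  # measure <Z>, which lies in [-1, +1]

def cost(theta):
    # Toy objective: drive the qubit from |0> (<Z> = +1) toward |1> (<Z> = -1).
    return circuit(theta)

opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.1, requires_grad=True)
for step in range(50):
    theta = opt.step(cost, theta)     # gradient descent, just like a neural net
print("theta =", float(theta))        # converges toward pi, where <Z> = -1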

Thumbnail
Kaspersky, the company that makes anti-virus software that some of you out there probably use (although maybe you'll rethink that after reading this), has been accused of making neural net software that's been added to Iranian drones and sent to battle in Ukraine.

So you'll see this link is to "Part 2" of a story. "Part 1" is about how a company called Albatross located in the Alabuga special economic zone, which is located in "the Republic of Tatarstan", which is not a separate country, but a state within Russia that is called a "Republic" anyway instead of "Oblast", the usual word for what would correspond approximately to a "state" in our country (well, assuming your country is the US, which it might not be, as there are people from everywhere here on FB, but you probably have something analogous in your country, "Province" for example), and which is located -- if you've ever heard of the city of Kazan, Kazan is the capital of Tatarstan -- ok that was a bit long for a sub-clause, where was I? Oh yeah, a company called Albatross in the Alabuga special economic zone in Tatarstan got hacked, and what the documents revealed is that this company, "Albatross", was making "motor boats" -- but "motor boats" was a code name for drones (and "bumpers" was the code name for the warheads they carried), and more specifically the "Dolphin 632 motor boat" was really the Iranian Shahed-136 UAV, which got renamed to the Geran-2 when procured by the Russian military.

"Part 2" which is the link here goes into the Kaspersky connection. Allegedly two people at Kaspersky previously took part in a contest, called ALB-search, to make a neural network on a drone that could find a missing person. In the military adaptation, it finds enemy soldiers. Kaspersky Lab made a subdivision called Kaspersky Neural Networks.

The article links to a presentation regarding a neural network for an agricultural drone, with slides about assessment of crop quality, crop counting, weed detection, land inventory, and such, but it goes on to describe searching for people and animals, UAV detection (detecting other drones in the vicinity), and even traffic situation analysis.

There's also a system called Kaspersky Antidrone, which is supposed to be able to basically hijack control of someone else's drone within a controlled airspace.

The article alleges Kaspersky was working with Albatross not only to deploy its neural networks on Albatross drones for detecting enemy soldiers, but also to develop them into artillery spotters. This is all with an on-board neural network that runs directly on the drone.

If true, this would indicate an advancement of drones in the Ukraine war -- so far I've heard very little about neural networks running on board drones -- as well as an advancement of cooperation between Russia and Iran, and of the integration of civilian companies such as Kaspersky into the war effort.

This information comes from a website called InformNapalm, which I haven't seen before, but which they say was created by some Ukrainians as a "citizen journalism" site following the Russian annexation of Crimea in 2014.

Kaspersky has denied the allegations (article on that below).

Thumbnail
"Robert Dennard, father of DRAM, is deceased -- also known for his foundational Dennard scaling theory."

This obituary is worth noting for futurists because Dennard scaling is indeed a foundational theory, closely related to Moore's Law.

Dennard scaling, in short, is:

When you cut the linear dimensions of a digital circuit in half, you reduce the area to 1/4th of its original size (area of a square is the side squared), which enables you to pack 4x as many transistors into that area; you cut the voltage in half; you cut the current in half; you cut the capacitance of each transistor in half; so you reduce the power consumption per transistor to 1/4th (power is voltage times current), which keeps the power density of the chip constant; and you cut the transition time in half (delay scales as capacitance times voltage divided by current), which enables you to double your "clock speed".
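Here's that arithmetic spelled out as a few lines of Python -- my own sketch of the scaling relations above, with k = 2 meaning "cut dimensions in half":

# Classic Dennard scaling with scale factor k (k = 2: halve linear dimensions).
k = 2.0

transistor_density = k**2                # 4x as many transistors per unit area
voltage = 1 / k                          # 1/2
current = 1 / k                          # 1/2
capacitance = 1 / k                      # 1/2
power_per_transistor = voltage * current # 1/4
power_density = transistor_density * power_per_transistor  # 1.0: constant!
delay = capacitance * voltage / current  # 1/2, so clock speed can double

print(power_density, 1 / delay)  # prints: 1.0 2.0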

This might make you wonder, why in the mid-2000s did clock speed stop increasing and power consumption stop going down, even though transistors continued to get smaller? Well, I researched this question a few years ago, and the surprising answer is: they would have if we had been willing to make our chips colder and colder. To have continued Dennard scaling to the present day, we'd need, like, cryogenically frozen data centers. The relationship to temperature is that, if you don't drop the temperature, then your electrical signals have to overcome the random jiggling of the atoms in the circuit -- which is what temperature is, the average kinetic energy of the molecules in your material. The way you overcome the "thermal noise" this introduces into your electric circuit is with voltage. So, you can't drop your voltage, and you can't drop your power, and, as it turns out, if you can't drop your voltage and power you can't drop your transition time, so you can't double your clock speed.
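To put rough numbers on the temperature connection: the "thermal voltage" kT/q is a standard yardstick for the voltage scale of that thermal jiggling, and it scales linearly with temperature. A quick sketch using standard physical constants (the cryogenic comparison is my own illustration):

# Thermal voltage kT/q: the voltage scale of random thermal jiggling.
k_B = 1.380649e-23   # Boltzmann constant, joules per kelvin
q = 1.602176634e-19  # elementary charge, coulombs

for label, temp_kelvin in [("room temperature (300 K)", 300.0),
                           ("liquid nitrogen (77 K)", 77.0),
                           ("liquid helium (4.2 K)", 4.2)]:
    thermal_voltage_mv = 1000 * k_B * temp_kelvin / q
    print(f"{label}: kT/q = {thermal_voltage_mv:.1f} mV")

# room temperature (300 K): kT/q = 25.9 mV
# liquid nitrogen (77 K):   kT/q = 6.6 mV
# liquid helium (4.2 K):    kT/q = 0.4 mV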

Thumbnail
OpenAI announces GPT-4o. The "o" is for "omni". The model "can reason across audio, vision, and text in real time."

There's a series of videos showing conversation by voice, recognizing "bunny ears", two GPT-4os interacting and singing, real-time translation, lullabies and whispers, sarcasm, math problems, learning Spanish, rock paper scissors, interview prep, "Be My Eyes" accessibility, and coding assistant and desktop app.

Thumbnail
Facial recognition AI has come to the TSA (Transportation Security Administration).

"TSA is using facial identification to verify a passenger's identity at its security checkpoints using the US Customs and Border Protection (CBP) Traveler Verification Service (TVS), which creates a secure biometric template of a passenger's live facial image taken at the checkpoint and matches it against a gallery of templates of pre-staged photos that the passenger previously provided to the government (e.g., US Passport or Visa). Participation is optional. Passengers who have consented to participate may choose to opt-out at any time and instead go through the standard identity verification process by a Transportation Security Officer (TSO)."

Thumbnail
"Pink Slip AI."

"Departures, Effortlessly Managed."

Seriously?

"Personalized dismissals made simple. Reduction in labor made simple. Empathetic downsizing made simple." "Easy to Setup. Review. Downsize."

"Click on this button to check out my real company."

Oh, ha ha, is today April 1st?