|
"LDraw[tm] is an open standard for LEGO CAD programs that allow the user to create virtual LEGO models and scenes. You can use it to document models you have physically built, create building instructions just like LEGO, render 3D photo realistic images of your virtual models and even make animations. The possibilities are endless. Unlike real LEGO bricks where you are limited by the number of parts and colors, in LDraw nothing is impossible."
So there's a CAD program for LEGO: you can model something virtually before you build it with your physical LEGO bricks.
"LDraw is a completely unofficial, community run free CAD system which represents official parts produced by the LEGO company." |
|
|
Vinylon: North Korea's best invention
Wait, *North* Korea? North Korea invented something?
Vinylon is a textile for clothing made from (drumroll please...) (spoilers!) (you'll never believe this...) limestone and coal.
It's falling into disuse because it's itchy and hard to color. Regular clothing from China and Russia has been coming in.
|
|
"Drought has dried a major Amazon River tributary to its lowest level in over 122 years."
"The level of the Negro River at the port of Manaus was at 12.66 meters on Friday, as compared with a normal level of about 21 meters. It is the lowest since measurements started 122 years ago. The previous record low level was recorded last year, but toward the end of October."
"The Negro River's water level might drop even more in coming weeks based on forecasts for low rainfall in upstream regions, according to the geological service's predictions. Andre Martinelli, the agency's hydrology manager in Manaus, was quoted as saying the river was expected to continue receding until the end of the month."
The AP tagged the article with "Climate change", but the article doesn't actually mention climate change.
|
|
This research paper, published May 5th of this year, says "People cannot distinguish GPT-4 from a human in a Turing Test". Can we declare May 5, 2024, as the date machines passed the Turing Test?
I asked ChatGPT to tell me what all the implications are of AI passing the Turing Test. (Is it ironic that I'm asking an AI that passed the Turing Test what the implications are of AI passing the Turing Test?)
It said, for "Philosophical and ethical implications", that we'd have to redefine what it means to be "intelligent", what "human intelligence" means, and what it means to be "conscious", and that the ability to simulate human conversation could lead to ethical dilemmas (deceptive automated customer service systems or deceptive medical or legal automated systems).
For "Social implications", it said "Impact on employment", especially roles that involve interpersonal communication (e.g., customer service, therapy, teaching), "AI in media and entertainment" -- writing novels, creating art, or generating music -- and "Public trust and misinformation" such as deepfakes and fake identities.
For "Legal and regulatory implications", it said "Legal accountability" -- who is accountable for an AI's actions -- "Regulation and oversight" especially in sectors where trust and human judgment are paramount (e.g., healthcare, legal advice, financial trading), and "Personhood and rights" -- does AI deserve rights?
Under "Technological implications", "Advances in human-AI interaction" -- sophisticated, seamless, and natural interaction using language -- resulting in personal assistants, customer service, and virtual companions, "Enhanced autonomous systems" (self driving cars, really?), and "AI as creative agents", but not just content creation, also "emotional work" such as therapy and help with decision-making
Under "Economic implications", "Market disruption", disruption of many industries, particularly those reliant on human communication and customer service, "Increased AI investment", well that's certainly happened, hasn't it, look how many billions OpenAI spends per year, but people will seek to capitalize on AI in specific sectors, e.g., healthcare, education, finance, and "AI-driven Productivity", particularly "in sectors where human-like interaction or decision-making is critical".
Under "Cultural Implications", it listed "Changing social interactions", meaning developing social bonds with AI entities, and over-reliance on AI, and "Education and knowledge", transform education and enabling personalized learning. (Except the point of education isn't really learning, it's credentials, but that's a story for another time).
Under "Security implications", it listed "Cybersecurity threats", especially social engineering attacks (e.g., phishing, fraud) enabled by AI's conversational abilities, and "Autonomous decision-making in security", in areas like national defense or policing, where bias could be a problem.
And finally under "Scientific implications", it listed "Advances in cognitive science", how understanding and building AI that can pass the Turing Test might yield insights into human cognition and intelligence -- eh, not yet, not that I've seen anyway -- and "AI in research", with AI taking on hypothesis generation, data analysis, or even autonomous experimentation.
I put the same question to Google's Gemini and (after the disclaimer that "The assertion that AI has *recently* passed the Turing Test is debatable") it... mostly listed the same items with slightly different categorization. The one new item it put in was "New benchmarks for AI", "Passing the Turing Test may necessitate the development of new, more comprehensive tests to evaluate AI's capabilities beyond just mimicking human conversation." That's a good point, Gemini.
I put the same question to Claude and it listed the same results, as short points, inviting me to ask it to elaborate.
I asked Meta.AI (from the company formerly known as Facebook), but it didn't seem to yield any new items.
I asked Grok (from X, Elon Musk's company), and it gave me the same list without any categorization.
I asked Perplexity and it mentioned "Multimodal AI development": "Success in language-based Turing Tests could accelerate progress in other areas of AI, such as visual reasoning and abstract problem-solving." Similarly under "Scientific and research implications", it listed "Shift in AI research focus: Passing the Turing Test might redirect AI research towards other aspects of intelligence beyond language processing." It also listed "Interdisciplinary collaboration": "There would likely be increased collaboration between AI researchers, cognitive scientists, and ethicists."
Perplexity also added "New Business Models: Industries could emerge around creating, managing, and regulating human-like AI interactions". Other systems highlighted increased investment in existing "sectors".
I also put the question to DeepSeek, the Chinese LLM, which gave a similar list but put legitimacy and ethical considerations first. It was also the first to mention "Data privacy": "Increased reliance on AI systems" "may raise concerns about data privacy and the collection of sensitive information."
Finally, I put the question to ChatGLM, another Chinese LLM, which, under "Educational Implications", added "Pedagogical shifts: Educators may need to adapt their teaching methods to incorporate AI that can engage in dialogue with students." Also under "Security implications", it listed "Defense and military": "The use of AI in military applications could become more sophisticated, leading to new arms races." |
|
|
The mystery of the possible Dyson spheres may be solved. First, a quick reminder on what Dyson spheres are:
"In their need for more powerful energy sources, an advanced civilization might harness the entire output of a star. They wrap a star within a sphere to capture every last photon of stellar energy. Such an object would have a strange infrared or radio spectrum. An alien glow that is faint and unique. So astronomers have searched for Dyson spheres in the Milky Way, and have found some interesting candidates."
Next the possible Dyson spheres:
"One major search was known as Project Hephaistos, which used data from Gaia, 2MASS, and WISE to look at five million candidate objects. From this they found seven unusual objects. They appear to be M-type red dwarfs at first glance, but have spectra that don't resemble simple stars. This kind of star-like infrared object is exactly what you'd expect from a Dyson sphere. But of course extraordinary claims require extraordinary evidence, and that's where things get fuzzy."
And how the mystery is solved (probably):
"Almost immediately after the paper was published, other astronomers noted that the seven objects could also be hot Dust-Obscured Galaxies, or hotDOGs. These are quasars, so they appear star-like, but are obscured by such a tremendous amount of dust that they mostly emit in the infrared."
So there you have it. Not Dyson spheres, hotDOGs.
Click through if you want to get the paper with all the details of what spectral bands were analyzed and how they were interpreted. |
|
|
"SLS is still a national disgrace".
The blog post starts with, "Four years ago, unable to find a comprehensive summary of the ongoing abject failure known as the NASA SLS (Space Launch System), I wrote one. If you're unfamiliar with the topic, you should read it first."
Uh, no, don't do that... unless you have a *lot* of time. Even this follow-up by itself takes a lot of time. If you have a lot of time and are in the mood to feel depressed over NASA mission failures, you've come to the right place.
"By continuing to humor this monstrosity, NASA has squandered its technical integrity and credibility."
"NASA has spent $20b on SLS and related programs in the last four years, so let's tick off updates since my previous blog on this topic. $20b should buy a lot of progress, but if anything, this program is even further from any semblance of functionality than it was back then."
Besides the commentary on the SLS failures, what struck me was the "Litany of other canceled and delayed projects":
"Mars Perseverance, a supposedly cheaper built-to-print replica of the Curiosity Mars rover from 2012 then required extensive re-engineering costing $2.4b, the same as the previous rover."
"Mars Sample Return's budget grew from $1.6b to $11b while the schedule slipped multiple years to the right."
"VERITAS (mission to Venus) indefinitely postponed due to budget constraints."
"HWO/HabEx/Luvoir and the new (bi)decadal survey. JWST, originally budgeted in 1999 for $1b to launch in 2007, ultimately launched in 2021 after spending $10b, so horrendously late and over budget that instead of re-imposing any kind of programmatic discipline or inventing a contract structure that doesn't reward Northrop Grumman for wasting money, NASA instead has decided to delay the next big space telescope (currently termed the Habitable Worlds Observatory) into the 2050s."
"Dragonfly -- a super cool nuclear powered robotic octocopter to explore Titan, was originally budgeted at $850m and is now pushing well beyond flagship status at $3.35b."
"VIPER -- a robotic moon rover at the south pole. Originally conceived as the Resource Prospector rover from NASA Ames, then canceled in 2018 after spending $100m on development, the concept returned as VIPER. Its budget grew from $250m to $450m and most recently to $685m, exceeding a critical cost cap leading to cancellation despite being a supposedly essential part of Project Artemis."
"Psyche -- a mission to a metal asteroid. It missed its launch window due to a software issue discovered during final check out, growing its budget from $1b to $1.2b and pushing VERITAS into limbo."
"NEO Surveyor. In 1998 Congress mandated that NASA map 90% of near Earth objects larger than 1 km -- asteroids capable of destroying our entire civilization. Now, 30 years later, the mission has been pushed back another two years despite an increased budget, now expected to top $1.2b."
"Europa Clipper. Budget grew from $2b in 2013 to $5.2b."
"Ingenuity -- developed for less than $25m, because JPL was spending its own money."
"CASIS -- ISS National Lab. In an effort to defray costs and prolong the life of the station, NASA spent years building CASIS (an independent non-profit) and the ISS National Lab to find private customers."
"Chandra X-ray observatory, already launched and operating in space, is being defunded with 10 years of operational life remaining, because of budgetary pressures."
"Block 1B and the Exploration Upper Stage. Yet another configuration of SLS with a beefier upper stage, but only a marginal increase of launch capacity."
"Artemis space suit provider Collins backs out."
"Artemis space suit provider Axiom Space in trouble."
"I haven't forgotten Gateway and Starliner, they get a full treatment below."
Gateway, also known as Lunar Gateway, is a space station assembled in orbit around the Moon.
"The entire damn lunar gateway only exists because SLS is too anaemic to launch the incredibly overweight Orion anywhere useful, so perhaps we should just drop the whole thing into the Atlantic ocean and be done with it."
"In September 2014, NASA awarded Boeing $4.2b and SpaceX (somewhat grudgingly) $2.6b to develop capsules to transport people to and from the ISS."
"SpaceX's Crew Dragon capsule first flew in 2019 and in 2020 brought astronauts to and from the space station. As of September 2024, it's flown thirteen flights (three private) to the ISS, two other private flights including the highest ever Earth orbit and first private space walk, and carried 54 people in space."
"In contrast, Boeing's Starliner only flew two astronauts to the ISS, after two previous launches with a series of failures and near misses. Despite being a much simpler design than Crew Dragon, Starliner has suffered from:"
[list of stuff]
"Conway's Law explains that product structure mirrors the organizational structure that built it. ..."
Commentary: I have been wondering if there's something in the nature of human bureaucracies that causes them to gradually become moribund over time. It seems like this afflicts not just NASA, and not even just the US government, but our society at large. Infrastructure-wise, things get done more and more slowly and at higher and higher cost, with no discernible explanation. The advancement of technology is supposed to make everything cheaper and faster, yet we see the opposite happening. At least in the tech industry there are nimble startups, but they seem to be outliers to the way our society increasingly operates, and even they have a tendency to get acquired by established incumbents, for whom "enshittification" happens often enough that people coined a word for it.
|
|
"Hacker plants false memories in ChatGPT to steal user data in perpetuity"
Flaw in long-term memory in chatbots that try too hard to be personal assistants?
"Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing -- all of which could be created by a malicious attacker." |
|
|
Meta (the company formerly known as Facebook) has created a video generation model, called "Meta Movie Gen". Have a look at the sample videos. |
|
|
Looks like OpenAI has an answer to Claude's "Artifacts", which they call "Canvas".
"People use ChatGPT every day for help with writing and code. Although the chat interface is easy to use and works well for many tasks, it's limited when you want to work on projects that require editing and revisions. Canvas offers a new interface for this kind of work."
"With canvas, ChatGPT can better understand the context of what you're trying to accomplish. You can highlight specific sections to indicate exactly what you want ChatGPT to focus on. Like a copy editor or code reviewer, it can give inline feedback and suggestions with the entire project in mind."
"You control the project in canvas. You can directly edit text or code. There's a menu of shortcuts for you to ask ChatGPT to adjust writing length, debug your code, and quickly perform other useful actions. You can also restore previous versions of your work by using the back button in canvas."
"Coding shortcuts include:"
"Review code: ChatGPT provides inline suggestions to improve your code."
"Add logs: Inserts print statements to help you debug and understand your code."
"Add comments: Adds comments to the code to make it easier to understand."
"Fix bugs: Detects and rewrites problematic code to resolve errors."
"Port to a language: Translates your code into JavaScript, TypeScript, Python, Java, C++, or PHP."
Wait, *into* PHP? No, no, no, you should only be translating code *out* of PHP. PHP is one of the worst languages ever. It might even be worse than JavaScript.
No Go on that list, though. Alrighty, let's continue.
"A second challenge involved tuning the model's editing behavior once the canvas was triggered -- specifically deciding when to make a targeted edit versus rewriting the entire content. We trained the model to perform targeted edits when users explicitly select text through the interface, otherwise favoring rewrites. This behavior continues to evolve as we refine the model."
It seems to me like this could be a first step in transforming "coders" into "managers" who "manage" an AI system that actually does the code writing.
For those of you who aren't coders and use regular language, they say:
"Writing shortcuts include:"
"Suggest edits: ChatGPT offers inline suggestions and feedback."
"Adjust the length: Edits the document length to be shorter or longer."
"Change reading level: Adjusts the reading level, from Kindergarten to Graduate School."
"Add final polish: Checks for grammar, clarity, and consistency."
"Add emojis: Adds relevant emojis for emphasis and color." |
|
|
"For LLMs, IBM's NorthPole chip overcomes the tradeoff between speed and efficiency."
IBM's NorthPole chips have a different architecture from GPUs, more directly inspired by the brain.
This page has a funky graph with energy *efficiency* (not energy) on the vertical axis and latency on the horizontal axis -- but with the latency axis reversed, so the lowest latency is on the right. Both axes are logarithmic. The only reason I can think of for doing it this way is to make "better" be up on the vertical axis and to the right on the horizontal axis: better energy efficiency is good, so you want to go up, and low latency is good, so you want to go to the right. With this setup they get to put the NorthPole chip in the upper-right corner.
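If you want to see what that layout looks like, here's a hypothetical re-creation of the plot style (made-up data points and units, not IBM's numbers): log-log axes with the latency axis inverted so "better" ends up toward the upper right.

```python
import matplotlib.pyplot as plt

# Hypothetical re-creation of the described plot layout (made-up numbers):
# log-log axes, latency axis reversed so low latency is on the right.
chips = {"GPU A": (4.0, 20.0), "GPU B": (9.0, 35.0), "NorthPole": (0.5, 200.0)}
for name, (latency_ms, efficiency) in chips.items():
    plt.scatter(latency_ms, efficiency)
    plt.annotate(name, (latency_ms, efficiency))

plt.xscale("log")
plt.yscale("log")
plt.xlabel("Latency (ms), axis reversed")
plt.ylabel("Energy efficiency (arbitrary units)")
plt.gca().invert_xaxis()  # lowest latency ends up on the right
plt.show()
```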
I wonder if there's a possibility of this competing commercially with Nvidia. |
|
|
"Scaling up self-attention inference."
This webpage outlines the mathematics behind the "attention" mechanism used in large language models, then describes a new mathematical technique that allows the context window of a large language model to be split into pieces that can be computed independently and then combined. The end result is the same as computing the "attention" results from the entire context window.
This should enable large language models (LLMs) to continue to have larger and larger context windows, because now the computation requirement scales logarithmically with the size of the context window instead of linearly. So each time you increase your CPUs and GPUs by some linear increment, you double the size of the context window you can handle.
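The article has its own derivation and notation, but the core trick of computing attention over independent chunks and then merging them exactly can be sketched for a single query in a few lines of NumPy. This is my own minimal version, not the article's code; it just demonstrates that chunk-wise statistics (running max, sum of exponentials, weighted values) combine to the same answer as attention over the full context.

```python
import numpy as np

def chunk_attention(q, K, V):
    """Per-chunk statistics: (max score, sum of exponentials, weighted values)."""
    scores = K @ q                      # (chunk_len,)
    m = scores.max()
    w = np.exp(scores - m)              # numerically stable exponentials
    return m, w.sum(), w @ V            # scalar, scalar, (d,)

def combine(stats):
    """Merge per-chunk statistics into the exact full-context attention output."""
    m_global = max(m for m, _, _ in stats)
    denom = sum(s * np.exp(m - m_global) for m, s, _ in stats)
    numer = sum(o * np.exp(m - m_global) for m, _, o in stats)
    return numer / denom

rng = np.random.default_rng(0)
d, n = 16, 1024
q = rng.normal(size=d)
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))

# Reference: attention over the entire context at once.
scores = K @ q
weights = np.exp(scores - scores.max())
reference = (weights @ V) / weights.sum()

# Chunked: each 256-token piece could be computed on a different device.
parts = [chunk_attention(q, K[i:i + 256], V[i:i + 256]) for i in range(0, n, 256)]
print(np.allclose(combine(parts), reference))  # True
```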
|
|
"The riddle of the Amish."
"The Amish have much to teach us. It may seem strange, even surreal, to turn to one of America's most traditional groups for lessons about living in a hyper-tech world -- especially a horse-driving people who have resisted 'progress' by snubbing cars, public grid power, and even high school education. Nonetheless, the Amish deserve a hearing."
"The key to unlock the riddle, in Donald Kraybill's view, is to realize that the Amish are negotiating with modernity in three ways: they accept, they reject, they bargain. They might reject 'radios, televisions, high school, church buildings, and salaried ministers,' Kraybill notes, but accept 'small electronic calculators and artificial insemination of cows. And more recently...LED lights on buggies, and battery-powered hand tools'. Then, there also is bargaining. One way they bargain with modernity is by 'Amishizing' certain technologies or techniques, for example 'neutering' computers (stripping WiFi, video games, internet access, etc.), or creating a school curriculum that reflects Amish values and traditions. Another form of bargaining is found in the Amish distinction between access and ownership of certain technologies. An Amish person may pay someone for a car-ride to work, but may not own a car."
"The Amish, arguably more than any other group in America, have tried to domesticate technology so that its potent force does not overwhelm or cripple their culture."
"The Amish aren't anti-technology; they are pro-community."
Towards the end of the article, it mentions fertility rates.
"Amish population continues to grow (a 116% increase since the year 2000) and communities are spreading into new states (including Colorado, Nebraska, New Mexico, South Dakota, Vermont, and Wyoming). Since the Amish don't tend to proselytize, this growth is in large part organic and natural, through new births and an extremely high retention rate that has increased to around 90% at a time when other Christian denominations are shedding members at record numbers."
To get a sense of what that means, I looked up the fertility rate of the US as a whole and found 1.84 children per woman. I tried looking it up for the Amish, but most people just said something like 6-8 or 6-9. I found two websites that said 7, so I decided to just plug in 7.0 and run with it.
If we start with a society that's 50% regular Americans and 50% Amish, after 1 generation it'll be 29% regular Americans and 71% Amish, after 2 generations it'll be 19% regular Americans and 81% Amish, and ...
At this point I decided to plug in the actual population numbers. For the US, I got 307.205 million, and for the Amish... again accurate numbers were difficult to come by but I found someone who said 350,000, so I decided to run with that number.
After 1 generation: 99.6% non-Amish, 0.4% Amish
After 2 generations: 98.8% non-Amish, 1.2% Amish
After 3 generations: 96.0% non-Amish, 4.0% Amish
After 4 generations: 87.8% non-Amish, 12.2% Amish
After 5 generations: 68.8% non-Amish, 31.2% Amish
After 6 generations: 43.0% non-Amish, 57.0% Amish
After 7 generations: 24.9% non-Amish, 75.1% Amish
After 8 generations: 17.2% non-Amish, 82.8% Amish
After 9 generations: 14.7% non-Amish, 85.3% Amish
After 10 generations: 13.9% non-Amish, 86.1% Amish
After 11 generations: 13.7% non-Amish, 86.3% Amish
After 12 generations: 13.6% non-Amish, 86.4% Amish
If we assume a "generation" is about 25 years, then what this means is that in 300 years, the population of the US will be 86.4% Amish.
If you're wondering what happens after generation 12, the simulation stabilizes on 13.6% regular non-Amish Americans and 86.4% Amish. It doesn't go up to 99% Amish because of the 10% defection rate per generation. But you'll note that most non-Amish Americans in that time frame will be Amish defectors, not descendants of people who today are non-Amish.
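For the curious, here's a minimal sketch of the kind of calculation behind those numbers. It's my own reconstruction, assuming fertility divided by two as the per-generation growth factor and a 10% defection rate applied to each new Amish generation; the intermediate generations come out slightly different from the table above, but it settles to the same 13.6% / 86.4% split.

```python
# Back-of-the-envelope reconstruction (assumptions: fertility / 2 as the
# per-generation growth factor, 10% of each new Amish generation defecting,
# and the starting populations quoted above).
non_amish, amish = 307.205e6, 350_000
FERT_NON_AMISH, FERT_AMISH, DEFECTION = 1.84, 7.0, 0.10

for generation in range(1, 13):
    amish_children = amish * FERT_AMISH / 2        # 3.5x growth per generation
    defectors = amish_children * DEFECTION         # 10% leave each generation
    amish = amish_children - defectors
    non_amish = non_amish * FERT_NON_AMISH / 2 + defectors
    share = amish / (amish + non_amish)
    print(f"Generation {generation}: {1 - share:.1%} non-Amish, {share:.1%} Amish")
```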
In reality, it could happen faster than in my simulation because generation times for the non-Amish are actually longer than 25 years, and generation times for the Amish are about 20 years.
Another implication of this model is that the shift to "progressive" or "left" values that we are seeing in the upcoming generations ("gen Z" and "gen alpha") is probably not going to last more than 3 generations (about 75 years). After that, we'll see a shift towards conservative, or more precisely Amish, values.
Another thing that might make the model different from reality is that we don't account for other high-fertility groups, in particular the Muslims. Muslims have a lower fertility rate than the Amish but they have the advantage of in-migration from the outside world, where the global Muslim population also has a high fertility rate.
This brings me to my central thesis in all this: What is the ultimate cause of low fertility rates? Those of you who've been hanging around me for any length of time probably have some idea what my answer is. My answer is: It's technology, or more specifically, when technology maximally *complements* humans, fertility rates are maximized, and when technology maximally *competes against* humans, fertility rates are minimized.
This underlying driver is, in my view, obscured by many things: people point out that cities have lower fertility rates than rural areas and attribute low fertility to "urbanization"; people argue about the role of the feminism movement, birth control, porn, social media, dating apps, and so on; and people looking for competition against humans by machines focus on unemployment numbers instead of fertility numbers. But people don't react to competition from machines by increasing the unemployment numbers; they react by spending more time in school (which accounts for the declines we've seen in total labor force participation) and by delaying marriage and childbirth.
If you look at the long arc of human history, humans spent 95+% of our existence as a species as hunter/gatherers. During this time, the human population barely increased.
We didn't see substantial increases in the human population until the agricultural revolution. The agricultural revolution began more or less immediately after the end of the last ice age, 11,600 years ago, but took a long time to ramp up -- what we think of as the "agricultural revolution" didn't really get going until around 5,000 years ago. Even then, the rate of human population growth was modest.
Eventually, what really got the human population growing fast was inventions that dramatically increased agricultural production: heavy plows, like the wheeled moldboard plow, allowing deeper plowing and turning over more soil; improved scythes; iron tools replacing wooden tools; horse collars, horse harnesses, and horseshoes; watermills and windmills; crop rotation, rotating grains with legumes, and 3-field systems with spring planting, autumn planting, and fallow; drainage and irrigation techniques; composting techniques; and selective breeding of both plants and animals.
Eventually, the industrial revolution happened. Human, ox, and horse power got replaced with steam power and gasoline. The Haber-Bosch process of industrial-scale nitrogen fixation and fertilizer production was invented. Industrial-scale production of fertilizer with phosphorus, sulfur, electrolytes (sodium, potassium, calcium, magnesium), and micronutrients were invented. Industrial-scale pesticide production began. Genetically engineered crops supplemented selective breeding for high-yielding crops. All these industrial-scale developments taken together brought us the "green revolution", and this is when you *really* saw the human population explode -- from under 1 billion to more than 8 billion today.
The thing that first clued me in that machines were a life form that competed against humans was a book called "Cosmic Evolution" by Eric Chaisson. It showed how a measurement in physics, free energy rate density, correlates with subjective "complexity" or "structure" or "negative entropy". More complex animals have higher free energy rate density than less complex animals, and within human beings, the human brain turns out to have the highest free energy rate density of any part of any animal. But the interesting thing is that CPUs surpassed the human brain on this measure around 2005 or so. So by this measure, the smartest "species" on the planet is actually computers, and has been since about 2005.
The interesting thing about that is that in 2005, computers were sort of idiot savants. They could do billions of arithmetic calculations per second -- more than any human -- and without any mistakes, too. But they couldn't see or understand language or manipulate objects in the physical world. Now computers are starting to take on these "human" non-idiot-savant abilities -- computers have vision and can generate images, can understand and generate language, and, well, still can't manipulate objects in the physical world. But those abilities are increasing. We don't know when but they'll be on par with humans at some point.
If we imagine a future where computers have completely taken over the job market, does that mean humans are just going to die off? All but the rich, who can survive on their investment incomes? No, there is another option -- subsistence farming. And that's what the Amish are doing. The world a hundred years into the future will consist of machines who run the world economy -- creating goods and services for other machines to purchase, with humans largely out of the market due to lack of labor income, but some will have lots of income from their investments -- and humans who survive outside the labor market through subsistence farming.
The key to becoming the latter group is the ability to resist technology. Worldwide, we see that the groups with the highest fertility rates are religious fundamentalists. And they don't need to be Christian -- the Haredim, also known as Ultra-Orthodox Jews, have high fertility rates. Within Islam, the most fundamentalist groups, like the Salafis in Egypt, have the highest fertility rates. What I find interesting about the Amish is that they are not fundamentalists. Rather than resisting technology as a side-effect of extreme adherence to a fundamentalist belief system, they resist technology deliberately, as an objective in and of itself. And they prove it can be done. And since they have proven it can be done, it seems reasonable to assume their high fertility rates will continue into the future -- or will drop only modestly. In my model I assumed no change in fertility rates, but the fertility rate of the non-Amish population is actually likely to continue dropping. Fertility rates for the US population as a whole have dropped faster than anyone expected and there doesn't seem to be any floor. If we look around the world, we can see countries with even lower fertility rates, like South Korea (1.12 children per woman), and they seem to just keep going down.
So, I think the world we have to look forward to in a couple hundred years is: machines are the dominant species and control the world economically and militarily, humans survive within that world as subsistence farmers, mostly Amish and fundamentalist Muslims and fundamentalist sects of other religions.
This is the point where people usually chime in and say, there's going to be universal basic income. The thing is, I have the research paper from the UBI study funded by Sam Altman, and it's 147 pages, and I'm only a few pages in. So I really can't comment on it right now -- that will have to wait until another time.
My feeling for a long time has been that UBI is politically unfeasible, but people have told me, it will become politically feasible once a large enough percentage of the population is affected, and affected severely enough. If that's your view, then you can consider my projection to be what the world of the future will look like in the absence of UBI. |
|
|
OpenAI o1 is so smart that humans are no longer smart enough to create test questions to measure how smart it is. Discussion between Alan D. Thompson and Cris Sheridan. OpenAI o1 beats PhD-level experts across the board on the tests we humans have made to measure how intelligent other humans are. PhD-level humans are trying to come up with new questions, but it is hard for other PhD-level humans to even understand the questions and verify the answers.
OpenAI reset the numbering, instead of continuing with the "GPT" series, because they think this is a new type of model. The "o" actually just means "OpenAI" so when I say "OpenAI o1", I'm really saying "OpenAI OpenAI 1".
You might think, if this is a new type of model, we'd know what type of model it is. Nope. OpenAI has not told us anything. We don't know what the model architecture is. We don't know how many parameters it has. We don't know how much compute was used to train it, or how much training data it used. We don't know what token system is used or how many tokens.
All we really know is that "chain-of-thought" reasoning has been built into the model in a way previous models never had built into them. (Called "hidden chain of thought", but not necessarily hidden -- you are allowed to see it.) This "chain-of-thought" system is guided by reinforcement learning in some way, but we don't know how that works.
The "system card" that OpenAI published mainly focuses on safety tests. Jailbreak evaluations, hallucinations, fairness and bias, hate speech, threats, and violence, chain-of-thought deception, self-knowledge, theory of mind, political persuasion, "capture-the-flag" (CTF) computer security challenges, reverse engineering, network exploits, biological threat creation.
It has some evaluation of "agentic" tasks (things like installing Docker containers), and multi-lingual capabilities.
Anyway, OpenAI is called "Open" AI but is becoming increasingly secretive.
That and we appear to have entered a new era where AI systems are smarter than the humans that make the tests to test how smart they are. |
|
|
Question: "When will an AI achieve a 98th percentile score or higher in a MENSA admission test?"
Sept. 2020: 2042 (22 years away)
Sept. 2021: 2031 (10 years away)
Sept. 2022: 2028 (6 years away)
Sept. 2023: 2026 (3 years away)
Resolved September 12, 2024
These are the median predictions from Metaculus, an online forecasting platform, based on 275 predictions.
The AI did it via a law test.
MENSA considers a 95th-percentile score on the Law School Admission Test (LSAT) to correspond to a 98th-percentile score on a general IQ test.
OpenAI released o1, which scores 95.6% on the LSAT "raw score", which is above the threshold.
I'd be interested to see when an AI system could pass the 98th-percentile threshold on one of MENSA's regular IQ tests, though.
|
|
Diffusion Illusions: Flip illusions, rotation overlays, twisting squares, hidden overlays, Parker puzzles...
If you've never heard of "Parker puzzles", Matt Parker, the math YouTuber, asked this research team to make him a jigsaw puzzle with two solutions: one is a teacup, and the other is a doughnut.
The system they made starts with diffusion models, which are the models you use when you type a text prompt in and it generates the image for you. Napoleon as a cat or unicorn astronauts or whatever.
What if you could generate two images at once that are mathematically related somehow?
That's what the Diffusion Illusions system does. Actually it can even do more than two images.
First I must admit, the system uses an image parameterization system called Fourier Features Networks, and I clicked through to the research paper for Fourier Features Networks, but I couldn't understand it. The "Fourier" part suggests sines and cosines, and yes, there's sine and cosine math in there, but there's also "bra-ket" notation, like you normally see in quantum physics, with partial differential equations in the bra-ket notation, and such. So, I don't understand how Fourier Features works.
There's a video of a short talk from SIGGRAPH, and in it (at about 4:30 in), they claim that diffusion models, all by themselves, have "adversarial artifacts" that Fourier Features fixes. I have no idea why diffusion models on their own would have any kind of "adversarial artifacts" problems. So obviously if I have no idea what might cause the problems, I have no idea why Fourier Features might fix them.
Ok, with that out of the way, the way the system works is there are the output images that the system generates, which they call "prime" images. The fact that they give them a name implies there's an additional type of image in the system, and there is. They call these other images the "dream target" images. Central to the whole thing is the "arrangement process" formulation. The only requirement of the "arrangement process" function is that it is differentiable, so deep learning methods can be applied to it. It is this "arrangement process" that decides whether you're generating flip illusions, rotation overlay illusions, hidden overlay illusions, twisting squares illusions, Parker puzzles, or something else -- you could define your own.
After this, it runs two training processes concurrently. The first is the standard way diffusion illusions are trained. This calculates an "error", also called a loss, from the target text conditioning, which is called the score distillation loss.
Apparently, however, circumstances exist where it is not trivial for prime images to follow the gradients from the Score Distillation Loss to give you images that create the illusion you are asking for. To get the system unstuck, they added the "dream target loss" training system. The "dream target" images are images made from your text prompts individually. So, let's say you want to make a flip illusion that is a penguin viewed one way and a giraffe when flipped upside down. In this instance, the system will take the "penguin" prompt and create an image from it, and take the "giraffe" prompt and create a separate image for it, and flip it upside down. These become the "dream target" images.
The system then computes a loss on the prime images and "dream target" images, as well as the original score distillation loss. If the system has any trouble converging on the "dream target" images, new "dream target" images are generated from the same original text prompts.
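Putting those pieces together, here's a rough sketch of how I understand the training structure, using the penguin/giraffe flip illusion as the example. The losses and the "dream target" images are placeholders I made up (a real system would get both from a frozen text-to-image diffusion model), so treat this as an illustration of the structure, not the paper's actual code.

```python
import torch

# Rough structural sketch (placeholder losses and random stand-in images, not
# the paper's code): one prime image, a differentiable "arrangement" (here a
# 180-degree flip), a stand-in score distillation loss per prompt, and a
# dream-target loss pulling each view toward a separately generated image.

prime = torch.rand(1, 3, 64, 64, requires_grad=True)   # the prime image

def arrangement_flip(img):
    return torch.flip(img, dims=[2, 3])                 # differentiable 180-degree flip

def sds_loss(view, prompt):
    # Placeholder: a real system would distill gradients from a frozen
    # text-to-image diffusion model conditioned on `prompt`.
    return view.pow(2).mean()

# Stand-ins for the "dream target" images; the real system generates these
# from the individual prompts with the diffusion model.
dream_targets = {"a penguin": torch.rand(1, 3, 64, 64),
                 "a giraffe": torch.rand(1, 3, 64, 64)}

optimizer = torch.optim.Adam([prime], lr=1e-2)
for step in range(200):
    views = {"a penguin": prime,                       # seen right-side up
             "a giraffe": arrangement_flip(prime)}     # seen flipped over
    loss = sum(sds_loss(view, prompt) for prompt, view in views.items())
    loss = loss + sum(((views[p] - dream_targets[p]) ** 2).mean() for p in views)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```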
In this way, the system creates visual illusions. You can even print the images and turn them into real-life puzzles. For some illusions, you print on transparent plastic and overlap the images using an overhead projector. |
|
|
"How AlphaChip transformed computer chip design."
"AlphaChip" is the name Google has bestowed on their reinforcement learning system for doing chip layouts for semiconductor manufacturing.
It's dramatically accelerating the pace of chip design by shortening the time it takes to do the chip "floorplanning" process, with results superior to what human designers can do.
"Similar to AlphaGo and AlphaZero, which learned to master the games of Go, chess and shogi, we built AlphaChip to approach chip floorplanning as a kind of game."
"Starting from a blank grid, AlphaChip places one circuit component at a time until it's done placing all the components. Then it's rewarded based on the quality of the final layout. A novel "edge-based" graph neural network allows AlphaChip to learn the relationships between interconnected chip components and to generalize across chips, letting AlphaChip improve with each layout it designs."
"AlphaChip has generated superhuman chip layouts used in every generation of Google's TPU since its publication in 2020. These chips make it possible to massively scale-up AI models based on Google's Transformer architecture."
"TPUs lie at the heart of our powerful generative AI systems, from large language models, like Gemini, to image and video generators, Imagen and Veo. These AI accelerators also lie at the heart of Google's AI services and are available to external users via Google Cloud."
"To design TPU layouts, AlphaChip first practices on a diverse range of chip blocks from previous generations, such as on-chip and inter-chip network blocks, memory controllers, and data transport buffers. This process is called pre-training. Then we run AlphaChip on current TPU blocks to generate high-quality layouts. Unlike prior approaches, AlphaChip becomes better and faster as it solves more instances of the chip placement task, similar to how human experts do."
"With each new generation of TPU, including our latest Trillium (6th generation), AlphaChip has designed better chip layouts and provided more of the overall floorplan, accelerating the design cycle and yielding higher-performance chips."
"Beyond designing specialized AI accelerators like TPUs, AlphaChip has generated layouts for other chips across Alphabet, such as Google Axion Processors, our first Arm-based general-purpose data center CPUs."
"External organizations are also adopting and building on AlphaChip. For example, MediaTek, one of the top chip design companies in the world, extended AlphaChip to accelerate development of their most advanced chips -- like the Dimensity Flagship 5G used in Samsung mobile phones -- while improving power, performance and chip area." |
|