|
"Trump Media & Technology Group, which operates the Truth Social social-media network and trades under the symbol 'DJT,' has a market capitalization of approximately $9.48 billion, allegedly higher than X, the company formerly known as Twitter, which allegedly has a valuation of $9.4 billion."
"Industry analysts believe the jump in TMTG's stock price reflects growing enthusiasm among its investors that Trump will win the 2024 presidential election over VP Kamala Harris." |
|
|
"TSMC isn't a pure monopsony in the wafer fab equipment market. Intel and Samsung are still buying tools, just not as often as wafer fab equipment manufacturers would like. Memory manufacturers also buy wafer fab equipment tools, and trailing-edge foundries do too."
"Monopsony refers to a market in which there is only a single buyer, i.e., producers cannot find alternative buyers of their product."
"Monopsony permits the buyer to establish prices, terms and conditions that are quite different from those that would result from a market structure in which there were many competing buyers and sellers."
The only seller the article mentions is ASML. A single-seller market (a monopoly), too? At least when it comes to high-numeric-aperture (high-NA) extreme ultraviolet (EUV) lithography tools.
"When there are only a few buyers, it's an oligopsony. Given the consolidated nature of the semiconductor industry, most markets within the semiconductor supply chain are already oligopsonistic." |
|
|
OpenAI o1 isn't as good as an experienced professional programmer, but... "the set of tasks that O1 can do is impressive, and it's becoming more and more difficult to find easily demonstrated examples of things it can't do."
"There's a ton of things it can't do. But a lot of them are so complicated they don't really fit in a video."
"There are a small number of specific kinds of entry level developer jobs it could actually do as well, or maybe even better, than new hires."
Carl of "Internet of Bugs" recounts how he spent the last 3 weeks experimenting with the o1 model to try to find its shortcomings.
"I've been saying for months now that AI couldn't do the work of a programmer, and that's been true, and to a large extent it still is. But in one common case, that's less true than it used to be, if it's still true at all."
"I've worked with a bunch of new hires that were fresh out with CS degrees from major colleges. Generally these new hires come out of school unfamiliar with the specific frameworks used on active projects. They have to be closely supervised for a while before they can work on their own. They have to be given self-contained pieces of code so they don't screw up something else and create regressions. A lot of them have never actually built anything that wasn't in response to a homework assignment.
"This o1 thing is more productive than most, if not all, of those fresh CS graduates I've worked with.
"Now, after a few months, the new grads get the hang of things, and from then on, for the most part, they become productive enough that I'd rather have them on a project than o1."
"When I have a choice, I never hire anyone who only has an academic and theoretical understanding of programming and has never actually built anything that faces a customer, even if they only built it for themselves. But in the tech industry, many companies specifically create entry-level positions for new grads."
"In my opinion, those positions where people can get hired with no practical experience, those positions were stupid to have before and they're completely irrelevant now. But as long as those kinds of positions still exist, and now that o1 exists, I can no longer honestly say that there aren't any jobs that an AI could do better than a human, at least as far as programming goes."
"o1 still has a lot of limitations."
Some of the limitations he cited were writing tests and writing a SQL RDBMS in Zig. |
|
|
"Colossal released a progress report on the work involved in resurrecting the thylacine, also known as the Tasmanian tiger, which went extinct when the last known survivor died in a zoo in 1936. Marsupial biology has some features that may make de-extinction somewhat easier, but we have far less sophisticated ways of manipulating it compared to the technology we've developed for working with the stem cells and reproduction of placental mammals. But, based on these new announcements, the technology available for working with marsupials is expanding rapidly."
"Colossal has obtained a nearly complete genome sequence from a thylacine sample that was preserved in ethanol a bit over a century ago. According to Pask, this sample contains not only the short fragments typical of older DNA samples (typically just a few hundred base pairs long) but also some DNA molecules that were above 10,000 bases long." |
|
|
"A few weeks ago, the German political magazine Panorama and STRG_F reported that law enforcement agencies infiltrated the Tor network in order to expose criminals."
Does this mean Tor cannot anonymize people on the internet any more?
"The reporters had access to documents showing four successful deanonymizations. I was given the chance to review some documents. In this post, I am highlighting publicly documented key findings."
September 12: "Frankfurt District Court orders Telefónica (O2) to surveil its customers for up to three months."
September 16: Tor Project makes a statement, "Pinpointing Tor entry relays of onion services to successfully deanonymize Tor users."
September 18: Journalists detail one case, "Operation Liberty Lane is referenced."
Operation Liberty Lane is "the alleged name of what is believed to be a joint operation of the United States, UK, Germany and potentially other countries with the goal to expose Tor users of illegal onion services."
September 18: Tor Project makes a statement, "gives the impression that only Ricochet is affected."
Ricochet is a replacement for TorChat and Tor Messenger, two previous messaging services built on the Tor protocol. Ricochet messages never leave the Tor network, as both the sender and receiver work by starting a Tor hidden service on their respective computers. Ricochet does not use any central servers to coordinate communication, and is thus a genuinely decentralized instant messenger.
September 25: Interview with Daniel Moßbrucker. "More and more Tor relays in Germany are under surveillance for longer and longer periods, in such a way that apparently data has been used for timing analysis."
Page has links to many media reports and online discussions: tagesschau.de, tor-relays mailing list, Panorama, STRG_F, Tor Project, Hacker News, Tor Project users forum, and Deutsche Welle. |
|
|
TSMC's Phoenix, Arizona chip fab achieved a 4% better yield than comparable manufacturing sites in Taiwan. |
|
|
"Boeing satellite explodes in space, debris in orbit, further adding to the company's difficulties."
Ok, call me suspicious, but I don't think this is attributable to incompetence at Boeing, which you might be inclined to think given the bad press Boeing has gotten recently. To me it seems unlikely a satellite would spontaneously "explode" in space, especially since this satellite (IS-33e) was launched in August 2016 and worked until now. More likely, it was *hit* by something.
Maybe it was hit by something natural, like a very small meteorite, or maybe it was hit by something made by humans, but not on purpose -- say, a piece of space debris. There's a lot of space debris, so that actually seems probable. Or maybe it was hit on purpose. But Boeing either doesn't think so, or, if they do, they're not telling. As far as I know, there aren't any reports of anybody launching anything to hit it. I suppose there's the possibility of some newfangled energy weapon, which wouldn't involve a launch, but I would expect that to have a signature visible from space, and there are no reports of any such thing that I'm aware of. |
|
|
"Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku."
For safety reasons, the last thing we'd allow an AI to do is take full control over a computer, looking at the screen and typing keys and moving the mouse and doing mouse clicks, just like a human, enabling it to do literally everything on a computer a human can do. Oh wait...
"Available today on the API, developers can direct Claude to use computers the way people do -- by looking at a screen, moving a cursor, clicking buttons, and typing text. Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta. At this stage, it is still experimental -- at times cumbersome and error-prone. We're releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time."
"Asana, Canva, Cognition, DoorDash, Replit, and The Browser Company have already begun to explore these possibilities, carrying out tasks that require dozens, and sometimes even hundreds, of steps to complete. For example, Replit is using Claude 3.5 Sonnet's capabilities with computer use and UI navigation to develop a key feature that evaluates apps as they're being built for their Replit Agent product."
But unlike me, everyone else seems to be reacting very positively.
"It doesn't get said enough: Not only is Claude the most capable LLM, but they also have the best character. Great work Claude and Team!"
"Just imagine the accessibility possibilities. For those with mobility or visual impairments, Claude can assist with tasks by simply asking, like helping in usage with apps and systems that often lack proper accessibility features."
That's a good point, actually.
Still, you might want to run it in a VM for now?
"Wow, this is going to be quite game-changing!"
"Impressive to see Claude navigating screens like a human! Though still in beta, this could be a game-changer for automating tedious tasks. Can't wait to see how it develops!"
"What I found particularly noteworthy in this demo was that the information wasn't copied from the CRM, but typed letter by letter. Purely speculating, but perhaps because there are rare cases where websites do not accept copied input, which often also affects password managers."
"This is RPA-like functionality. Wow, Will this be a game-changer?"
RPA stands for Robotic Process Automation.
"What are the security implications of this? Could a bad actor use this to ask Claude to go into other people's computers and access their confidential information?"
Ok, at least one person besides me is feeling a little worry.
"That's epic, you guys have the best AI. This company is something special."
"Computer Use is truly a pivotal advancement. Enabling AI to interact with computers like humans do is a significant leap towards AGI. Exciting times ahead!"
"Looks like Siri on screen awareness but two (or more) years early and available for use now (but meanwhile, on server.) WOW. Well done guys."
"Absolutely incredible -- Super excited to build with this & see what others build!"
"Immediately prompting: 'Do all my work'"
If Claude can do all your work, why will you get paid?
"This could be huge for companies struggling with legacy systems and modernization."
"This is one more pivotal point in AI's evolution. In 2025, more innovation and use cases will emerge, and human involvement is slowly being eliminated. It looks like a small improvement, but it's huge at its core and will significantly impact how AI will be used in a few years. Kudos Claude Team!" |
|
|
"libLISA is a tool that can fully automatically scan instruction space, discover instructions and synthesize their semantics. It produces machine-readable, CPU-specific x86-64 instruction semantics. It relies on as little human specification as possible: specifically, it does not rely on a handwritten (dis)assembler to dictate which instructions are executable on a given CPU, or what their operands are."
"Even though heavily researched, a full formal model of the x86-64 instruction set is still not available. This is caused by the sheer complexity of the x86-64 architecture: the informal specification found in Intel manuals is roughly 4700 pages, and even these are known to be not trustworthy."
"libLISA aims to solve this problem by using a CPU as the ground truth, and deriving semantics by observing instruction execution."
"We analyzed five different architectures: AMD 3900X, AMD 7700X, Intel i9-13900 (p), Intel i9-13900 (e) and Intel Xeon Silver 4110. For each architecture, we generated around 120k encodings."
Ok, so when I first learned assembly language (which was actually Z80, not x86, and after that I learned 6502 assembly language, before learning any x86 or MIPS, which I still don't know all that well...), I learned that if you use any bit patterns as instructions that are not *defined* as instructions in the documentation, "here be dragons!" The computer will do *something*, but who knows what. Every bit pattern fed to the CPU as an instruction will be interpreted as an instruction by the silicon, but the silicon is only designed to do the right thing for the official documented instructions; its behavior for every other bit pattern is just whatever it happens to do. It won't return an error message saying, hey, you can't do that -- in fact, it *can't* return an error message, because assembly language is way below the level of abstraction where there's such a thing as an "error message".
You might wonder why, then, anyone would care about these "undocumented" instructions and make any effort to document them? Why not just ignore them? Just make sure you write your programs, whether written by hand in assembly language or output from a compiler, such that they avoid the "here be dragons" instructions.
The reason people can't just do that is that hackers will try to get binaries onto your machine that use undocumented instructions. The hackers have taken the trouble to figure out what some of the undocumented instructions do, and if your reverse-engineering tools don't know what those instructions do, then you can't reverse engineer code from hackers.
So it turns out that a *complete* mapping of bit patterns to the instructions they carry out is desired. It's essential to fighting hackers and ensuring good computer security.
What's produced by this project, assuming their results are as advertised, isn't a truly *complete* mapping, because that's really not possible -- there are just too many possible input bit patterns to test all of them -- but it's a complete mapping of certain groupings that are likely to produce meaningful instructions. They call this "mapping from encodings to semantics". |
|
|
Apparently it's really hard to get an AI image generator to generate an image of a glass of wine that is full to the brim. |
|
|
"As humanity gets closer to Artificial General Intelligence (AGI), a new geopolitical strategy is gaining traction in US and allied circles, in the national security, AI safety and tech communities. Anthropic CEO Dario Amodei and RAND Corporation call it the 'entente', while others privately refer to it as 'hegemony' or 'crush China'."
Max Tegmark, physics professor at MIT and president of the Future of Life Institute, argues that, "irrespective of one's ethical or geopolitical preferences," the entente strategy "is fundamentally flawed and against US national security interests."
He is reacting to Dario Amodei saying:
"... a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries' access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition's strategy to promote democracy (this would be a bit analogous to 'Atoms for Peace'). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe."
"This could optimistically lead to an 'eternal 1991' -- a world where democracies have the upper hand and Fukuyama's dreams are realized."
Tegmark responds:
"Note the crucial point about 'scaling quickly', which is nerd-code for 'racing to build AGI'."
"From a game-theoretic point of view, this race is not an arms race but a suicide race. In an arms race, the winner ends up better off than the loser, whereas in a suicide race, both parties lose massively if either one crosses the finish line. In a suicide race, 'the only winning move is not to play.'"
"Why is the entente a suicide race? Why am I referring to it as a 'hopium' war, fueled by delusion? Because we are closer to building AGI than we are to figuring out how to align or control it." |
|
|
In a follow-up to the neural network GameNGen version of Doom, we now have the DIAMOND diffusion world model version of Counter-Strike: Global Offensive. "DIAMOND" stands for "Diffusion As a Model Of eNvironment Dreams". It was trained on vastly less data than the Doom model, but it runs at only 10 frames per second. Like the Doom model, it isn't just generating video that looks like the game; it's actually playable. But it can do some weird things (see below). |
|
|
"LegalFast: Create legal documents, fast."
"Not using AI."
"LegalFast uses AI to power some functionality, but there's a difference between using AI as a tool and having ChatGPT generate complete documents."
So there you have it: Uses AI, but doesn't use AI. I wonder if this is going to become a thing.
Personally, I think a lot of what determines whether AI is appropriate is the reliability requirement. AI is great for things like brainstorming where you only need one great idea and it can generate some bad ones. AI would be bad to generate software for a spacecraft or a medical device. What reliability is required for legal documents? |
|
|
Linux kernel developers banned from contributing to the Linux kernel for being Russian.
Linux's second-in-command (after Linus Torvalds) Greg Kroah-Hartman said:
"Remove some entries due to various compliance requirements. They can come back in the future if sufficient documentation is provided."
People are speculating (see link to Reddit discussion below) that this is due to US sanctions against Russia, and because of that, it could apply not just to Linux but to all open source projects. If the maintainers are within the United States, or outside the United States if they are US citizens, or outside the United States but in countries with treaties with the United States, then the US legal system could punish them if they fail to ban Russian contributors.
p.s. If you're thinking nginx is beyond the reach of this, because it was created by a Russian and is maintained by a Russian, Igor Sysoev, you'd be wrong. Igor Sysoev left the project in 2022 and nginx today is maintained by F5, a US corporation (in Seattle), subject to US law, including sanctions against Russia.
One wonders if we are seeing the beginning of a split in the whole open source world between a US-based open source ecosystem and an "everybody else" open source ecosystem. If it's only Russian programmers and not anybody else the US has a problem with (China, etc), then maybe not?
Are US programmers allowed to contribute to Russian-based open source projects? |
|
|
ChatGPT topped 3 billion user visits in September 2024.
1. google.com - 82.0B
2. youtube.com - 28.0B
3. facebook.com - 12.3B
4. instagram.com - 5.7B
5. whatsapp.com - 4.5B
6. x.com - 4.3B
7. wikipedia.org - 3.8B
8. yahoo.com - 3.4B
9. reddit.com - 3.4B
10. yahoo.co.jp - 3.2B
11. chatgpt.com - 3.1B
12. yandex.ru - 2.7B
13. amazon.com - 2.6B
14. baidu.com - 2.4B
15. tiktok.com - 2.1B
None of the other language models (Gemini, Claude, Meta, X.AI, Perplexity, etc) register on this global ranking -- ChatGPT crushes them all. Which is interesting to me, as I use 8 LLMs (most of the time -- will probably try more soon) and ChatGPT doesn't consistently stand out as better than the others. But ChatGPT seems to have leapt far ahead in terms of brand recognition with users. |
|
|
"BabyAGI 2o is an exploration into creating the simplest self-building autonomous agent. Unlike its sibling project BabyAGI 2, which focuses on storing and executing functions from a database, BabyAGI 2o aims to iteratively build itself by creating and registering tools as required to complete tasks provided by the user. As these functions are not stored, the goal is to integrate this with the BabyAGI 2 framework for persistence of tools created."
The naming might be confusing. OpenAI came out with a model called "o1", and the name "2o" might get you thinking this BabyAGI is using the "o" model. That's not the case.
What this is is a variant of BabyAGI 2 that installs anything it likes and automatically runs code generated by LLMs, continuously trying to update itself and its tools in order to accomplish whatever task you gave it. It works with a variety of LLMs -- it uses a system called LiteLLM that lets you choose between more than 100 LLMs. It tries to do everything without human intervention, so when errors happen, it will try to learn from them and continue iterating towards task completion. |
|