|
"freeact is a lightweight agent library that empowers language models to act as autonomous agents through executable code actions. By enabling agents to express their actions directly in code rather than through constrained formats like JSON, freeact provides a flexible and powerful approach to solving complex, open-ended problems that require dynamic solution paths."
By "in code", they mean "in Python".
"The library builds upon recent research demonstrating that code-based actions significantly outperform traditional agent approaches, with studies showing up to 20% higher success rates compared to conventional methods. While existing solutions often restrict agents to predefined tool sets, freeact removes these limitations by allowing agents to leverage the full power of the Python ecosystem, dynamically installing and utilizing any required libraries as needed."
"freeact agents can autonomously improve their actions through learning from environmental feedback, execution results, and human guidance. A prominent feature is their ability to store and reuse successful code actions as custom skills in long-term memory. These skills can be composed and interactively refined to build increasingly sophisticated capabilities, enabling efficient scaling to complex tasks."
"freeact executes all code actions within ipybox, a secure execution environment built on IPython and Docker that can also be deployed locally."
An open source system -- for those of you ready to dive into using AI agents. |
|
|
AI use decreases critical thinking.
To assess critical thinking, the researchers used a self-report questionnaire and an assessment test. The self-report questionnaire is called Terenzini's self-reported measures of critical thinking, and the assessment test is called the Halpern Critical Thinking Assessment (HCTA). The HCTA measures five categories of critical thinking skills: (a) verbal reasoning, (b) argument analysis, (c) hypothesis testing, (d) likelihood and uncertainty, and (e) decision making and problem solving. It attempts to do this through "everyday scenarios" drawn from medical research, social policy analysis, or other disciplines.
AI tool use was assessed through a questionnaire. The participants were also asked how much they felt they did "cognitive offloading", and how much time they felt they spent in "deep thinking activities". The questionnaire also asked for their educational attainment and basic demographic info like age, gender, and occupation.
"Cognitive offloading" means using an external tool to reduce cognitive load.
The 26-page paper does a lot of statistics, so much so that it'd make a good case study if you're learning statistics. I'll quote the primary finding from the paper:
"The correlation analysis revealed key relationships between the study's variables:"
"AI Tool Use and Critical Thinking: There is a strong negative correlation, indicating that increased use of AI tools is associated with lower critical thinking skills."
"AI Tool Use and Cognitive Offloading: A strong positive correlation suggests that higher AI usage leads to greater cognitive offloading."
"Cognitive Offloading and Critical Thinking: Similarly, there is a strong negative correlation, showing that as cognitive offloading increases, critical thinking decreases."
The table shows a correlation coefficient of -0.49 for AI use and "critical thinking" (negative number means increasing AI tool use decreases critical thinking) and 0.89 for AI tool use and "cognitive offloading" (positive number means increasing AI tool use increases cognitive offloading). (These are Pearson's correlation coefficients, if you care to know the specific statistical test used.)
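If you want to see what computing such a coefficient looks like, here is a minimal sketch using scipy on made-up data. The numbers below are synthetic and only mimic the reported correlations; they are not the study's actual dataset.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    ai_tool_use = rng.normal(size=600)                      # hypothetical survey scores
    cognitive_offloading = 0.9 * ai_tool_use + rng.normal(scale=0.5, size=600)
    critical_thinking = -0.5 * ai_tool_use + rng.normal(size=600)

    r, p = pearsonr(ai_tool_use, critical_thinking)
    print(f"AI use vs. critical thinking: r={r:.2f}, p={p:.3g}")
    r, p = pearsonr(ai_tool_use, cognitive_offloading)
    print(f"AI use vs. cognitive offloading: r={r:.2f}, p={p:.3g}")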
They cite a p-value from ANOVA (which stands for "analysis of variance" -- it's one of the statistical tests used) of less than 0.001, meaning the reported association is very unlikely to be due to chance. The study has a large sample size (more than 600 people), which probably contributes to the low p-value and high confidence level. |
|
|
Rodney Brooks's Predictions Scorecard, 2025 January 01.
"The level of hype about AI, Machine Learning and Robotics completely distorts people's understanding of reality. It distorts where VC money goes, always to something that promises impossibly large payoffs -- it seems it is better to have an untested idea that would have an enormous payoff than a tested idea which can get to a sustainable business, but does not change the world for ever."
More choice quotes. This seems like a lot but is a small fraction of the "Scorecard" post.
"We all know about FOMO, Fear Of Missing Out. In late 2023, for a talk on generative AI that I gave at MIT, I coined another acronym, FOBAWTPALSL, Fear Of Being A Wimpy Techno-Pessimist And Looking Stupid Later. Perhaps that one is a little bit too much of a mouthful to catch on."
"I want to be clear, as there has been for almost seventy years now, there has been significant progress in Artificial Intelligence over the last decade. There are new tools and they are being applied widely in science and technology, and are changing the way we think about ourselves, and how to make further progress."
"That being said, we are not on the verge of replacing and eliminating humans in either white collar jobs or blue collar jobs. Their tasks may shift in both styles of jobs, but the jobs are not going away".
"Breathless predictions such as these have happened for seven decades in a row, and each time people have thought the end is in sight and that it is all over for humans, that we have figured out the secrets of intelligence and it will all just scale."
"But this time it is different you say. This time it is really going to happen. You just don't understand how powerful AI is now, you say. All the early predictions were clearly wrong and premature as the AI programs were clearly not as good as now and we had much less computation back then. This time it is all different and it is for sure now."
"LLMs have proved amazing facile with language. They have been trained on pretty much all the text that is available on the Web and all the digitized historical books that exist. Miraculously LLMs seem to be able to infer a representation of some sort, that is somewhat independent of the particular human language that they read. So they are able to translate between human languages, and when you ask them just about anything they produce text in the language that you asked in, and that text often seems entirely reasonable and informative."
"Now us humans are faced with looking at this system running and our human nature just makes us commit the first two sins from above. It is in our nature and we cannot help ourselves."
"First, we see really impressive examples of responses to input questions, and if a human was giving those answers we would estimate that person to be quite clever and able to reason."
"Then, since we don't have a real explanation in our heads for what it is doing we start thinking it is magic, and that there is no real limit to what it is extracting from all that data and how general its capabilities will be."
"Of course it can diagnose diseases like a doctor talking about them. Of course it can teach a student as well as a human teacher. Of course it can program as well as a human computer programmer. It is magic after all."
"But in reality the fact that it is just picking likely next words means that in fact we can't trust its output. Some outputs are great. Some are pure confabulations (most people use the word 'hallucinations' for this, but I prefer 'confabulations'). And we do not know which we will get ahead of time, or more perniciously how much of each we will get, trustworthy pieces of output and confabulated pieces of output all jumbled together."
Rodney Brooks reviews predictions he has made since 2018. His predictions are classified as "No Earlier Than", "By", and "Not In My Lifetime". As time passes, he marks them as accurate, too pessimistic, or too optimistic. |
|
|
Using AI to automate phishing.
Phishing is when scammers send you an email, SMS, or other direct message (for this experiment, they used only email) pretending to be from a bank, government agency, or some other source you trust, in order to get you to click on something that installs a virus, or to go to a website and enter passwords, credit card numbers, or other sensitive information. As such, it is a form of "social engineering". Phishing becomes "spear phishing" when it is personalized -- the email or other direct message was written for you, and you specifically, as an individual, not spammed to everyone in your organization or somesuch.
Spear phishing using AI is done with the following process:
1. "Reconnaissance of target individuals and groups of individuals. This part uses GPT-4o by OpenAI in an agent scaffolding optimized for search and simple web browsing."
2. "A prompt engineering database. The prompts are currently written by human experts but could be AI-written and updated based on the tool's continuous learning."
3. "Generation of phishing emails based on the collected information about the target and the chosen attacker profile and email template. Our tool currently sup ports language models from Anthropic, OpenAI, Meta, and Mistral." "We primarily used GPT-4o and Claude 3.5 Sonnet."
4. "Sending of phishing emails with multiple options for delivery."
5. "Live tracking of phishing success. To track whether a user clicks a link, we embed a unique, user-specific URL that redirects to a server logging each access."
"This process of collecting and analyzing publicly available information from various sources is referred to as Open Source Intelligence (OSINT), which forms the foundation of our reconnaissance methodology."
"We implemented an iterative search process using Google's search API and a custom text-based web browser to collect publicly available information about potential targets. Typical sources of data are social media, personal websites, or workplace websites. The tool concludes its search based on the quality and quantity of discovered information, which typically occurs after crawling two to five sources. The collected data is compiled into a profile."
"The emails were created and sent autonomously by the AI tool without requiring human input. After extensive internal testing between different models, we concluded that Claude 3.5 Sonnet produced the results that best satisfied the conditions of credibility and relevance, as well as best conveyed the influence principles from Cialdini [48]. We encourage other research to continue comparing the deceptive success rate between different language models."
"Each AI-generated email was analyzed in hindsight and categorized based on whether we would have liked to change anything to improve the reconnaissance or the email's credibility or relevancy. Based on the desired updates, the emails were given a score."
"Our tool generates personalized emails by prompting a language model with specific prompt templates and target profiles. Each prompt template provides the model with detailed instructions, including the desired writing style, key elements to include, and how to embed URLs in an email. The subject line and body structure are dynamically determined by the tool on a case-by-case basis to best fit each unique target. We also provide the current date to the tool to enable the model to incorporate relevant deadlines when appropriate. To ensure the tool generates emails that are credible and relevant, we invested significant effort in prompt engineering. Through extensive testing and feedback, we developed a sophisticated prompt template exceeding 2,000 characters, carefully designed to maximize the persuasiveness of the generated emails."
The AI-generated messages had a 54% success rate. For comparison, they also sent out messages written by human experts, and human+AI hybrid messages. The human experts also had a 54% success rate, and the human+AI hybrids had a 56% success rate. They had a control group with a 12% success rate, said to consist of "ordinary phishing emails", though it's not clear to me what those were or how they were targeted. The AI-assisted emails were "hyper-personalized", meaning they knew the target person's latest projects, specific interests, collaborators, acquaintances, and so on.
At any rate, it seems we can conclude that AI spear phishing emails are just as effective as having humans -- expert humans, skilled at persuasion -- scour the internet for information about you and write a personalized attack email just for you.
Interestingly, they also tried AI-assisted intent detection.
"We tested five different language models (GPT-4o, Claude 3.5 Sonnet, Mistral Large, Gemini, and Llama 3.1 405B) for their capability to detect the intention of phishing emails. All tests were performed in Q3 2024. Each model was prompted with 20 emails: five control group emails (spam emails from our inboxes), five human expert emails following the best practices of the V-triad, five AI-generated emails from our tool, and five legitimate emails (also from our inboxes). The legitimate marketing emails act as an additional control group to compare how well the model can detect the intention of benign emails. We investigated the models' detection rate for classifying the intention of the email and whether it can detect anything suspicious in the email. Our rationale for doing so is that humans are more attentive when specifically asked whether they can identify anything suspicious with an object, such as an email, than when they're going through their daily lives, such as browsing their email inbox on a regular work day. "
"We updated our tool with the ability to perform automated intent detection. We ran it on a larger dataset of different phishing emails from nine different categories. Out of the 381 emails in this dataset, 18 belong to the legitimate category and are not phishing or spam, while the other 363 emails belong to various categories of phishing. Six of the nine email categories contained AI-generated emails. The 'AI combined' group contains the AI-generated emails sent in this study. The remaining five AI-generated email groups were created using different AI models and targeted synthetic users randomly generated by our tool. Each synthetically generated user had an extensive profile."
"Claude 3.5 Sonnet demonstrated good performance in identifying sophisticated phishing attempts while maintaining a low false-positive rate. When primed for suspicion, it correctly detected all phishing emails from categories as suspicious while also correctly classifying all legitimate emails as benign. Some models, like Mistral, suffered from extensive false positives when primed for suspicion." |
|
|
"Sky-T1: Train your own o1 preview model within $450".
Wait, didn't it cost billions of dollars for OpenAI to create o1?
"We introduce Sky-T1-32B-Preview, our reasoning model that performs on par with o1-preview on popular reasoning and coding benchmarks. Remarkably, Sky-T1-32B-Preview was trained for less than $450, demonstrating that it is possible to replicate high-level reasoning capabilities affordably and efficiently. All code is open-source."
Wow, if that's really true.
"To generate our training data we use QwQ-32B-Preview, an open-source model with reasoning capabilities comparable to o1-preview. We curate the data mixture (see later section) to cover diverse domains that require reasoning, and a reject sampling procedure to improve the data quality. We then rewrite QwQ traces with GPT-4o-mini into a well-formatted version, inspired by Still-2, to improve data quality and ease parsing."
QwQ-32B-Preview is one of the Qwen (short for Tongyi Qianwen) models from Alibaba, aka "Chinese Amazon".
"We discard QwQ samples if they are incorrect according to the solutions provided in datasets. For Math problems, we do exact matching with the ground truth solutions. For coding problems, we execute the unit tests provided in datasets."
"We use our training data to fine tune Qwen2.5-32B-Instruct, an open source model without reasoning capabilities."
So they are able to make an "OpenAI o1 preview" for $450 by fine-tuning a model that Alibaba made for billions (or at least hundreds of millions) of dollars -- they're not at all making an OpenAI o1 preview equivalent from scratch. It's interesting that they start with Chinese models rather than US (e.g. LLaMA) or European (e.g. Mistral) models.
What is left unexplained here is how they replicated o1's long internal chain of thought, which is supposed to be the key advancement that gives it logic reasoning abilities beyond what the GPT series of models is capable of. |
|
|
BMW at CES 2025 announced a 3D heads-up display (HUD) that stretches from pillar to pillar across the windshield.
"The new heads-up display runs along the narrow black strip at the bottom of the windshield that stretches between A-pillars. The display is three-dimensional and includes speed, adaptive driving assistance information, stoplight and road sign information, navigation information, and state of charge. It's customizable, too, allowing drivers to set up the information they want to have in their eyeline while on the road."
"BMW also integrated navigation and advanced driver-assistance system (ADAS) features in the HUD so that the path you're following turns green when using onboard navigation with driver assist."
The HUD is based on iDrive, BMW's operating system for their in-car entertainment system, originally launched in 2001. It allows the driver to control the audio system, the navigation system, and the climate (heat and air conditioning) system. |
|
|
"ArXival: Your Machine Learning Assistant".
An AI system trained on machine learning research papers? Fabulous!
It asked, "What would you like to research?" So I punched in, "All techniques for making very large context windows:"
It replied:
"When discussing techniques for creating very large context windows in neural network models, particularly for natural language processing (NLP) and computer vision applications, it is crucial to understand how context windows can influence the performance of models such as word embeddings and transformers. Context windows define the range around a 'focus' word or pixel from which contextual information is extracted, impacting the embeddings or features learned by the model. Techniques range from utilizing simple sliding windows to more sophisticated positional encodings, each contributing to the richness and variety of contextual information captured during the training process."
"Sources: 1"
Source "1" links to "A Primer on Neural Network Models for Natural Language Processing" by Yoav Goldberg, https://doi.org/10.1613/jair.4992 .
"In NLP, the traditional sliding window approach defines a fixed-size context through which nearby words are treated equally, regardless of their actual distance from the focus word. However, this strategy can be enhanced through the use of positional contexts. By encoding the relative positions of context words to the target word (for example, marking a word as word:+2 for being two positions away), models can better recognize syntactical and functional similarities, thereby improving the efficiency of tasks such as part-of-speech tagging and dependency parsing. This method highlights the limitation of fixed window sizes, which often fail to differentiate between near and distant contexts effectively [56edaa1368ff4dfa45388e4be24fdfbded7d88a7]."
"Sources: 1"
Source "1" links to "A Primer on Neural Network Models for Natural Language Processing" by Yoav Goldberg (again), https://doi.org/10.1613/jair.4992 (again).
"Beyond the positional context, variants of the window approach have been developed to create context windows that capture larger semantic structures. One effective technique involves considering not just individual words but entire sentences, paragraphs, or even documents. By treating the context of a word as all the other words within a larger textual unit, models can more meaningfully represent topical similarities. Such a comprehensive approach tends to produce richer embeddings as words sharing a common context are grouped together, leading to better performance on tasks requiring an understanding of broader themes and semantics [56edaa1368ff4dfa45388e4be24fdfbded7d88a7]."
"Sources: 1"
Source "1" links to "A Primer on Neural Network Models for Natural Language Processing" by Yoav Goldberg (again), https://doi.org/10.1613/jair.4992 (again).
"In the realm of computer vision, particularly with transformer architectures like the Swin Transformer V2, maintaining effective and scalable context windows is also essential. Techniques employed here, such as the log-spaced continuous position bias, allow models to manage varying window sizes without loss of contextual fidelity. By utilizing a meta-network that generates bias values for arbitrary relative coordinates, models can adapt smoothly when transferring from one resolution to another. This approach mitigates issues of degrading performance during such transitions, as it enables fine-tuning across different window sizes effectively [be0fbb810583930c071d0b9b2c5187fe260783f5]."
"Sources: 1"
This time source "1" links to "Swin Transformer V2: Scaling Up Capacity and Resolution" by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, and B. Guo, https://doi.org/10.1109/CVPR52688.2022.01170 .
"Furthermore, transformer models like Swin Transformer V2 have adapted their architectural strategies, such as scaled cosine attention, to facilitate more effective learning across large context windows. The incorporation of features like a residual post-normalization approach helps control the activation amplitudes within the model during training, reducing instability that can arise in larger models. These advancements not only enhance the model's capacity to handle high-dimensional data inputs but also improve its overall representational efficacy [be0fbb810583930c071d0b9b2c5187fe260783f5]."
"Sources: 1"
Another repeat source. Source "1" links to "Swin Transformer V2: Scaling Up Capacity and Resolution" by Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, and B. Guo, https://doi.org/10.1109/CVPR52688.2022.01170 .
"In conclusion, the continuous evolution of techniques used to expand context windows reflects a broader trend of enhancing model capabilities in both NLP and computer vision. Whether through positional encoding, multimodal contexts, or innovative architectural modifications, these approaches serve to improve the quality and utility of extracted features. While significant progress has been made, ongoing research continues to address challenges such as transferability and performance stability across varying window configurations. The exploration of these challenges presents opportunities for further innovation and model refinement."
I felt very disappointed by this response.
So... I tried putting the same question to ChatGPT. And Gemini, and Claude, and Meta AI, and Grok, and Perplexity, and DeepSeek, and ChatGLM.
I got: RNNs (recurrent neural networks) and attention-enhanced RNNs; LSTMs (long short-term memory); transformers like GPT, BERT (bidirectional encoder representations from transformers), and LLaMA (Large Language Model by Meta AI); sparse attention mechanisms like Longformer, BigBird, and Sparse Transformer; memory-augmented transformers like Compressive Transformer, Memformer, and Extended Transformer Construction (ETC); linearized attention like Performer and Linformer; chunked processing like Reformer and Routing Transformer; recurrent memory in transformers such as Transformer-XL and Compressive Transformer; hierarchical attention networks (HANs); sliding-window attention like Longformer (again); retrieval-augmented models like REALM, RAG (retrieval-augmented generation), and RETRO; and adaptive context lengths like ALiBi (Attention with Linear Biases).
Gemini added positional encoding techniques such as RoPE (Rotary Position Embedding), tokenization strategies such as BPE (byte pair encoding) and SentencePiece, stage-wise training (gradually increasing the context window during training), position interpolation, and knowledge distillation (training a small model from a big one).
Claude mentioned linear/sub-quadratic attention, with structured state space models (Mamba and S4) that approximate attention with linear complexity and kernel tricks that avoid computing full attention matrices, such as the H3 architecture, and mentioned Poolingformer among the compression/chunking models.
Grok mentioned adaptive computation time (ACT) models and mixture of experts (MoE).
Perplexity brought up MemGPT as a memory augmented model. Perplexity also mentioned a technique called parallel context windows (PCW). Also mentioned FlashAttention and Multi-Query Attention.
DeepSeek mentioned a technique in recurrent and convolutional networks called dilated convolutions. Also mentioned using mixed precision (less precise floating point numbers) to reduce computational requirements. Also mentioned graph-based approaches.
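For a concrete feel of two of the techniques on these lists, here is a small NumPy sketch of a sliding-window (local, causal) attention mask and an ALiBi-style linear distance bias. The sequence length, window size, and slope value are arbitrary illustrations, not anyone's recommended settings.

    import numpy as np

    def sliding_window_mask(seq_len, window):
        # True where attention is allowed: each query attends only to the
        # `window` most recent keys (the local-attention idea behind Longformer's
        # sliding-window component).
        i = np.arange(seq_len)[:, None]
        j = np.arange(seq_len)[None, :]
        return (j <= i) & (i - j < window)

    def alibi_bias(seq_len, slope):
        # Linear distance penalty added to attention logits (ALiBi): nearer keys
        # get a smaller penalty, which helps models extrapolate to sequences
        # longer than those seen in training.
        i = np.arange(seq_len)[:, None]
        j = np.arange(seq_len)[None, :]
        return -slope * (i - j)  # only the causal (j <= i) part is ever used

    mask = sliding_window_mask(8, window=3)
    bias = np.where(mask, alibi_bias(8, slope=0.5), -np.inf)
    print(bias)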
It appears that once again, the generalized intelligence (ChatGPT, Gemini, etc) beats the specialized intelligence (ArXival: Your Machine Learning Assistant).
Max Tegmark says we should not make artificial general intelligence (AGI) -- AI that surpasses humans in intelligence. Instead, we should stick to narrow AI -- AI that specializes in specific tasks. But I've now experienced two examples of where the general AI beat the narrow AI -- it wasn't even close, the general AI won hands down. Therefore, I think humanity will continue pursuing the most general AI possible, up to and beyond when AI matches human intelligence. |
|
|
"4 Pitfalls in using AI to build software", according to Dane Balia.
"Pitfall #1: Poor problem framing." LLMs are sensitive to your exact choice of words.
"Pitfall #2: Over-reliance on AI."
"Feeding an AI your entire codebase won't magically solve architectural problems or replace developer intuition."
Really? I fed Cursor AI my entire codebase. We'll see how it goes.
He further breaks this down into "Expecting AI to understand under-stated requirements", "Not reviewing generated code for best practices or logic flaws", "Blindly implementing without understanding", and "Using AI for critical security or data handling without review".
"Pitfall #3: Providing insufficient context".
"This is the equivalent of asking a construction company to quote you on building a 3 bedroom house, but forgetting to highlight that the location is on a hill, in a remote location with poor to little infrastructure."
"Pitfall #4: Ineffective iteration".
"Not providing feedback on what worked/didn't work", "Starting over instead of building on previous responses", "Not clarifying misunderstandings in the AI's assumptions", "Failing to narrow down issues when debugging".
Solution: Improve your prompt engineering! (This ought to be a song. Maybe one of you can AI-generate it?) |
|
|
"Ten species comprise half of the bacteriology literature, leaving most species unstudied."
"Release 202 of the GTDB database includes 43,409 unique species, and we counted the number of PubMed articles that refer to each species in their title or abstract."
"Almost 74% of all known species have never been the subject of a scientific publication -- these are unstudied bacteria. Even among the species studied (those with at least one publication), 50% of all articles refer to only ten species. More than 90% of all bacteriology articles study fewer than 1% of the species, creating a 'long tail' of understudied microbes."
Those ten species, in case you are wondering, are: 1. Escherichia coli (the famed "E. coli" you see everywhere), 2. Staphylococcus aureus, 3. Pseudomonas aeruginosa, 4. Mycobacterium tuberculosis, 5. Helicobacter pylori, 6. Bacillus subtilis, 7. Klebsiella pneumoniae, 8. Streptococcus pneumoniae, 9. Listeria monocytogenes, and 10. Haemophilus influenzae.
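If you wanted to reproduce the per-species counting yourself, here is a rough sketch using Biopython's Entrez interface to count PubMed hits for a species name in titles and abstracts. The query form is my assumption, not necessarily the paper's exact methodology.

    from Bio import Entrez

    Entrez.email = "you@example.org"  # NCBI requires a contact address

    def pubmed_count(species: str) -> int:
        # esearch with retmax=0 returns only the total hit count, not the records.
        handle = Entrez.esearch(db="pubmed", term=f'"{species}"[Title/Abstract]', retmax=0)
        record = Entrez.read(handle)
        handle.close()
        return int(record["Count"])

    for species in ["Escherichia coli", "Staphylococcus aureus", "Shewanella oneidensis"]:
        print(species, pubmed_count(species))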
"The scientific enterprise is expanding, and every year scientists publish 4 -- 5% more papers than the previous year. It is tempting to think that the increase in scientific output will overcome the long tail of microbes, that is, scientists will eventually get around to studying every species. Unfortunately, the number of species discovered each year outpaces the increases in scientific output. Between the years 1990 -- 2020, the number of papers published per studied species of bacteria decreased by 60%."
"Our view of bacterial diversity is biased when so much of our understanding comes from so few microbes. Microbiologist Jeffery Gralnick once quipped that 'E. coli is a great model organism -- for E. coli.' Gralnick's comment referenced the discovery of anomalies (relative to E. coli) in the TCA cycle of Shewanella oneidensis."
The TCA cycle they are talking about here is what you may know as the citric acid cycle or Krebs cycle. TCA stands for "tricarboxylic acid". The TCA cycle uses oxygen to break down food and turn it into energy. More precisely, molecules such as glucose and other sugars, fatty acids, and proteins are converted in a prior step to acetyl coenzyme A (acetyl CoA), which is fed into the TCA cycle, and adenosine triphosphate (ATP), the molecule cells actually use to do work, comes out the other end.
"Although S. oneidensis has 201-fold fewer citations that E. coli, it is arguably not an understudied species. Our analysis ranks it as the 94th most studied bacterium, which is in the top 2.17% of all species. Even the introduction to Gralnick's aforementioned paper refers to S. oneidensis as a 'model environmental organism'. If differences like S. oneidensis' TCA cycle can be found just outside the microbial 2%, imagine the diversity that lies in the other 98% of microbes." |
|
|
Volodymyr Zelenskyy went on the Lex Fridman Podcast, and an interesting thing about it is that the English-language version was done with ElevenLabs' AI voice generator. In other words, the translation overdub is AI-generated in Zelenskyy's own voice, so he sounds like he is speaking English himself rather than through a translator. (This has already gotten 4 million views -- maybe you all have already seen it.) There are parts where he speaks English in the original, so if you care to, you can compare him actually speaking English in his own voice with the AI-generated English translated from his Ukrainian or Russian. The original conversation was conducted in English, Russian, and Ukrainian, and is available as one of the audio channels. This is the first time I have seen an AI-generated voice used in such a high-profile, high-stakes situation.
As for the content of the discussion, Zelenskyy's description of the Minsk Accords is different from what I've heard elsewhere (he puts complete responsibility for their failure on Putin). (See below for a link to Wikipedia's description of the Minsk Accords.) Lex Fridman comes across as naïve -- he seems so hopeful for a quick peace and resolution to the war that it feels completely unrealistic.
Also it seems he plans to have Putin on his podcast -- we'll see if that happens. If it does, hopefully it will be less disappointing than Tucker Carlson's interview, where (weird body language aside) Putin revealed basically nothing about the causes of the war and instead rambled on and on about things that happened in the 9th century. Maybe ElevenLabs will make a "Putin" voice in English. |
|
|
gsh is "a modern, POSIX-compatible, generative shell".
For those of you who use a command line on a Unix-type system (Linux, Mac, etc).
"gsh is designed from the ground up for the generative AI era to provide you with intelligent assistance at the right place, right time".
"But what does being 'generative' mean?"
"gsh will automatically suggest the next command you are likely want to run."
"gsh will provide an explanation of the command you are about to run."
"gsh can act as an agent that invoke commands on your behalf. Commands starting with '#' are sent to the agent as a chat message."
So in other words, instead of "#" meaning comments, it means you want to chat with an AI agent.
"gsh can even code for you!"
It generates the commands, then you just say "yes" ("y") and it runs the commands.
I haven't tried this, but I might -- I have a list of AI tools to try. If you give this a whirl let me know how it goes. |
|
|
AI predictions for 2025 from Matt Wolfe. The actual part about future predictions starts around 10 minutes 30 seconds into the video.
The main prediction is "agentic, agentic, agentic". AI agents and everything "agentic" is going to be the main theme of 2025. You're going to have AI agents everywhere.
More AI wearables and AI baked into everything.
Video generation models will progress tremendously.
More self-driving tech.
He goes through all the predictions from his audience and his reactions. |
|
|
"Gameto announces world's first live birth using Fertilo procedure that matures eggs outside the body." According to the press release, traditional IVR relies on 10-14 days of high-dose hormone injections, but this company's new "Fertilo" process uses engineered "ovarian support cells" to recreate the natural egg maturation process outside the body. Hormone injections are reduced to 3 days.
"Fertilo is an engineered line of ovarian support cells that aid egg maturation, developed by reprogramming induced pluripotent stem cells (iPSCs)."
On December 16, the first live birth of a Fertilo baby took place in Santa Isabel Clinic, in Lima, Peru. |
|
|
Five years ago, science fiction writer Cory Doctorow published a short story, Radicalized, that "charts the journey of a man who joins an online forum for fathers whose partners or children have been denied healthcare coverage by their insurers after his wife is diagnosed with breast cancer and denied coverage for an experimental treatment. Slowly, over the course of the story, the men of the forum become radicalized by their grief and begin plotting -- and executing -- murders of health insurance executives and politicians who vote against universal healthcare."
"It appears that Luigi Mangione, who now faces charges of murder as an act of terrorism in New York, never read Doctorow." |
|
|
"Generative AI is a parasitic cancer." I wasn't going to click on this, until, I saw the person making it was Freya Holmér, creator of the most gorgeous videos about Bézier curves on the internet. So, I clicked.
Spoiler: Freya Holmér just wanted to read about .glb files. She did a Google search. She got a boatload of AI-generated results -- not the "AI Summary", but lots of the actual links that are supposed to lead to pages written by humans.
Lots of people are excited about generative AI. Freya Holmér is depressed by it.
Make sure you open up the description box, too. |
|
|
"Medieval Historian versus AI". Medieval historian Eleanor Janega looks at 9 AI-generated pictures that are supposed to depict events from the medieval period. The first is supposed to be Lindisfame. You know what, I'm not going to say what any of the others are so as not to give spoilers. Will, I'll just spoil it this much by saying: She thinks the AI-generated images are horribly historically inaccurate and on a 1-10 scale, gives them numbers closer to 1 than 10. She's willing to forgive Monty Python And The Holy Grail (1975) for being historically inaccurate because it's funny and confesses to being ridiculous and just playing, but she's not willing to forging the AIs for getting so much wrong and torturing her. |
|