Boulder Future Salon

Thumbnail
So there's a desktop application just for writing research papers. It's called MonsterWriter, because research papers are monsters? If you're writing a research paper, you might check it out.

"Don't spend time on formatting. Focus on the content and the structure of the document. MonsterWriter takes care of the final appearance of your writing." "Write once, publish everywhere." "Export as PDF, LaTeX, HTML, Markdown. Decide at the last moment what template to use. No reformatting required!" "Fast editing of large documents." "No need to learn complex apps. MonsterWriter is not for writing letters, invoices, invites, ..."

Thumbnail
Pwn2Own Vancouver for 2022. Apparently this is the 15th year of the competition, but I didn't hear about it until today. The original idea was that you hack into a laptop and, if you succeed, you get to keep the laptop. Now there are also big cash prizes. Think your software is secure? Targets including Microsoft Teams, Oracle VirtualBox, Mozilla Firefox, Windows 11, Ubuntu Desktop, Apple Safari, and the Tesla Model 3 infotainment system were all successfully hacked.

Thumbnail
"In late 2019, the government of New South Wales in Australia rolled out digital driver's licenses. The new licenses allowed people to use their iPhone or Android device to show proof of identity and age during roadside police checks or at bars, stores, hotels, and other venues."

"Now, 30 months later, security researchers have shown that it's trivial for just about anyone to forge fake identities using the digital driver's licenses, or DDLs. The technique allows people under drinking age to change their date of birth and for fraudsters to forge fake identities."

Ok, are you ready to roll your eyes?

"The technique for overcoming these safeguards is surprisingly simple. The key is the ability to brute-force the PIN that encrypts the data. Since it's only four digits long, there are only 10,000 possible combinations."
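To see why four digits is hopeless, here's a toy sketch of the attack. This is not the actual NSW wallet scheme; the key-derivation function, salt, and iteration count are invented for illustration. The point is just that 10,000 candidates can be exhausted almost instantly.

```python
import hashlib

# Toy model (NOT the real NSW DDL format): assume the license data is
# encrypted under a key derived from a 4-digit PIN, and an attacker can
# test candidate keys offline against the stored file.
def derive_key(pin: str) -> bytes:
    # Hypothetical KDF parameters, chosen only for this sketch.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), b"salt", 100)

secret_pin = "4271"                  # the user's PIN
stored_key = derive_key(secret_pin)  # what the attacker can test against

def brute_force() -> str:
    for n in range(10_000):          # the entire keyspace: 0000..9999
        pin = f"{n:04d}"
        if derive_key(pin) == stored_key:
            return pin
    return ""

found = brute_force()
print("recovered PIN:", found)
```

Even with a deliberately slow key-derivation function, walking all 10,000 PINs takes seconds on a laptop; the only real defense at this key size is server-side rate limiting, which an offline attack bypasses entirely.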

Thumbnail
A company called Mathpix makes a product called Snip that uses AI to "convert images and PDFs to LaTeX, DOCX, Overleaf, Markdown, Excel, ChemDraw and more." It sounds like OCR on steroids.

Handwriting recognition; digital ink (an API with live drawing to support handwriting recognition for equations in your app); their own Markdown format ("Mathpix Markdown"); MPX CLI (converts files from your command line); support for multiple languages, including Chinese, Japanese, Thai, and other Asian languages, plus most Latin-alphabet languages -- not just German, French, etc., but Czech, Polish, and Croatian -- as well as Russian and some other Cyrillic languages. That's for printed text; for handwritten text it can handle Latin-alphabet languages plus Hindi. Converts images to: LaTeX, MS Word, SMILES, ChemDraw, and AsciiMath. Can convert equations back and forth between LaTeX and MS Word. Can convert PDFs to: Word, LaTeX, Markdown, HTML, and Overleaf (a cloud-based LaTeX editor). Can OCR tables and convert them to: LaTeX, Markdown, and CSV. Tables embedded in PDFs can be exported as TSV, Excel sheets, or sent to Google Sheets.

"Our company is focused on saving people time when writing math on a cell phone or computer. We provide a consumer app, Snip, that automates the tedious aspects of typing documents containing math, and we provide an API MathpixOCR for developers to integrate OCR capabilities into their own applications. We are equally passionate about both projects and Snip is powered by MathpixOCR."

Thumbnail
"A downturn is the perfect time to start a startup." "You'll get higher-quality customer feedback." "Startups have to make stuff that people want. This is harder to do when money is abundant than when it's scarce." "Companies scrutinize spend and cut anything that isn't clearly valuable. You have to build something with real ROI to get companies to use their limited budgets on your product. This constraint will force you to iterate and build a better product -- you won't be able to fool yourself into thinking you have product-market fit when you don't."

"You were going to experience a downturn at some point. Doing so early sets you up for success."

"The opportunity cost is lower than it was last year."

Thumbnail
"Recently in a very active Slack channel for founders, someone asked 'At what point do you ask potential investors to sign an NDA?' and of course he got crushed. The 'never' responses rolled over him like an avalanche." "Then people went on to tout the common wisdom that it's not the idea, it's the execution. 'Ideas are easy.'"

"So if this is true, if ideas are easy, then the corollary should be true. Namely: you don't need a one in a million idea to make a great startup." "But I disagree completely with this." "In 25 years, I haven't seen any evidence of a formula you can follow that will produce product market fit."

"There are times in history where there are more billion dollar ideas available for the taking than other times, and so the probability that your idea is one of the billion dollar ones is greater." "This is not one of those times."

"Business ideas abound when we're at the beginning of a massive technology adoption curve."

Thumbnail
Amazon Web Services (AWS) is making its latest Arm-based CPUs available to customers, an alternative to the x86 processors from Intel and AMD that have long dominated its server fleet. "The cloud colossus unveiled Graviton3 at its late-2021 re:Invent conference, revealing that the 55-billion-transistor device includes 64 Arm-compatible CPU cores, runs at a 2.6GHz clock speed, can address DDR5 RAM with 300GB/sec max memory bandwidth, and employs Arm's 256-bit Scalable Vector Extensions."

Thumbnail
A Lumencraft developer talks about his experience with the open-source Godot video game engine. I've posted about the Unreal Engine 5 release notes, so I might as well give some mention to other game engines, including an open-source one.

"Lumencraft is a top-down shooter with base-building elements where you're a lonely little digger sent into bug infested underground caves. The game is made in Godot Engine 3, with many custom-made technologies that enable a fully destructible environment, fluid simulation and dynamic lighting."

"Why did you choose Godot for your project?"

"It comes down to Godot being Open Source. We knew that we needed custom modifications for destructible terrain, dynamic lighting, and support for thousands of swarm monsters. All this is rarely supported in any engine out of the box. Additionally, most of our team was already familiar with Godot. Paweł Mogiła and I [Leszek Nowak] are teaching classes on simulations and game dev that are heavily based on Godot Engine, mostly due to Godot being perfect for fast prototyping. When you want to create a new feature for your game, it might take just a few hours to see if it will work."

Thumbnail
In October 2021, DeepMind announced that it had acquired the MuJoCo physics simulator and planned to open-source it. The open sourcing is now complete.

"Physics simulators are critical tools in modern robotics research and often fall into these two categories: Closed-source, commercial software, and open-source software, often created in academia."

"The first category is opaque to the user, and although sometimes free to use, cannot be modified and is hard to understand. The second category often has a smaller user base and suffers when its developers and maintainers graduate."

With MuJoCo open source, the world now has a full-featured open-source physics simulator backed long-term by an established, research-driven company: DeepMind.

"Features that make MuJoCo particularly attractive for collaboration are: Full-featured simulator that can model complex mechanisms, readable, performant, portable code, easily extensible codebase, and detailed documentation: both user-facing and code comments."

Thumbnail
HanLP: Han Language Processing. So apparently this is a natural language processing library that can handle 10 tasks in 104 languages, but it looks like it was developed for Chinese, so I'm guessing it's primarily useful for Chinese. The 10 tasks are tokenization, lemmatization, part-of-speech tagging, named entity recognition, dependency parsing, constituency parsing, semantic role labeling, semantic dependency analysis, and abstract meaning representation parsing. Wait, that's only 9. We need one of coreference resolution, semantic text similarity, text style transfer, keyphrase extraction, or extractive summarization to make 10.

Thumbnail
"An aptronym is when a person has a name that is uniquely suited to its owner. Some examples: Usain Bolt, the fastest human ever [...] William Headline, Washington Bureau Chief for CNN [...] William Wordsworth, poet."

"Maybe it's nominative determinism, the hypothesis that people tend to gravitate towards areas of work that fit their names (an idea captured in the ancient Roman proverb 'nomen est omen' meaning 'the name is the sign')."

"The name Tom Brady just sounds more powerful than Blake Bortles (a top quarterback prospect who flopped -- I totally predicted that he was going to suck)."

"The virtually universal practice of assigning a permanent name at birth (which only exists so that governments can more easily tax us) has caused a species-wide shift towards a more narcissistic and egotistical mode of being."

"In these preconquest regions of New Guinea names were rarely binding. What one was called varied according to time, place, mood, and setting. Names were improvised, not formally bestowed, and naming (much like local language flexibility) was often a kind of humorous exploratory play."

"Peasants didn't like permanent surnames. Their own system was quite reasonable for them: John the baker was John Baker, John the blacksmith was John Smith, John who lived under the hill was John Underhill, John who was really short was John Short."

"Native American children are given names that suit their personalities. If a name is given and proves to be a bad fit, the child's name is changed. At adolescence, the given name may be changed again. As the adult progresses through life, new names can be awarded."

Thumbnail
SymphonyNet composes music using a language model like GPT-3.

Thumbnail
Superintelligence means many humans will have to radically rethink their purpose in life. "Having a sense of purpose involves having a goal and structuring one's life around that goal. There are different sorts of goals. Some are more subject-oriented and others more world-oriented. For example, one person may have as a goal to learn how to play the flute. It can be intrinsically satisfying to learn a new skill and to play an instrument well, and no one else can learn how to play the flute for that person -- that is something only they can do for themselves; nor does it matter that other people can play the flute better. But another person may have as a goal to provide for their family, or to make a great work of art, or to help the disadvantaged, or to advance a field of science or philosophy. Those are things that others can do, too; nobody with a goal like that has a monopoly on their goal."

"What will happen to those goals in the future? There is a real possibility that we create artificial superintelligence in our lifetimes." "That would mean there's an agent out there that is better than the best of us at providing economically, creating art, helping the disadvantaged, making scientific and philosophical progress and just about anything else that we may want to do. Then the entire second class of goals -- world-oriented goals -- will be meaningless to pursue, as the superintelligence can achieve them for us, far more easily, quickly and efficiently than we could ever hope to."

Thumbnail
"Lonestar emerges from stealth with plans for lunar data centers." The company has contracted a commercial lunar lander developer, Intuitive Machines, to deploy a hardback-novel-sized "data center in a box" with 16 terabytes of storage. It will get its electric power from the lander. The landing site is Oceanus Procellarum, at the western edge of the near side of the moon.

Thumbnail
A brain riding a rocketship heading towards the moon. An extremely angry bird. A small cactus wearing a straw hat and neon sunglasses in the Sahara desert. An alien octopus floats through a portal reading a newspaper. A marble statue of a Koala DJ in front of a marble statue of a turntable. The Koala is wearing large marble headphones. Teddy bears swimming at the Olympics 400m Butterfly event. A giant cobra snake on a farm. The snake is made out of corn. A dog looking curiously in the mirror, seeing a cat. A Pomeranian is sitting on the Kings throne wearing a crown. Two tiger soldiers are standing next to the throne. A dragon fruit wearing karate belt in the snow. A bald eagle made of chocolate powder, mango, and whipped cream. A photo of a Corgi dog riding a bike in Times Square. It is wearing sunglasses and a beach hat. A photo of a raccoon wearing an astronaut helmet, looking out of the window at night. The Toronto skyline with Google brain logo written in fireworks. A chrome-plated duck with a golden beak arguing with an angry turtle in a forest. An art gallery displaying Monet paintings. The art gallery is flooded. Robots are going around the art gallery using paddle boards. A strawberry mug filled with white sesame seeds. The mug is floating in a dark chocolate sea. A robot couple fine dining with Eiffel Tower in the background. A cute corgi lives in a house made out of sushi. A blue jay standing on a large basket of rainbow macarons. A transparent sculpture of a duck made out of glass. Android Mascot made from bamboo. Sprouts in the shape of text 'Imagen' coming out of a fairytale book. A wall in a royal castle. There are two paintings on the wall. The one on the left is a detailed oil painting of the royal raccoon king. The one on the right is a detailed oil painting of the royal raccoon queen. A majestic oil painting of a raccoon Queen wearing red French royal gown. The painting is hanging on an ornate wall decorated with wallpaper.
A transparent sculpture of a duck made out of glass. The sculpture is in front of a painting of a landscape. A single beam of light enters the room from the ceiling. The beam of light is illuminating an easel. On the easel there is a Rembrandt painting of a raccoon. A bucket bag made of blue suede. The bag is decorated with intricate golden paisley patterns. The handle of the bag is made of rubies and pearls. Three spheres made of glass falling into the ocean. Water is splashing. Sun is setting.

No, it's not DALL-E 2 from OpenAI. It's Imagen, from Google Research's Brain team. Another AI system that generates images from your text descriptions.

Which is better? Hard to say. It's like comparing two human artists. Completely subjective.

I haven't yet had a chance to read the paper and learn how these new "diffusion models" produce these artistic images from text. But I thought I'd go ahead and pass this on to you.

Thumbnail
"By combining naturalistic language at scale with structured program representations, we discover a fundamental information-theoretic tradeoff governing the part concepts people name: people favor a lexicon that allows concise descriptions of each object, while also minimizing the size of the lexicon itself."

So this wasn't an experiment on AI systems, it was an experiment on humans, with some natural language AI systems helping with the analysis. Basically they created images using programs and asked people to describe them in words. The drawings were either line drawings or block drawings, with lines being used to draw nuts & bolts, vehicles, gadgets (e.g. rows of dials), or furniture, and blocks being used to draw bridges, houses, castles, and other buildings. What they found was that as the length of the programs increased, the length of the human descriptions increased. If a drawing was simple, you could make a program that, for example, just repeated a pattern, so it would be a small, simple program. In some cases, though, human descriptions could be even more efficient than the programs, meaning the programs got bigger faster. This is because the programs built everything from low-level concepts (lines and blocks), but humans could use new words to introduce new concepts, like "window", "door", "story", "center", etc. These new words are at an intermediary conceptual level (in between "block" and "house").

The researchers conclude that humans actually do a good job of simultaneously economizing both the size of the descriptions and the size of the vocabulary. Minimizing descriptions alone wouldn't minimize vocabulary, and vice versa; there's an optimal point where the combination is minimized, and that's roughly where human naming lands.