Meta lays off 600 from 'bloated' AI unit as Wang cements leadership (3 minute read)
Meta will lay off roughly 600 employees to reduce layers and operate more nimbly. The cuts will affect workers across Meta's AI infrastructure units, the Fundamental Artificial Intelligence Research unit, and other product-related positions, but will not touch employees within TBD Labs. Meta will pay 16 weeks of severance plus two weeks for every completed year of service, minus the notice period.

Why Cohere's ex-AI research lead is betting against the scaling race (5 minute read)
AI labs are racing to build giant data centers on the belief that adding more compute to existing AI training methods will eventually yield superintelligent systems. A growing group of AI researchers argues that scaling large language models may be reaching its limits and that other breakthroughs may be needed to improve AI performance. Cohere's former VP of AI Research has launched a new startup, Adaption Labs, to build thinking machines that adapt and continuously learn. If the startup is right about the limits of scaling, the implications could be huge: billions of dollars have already been invested in scaling.

Snapchat makes its first open prompt AI Lens available for free in the US (2 minute read)
Snapchat's new "Imagine Lens," initially exclusive to paid users, is now free in the US, letting users edit or generate Snaps from custom prompts. The expansion comes amid competition from Meta's and OpenAI's advanced AI video features. Snap aims to attract users by offering a limited number of free AI-generated images, with plans to expand access to other countries.

Thoughts on the AI buildout (23 minute read)
OpenAI's Sam Altman wants to create a factory that can produce a gigawatt of new AI infrastructure every week. Realizing that vision would require an enormous industrial effort. This article examines whether it is physically feasible and what it could mean for different energy sources, upstream CapEx, and the US vs. China competition.

How Well Does RL Scale? (14 minute read)
RL training for LLMs scales poorly: most gains come from allowing LLMs to productively use longer chains of thought. This may be evidence that compute scaling will be less effective for AI progress than previously thought, which could lengthen timelines and affect strategies for AI governance and safety.

Smuggled Intelligence (6 minute read)
GPT-5 Pro has solved complex problems in abstract algebra and aided in quantum computing research, showcasing AI's growing capability to perform expert-level tasks. Constructing benchmarks for such tasks still requires extensive human input, highlighting the ongoing need for human oversight in AI applications.

Helion (19 minute read)
Helion is a Python-embedded domain-specific language (DSL) for authoring machine learning kernels that compiles down to Triton. It makes it easier to write correct and efficient kernels while enabling more automation in the autotuning process. Helion combines a familiar, high-level PyTorch-like syntax with a powerful ahead-of-time autotuning engine to provide a unique balance of developer productivity, fine-grained control, and performance portability. It is currently in beta; a minimal kernel sketch appears after the items in this section.

World Models for Embodied Agents (3 minute read)
World-In-World introduces the first open benchmark platform for evaluating world models in closed-loop environments where agents actively interact with their surroundings, shifting the focus from visual fidelity to task performance.
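To make the Helion item above concrete, here is a minimal sketch of an element-wise add kernel in Helion's PyTorch-like style. It is a sketch only: the `helion.kernel` decorator and the `hl.tile` loop construct are assumed from Helion's public examples, and exact APIs may differ across beta releases.

```python
# Sketch of a Helion kernel, assuming the `helion.kernel` decorator and the
# `hl.tile` construct from Helion's public examples (APIs may change in beta).
import torch
import helion
import helion.language as hl

@helion.kernel()  # compiles the function body down to a Triton kernel
def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    # hl.tile splits the iteration space into tiles whose sizes are picked
    # by Helion's ahead-of-time autotuner rather than hand-tuned.
    for tile in hl.tile(out.size()):
        out[tile] = x[tile] + y[tile]
    return out

# Example usage (requires a GPU, since Helion targets Triton):
# result = add(torch.randn(4096, device="cuda"), torch.randn(4096, device="cuda"))
```

The point of the example is that the kernel body reads like ordinary PyTorch indexing code, while tiling and launch configuration are left to the autotuning engine.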
Reddit sues Perplexity for allegedly ripping its content to feed AI (3 minute read)
Reddit is suing Perplexity, SerpApi, Oxylabs, and AWMProxy to stop them from scraping its data. The complaint claims Perplexity will do anything to get Reddit's data except enter into a licensing agreement with Reddit directly. Reddit sent Perplexity a cease-and-desist letter in May last year demanding that it stop scraping Reddit data, and Perplexity claimed it didn't use Reddit content. That claim was shown to be untrue when content posted on Reddit appeared in Perplexity's output within hours of being posted.

Statement on Superintelligence (5 minute read)
Over 20,000 signatories, including AI pioneers Geoffrey Hinton and Yoshua Bengio, Apple co-founder Steve Wozniak, Richard Branson, Prince Harry, Steve Bannon, Glenn Beck, five Nobel laureates, and Pope Francis's AI advisor, called for a prohibition on developing superintelligence until it is proven safe and controllable and has strong public buy-in.

Thanks for reading, Andrew Tan, Ali Aminian, & Jacob Turner