OpenAI workspace agents 🤝, Google Workspace Intelligence 🌐, Qwen3.6-27B 🤖


TLDR

Together With Google Cloud

TLDR AI 2026-04-23

Google Cloud Next is underway! (Sponsor)

If you're building for the agentic era, you need AI-optimized infrastructure to deliver on new requirements.

We announced a significant expansion of our AI infrastructure portfolio, including the eighth generation of our Tensor Processing Units (TPUs), which for the first time includes two distinct chips and specialized systems engineered specifically for the agentic era.

Ready to learn how to leverage TPUs for your own training and inference workloads? Start here with this course.

🚀

Headlines & Launches

Introducing workspace agents in ChatGPT (9 minute read)

OpenAI introduced workspace agents in ChatGPT, allowing teams to create shared AI agents for complex tasks and workflows. These agents, powered by Codex, perform tasks like generating reports, writing code, and managing communication, while integrating with various tools like Slack. Workspace agents are currently available in research preview for select ChatGPT plans, aiming to streamline collaboration and improve productivity.
Google debuts Workspace Intelligence for Gemini Workspace (4 minute read)

Google launched Workspace Intelligence, enhancing Google Workspace with a semantic layer to integrate emails, chats, files, and projects for Gemini-powered agents. This update includes major product enhancements like natural-language spreadsheet building in Sheets and AI-driven features in Docs, Slides, Gmail, and Drive. Workspace Intelligence aims to make Workspace a centralized control layer for business operations, emphasizing security, context integration, and cross-application functionality.
Ex-OpenAI researcher Jerry Tworek launches Core Automation to build the most automated AI lab in the world (1 minute read)

Core Automation is an AI lab started by Jerry Tworek, a former OpenAI researcher, that aims to build the most automated AI lab in the world. It will start by automating its own research before developing new algorithms that go beyond pre-training and reinforcement learning. The lab will also create architectures designed to scale better than transformers. The team contains experts in frontier models, optimization, and systems engineering.
🧠

Deep Dives & Analysis

Advancing Search-Augmented Language Models (19 minute read)

Perplexity's two-stage pipeline for search-augmented language models uses initial Supervised Fine-Tuning (SFT) followed by Reinforcement Learning (RL) to optimize factual accuracy, user preference, and tool-use efficiency. This approach, starting with Qwen3 models, separates compliance from search improvement to achieve accuracy without compromising guardrails. The models showed enhanced accuracy on benchmarks like FRAMES and FACTS OPEN with reduced cost per query and improved efficiency in tool usage over existing models like GPT-5.4.
Benchmarking Inference Engines on Agentic Workloads (9 minute read)

Agentic workloads are reshaping inference engine benchmarks, demanding multi-turn, tool-using scenarios that strain KV cache management and scheduling due to longer traces and varied token distributions. Applied Compute introduced three workload profiles to aid in optimizing engine and accelerator performance. They released an open-source benchmarking tool to replay these scenarios, highlighting the need for solutions such as KV cache offloading and workload-aware routing to improve throughput and efficiency.
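The replay idea can be sketched with a stub engine: feed it a multi-turn trace with uneven prompt and output sizes, and record per-turn latency and overall token throughput. All names here (Turn, TRACE, replay) are illustrative, not from Applied Compute's tool:

```python
import time
from dataclasses import dataclass

@dataclass
class Turn:
    prompt_tokens: int
    output_tokens: int

# A hypothetical agentic trace: contexts grow across turns, outputs vary widely.
TRACE = [Turn(2048, 300), Turn(4096, 50), Turn(8192, 900)]

def replay(trace, generate):
    """Replay a trace against an engine's generate(prompt_tokens, output_tokens)
    callable, recording per-turn latency and overall token throughput."""
    latencies, total_tokens = [], 0
    start = time.perf_counter()
    for turn in trace:
        t0 = time.perf_counter()
        generate(turn.prompt_tokens, turn.output_tokens)
        latencies.append(time.perf_counter() - t0)
        total_tokens += turn.prompt_tokens + turn.output_tokens
    elapsed = time.perf_counter() - start
    return {"turns": len(latencies),
            "tokens_per_s": total_tokens / elapsed,
            "max_turn_latency": max(latencies)}

# Stub engine: pretend each token costs one microsecond to process.
stats = replay(TRACE, lambda p, o: time.sleep((p + o) * 1e-6))
print(stats)
```

A real harness would swap the stub for calls to an actual inference server; the point is that replaying recorded traces, rather than firing uniform single-shot prompts, is what surfaces KV cache and scheduling pressure.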
A good AGENTS.md is a model upgrade. A bad one is worse than no docs at all (11 minute read)

Most of what people put in AGENTS.md either doesn't help or actively hurts. The patterns that work are specific and learnable. This post looks at which patterns work, which fail, and how to tell which is which for your codebase. Different patterns move different metrics, so pick patterns that target the problem you actually have.
🧑‍💻

Engineering & Research

Data hoarding is good, actually (Sponsor)

Valuable data is often fragmented across various SaaS tools, file shares, and other silos that sneak up on you when you're trying to ship fast. In this webinar, Backblaze's director of Applied AI explains how you can build a scalable storage foundation on object storage using Backblaze B2 and B2 Overdrive for all phases of the AI data pipeline. See how you can store, label, and use all of your data without blowing up your budget. Watch on-demand.
Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model (2 minute read)

Qwen3.6-27B delivers flagship-level agentic coding performance. The Qwen team claims that it surpasses the previous-generation flagship Qwen3.5-397B-A17B across all major coding benchmarks. The model is 55.6 GB on Hugging Face, and there are even smaller quantized versions available. Tests show that the model delivers outstanding results, even when quantized.
Introducing Gemini Enterprise Agent Platform, powering the next wave of agents (17 minute read)

The Gemini Enterprise Agent Platform is a comprehensive platform for building, scaling, governing, and optimizing agents. It brings together model selection, model building, and agent building capabilities with new features for agent integration, DevOps, orchestration, and security. Agent Platform is a single destination for technical teams to build agents that can transform products, services, and operations. Agents can be delivered to employees through the Gemini Enterprise app.
Building agents that reach production systems with MCP (14 minute read)

Agents can connect to external systems through direct API calls, CLIs, and MCP. This post looks at where each fits and the patterns for building those integrations effectively. MCP becomes the critical compounding layer as production agents move to the cloud. Every integration built on MCP strengthens the ecosystem.
🎁

Miscellaneous

Microsoft Moving All GitHub Copilot Subscribers To Token-Based Billing In June (2 minute read)

Microsoft plans to roll out token-based billing for all GitHub Copilot customers starting in June. Copilot Business customers will pay $19 per user per month and receive $30 of pooled AI credits, while Copilot Enterprise customers will pay $39 per user per month and receive $70 of pooled AI credits. It is unclear what will happen to individual subscribers.
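Because the credits are pooled, a team's monthly cost and shared credit budget scale linearly with seat count. A minimal sketch (the per-seat dollar figures come from the article; the function and its structure are illustrative):

```python
def copilot_monthly(plan: str, seats: int) -> dict:
    """Project monthly cost and pooled AI credits under the reported
    token-based pricing for GitHub Copilot plans."""
    pricing = {
        "business":   {"per_seat": 19, "credits_per_seat": 30},
        "enterprise": {"per_seat": 39, "credits_per_seat": 70},
    }
    p = pricing[plan]
    return {"cost": p["per_seat"] * seats,
            "pooled_credits": p["credits_per_seat"] * seats}

# A 10-seat Business team: $190/month with a $300 pooled credit budget.
print(copilot_monthly("business", 10))
```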
When LLMs Get Personal (20 minute read)

Personalization in LLM responses introduces variation but often retains a stable semantic core across answers. This shared foundation results from common model priors, overlapping retrievals, and product constraints, with differences emerging in examples and emphasis. Understanding this allows businesses to optimize their presence in AI-generated content by focusing on being part of the model's core knowledge.
You're the Bread in the AI Sandwich (4 minute read)

AI is enhancing engineering workflows by handling execution, leaving humans to plan, review, and ensure quality output. Humans excel at diagnosing problems from multiple angles, a challenge for AI. Organizational AI strategies in the future will likely include personalized assistants for employees or a singular super-agent with departmental plugins.
How to really stop your agents from making the same mistakes (7 minute read)

Relying on prompts to correct recurring AI agent mistakes is an unreliable, "vibes-based" approach that decays as soon as conversations become complex. To solve this, Y Combinator CEO Garry Tan advocates for "skillification." Instead of letting an agent waste compute attempting to solve deterministic tasks (like historical calendar lookups) in its latent space, this framework forces the AI to execute precise local scripts.
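The "skillification" idea can be sketched in a few lines: a deterministic task like a historical calendar lookup becomes a local script the agent routes to, rather than something it reasons about token by token. The registry and routing names below (SKILLS, run_skill) are illustrative, not from Tan's framework:

```python
import calendar
import datetime

# A "skill": a deterministic local script the agent invokes for historical
# calendar lookups instead of guessing in its latent space.
def day_of_week(year: int, month: int, day: int) -> str:
    return calendar.day_name[datetime.date(year, month, day).weekday()]

# A toy tool registry; the agent dispatches matching tasks to the script.
SKILLS = {"day_of_week": day_of_week}

def run_skill(name: str, **kwargs):
    return SKILLS[name](**kwargs)

print(run_skill("day_of_week", year=1969, month=7, day=20))  # prints "Sunday"
```

The payoff is that the answer is exact and costs no model compute; the prompt only has to teach the agent when to call the skill, not how to do the arithmetic.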

Quick Links

TLDR is hiring a curator for TLDR AI (3-5 hrs/week, Fully Remote)

We're hiring an engineer/researcher at a major AI lab or startup to help write for 1M+ subscribers. Curators have been invited to Google I/O and OpenAI DevDay, scouted for Tier 1 VCs, and get early access to unreleased TLDR products. Learn more.
Nvidia backs AI company Vast Data at $30 billion valuation (2 minute read)

Nvidia backed Vast Data's $1 billion funding round, valuing the AI-focused infrastructure company at $30 billion.
Anker made its own AI chip (3 minute read)

Anker's custom Thus AI chip is designed for audio devices with local AI, computing directly where the model lives to enhance efficiency.
OpenAI Is Quietly Testing GPT Image 2, and the AI Image Market Will Never Be the Same (8 minute read)

OpenAI's unannounced testing of GPT Image 2 on LM Arena showcases its advancements in AI image generation.

Love TLDR? Tell your friends and get rewards!

Share your referral link below with friends to get free TLDR swag!
Track your referrals here.

Want to advertise in TLDR? 📰

If your company is interested in reaching an audience of AI professionals and decision makers, you may want to advertise with us.

Want to work at TLDR? 💼

Apply here, create your own role or send a friend's resume to jobs@tldr.tech and get $1k if we hire them! TLDR is one of Inc.'s Best Bootstrapped businesses of 2025.

If you have any comments or feedback, just respond to this email!

Thanks for reading,
Andrew Tan, Ali Aminian, & Jacob Turner


Manage your subscriptions to our other newsletters on tech, startups, and programming. Or if TLDR AI isn't for you, please unsubscribe.
