
Google Launches Public Preview of Chrome DevTools MCP
Google has unveiled the public preview of 'Chrome DevTools MCP,' a Model Context Protocol server that enables AI coding agents to control and inspect a live Chrome instance. This tool allows for recording performance traces, inspecting DOM and CSS, executing JavaScript, reading console output, and automating user flows. The release addresses a common limitation in code-generating agents, which typically cannot observe the runtime behavior of the pages they create or modify. By exposing Chrome's DevTools to agents via MCP, Google turns static suggestion engines into closed-loop debuggers that take real measurements in the browser before proposing fixes.
MCP is an open protocol for connecting LLMs to tools and data. Google's DevTools MCP acts as a specialized server that exposes Chrome’s debugging surface to MCP-compatible clients. The developer blog describes this as 'bringing the power of Chrome DevTools to AI coding assistants,' with specific workflows like initiating a performance trace against a target URL and analyzing the resulting trace to suggest optimizations.
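At the wire level, MCP clients invoke server tools with JSON-RPC 2.0 `tools/call` requests. The sketch below builds such a request for a performance-trace tool; the tool name `performance_start_trace` and its `reload`/`autoStop` arguments are illustrative assumptions rather than a verbatim copy of the server's published schema.

```javascript
// Build the JSON-RPC 2.0 "tools/call" message an MCP client would send
// over stdio to a server such as chrome-devtools-mcp.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",          // MCP messages are plain JSON-RPC 2.0 objects
    id,                      // request id, echoed back in the response
    method: "tools/call",    // standard MCP method for invoking a tool
    params: { name, arguments: args },
  };
}

// Hypothetical call: start a trace, reloading the page and stopping
// automatically once the page settles.
const req = buildToolCall(1, "performance_start_trace", {
  reload: true,
  autoStop: true,
});

console.log(JSON.stringify(req, null, 2));
```

The response comes back as a matching JSON-RPC result whose content the client feeds to the model, closing the observe-and-fix loop described above.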
The official GitHub repository documents a broad toolset. Beyond performance tracing, agents can execute navigation primitives, simulate user input, and interrogate runtime state. Screenshot and snapshot utilities provide visual and DOM-state capture to support diffs and regressions. The server uses Puppeteer for reliable automation and waiting semantics, communicating with Chrome via the Chrome DevTools Protocol.
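Beneath Puppeteer, the Chrome DevTools Protocol itself is a WebSocket transport carrying JSON commands and events. For a sense of what the server ultimately drives, two representative CDP commands look like this (method and parameter names follow the public CDP documentation; session and target plumbing is omitted):

```json
{ "id": 1, "method": "Page.navigate", "params": { "url": "https://example.com" } }
{ "id": 2, "method": "Runtime.evaluate", "params": { "expression": "document.title" } }
```

Puppeteer layers waiting semantics and lifecycle management on top of these raw messages, which is why the server delegates automation to it rather than speaking CDP directly for every tool.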
Setup for MCP clients is intentionally minimal. Google recommends adding a single config stanza that shells out to npx, always tracking the latest server build. This server integrates with multiple agent front ends and targets Node.js ≥22 and current Chrome.
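For reference, the project README shows a configuration along these lines; the exact top-level key (`mcpServers` here) varies slightly between agent front ends:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```

Because the config shells out to `npx` with the `@latest` tag, each client session resolves and runs the newest published server build without a separate install step.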
Google's announcement highlights practical prompts demonstrating end-to-end loops: verifying a proposed fix in a live browser, analyzing network failures, simulating user behaviors to reproduce bugs, inspecting layout issues, and running automated performance audits. These operations can now be validated with actual measurements rather than heuristics.