Figma has launched an MCP (Model Context Protocol) server that lets AI agents, including Cursor, Claude, and custom-built agents, create and edit designs directly on the Figma canvas. The launch arrives alongside built-in AI Image Tools, voice input, and text on path, making it the most significant AI feature expansion in Figma's history.

The Figma MCP server

The MCP server exposes Figma's canvas as a programmable interface for AI agents. A developer working in Cursor can instruct their coding agent to generate a UI component directly in Figma — the agent creates frames, applies styles, positions elements, and names layers without requiring the developer to switch tools or manually recreate an AI suggestion. Design systems and component libraries are accessible to agents through the MCP interface, enabling context-aware generation that respects established design tokens and patterns.
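To make the interaction concrete, here is a minimal sketch of what an agent-to-server request looks like at the protocol level. MCP is built on JSON-RPC 2.0, and tools are invoked via the `tools/call` method; the tool name `create_frame` and its arguments below are hypothetical placeholders for illustration, not Figma's documented tool names.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message.

    The envelope (jsonrpc/id/method/params) follows the MCP spec;
    the specific tool and argument names are assumptions.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# An agent asking a (hypothetical) Figma MCP tool to create a
# 375x812 frame named "Login" on the canvas.
request = make_tool_call(1, "create_frame", {
    "name": "Login",
    "width": 375,
    "height": 812,
})
print(request)
```

In practice an agent host like Cursor handles this plumbing through an MCP client library; the point is that every canvas operation reduces to structured tool calls like this, which is what lets any MCP-capable agent drive Figma without bespoke integration code.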

Built-in AI Image Tools

Four AI image operations are now native to Figma, removing the need to export to Photoshop for common tasks. Vectorize converts raster images into editable vector shapes. Remove Background isolates subjects with one click. Erase removes objects from images non-destructively. Expand extends image boundaries using AI inpainting. All four operations work on images embedded in the Figma canvas without leaving the application.

Voice input and text on path

Voice input allows designers to issue natural-language design commands using their microphone — describing modifications rather than navigating menus. Text on path is a long-requested typographic feature that allows text to follow curved paths or shape outlines, enabling logotype treatments and circular text layouts that previously required workarounds or third-party plugins.

Why it matters

The MCP server makes Figma the first major design tool to become a first-class target for AI agent design work. Rather than AI agents generating code that developers manually implement as designs, the design artifact itself can now be created and updated by agents. This closes the loop between AI-assisted development and design — enabling workflows where a single agent produces both the code and the corresponding UI design simultaneously.