# I put Claude inside Blender's Text Editor

I got tired of the alt-tab loop. Write a Blender script. Hit an error. Switch to a browser. Paste the traceback to Claude. Copy the fix back. Re-run. Repeat. So I built Claude Code for Blender, an extension that puts Claude in the Text Editor's sidebar with the active script as automatic context, scene-aware tools, and the ability to actually run the Python it generates. Here's what I learned building it.

## The shape of the thing

The extension is pure Python — no native dependencies, no build step. About 5,200 lines across 12 files, packaged as a Blender 4.2+ extension manifest. You drop the folder into your extensions directory, enable it in preferences, hit N in the Text Editor, and Claude shows up next to your code.

Two things made it more than a chat wrapper:

- Claude can run code in your Blender session — with undo support, and if the code raises an exception, the traceback gets sent back automatically and Claude tries again.
- Claude can edit your text blocks as if they were files on disk — not by generating diffs you copy-paste, but by actually opening and writing them.

The second part is where the interesting engineering lives.

## Two backends, one UI

I shipped with two backends and a toggle to switch between them:

- **CLI backend** — shells out to `claude -p` (the Claude Code CLI in headless mode). Uses your existing Pro/Max/Team subscription. No API key required.
- **API backend** — direct calls to the Messages API with prepaid credits. Implements its own agentic tool-use loop with Blender-specific tools.

Why both? Because the constraints are different. With the CLI, you're piggybacking on a subscription you might already pay for, and you get all of Claude Code's built-in tools (Read, Edit, Write, Glob, Grep, Bash) for free. With the API, you can give Claude tools that talk directly to `bpy` — `create_object`, `add_modifier`, `setup_camera`, `set_render_settings` — instead of going through generated Python every time.
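For the CLI side, the core is just a subprocess call with the mirrored workspace as the working directory. A minimal sketch, assuming the public `claude -p` interface (verify the flags against `claude --help`; the function names are mine):

```python
import json
import subprocess

def build_cli_command(prompt: str) -> list[str]:
    # Headless invocation: -p runs a single prompt and exits,
    # --output-format json wraps the result so it can be parsed.
    return ["claude", "-p", prompt, "--output-format", "json"]

def run_headless(prompt: str, workspace: str) -> dict:
    # cwd is the mirrored text-block directory, so the CLI's
    # Read/Edit/Write tools resolve paths against the user's scripts.
    result = subprocess.run(
        build_cli_command(prompt),
        cwd=workspace,
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)
```

The same pattern works for streaming by switching to an NDJSON output format and reading stdout line by line instead of waiting for the process to exit.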
The CLI route turned out trickier than the API route, which surprised me. More on that below.

## The main-thread problem

Blender has one rule that shapes everything: `bpy.*` calls only work on the main thread. If you call them from anywhere else you either get garbage state or a segfault. That's a problem when your AI assistant streams responses for thirty seconds and you don't want the UI to freeze.

So the extension runs the request on a background thread and uses Blender's `bpy.app.timers` to dispatch back to the main thread. The bridge is small but it earns its place:

```python
import queue
import threading

class MainThreadBridge:
    def __init__(self):
        self._queue = queue.Queue()
        self._lock = threading.Lock()
        self._streaming_text = ""

    def execute_on_main(self, fn, *args, **kwargs):
        """Run fn on main thread, block bg thread until done, return result."""
        holder = _ResultHolder()

        def _wrapper():
            try:
                holder.set(fn(*args, **kwargs))
            except Exception as e:
                holder.set_error(e)

        self._queue.put(_wrapper)
        return holder  # caller does holder.wait()
```

A timer drains the queue every tick. Streaming text deltas use a separate lock-protected string so the UI redraw can read the latest chunk without serializing through the queue. Tool calls use the blocking variant — the background thread parks on `_event.wait()` while Blender executes the tool and returns the result.

One subtle gotcha: `bpy.context` inside a timer callback has no window, screen, or area. If the tool needs UI context (switching the active text block, redrawing a region), you have to wrap it in `temp_override()`. I lost an afternoon to that one.

## The text-block VFS

Here's the part I'm proud of. Blender's Text Editor stores scripts as in-memory `bpy.types.Text` datablocks. They're not files. They have no path. Claude Code CLI, on the other hand, expects to operate on files — that's what its Read/Edit/Write tools do.
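Bridging that mismatch means mirroring datablocks to real files. A minimal sketch of the write-out half, with a plain dict standing in for `bpy.data.texts` and the reserved-name check simplified (the function name is mine):

```python
import re
from pathlib import Path

def mirror_text_blocks(blocks: dict[str, str], workspace: Path) -> list[Path]:
    # `blocks` maps text-block names to contents; inside Blender this
    # would be built from bpy.data.texts instead of passed in.
    workspace.mkdir(parents=True, exist_ok=True)
    written = []
    for name, body in blocks.items():
        if name.startswith(("@", "^")):
            continue  # reserved prompt/config and scratch blocks stay local
        safe = re.sub(r"[^\w.\-]", "_", name)  # block names allow odd characters
        path = workspace / safe
        path.write_text(body)
        written.append(path)
    return written
```

The sync-back direction runs the same mapping in reverse: after the response, scan the workspace, compare against the datablocks, and copy changed file contents back in.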
So I wrote a small virtual filesystem that mirrors text blocks to disk:

- On request, every text block in the .blend gets written to `/tmp/blender_claude/<blend_name>/<block_name>.py`.
- The CLI runs with `cwd` set to that directory, so when Claude says "read my_script.py", the file is right there.
- After the response completes, the workspace is scanned and changes are synced back to Blender's text blocks.
- A background poll fires every two seconds to catch external edits and keep the two sides in sync.

The trick is that Claude doesn't need to know it's not editing real files. From its perspective the workspace looks like any other project directory. It uses Glob to find scripts, Edit to do find-and-replace, Write to create new ones. All of that just works, and the extension translates it back to Blender state on the way out.

A few text-block names are reserved and excluded from the sync:

| Name | Purpose |
| --- | --- |
| `@Prompt@` | Multi-line prompt buffer |
| `@CLAUDE.md@` | Per-project instructions, prepended to the system prompt |
| `^...` (caret prefix) | Local scratch blocks, never synced |

`@CLAUDE.md@` is the one users seem to like most. You write project-specific rules ("always validate with `compile()` before `exec()`", "prefer modifiers to bmesh", "this scene is for a music video, keep things stylized"), and they ride along with every prompt for that .blend file.

## The agentic loop on the API side

For the API backend, I wrote a simple tool-use loop:

1. Send `messages` to `/v1/messages` with `stream=True` and the tool catalog.
2. Read SSE deltas: text → push to UI, `tool_use` → buffer the input JSON.
3. On `message_complete`, if `stop_reason == "tool_use"`:
   - Run each tool on the main thread via the bridge.
   - Append the assistant message + `tool_result` content to `messages`.
   - Loop back to step 1.

   Otherwise: done.

Streaming responses are the easy part — `requests` with `stream=True`, parse SSE lines, push text deltas to the UI immediately. The harder part is buffering partial tool input.
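That buffering is small but easy to get wrong. A sketch of the accumulator, assuming tool input arrives as `partial_json` string fragments keyed by content-block index (the class name is mine):

```python
import json

class ToolInputBuffer:
    # Tool inputs stream as JSON fragments; an individual fragment is
    # usually not valid JSON on its own, so parse only at block stop.
    def __init__(self):
        self._partial: dict[int, str] = {}

    def on_delta(self, index: int, partial_json: str) -> None:
        self._partial[index] = self._partial.get(index, "") + partial_json

    def on_block_stop(self, index: int) -> dict:
        # Tools with no arguments may stream no deltas at all,
        # so fall back to an empty object.
        raw = self._partial.pop(index, "") or "{}"
        return json.loads(raw)
```

Keying by block index matters because a single assistant turn can contain several tool calls, each streaming its own fragments interleaved with text.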
Anthropic streams tool inputs as `partial_json` deltas, so you accumulate the string, parse it once on `content_block_stop`, and only then dispatch.

The tools themselves are scoped narrowly. I started with a single `execute_python` tool and let Claude write code for everything. That worked, but it was slower (extra round-trips for code generation), more error-prone (subtle bpy 4.0+ API changes broke things), and harder to undo. So I added dedicated tools for the boring 80%: create an object, add a modifier, assign a material, set up a camera. Claude reaches for those first and only falls back to `execute_python` for genuinely custom logic. That single decision — preferring narrow tools over a code-execution sledgehammer — was the biggest quality win.

## Error self-correction

When `execute_python` raises, the tool result includes the traceback. Claude reads it, sees what broke, and writes a fix. No human in the loop.

```python
try:
    exec(compile(code, '<claude>', 'exec'), namespace)
    return {"status": "ok"}
except Exception:
    return {"status": "error", "traceback": traceback.format_exc()}
```

Two details matter here. First, `compile()` before `exec()` — this catches syntax errors with line numbers so Claude can see exactly where the problem is. Second, every execution wraps `bpy.ops.ed.undo_push()` so a single chat turn is one undo step. If Claude breaks your scene, one Ctrl+Z makes it whole again.

## Blender 4.0+ API gotchas

Claude knows Python. Claude doesn't necessarily know that Blender 4.0 renamed half the Principled BSDF sockets, or that `mat.blend_method` became `mat.surface_render_method`, or that `mesh.use_auto_smooth` doesn't exist anymore. Without guidance, it confidently writes code that worked in 3.6 and fails on 4.2.
The fix is mundane: a chunk of API migration notes baked into the system prompt, plus runtime detection of the actual Blender/Python version so Claude knows what it's targeting:

```
Blender version: 4.2.3
Python version: 3.11.7

Critical 4.0+ migrations:
- Principled BSDF: "Subsurface" → "Subsurface Weight", "Specular" → "Specular IOR Level", ...
- Material: mat.blend_method → mat.surface_render_method
- Mesh: removed use_auto_smooth, use bpy.ops.object.shade_auto_smooth instead
```

Boring, but it cut the failure rate on first-attempt scripts dramatically.

## What I'd do differently

The CLI backend is the most popular path with users (subscription + zero setup), but it's also where I've spent the most debugging time. Sessions go stale. The CLI's NDJSON event format isn't documented as a public interface, so I had to read it empirically and add fallbacks for unknown event types. Bidirectional file sync is racy at the edges — if the user edits a text block while Claude is also editing the mirrored file, last write wins, and "last" depends on the poll interval.

If I were starting again I'd probably build the API backend first, ship it, and add the CLI later as the second backend rather than the default. The API path is more constrained but more honest about what's happening, and the agentic loop with narrow tools is genuinely good.

## Try it

Available on Gumroad for $10. Works with Blender 4.2+ and Bforartists 4.2+. You'll need a Claude subscription (CLI backend) or API credits (API backend). Licensed GPL-3.0; the repo is private for now, but the package ships with full source — once installed you can read everything in `blender_claude/` and modify it under the GPL terms.

If you build something with it — a procedural environment, a rigging tool, a render queue helper — I'd love to see it. The whole point of putting an AI in your DCC is that the boring scripting layer disappears. What you do with the time you get back is where the actual work happens.
