Blueprint · AI / Developer Tools
AI Coding Workspace
A browser-based coding environment with file editing, AI help, model routing, code execution, and project memory.
Planned · Very Hard · AI · Code Editor · Sandboxing · Diffs · Model Routing · Project Memory
Overview
A Codex/Replit/Cursor-inspired workspace where a user can edit files, ask an AI assistant for changes, run code, and keep project context across sessions. The product is a workspace first, not a chat box with a file upload button.
Problem
AI coding help becomes fragile when the assistant cannot see the project shape, apply edits safely, remember previous decisions, or execute small checks in a sandbox.
Core users
- Developers prototyping small apps
- Students learning by editing and running code
- Builders who want an AI pair-programming workspace in the browser
- Technical writers creating reproducible code examples
MVP scope
- File tree with create, rename, delete, and open actions
- Code editor using Monaco or CodeMirror
- Chat panel with streaming AI responses
- AI-generated file edits with diff preview before applying
- Basic terminal/output panel for limited JS or Python execution (see the runner sketch after this list)
- Project memory for decisions, constraints, and recurring context
- Model selection or simple task-based routing
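Whatever isolation layer ends up around execution, the runner process inside the sandbox still needs its own wall-clock timeout and output cap. A minimal Node sketch of that runner, assuming the job has already been scheduled into an isolated container; the `ExecutionJob` fields and the limits are illustrative, not part of this blueprint:

```ts
import { execFile } from "node:child_process";

// Hypothetical job shape; field names are illustrative.
interface ExecutionJob {
  command: "node" | "python3";
  entryFile: string; // path inside the sandbox workspace
}

// Runs one job inside the already-isolated sandbox. Real isolation
// (no host access, memory caps, network rules) must come from the
// container around this process, not from this function.
function runJob(job: ExecutionJob): Promise<{ stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    execFile(
      job.command,
      [job.entryFile],
      {
        cwd: "/workspace",      // sandbox-local working directory
        timeout: 10_000,        // kill the process after 10s of wall time
        maxBuffer: 1024 * 1024, // cap captured output at 1 MiB
      },
      (err, stdout, stderr) =>
        err ? reject(err) : resolve({ stdout, stderr })
    );
  });
}
```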
Non-goals
- Do not support arbitrary native binaries in the MVP
- Do not build multiplayer editing before single-user flows are stable
- Do not let the assistant write files without a visible diff or confirmation
- Do not attempt full IDE parity with VS Code initially
Core system components
- Workspace shell with resizable editor, file explorer, chat, and terminal panels
- File service for workspace files, versions, snapshots, and edit application
- Conversation service for assistant messages, tool calls, and memory references
- Model routing layer that selects providers by task type and budget
- Execution service backed by isolated containers or serverless sandboxes
- Usage meter for tokens, execution time, and model cost
- Diff renderer and patch application module
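One workable contract for the patch application module is exact search/replace edits rather than raw line diffs: models emit them fairly reliably, and they fail loudly when the file has drifted. A minimal sketch, assuming a hypothetical `FileEdit` shape the assistant is prompted to produce:

```ts
// Hypothetical edit shape the model is asked to emit.
interface FileEdit {
  path: string;
  search: string;  // exact text expected in the current file
  replace: string; // text to substitute for it
}

// Applies one edit, refusing anything stale or ambiguous so the
// diff preview never shows a change the apply step cannot honor.
function applyEdit(content: string, edit: FileEdit): string {
  const first = content.indexOf(edit.search);
  if (first === -1) {
    throw new Error(`stale edit: search text not found in ${edit.path}`);
  }
  if (content.indexOf(edit.search, first + 1) !== -1) {
    throw new Error(`ambiguous edit: search text matches twice in ${edit.path}`);
  }
  return (
    content.slice(0, first) +
    edit.replace +
    content.slice(first + edit.search.length)
  );
}
```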
Suggested architecture
- Frontend: Next.js app with Monaco or CodeMirror, file explorer, command palette, diff viewer, chat panel, and terminal/output panel.
- Backend: APIs for projects, files, conversations, model calls, code execution requests, usage tracking, and memory updates.
- Storage: Postgres for workspace metadata, conversations, file versions, model calls, and usage; object storage for larger snapshots if needed.
- Execution: isolated containers or serverless sandboxes with timeouts, memory caps, and no host access; the MVP can restrict execution to JS and Python.
- AI layer: provider adapter around OpenRouter or multiple LLM APIs with task routing for planning, editing, debugging, explanation, and refactor work (see the routing sketch after this list).
- Deployment: single web app plus worker/execution service first; later split into separate API, execution, and model gateway services.
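Task routing can begin as a static lookup over broad task families, which matches the tradeoff below of routing by task family before attempting learned routing. A sketch with placeholder model IDs; the real table would be driven by eval and cost data:

```ts
type Task = "plan" | "edit" | "debug" | "explain" | "refactor";

// Illustrative routing table: cheap, fast models for conversational
// tasks, stronger models for multi-file edits. Model IDs are made up.
const ROUTES: Record<Task, { model: string; maxOutputTokens: number }> = {
  plan:     { model: "strong-reasoning-model", maxOutputTokens: 2048 },
  edit:     { model: "strong-coding-model",    maxOutputTokens: 8192 },
  debug:    { model: "strong-coding-model",    maxOutputTokens: 4096 },
  explain:  { model: "fast-cheap-model",       maxOutputTokens: 1024 },
  refactor: { model: "strong-coding-model",    maxOutputTokens: 8192 },
};

// A workspace's explicit defaultModel wins over the table.
function pickRoute(task: Task, workspaceDefault?: string) {
  const route = ROUTES[task];
  return workspaceDefault ? { ...route, model: workspaceDefault } : route;
}
```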
Data model
- Workspace: id, ownerId, name, defaultModel, createdAt
- File: id, workspaceId, path, language, currentVersionId, deletedAt
- FileVersion: id, fileId, contentRef, contentHash, createdBy, createdAt (see the versioning sketch after this list)
- Conversation: id, workspaceId, title, activeModel, createdAt
- AssistantMessage: id, conversationId, role, content, toolCalls, createdAt
- ModelCall: id, conversationId, provider, model, inputTokens, outputTokens, cost, latencyMs
- ExecutionJob: id, workspaceId, command, status, stdoutRef, stderrRef, startedAt, finishedAt
- UsageRecord: id, userId, workspaceId, type, quantity, cost, createdAt
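The File/FileVersion split implies append-only edits: every change writes a new immutable version and moves the file's currentVersionId pointer, which is what makes undo, diffing, and audit cheap. A sketch of that invariant; saveContent, insertVersion, and updateFile are hypothetical stand-ins for the real persistence layer:

```ts
import { createHash } from "node:crypto";

// Hypothetical persistence helpers; signatures are assumptions.
declare function saveContent(content: string): Promise<string>; // returns contentRef
declare function insertVersion(v: object): Promise<string>;     // returns FileVersion id
declare function updateFile(fileId: string, currentVersionId: string): Promise<void>;

// Every edit, manual or AI-generated, lands as a new FileVersion;
// the File row only ever moves its currentVersionId pointer forward.
async function commitEdit(fileId: string, content: string, userId: string) {
  const contentRef = await saveContent(content);
  const versionId = await insertVersion({
    fileId,
    contentRef,
    contentHash: createHash("sha256").update(content).digest("hex"),
    createdBy: userId,
    createdAt: new Date(),
  });
  await updateFile(fileId, versionId);
  return versionId;
}
```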
API design
- POST /api/workspaces - create a workspace
- GET /api/workspaces/:id/files - list files for the workspace
- PATCH /api/files/:id - update a file manually
- POST /api/ai/chat - stream assistant output (see the streaming sketch after this list)
- POST /api/ai/apply-diff - validate and apply a model-generated patch
- POST /api/execute - run a supported command in a sandbox
- GET /api/execute/:jobId - fetch execution status and output
- GET /api/usage - show model and execution usage
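POST /api/ai/chat is the only endpoint above that streams; the rest are plain request/response. A client-side sketch of consuming it with fetch and a ReadableStream reader, assuming the server emits plain text chunks (the actual wire format, SSE or otherwise, is left open here):

```ts
// Streams assistant output chunk-by-chunk into a UI callback.
async function streamChat(
  conversationId: string,
  prompt: string,
  onChunk: (text: string) => void
): Promise<void> {
  const res = await fetch("/api/ai/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ conversationId, prompt }),
  });
  if (!res.ok || !res.body) throw new Error(`chat failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```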
Key technical challenges
- Keeping AI context small but useful (see the packing sketch after this list)
- Applying edits safely across multiple files
- Preventing destructive file changes
- Sandboxing execution without making the product feel slow
- Streaming model responses and execution logs cleanly
- Handling multi-file refactors and partial failures
- Showing diffs clearly enough that users trust the assistant
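"Small but useful" context usually reduces to a packing problem: score candidate files for relevance, then add them greedily until a token budget runs out. A sketch under that framing; the score field and the characters-per-token estimate are placeholders:

```ts
interface Candidate {
  path: string;
  content: string;
  score: number; // assumed relevance signal: recency, imports, open tabs, etc.
}

// Rough estimate; a real implementation would use the provider's tokenizer.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

// Greedily packs the highest-scoring files under the budget so the
// prompt stays small without silently dropping the most relevant code.
function packContext(candidates: Candidate[], budgetTokens: number): Candidate[] {
  const picked: Candidate[] = [];
  let used = 0;
  for (const c of [...candidates].sort((a, b) => b.score - a.score)) {
    const cost = estimateTokens(c.content);
    if (used + cost > budgetTokens) continue;
    picked.push(c);
    used += cost;
  }
  return picked;
}
```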
Tradeoffs
- Start with single-user workspaces so permissions and collaboration do not dominate the first architecture.
- Support a small set of languages first to make sandbox behavior predictable.
- Use explicit diff confirmation instead of direct file writes to build user trust.
- Route by broad task families before attempting learned routing.
Security considerations
- Never run user code directly on the host.
- Use sandboxed execution with timeouts, memory limits, filesystem limits, and network restrictions.
- Keep user secrets isolated from model prompts unless explicitly attached.
- Validate patches before applying and block path traversal writes (see the sketch after this list).
- Audit assistant file changes and execution requests.
- Rate-limit model calls and execution jobs to prevent abuse.
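Blocking traversal mostly means resolving every patch target against the workspace root before any write. A minimal Node sketch; the absolute-root convention is an assumption:

```ts
import { resolve, sep } from "node:path";

// Rejects any patch target that escapes the workspace root, whether
// via "..", an absolute path, or separator tricks.
function safeWorkspacePath(workspaceRoot: string, relativePath: string): string {
  const base = resolve(workspaceRoot);
  const target = resolve(base, relativePath);
  if (target !== base && !target.startsWith(base + sep)) {
    throw new Error(`blocked path traversal: ${relativePath}`);
  }
  return target;
}
```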
Scaling path
- Start with single-user workspaces and a small execution pool.
- Add durable workspace snapshots and file history.
- Add team collaboration, real-time presence, file locking, and shared sessions.
- Split execution workers from the web app and scale them independently.
- Add model-routing policies by plan, task, latency, and budget.
Observability
- Trace every assistant request through context assembly, provider call, diff generation, and apply step (sketched below).
- Track model latency, cost, edit acceptance rate, execution failures, and sandbox timeout rate.
- Store audit logs for file edits, deletes, execution jobs, and secret access.
- Expose per-workspace activity logs so users can understand what the assistant changed.
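Threading a single requestId through timed spans is enough to reconstruct one assistant request end to end. A sketch of the shape, not tied to any particular tracing backend; emitSpan is a hypothetical sink:

```ts
interface Span {
  requestId: string;
  step: "context-assembly" | "provider-call" | "diff-generation" | "apply";
  durationMs: number;
  ok: boolean;
}

// Hypothetical sink; in practice this would feed the tracing backend.
declare function emitSpan(span: Span): void;

// Wraps one pipeline step so every assistant request emits the same
// four spans, keyed by requestId for end-to-end reconstruction.
async function traced<T>(
  requestId: string,
  step: Span["step"],
  fn: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    const result = await fn();
    emitSpan({ requestId, step, durationMs: Date.now() - start, ok: true });
    return result;
  } catch (err) {
    emitSpan({ requestId, step, durationMs: Date.now() - start, ok: false });
    throw err;
  }
}
```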
Future features
- Team workspaces
- Real-time multiplayer editing
- Persistent dev containers
- Automated test runs after AI edits
- Assistant memory controls
- Workspace templates for common stacks