Get Started with Oh My OpenCode and Claude Together



Disclaimer

As an affiliate, we may earn a commission from qualifying purchases. We get commissions for purchases made through links on this website from Amazon and other third parties.

Can a set of collaborative agents really speed up our coding workflows without breaking quality?

We trace how community projects led by developers like Yeongyu Kim shaped multi-agent orchestration before platforms added formal support. Our approach reviews the evolution from early tools to the Agent Teams research preview Anthropic released in February 2026.

In this guide we map the shift after January 2026, when subscription OAuth tokens were restricted. We explain practical setup steps, configuration tips, and how to pick the right model and provider for daily development tasks.

We focus on clear workflows to manage codebase, files, and testing. Our goal is to help teams use agents and local models in a way that preserves quality while saving time.

Key Takeaways

  • Community-driven projects seeded multi-agent ideas before official features arrived.
  • Agent Teams offer coordinated instances for complex project tasks.
  • Configuration and environment setup are key to stable coding sessions.
  • Choose models and providers to balance cost, speed, and quality.
  • Practical workflows reduce mistakes when integrating LLM tools into the terminal and CI.

Getting Started with Oh My OpenCode and Claude

We begin by mapping how early community efforts redesigned agent workflows for real-world coding tasks.

Our team reviews the practical fallout from a major policy change on January 9, 2026, when Anthropic restricted Pro and Max subscription OAuth tokens to official interfaces only.

That shift hit projects that relied on community tools. Developer Yeongyu Kim had spent roughly $24,000 on tokens while researching multi-agent orchestration. His work showed how teams used agents and local models to speed testing and file changes.

What this means for your setup: many plugins that bypassed limits stopped working. We explain how to migrate to tools that use native hooks and how to protect your codebase and sessions from sudden interruptions.

  • Review subscription and API access before deploying agents.
  • Favor providers and models that offer official support.
  • Keep a local fallback for critical tasks and testing.

| Risk | Cause | Mitigation |
| --- | --- | --- |
| Service interruption | Third-party OAuth blocked | Use native subscription hooks and local models |
| Unexpected costs | Untracked token use | Set limits and monitor API usage |
| Compatibility breaks | Community plugins lag | Pin versions and test in a staging environment |

Essential Environment Setup

Before you open the desktop app, verify the CLI so plugins can detect your environment during project initialization. Installing the command-line interface first avoids common verification failures and speeds up setup.

Installing OpenCode

We start by installing the OpenCode CLI. This command-line tool is mandatory even if you prefer the graphical desktop app.

Install the CLI, then confirm it appears in your PATH. Plugins check for that signal when they initialize a project. If the CLI is missing, plugin checks often fail and halt automation of tasks.
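A quick way to run that check in a POSIX shell (the binary name `opencode` follows the article; adjust if your install differs):

```shell
# Check that the opencode binary is resolvable from PATH, the same
# signal plugins look for during project initialization.
if command -v opencode >/dev/null 2>&1; then
  echo "opencode CLI detected at $(command -v opencode)"
else
  echo "opencode CLI missing: install it before enabling plugins"
fi
```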

Desktop App Advantages

The desktop app gives an intuitive window for managing complex projects and files. It simplifies workspace selection so all workspace features initialize for your session.

On Windows 11 the desktop app defaults to PowerShell. We recommend setting the SHELL environment variable to cmd.exe or WSL when needed. That avoids character-encoding issues and prevents shell command failures in non-English locales.
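For example, from WSL you might set the variable before launching the app; the path below is illustrative, and which variable the app honors should be confirmed in its settings:

```shell
# Override the shell the embedded terminal uses; this sidesteps
# PowerShell encoding issues in non-English locales.
export SHELL=/usr/bin/bash   # illustrative path; use cmd.exe or your WSL shell
echo "embedded terminal shell: $SHELL"
```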

  • Tip: Select your project directory inside the app to ensure plugins find the codebase and configuration files.
  • Verify terminal: Confirm command execution and encoding before running tests or agent workflows.

Our approach focuses on a fast, reliable environment so models, tools, and agents run predictably and preserve session quality.

Configuring Terminal and Model Providers

We tune the terminal and provider list so your agents run predictably and your coding sessions keep high quality.

Managing Model Providers

OpenCode supports over 75 providers through the Models.dev integration. That gives us flexibility to pick models for different tasks and contexts.

Access the Providers settings window from the app window or the command line. Click “Show more providers” to load extended listings and metadata like context size and pricing.

Provider config files live at ~/.local/share/opencode/auth.json. We locate provider IDs and API keys there when we add agents or multi-model plugins.
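To see which provider IDs are configured without exposing secrets, a rough sketch follows; the grep pattern assumes a JSON object keyed by provider ID, and the real auth.json layout may differ:

```shell
# Print candidate provider IDs from auth.json, never the key values.
AUTH="$HOME/.local/share/opencode/auth.json"
if [ -f "$AUTH" ]; then
  grep -oE '"[A-Za-z0-9._-]+"[[:space:]]*:' "$AUTH" | head -n 20
else
  echo "auth.json not found; run 'opencode auth login' first"
fi
```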

  • Use the official provider list to auto-populate context and pricing data.
  • If you add a custom endpoint, supply the model metadata yourself; missing metadata can break automatic context compression and other features.
  • Test connectivity after adding a provider so your chosen LLM is ready for high-stakes code tasks.

| Configuration | Pros | Risks |
| --- | --- | --- |
| Official provider list | Accurate metadata, pricing, context window | Fewer custom models |
| Manual custom endpoint | Supports niche open-source models | Missing metadata, broken features |
| Local fallback | Resilient sessions during subscription outages | May need extra setup |

Keep API keys secure and limit file access in your directory. Our approach favors official providers first, then manual endpoints only when a model fills a project need.
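One concrete step is locking the auth file down to your own user with standard POSIX permissions (assuming the file exists at the path given above):

```shell
# Owner read/write only; other accounts cannot read stored API keys.
chmod 600 "$HOME/.local/share/opencode/auth.json"
ls -l "$HOME/.local/share/opencode/auth.json"
```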

Leveraging Workspaces for Parallel Development

We show how workspaces unlock parallel development so teams keep the main branch stable. The desktop feature creates isolated environments that manage branches and code directories automatically.

Enable a workspace by right-clicking the project icon in the top-left corner and selecting “Enable Workspace.” Once active, the app spawns a distinct workspace for each task and connects an agent to the context.

Workspaces let us handle multiple requirements at once. They clean up branches and files after you submit a pull request and close the workspace. This keeps the primary branch clean and reduces merge friction.

  • Use git worktrees to switch contexts without new repos.
  • Organize the conversation list inside each workspace to track milestones.
  • Let the AI work on background coding tasks while we focus on reviews and tests.

| Workflow | Pros | When to use |
| --- | --- | --- |
| Workspace | Isolated branches, auto-clean | Parallel features, short-lived tasks |
| Single-branch | Simple history | Small bugfixes |
| Worktree | Fast context switch | Large features, complex models |
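The worktree workflow maps to plain git commands; the branch and directory names below are illustrative:

```shell
# Add a second working directory on a new branch, work there in
# parallel, then clean up once the PR is merged.
git worktree add ../feature-auth -b feature/auth
git worktree list
git worktree remove ../feature-auth
```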

Our approach boosts productivity for coding projects. By combining workspaces, agents, and the right toolset, we keep sessions predictable and focused each month.

Mastering Agent Selection and Planning

Choosing the right agent is the first step to reliable builds. We start by defining requirements, so every agent understands scope and constraints before code is touched.

Primary Agents

The Plan agent asks clarifying questions and produces an execution plan. This prevents vague requests that lead to rework.

The Build agent gets full tool access and executes the plan. It should only run after the plan is reviewed.

The Plan Agent Approach

Always begin with the Plan agent. We save the plan as a Markdown file and reload it in a fresh session. That reduces positional bias and keeps the LLM focused on the top priorities.

  • Use context compression to keep key requirements visible in the context window.
  • Create a new session after each milestone to avoid context rot.
  • Review the execution plan before handing it to the Build agent for implementation.

| Stage | Agent | Outcome |
| --- | --- | --- |
| Requirement | Plan agent | Clarifying questions and Markdown execution plan |
| Implementation | Build agent | Code changes using tools and model access |
| Verification | Review + new session | Reduced hallucination, aligned project architecture |

Optimizing Context with Rules Files


We create a compact AGENTS.md to steer agent decisions and lock project standards. This file becomes long-term memory for the repo after /init. It tells the model exactly which patterns and constraints to follow during coding tasks.

Explicit syntax rules, for example requiring str | None, shift the LLM's probability distribution. That reduces hallucinations and makes generated code more consistent.

AGENTS.md also manages dependencies. We can specify project-level tools such as uv sync so agents pick compatible packages and avoid mismatched installs.

Use clear sections: architecture, style, dependency rules, approval workflow, and language preferences. Each short directive narrows choices and saves tokens by preventing repeated repo scans.

  • Harness: Provide direct instructions the agent must follow.
  • Lock architecture: Prevents the model from reanalyzing structure each session.
  • Enforce style: Force patterns like str | None and preferred imports.

| Rule | Purpose | Effect |
| --- | --- | --- |
| Require `str \| None` | Type consistency | Fewer signature errors in generated code |
| Specify `uv sync` | Dependency management | Predictable installs and CI parity |
| Approval workflow | QA gates | Agent outputs require explicit sign-off before merge |
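Pulling these rules together, a minimal AGENTS.md might look like the sketch below; the section names and directives are illustrative, not a canonical template:

```markdown
## Architecture
- Service code lives in `src/app/`; do not reorganize top-level packages.

## Style
- Use `str | None`, never `Optional[str]`.
- Prefer absolute imports.

## Dependencies
- Install and sync with `uv sync`; do not call pip directly.

## Approval workflow
- Agent-generated changes require explicit human sign-off before merge.

## Language
- Write comments and commit messages in English.
```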

Improving Skill Loading Success Rates

To boost Skill reliability, we shift the model’s default reasoning toward retrieval and verification.

Prioritizing Retrieval-Led Reasoning

We add a single directive to AGENTS.md: Prioritize retrieval-led reasoning over pretrained-knowledge-led reasoning. This nudges the agent to search repo files and external sources before using internal assumptions.

That change helps the model use tools like glob and grep to verify structure. It makes the agent run a quick scan, confirm file layout, then apply a Skill.

  • Explain why many models skip Skills: they default to cached knowledge rather than executing repo tools.
  • Add the one-line AGENTS.md instruction to force retrieval-first behavior.
  • Write clear Skill front matter so the model knows when each tool or Skill applies.

| Change | Result | Why it matters |
| --- | --- | --- |
| Retrieval-led directive | Skill load rate 90% | Models use live context and tools to verify code |
| No directive | Skill load rate 60% | Model relies on pretrained knowledge and misapplies Skills |
| Clear Skill front matter | Faster tool selection | Agent selects the right tool for the task |

We link to practical planning resources like online planning tools to help teams design Skill front matter and testing flows.

Implementing Multi-Agent Orchestration


Here we explain how to add a slim multi-agent plugin and tune a Council of models for architecture work. Our focus is practical: install, configure, and verify a resilient orchestration layer that helps solve complex design problems.

Installing the Plugin

Install the Oh-My-OpenCode-Slim plugin to get a lightweight multi-agent layer and prebundled Skills.

Run the bunx command to fetch the latest release and include required Skills. Then run opencode auth login so agents can access configured model providers and tools.

Configuring the Council

The Council agent uses ensemble learning to synthesize answers from multiple models. We select a mix of high-capacity and efficient models to balance cost and accuracy.

Tip: pick diverse models and lock the context directives so the council aggregates varied perspectives rather than repeating the same bias.

Testing Connectivity

Verify members by selecting the Council agent and typing “test Council connectivity” in the chat. This reveals unreachable providers and auth issues not always visible in the UI.

Send the same bug or architecture prompt to multiple agents to see ensemble agreement. That technique helps track down tricky bugs and often matches higher-end single-model performance.

| Step | Action | Expected result |
| --- | --- | --- |
| Install | bunx install of latest plugin | Plugin and Skills available |
| Auth | opencode auth login | Agents can call model endpoints |
| Verify | "test Council connectivity" | All members reachable |

When configured, our Council gives us confidence on architecture choices and speeds routine coding tasks. We recommend rerunning connectivity checks after provider changes and before major deployments.

Adopting Spec-Driven Development Workflows

Before writing a single line of code, we formalize requirements through the OpenSpec phases. Spec-Driven Development (SDD) keeps teams from asking an agent to build complex features from a one-sentence prompt.

We use the opsx-explorer command to guide requirement clarifications. The explorer runs a Skill that surfaces trade-offs and generates charts to compare options. That lets us pick an approach with evidence rather than guesswork.

Configure OpenSpec by running openspec config profile and choosing the Workflows option in the interface. That links the toolchain to the three structured phases:

  • opsx-propose: create a scoped proposal and acceptance criteria.
  • opsx-apply: implement the approved plan using agents and tests.
  • opsx-archive: store the spec, decisions, and charts for future audits.

We also use visual comparisons to weigh pros and cons. Charts make trade-offs clear and speed decision-making. For a practical read on planning and tools, see our hands-on review and our guide on building online tools.

| Phase | Goal | Key outcome |
| --- | --- | --- |
| opsx-propose | Clarify scope and acceptance | Concrete proposal and test cases |
| opsx-apply | Implement and validate | Agent-driven PRs and CI checks |
| opsx-archive | Document decisions | Reusable spec and audit trail |

Future-Proofing Your AI Coding Environment

Future-proofing means choosing durable tools and keeping experiments isolated. We balance stable, officially supported features and nimble community tool experiments so critical work stays reliable.

We encourage ongoing tests of different models and providers. Small experiments help us refine the agent setup and tune how tools access project code. Treat the environment as a living system that needs regular updates and checks.

Keep a mixed strategy: rely on official features for production tasks and use community plugins for innovation. For guidance on building resilient toolchains and custom online tools, see our short guide on how to create an online tool. We hope this helps you build a robust, efficient Claude coding workflow that scales as agents and tools evolve.
