Could a careful server setup cut days off our deploy time? That question drove our team to update this guide to version 2.0 on July 11, 2025.
We describe how we rolled out more than 45 specialized MCP servers and why we install the Core MCP server first. This order ensures proper orchestration of every tool and agent in our environment.
We rely on the official Claude Code CLI and the client from claude.ai/code to run each command. Using the `-s project` flag creates a `.mcp.json` file in the project root so we can track changes in version control.
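For illustration, a project-scoped registration looks like this (the server name and launch command here are hypothetical):

```bash
# Hypothetical example: -s project writes the server entry into .mcp.json
# at the repo root instead of the user-level configuration.
claude mcp add example-server -s project -- uvx example-mcp-server@latest
```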
Every step focuses on security for our primary account and on reproducible setup, so team members follow the same code and deployment routines. The result has been faster fixes, clearer audits, and smoother server operations.
Key Takeaways
- We upgraded to version 2.0 to standardize more than 45 MCP servers for Claude Code.
- Installing the Core MCP server first gives stable orchestration across servers.
- The Claude Code CLI and client streamline commands and automation.
- Generating a `.mcp.json` file keeps project configuration under version control.
- We keep every step aligned to our account security rules and environment policies.
Understanding the Power of AWS MCP with Claude
We turned the Model Context Protocol into a bridge that gives our models direct reach into cloud resources.
The protocol lets the model access live services, search documentation, and fetch relevant context without manual steps. This direct integration reduces time spent on data retrieval and speeds our development cycles.
Authentication and control are core to our design. We rely on SigV4 (AWS Signature Version 4) request signing, so every API request is signed and authorized. That keeps agents confined to the least-privilege IAM policies we define.
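To make the least-privilege idea concrete, here is a hedged sketch of a narrow read-only policy; the policy name and actions are illustrative, not our production policy:

```bash
# Illustrative least-privilege grant: read-only access to two monitoring
# actions. Scope Resource down further in real deployments.
aws iam create-policy \
  --policy-name mcp-readonly-example \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["cloudwatch:GetMetricData", "logs:GetLogEvents"],
      "Resource": "*"
    }]
  }'
```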
We also use the Claude Code client to issue complex commands and manage servers. The Anthropic-operated stack delivers the latest features faster than legacy Amazon Bedrock setups, so we get same-day updates and some beta access.
- The Model Context Protocol gives direct access to our infrastructure and documentation.
- Integrated tools and clients reduce manual work and improve control.
- Clear IAM policy structures protect resources while enabling automation.
For scaling guidance and project-level management examples, see the practical guide on scaling project management software that informed our workflows.
Essential Prerequisites for a Successful Integration
Successful setup depends on having the correct client tools, environment variables, and verified account access.
System Requirements
We require the latest Claude Code client installed on every developer workstation. This tool gives us the commands and prompts needed to drive the integration.
Team members must understand basic cloud services and the Model Context Protocol so they can troubleshoot data and access issues.
AWS CLI Configuration
Credentials and environment variables are central to our authentication flow. Set `AWS_PROFILE` and `AWS_REGION` so each MCP server talks to the right account.
We test the CLI with `aws sts get-caller-identity` to confirm credentials and IAM policy scope.
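A minimal verification pass looks like this; the profile and region values are illustrative:

```bash
# Point the CLI and MCP servers at the intended account and region.
export AWS_PROFILE=dev
export AWS_REGION=us-east-1

# Confirm which principal the credentials resolve to before going further.
aws sts get-caller-identity
```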
- Centralized config file keeps settings consistent across servers.
- Every user gets read access via strict IAM policy documents.
- Use the Claude Code client to run an initial API request and verify connectivity.
| Prerequisite | Purpose | Verification |
|---|---|---|
| Claude Code client | Command and client features | Run basic commands and sample prompts |
| CLI credentials | Authentication and signed requests | `aws sts get-caller-identity` succeeds |
| Environment vars | Account and region routing | Correct `AWS_PROFILE` and `AWS_REGION` values |
| Central config file | Consistent server configuration | Version-controlled file in repo |
Configuring Your Environment and Authentication
Our team enforces a small set of shell exports that make each server session repeatable and secure.
We persist profile variables by exporting `AWS_PROFILE` and `AWS_REGION` in each developer's shell profile. This ensures every session uses the right credentials and account routing.
To keep logs tidy during active development, we set `MCP_LOG_LEVEL` to `ERROR`. That prevents noisy messages from masking real problems.
Authentication relies on the CLI to sign requests with SigV4 automatically. We then use the Claude Code client to verify connectivity and to run a simple command that confirms API access.
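A sketch of the exports we persist, assuming bash (adjust the profile, region, and shell profile path for your setup):

```bash
# Append the exports to the shell profile so every new session inherits them.
cat >> ~/.bashrc <<'EOF'
export AWS_PROFILE=dev
export AWS_REGION=us-east-1
export MCP_LOG_LEVEL=ERROR
EOF
source ~/.bashrc
```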
- Persist profile vars in shell profiles for consistent usage.
- Set log level to ERROR to reduce log noise.
- Verify client and credentials before any server setup.
| Item | Purpose | How to Verify |
|---|---|---|
| Profile exports | Persistent account and region | Open a new shell and run CLI identity check |
| Log level | Reduce dev noise | Confirm `MCP_LOG_LEVEL` is set to `ERROR` in the environment |
| Client check | Tool to execute signed API requests | Run a sample command from the client and inspect response |
| Config file | Define server scope and control | Store file in repo and review IAM roles |
We standardize a single command to deploy each MCP server and keep a secure configuration file in version control. This practice reduces human error and ensures IAM roles are assumed correctly.
Installing the Core Infrastructure Server
We start by deploying the Core infrastructure node that anchors our entire server fleet.
The Core MCP Server is always installed first. It orchestrates every other server and provides centralized control. We run the install using the standard command to register the component and create the project file.
Use this exact command to add the core: `claude mcp add awslabs.core-mcp-server`. This ensures the server is registered in the project file and ready for immediate use.
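Depending on your client version, the registration may also need a scope flag and an explicit launch command after the server name. A fuller form might look like this; the uvx invocation is our assumption based on typical awslabs packaging, so verify the package name for your environment:

```bash
# Project-scoped registration with an explicit launch command.
claude mcp add awslabs.core-mcp-server -s project -- uvx awslabs.core-mcp-server@latest
```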
Verifying Server Health
After installation we verify health immediately. The Core server ships with built-in monitoring that reports status, credential checks, and IAM validation.
- Confirm the service has account access and correct authentication.
- Inspect the generated file and environment values for accuracy.
- Use the CLI client to run a simple status command and collect data for troubleshooting.
| Step | Command | Verification |
|---|---|---|
| Install core | `claude mcp add awslabs.core-mcp-server` | Project file created and service registered |
| Health check | `core status --check` | All services report OK, IAM checks pass |
| Credentials audit | `client auth verify` | Credentials valid and access confirmed |
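The status commands in the table depend on your client version; as a generic fallback, a quick verification pass from the shell looks like this:

```bash
# Confirm the core entry is registered with the client.
claude mcp list

# Re-confirm that signed requests resolve to the expected principal.
aws sts get-caller-identity
```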
We documented the process as an example so engineers can replicate the setup in local environments. For a related configuration walk-through, see our short guide with a donation setup example.
Streamlining Infrastructure as Code Workflows
We built a set of servers that let developers run CDK, Terraform, and CloudFormation tasks from the same interface.
CDK Patterns
Our CDK server enforces standards early in development. It runs automated scans such as CDK Nag to catch risky constructs before they reach production.
Terraform Automation
The Terraform server integrates Checkov for policy checks. That tool flags misconfigurations and produces actionable reports so the team fixes issues during code review.
CloudFormation Management
We manage stack create, update, and delete operations directly through the server. Drift detection and stack templates stay under version control in our project file.
- Security scans run as part of every push to reduce manual reviews.
- The client executes consistent commands across environments and accounts.
- Every change is recorded in the project file for traceable development history.
| Tool | Feature | Benefit |
|---|---|---|
| CDK server | CDK Nag scanning | Enforces best practices in code |
| Terraform server | Checkov integration | Automates security compliance |
| CloudFormation server | Drift detection | Prevents unmanaged resource changes |
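To register the three IaC servers in one pass, a hedged sketch follows; the package names are assumptions based on the awslabs.* convention used above, so confirm the exact identifiers:

```bash
# Register each IaC server at project scope so the entries land in .mcp.json.
claude mcp add awslabs.cdk-mcp-server -s project -- uvx awslabs.cdk-mcp-server@latest
claude mcp add awslabs.terraform-mcp-server -s project -- uvx awslabs.terraform-mcp-server@latest
claude mcp add awslabs.cfn-mcp-server -s project -- uvx awslabs.cfn-mcp-server@latest
```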
Leveraging AI and Machine Learning Capabilities

We integrated several intelligent servers to accelerate tasks like visual generation and enterprise search.
Our stack now includes an Amazon Nova Canvas MCP server that performs text-to-image generation. That tool lets teams create visual assets from simple prompts. We call it from pipelines to save time on mockups and docs.
We also use Bedrock Knowledge Base Retrieval to query internal data in plain language. This feature cuts lookup time and surfaces context from our documentation and databases.
For enterprise search, Amazon Q and Kendra servers index structured and unstructured sources. We make API calls from our client and log every request to track usage and costs.
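A sketch of how we wire these in; the server package names are assumptions based on the awslabs naming convention, so confirm them before use:

```bash
# Image generation and knowledge retrieval servers, registered at project scope.
claude mcp add awslabs.nova-canvas-mcp-server -s project -- uvx awslabs.nova-canvas-mcp-server@latest
claude mcp add awslabs.bedrock-kb-retrieval-mcp-server -s project -- uvx awslabs.bedrock-kb-retrieval-mcp-server@latest
```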
- Faster asset creation: text-to-image server for product and marketing visuals.
- Smarter search: knowledge retrieval and index-based queries across data.
- Automated workflows: API calls and code commands that run at scale.
| Capability | Server | Benefit |
|---|---|---|
| Image generation | Amazon Nova Canvas MCP | Rapid, high-quality visual assets from prompts |
| Knowledge retrieval | Bedrock Knowledge Base Retrieval | Quick answers from internal documents and context |
| Enterprise search | Amazon Q / Kendra | Indexing across datasets for fast discovery |
We document examples and usage patterns so every user can adopt these tools. For a deeper operational view, see our note on running Claude cowork.
Enhancing Security and Identity Management
We shifted key identity functions into managed servers to enforce consistent access rules.
Our IAM server handles identity and permission tasks so we can enforce strict policies across accounts. We run most production environments in a read-only mode to avoid accidental changes.
Secrets Manager integration keeps credentials out of code. Automated workflows fetch secrets just-in-time so engineers never paste keys into repos.
We also use Systems Manager to centralize parameter configuration. That makes environment settings secure and easy to audit.
Secrets Manager Integration
Secrets are retrieved by our automation only when a valid request is made. Each API call is governed by IAM policies and recorded for audits.
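A just-in-time retrieval sketch with the standard CLI; the secret name is hypothetical:

```bash
# Fetch a credential at run time instead of committing it to the repo.
DB_PASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id prod/db/password \
  --query SecretString \
  --output text)
```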
- We enforce least-privilege on every account role and review policy changes regularly.
- Read-only production modes prevent accidental configuration drift.
- Our client runs management commands and logs each command for traceability.
| Server | Primary Role | Controls |
|---|---|---|
| IAM MCP Server | Identity and policy enforcement | Least-privilege, audit logs |
| Secrets Manager Server | Secure credential storage | Just-in-time retrieval, rotation |
| Systems Manager Server | Parameter and config management | Centralized config, versioned values |
| Security Operations | Incident response and audits | Access reviews, automated alerts |
Monitoring and Operational Efficiency

Real-time log and metric queries let us solve incidents faster and with less guesswork.
We maintain high operational efficiency by routing logs and metrics through the CloudWatch Logs and Metrics MCP servers. Natural-language prompts let engineers query raw data quickly.
We also use the AWS Health MCP server to track service events and alerts. That feed gives timely context for troubleshooting and reduces mean time to resolution.
Prometheus integration monitors containerized apps and custom metrics in production. The combined view helps us spot bottlenecks across every server and environment.
Our client of choice, the Claude Code client, manages these monitoring tools. It runs commands, issues API calls, and records each request for usage tracking and audit.
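For example, an engineer can run a one-off natural-language query through the client in print mode; the log group and wording are illustrative:

```bash
# Ask the monitoring servers a plain-language question; -p prints the answer
# and exits instead of opening an interactive session.
claude -p "Show ERROR entries from /aws/lambda/checkout over the last hour"
```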
- Real-time analysis of logs and metrics saves time during incidents.
- Service event feeds improve situational awareness and response.
- Container metrics via Prometheus ensure full observability.
- Automated queries and tracked API calls help optimize costs and usage.
| Monitoring Component | Primary Role | Key Benefit |
|---|---|---|
| CloudWatch Logs & Metrics | Log analysis and metric retrieval | Natural language queries for fast troubleshooting |
| AWS Health Server | Service events and health status | Early warning of infrastructure issues |
| Prometheus Server | Container and custom metrics | Deep visibility into production systems |
| Monitoring Client | Command execution and audit logging | Consistent workflows and tracked API calls |
We document configuration and examples so every user can reproduce the setup. We also review alerts and thresholds regularly to keep our monitoring accurate and actionable.
Adopting Best Practices for Production Environments
We lock down production servers in read-only mode to stop accidental changes that could affect critical data.
Read-Only Mode Benefits
Read-only enforcement prevents unintended writes and preserves running services. It reduces blast radius during deployments and audit windows.
We only grant write permissions for targeted, automated tasks. Each request is reviewed and logged to maintain strict control.
Project-Level Configuration
Our team uses the `-s project` flag so the `.mcp.json` file lives in version control. That file standardizes configuration across development and production.
We follow proven patterns: pairing the Core MCP server with the IAM and Cost Analysis servers for infrastructure tasks. This combination gives clear policy, cost visibility, and control.
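A sketch of the pattern in practice, with the `.mcp.json` change reviewed like any other commit; the cost-analysis package name is an assumption:

```bash
# Add a server at project scope, then commit the generated config for review.
claude mcp add awslabs.cost-analysis-mcp-server -s project -- uvx awslabs.cost-analysis-mcp-server@latest
git add .mcp.json
git commit -m "Register cost analysis MCP server at project scope"
```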
- Centralized file for consistent configuration and faster audits.
- Every production request goes through the Claude Code client for authenticated command execution.
- Regular usage reviews ensure policies and patterns stay current.
| Best Practice | Purpose | How we verify |
|---|---|---|
| Read-only production | Protect critical data | Audit logs and access reviews |
| Project file (`.mcp.json`) | Consistent configuration | Version control diffs and code review |
| Server pattern | Secure management | Test deployments and policy checks |
For a practical example of how we manage project-level settings, see our project configuration guide.
Elevating Your Cloud Operations Strategy
We raised our operational baseline by making automation and repeatability the default.
We elevated our cloud operations strategy by integrating MCP servers into a single, predictable workflow. This gave us cleaner data and faster decision cycles.
Standardizing command execution and account management cut manual steps and reduced errors. Every team member now runs the same command patterns against the same project file.
Our reliance on these servers has made infrastructure work feel professional and consistent. We keep exploring new uses so our cloud strategy stays modern and efficient.