This is page 1 of 2. Use http://codebase.md/disler/aider-mcp-server?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .claude
│   └── commands
│       ├── context_prime_w_aider.md
│       ├── context_prime.md
│       ├── jprompt_ultra_diff_review.md
│       └── multi_aider_sub_agent.md
├── .env.sample
├── .gitignore
├── .mcp.json
├── .python-version
├── ai_docs
│   ├── just-prompt-example-mcp-server.xml
│   └── programmable-aider-documentation.md
├── pyproject.toml
├── README.md
├── specs
│   └── init-aider-mcp-exp.md
├── src
│   └── aider_mcp_server
│       ├── __init__.py
│       ├── __main__.py
│       ├── atoms
│       │   ├── __init__.py
│       │   ├── data_types.py
│       │   ├── logging.py
│       │   ├── tools
│       │   │   ├── __init__.py
│       │   │   ├── aider_ai_code.py
│       │   │   └── aider_list_models.py
│       │   └── utils.py
│       ├── server.py
│       └── tests
│           ├── __init__.py
│           └── atoms
│               ├── __init__.py
│               ├── test_logging.py
│               └── tools
│                   ├── __init__.py
│                   ├── test_aider_ai_code.py
│                   └── test_aider_list_models.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
1 | 3.12
2 | 
```

--------------------------------------------------------------------------------
/.mcp.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "mcpServers": {
 3 |     "aider-mcp-server": {
 4 |       "type": "stdio",
 5 |       "command": "uv",
 6 |       "args": [
 7 |         "--directory",
 8 |         ".",
 9 |         "run",
10 |         "aider-mcp-server",
11 |         "--editor-model",
12 |         "gemini/gemini-2.5-pro-preview-03-25",
13 |         "--current-working-dir",
14 |         "."
15 |       ],
16 |       "env": {}
17 |     }
18 |   }
19 | }
20 | 
```

--------------------------------------------------------------------------------
/.env.sample:
--------------------------------------------------------------------------------

```
 1 | # Environment Variables for just-prompt
 2 | 
 3 | # OpenAI API Key
 4 | OPENAI_API_KEY=your_openai_api_key_here
 5 | 
 6 | # Anthropic API Key
 7 | ANTHROPIC_API_KEY=your_anthropic_api_key_here
 8 | 
 9 | # Gemini API Key
10 | GEMINI_API_KEY=your_gemini_api_key_here
11 | 
12 | # Groq API Key
13 | GROQ_API_KEY=your_groq_api_key_here
14 | 
15 | # DeepSeek API Key
16 | DEEPSEEK_API_KEY=your_deepseek_api_key_here
17 | 
18 | # OpenRouter API Key
19 | OPENROUTER_API_KEY=your_openrouter_api_key_here
20 | 
21 | # Ollama endpoint (if not default)
22 | OLLAMA_HOST=http://localhost:11434
23 | 
24 | FIREWORKS_API_KEY=
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Python-generated files
 2 | __pycache__/
 3 | *.py[oc]
 4 | build/
 5 | dist/
 6 | wheels/
 7 | *.egg-info
 8 | 
 9 | # Virtual environments
10 | .venv
11 | 
12 | .env
13 | 
14 | # Byte-compiled / optimized / DLL files
15 | __pycache__/
16 | *.py[cod]
17 | *$py.class
18 | 
19 | # Distribution / packaging
20 | dist/
21 | build/
22 | *.egg-info/
23 | *.egg
24 | 
25 | # Unit test / coverage reports
26 | htmlcov/
27 | .tox/
28 | .nox/
29 | .coverage
30 | .coverage.*
31 | .cache
32 | nosetests.xml
33 | coverage.xml
34 | *.cover
35 | .hypothesis/
36 | .pytest_cache/
37 | 
38 | # Jupyter Notebook
39 | .ipynb_checkpoints
40 | 
41 | # Environments
42 | .env
43 | .venv
44 | env/
45 | venv/
46 | ENV/
47 | env.bak/
48 | venv.bak/
49 | 
50 | # mypy
51 | .mypy_cache/
52 | .dmypy.json
53 | dmypy.json
54 | 
55 | # IDE specific files
56 | .idea/
57 | .vscode/
58 | *.swp
59 | *.swo
60 | .DS_Store
61 | 
62 | 
63 | prompts/responses
64 | .aider*
65 | 
66 | focus_output/
67 | 
68 | # Log files
69 | logs/
70 | *.log
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Aider MCP Server - Experimental
  2 | > Model Context Protocol server for offloading AI coding work to Aider, enhancing development efficiency and flexibility.
  3 | 
  4 | ## Overview
  5 | 
  6 | This server allows Claude Code to offload AI coding tasks to Aider, the best open source AI coding assistant. By delegating certain coding tasks to Aider, we can reduce costs, gain control over our coding model, and operate Claude Code as an orchestrator that reviews and revises code.
  7 | 
  8 | ## Setup
  9 | 
 10 | 0. Clone the repository:
 11 | 
 12 | ```bash
 13 | git clone https://github.com/disler/aider-mcp-server.git
 14 | ```
 15 | 
 16 | 1. Install dependencies:
 17 | 
 18 | ```bash
 19 | uv sync
 20 | ```
 21 | 
 22 | 2. Create your environment file:
 23 | 
 24 | ```bash
 25 | cp .env.sample .env
 26 | ```
 27 | 
 28 | 3. Configure your API keys in the `.env` file (or in the mcpServers `"env"` section) so the key for the model you want Aider to use is available:
 29 | 
 30 | ```
 31 | GEMINI_API_KEY=your_gemini_api_key_here
 32 | OPENAI_API_KEY=your_openai_api_key_here
 33 | ANTHROPIC_API_KEY=your_anthropic_api_key_here
 34 | ...see .env.sample for more
 35 | ```
 36 | 
 37 | 4. Copy the `.mcp.json` into the root of your project and fill it out: update `--directory` to point to this project's root directory and `--current-working-dir` to point to the root of your project.
 38 | 
 39 | ```json
 40 | {
 41 |   "mcpServers": {
 42 |     "aider-mcp-server": {
 43 |       "type": "stdio",
 44 |       "command": "uv",
 45 |       "args": [
 46 |         "--directory",
 47 |         "<path to this project>",
 48 |         "run",
 49 |         "aider-mcp-server",
 50 |         "--editor-model",
 51 |         "gpt-4o",
 52 |         "--current-working-dir",
 53 |         "<path to your project>"
 54 |       ],
 55 |       "env": {
 56 |         "GEMINI_API_KEY": "<your gemini api key>",
 57 |         "OPENAI_API_KEY": "<your openai api key>",
 58 |         "ANTHROPIC_API_KEY": "<your anthropic api key>",
 59 |         ...see .env.sample for more
 60 |       }
 61 |     }
 62 |   }
 63 | }
 64 | ```
 65 | 
 66 | ## Testing
 67 | > Tests run with gemini-2.5-pro-exp-03-25
 68 | 
 69 | To run all tests:
 70 | 
 71 | ```bash
 72 | uv run pytest
 73 | ```
 74 | 
 75 | To run specific tests:
 76 | 
 77 | ```bash
 78 | # Test listing models
 79 | uv run pytest src/aider_mcp_server/tests/atoms/tools/test_aider_list_models.py
 80 | 
 81 | # Test AI coding
 82 | uv run pytest src/aider_mcp_server/tests/atoms/tools/test_aider_ai_code.py
 83 | ```
 84 | 
 85 | Note: The AI coding tests require a valid API key for the Gemini model. Make sure to set it in your `.env` file before running the tests.
 86 | 
 87 | ## Add this MCP server to Claude Code
 88 | 
 89 | ### Add with `gemini-2.5-pro-exp-03-25`
 90 | 
 91 | ```bash
 92 | claude mcp add aider-mcp-server -s local \
 93 |   -- \
 94 |   uv --directory "<path to the aider mcp server project>" \
 95 |   run aider-mcp-server \
 96 |   --editor-model "gemini/gemini-2.5-pro-exp-03-25" \
 97 |   --current-working-dir "<path to your project>"
 98 | ```
 99 | 
100 | ### Add with `gemini-2.5-pro-preview-03-25`
101 | 
102 | ```bash
103 | claude mcp add aider-mcp-server -s local \
104 |   -- \
105 |   uv --directory "<path to the aider mcp server project>" \
106 |   run aider-mcp-server \
107 |   --editor-model "gemini/gemini-2.5-pro-preview-03-25" \
108 |   --current-working-dir "<path to your project>"
109 | ```
110 | 
111 | ### Add with `quasar-alpha`
112 | 
113 | ```bash
114 | claude mcp add aider-mcp-server -s local \
115 |   -- \
116 |   uv --directory "<path to the aider mcp server project>" \
117 |   run aider-mcp-server \
118 |   --editor-model "openrouter/openrouter/quasar-alpha" \
119 |   --current-working-dir "<path to your project>"
120 | ```
121 | 
122 | ### Add with `llama4-maverick-instruct-basic`
123 | 
124 | ```bash
125 | claude mcp add aider-mcp-server -s local \
126 |   -- \
127 |   uv --directory "<path to the aider mcp server project>" \
128 |   run aider-mcp-server \
129 |   --editor-model "fireworks_ai/accounts/fireworks/models/llama4-maverick-instruct-basic" \
130 |   --current-working-dir "<path to your project>"
131 | ```
132 | 
133 | ## Usage
134 | 
135 | This MCP server provides the following functionalities:
136 | 
137 | 1. **Offload AI coding tasks to Aider**:
138 |    - Takes a prompt and file paths
139 |    - Uses Aider to implement the requested changes
140 |    - Returns success or failure
141 | 
142 | 2. **List available models**:
143 |    - Provides a list of models matching a substring
144 |    - Useful for discovering supported models
145 | 
146 | 
147 | ## Available Tools
148 | 
149 | This MCP server exposes the following tools:
150 | 
151 | ### 1. `aider_ai_code`
152 | 
153 | This tool allows you to run Aider to perform AI coding tasks based on a provided prompt and specified files.
154 | 
155 | **Parameters:**
156 | 
157 | - `ai_coding_prompt` (string, required): The natural language instruction for the AI coding task.
158 | - `relative_editable_files` (list of strings, required): A list of file paths (relative to the `current_working_dir`) that Aider is allowed to modify. If a file doesn't exist, it will be created.
159 | - `relative_readonly_files` (list of strings, optional): A list of file paths (relative to the `current_working_dir`) that Aider can read for context but cannot modify. Defaults to an empty list `[]`.
160 | - `model` (string, optional): The primary AI model Aider should use for generating code. Defaults to `"gemini/gemini-2.5-pro-exp-03-25"`. You can use the `list_models` tool to find other available models.
161 | - `editor_model` (string, optional): The AI model Aider should use for editing/refining code, particularly when using architect mode. If not provided, the primary `model` might be used depending on Aider's internal logic. Defaults to `None`.
162 | 
163 | **Example Usage (within an MCP request):**
164 | 
165 | Claude Code Prompt:
166 | ```
167 | Use the Aider AI Code tool to: Refactor the calculate_sum function in calculator.py to handle potential TypeError exceptions.
168 | ```
169 | 
170 | Resulting tool call:
171 | ```json
172 | {
173 |   "name": "aider_ai_code",
174 |   "parameters": {
175 |     "ai_coding_prompt": "Refactor the calculate_sum function in calculator.py to handle potential TypeError exceptions.",
176 |     "relative_editable_files": ["src/calculator.py"],
177 |     "relative_readonly_files": ["docs/requirements.txt"],
178 |     "model": "openai/gpt-4o"
179 |   }
180 | }
181 | ```
182 | 
183 | **Returns:**
184 | 
185 | - A simple dict: {success, diff}
186 |   - `success`: boolean - Whether the operation was successful.
187 |   - `diff`: string - The diff of the changes made to the file.
188 | 
189 | ### 2. `list_models`
190 | 
191 | This tool lists available AI models supported by Aider that match a given substring.
192 | 
193 | **Parameters:**
194 | 
195 | - `substring` (string, required): The substring to search for within the names of available models.
196 | 
197 | **Example Usage (within an MCP request):**
198 | 
199 | Claude Code Prompt:
200 | ```
201 | Use the Aider List Models tool to: List models that contain the substring "gemini".
202 | ```
203 | 
204 | Resulting tool call:
205 | ```json
206 | {
207 |   "name": "list_models",
208 |   "parameters": {
209 |     "substring": "gemini"
210 |   }
211 | }
212 | ```
213 | 
214 | **Returns:**
215 | 
216 | - A list of model name strings that match the provided substring. Example: `["gemini/gemini-1.5-flash", "gemini/gemini-1.5-pro", "gemini/gemini-pro"]`
217 | 
218 | ## Architecture
219 | 
220 | The server is structured as follows:
221 | 
222 | - **Server layer**: Handles MCP protocol communication
223 | - **Atoms layer**: Individual, pure functional components
224 |   - **Tools**: Specific capabilities (AI coding, listing models)
225 |   - **Utils**: Constants and helper functions
226 |   - **Data Types**: Type definitions using Pydantic
227 | 
228 | All components are thoroughly tested for reliability.
229 | 
230 | ## Codebase Structure
231 | 
232 | The project is organized into the following main directories and files:
233 | 
234 | ```
235 | .
236 | ├── ai_docs                   # Documentation related to AI models and examples
237 | │   ├── just-prompt-example-mcp-server.xml
238 | │   └── programmable-aider-documentation.md
239 | ├── pyproject.toml            # Project metadata and dependencies
240 | ├── README.md                 # This file
241 | ├── specs                     # Specification documents
242 | │   └── init-aider-mcp-exp.md
243 | ├── src                       # Source code directory
244 | │   └── aider_mcp_server      # Main package for the server
245 | │       ├── __init__.py       # Package initializer
246 | │       ├── __main__.py       # Main entry point for the server executable
247 | │       ├── atoms             # Core, reusable components (pure functions)
248 | │       │   ├── __init__.py
249 | │       │   ├── data_types.py # Pydantic models for data structures
250 | │       │   ├── logging.py    # Custom logging setup
251 | │       │   ├── tools         # Individual tool implementations
252 | │       │   │   ├── __init__.py
253 | │       │   │   ├── aider_ai_code.py # Logic for the aider_ai_code tool
254 | │       │   │   └── aider_list_models.py # Logic for the list_models tool
255 | │       │   └── utils.py      # Utility functions and constants (like default models)
256 | │       ├── server.py         # MCP server logic, tool registration, request handling
257 | │       └── tests             # Unit and integration tests
258 | │           ├── __init__.py
259 | │           └── atoms         # Tests for the atoms layer
260 | │               ├── __init__.py
261 | │               ├── test_logging.py # Tests for logging
262 | │               └── tools     # Tests for the tools
263 | │                   ├── __init__.py
264 | │                   ├── test_aider_ai_code.py # Tests for AI coding tool
265 | │                   └── test_aider_list_models.py # Tests for model listing tool
266 | ```
267 | 
268 | - **`src/aider_mcp_server`**: Contains the main application code.
269 |   - **`atoms`**: Holds the fundamental building blocks. These are designed to be pure functions or simple classes with minimal dependencies.
270 |     - **`tools`**: Each file here implements the core logic for a specific MCP tool (`aider_ai_code`, `list_models`).
271 |     - **`utils.py`**: Contains shared constants like default model names.
272 |     - **`data_types.py`**: Defines Pydantic models for request/response structures, ensuring data validation.
273 |     - **`logging.py`**: Sets up a consistent logging format for console and file output.
274 |   - **`server.py`**: Orchestrates the MCP server. It initializes the server, registers the tools defined in the `atoms/tools` directory, handles incoming requests, routes them to the appropriate tool logic, and sends back responses according to the MCP protocol.
275 |   - **`__main__.py`**: Provides the command-line interface entry point (`aider-mcp-server`), parsing arguments like `--editor-model` and starting the server defined in `server.py`.
276 |   - **`tests`**: Contains tests mirroring the structure of the `src` directory, ensuring that each component (especially atoms) works as expected.
277 | 
278 | 
```
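The README above documents the `{success, diff}` return shape of `aider_ai_code` without showing client-side handling. A minimal sketch of how a caller might consume that dict — `handle_aider_result` is a hypothetical helper, not part of this repo:

```python
def handle_aider_result(result: dict) -> str:
    """Interpret the {success, diff} dict returned by aider_ai_code.

    Hypothetical client-side helper; the response contract is only the
    two keys documented in the README.
    """
    if not isinstance(result, dict):
        raise TypeError("expected a dict response")
    if result.get("success"):
        # On success, the diff records what Aider changed.
        return result.get("diff", "")
    raise RuntimeError("aider_ai_code reported failure")


# Fabricated example payload, for illustration only:
diff = handle_aider_result(
    {"success": True, "diff": "--- a/src/calculator.py\n+++ b/src/calculator.py"}
)
```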

--------------------------------------------------------------------------------
/src/aider_mcp_server/atoms/__init__.py:
--------------------------------------------------------------------------------

```python
1 | # Atoms package initialization
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/atoms/tools/__init__.py:
--------------------------------------------------------------------------------

```python
1 | # Tools package initialization
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/tests/__init__.py:
--------------------------------------------------------------------------------

```python
1 | # Tests package initialization
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/tests/atoms/__init__.py:
--------------------------------------------------------------------------------

```python
1 | # Atoms tests package initialization
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/tests/atoms/tools/__init__.py:
--------------------------------------------------------------------------------

```python
1 | # Tools tests package initialization
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/atoms/utils.py:
--------------------------------------------------------------------------------

```python
1 | DEFAULT_EDITOR_MODEL = "openai/gpt-4.1"
2 | DEFAULT_TESTING_MODEL = "openai/gpt-4.1"
3 | 
4 | 
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/__init__.py:
--------------------------------------------------------------------------------

```python
1 | from aider_mcp_server.__main__ import main
2 | 
3 | # This just re-exports the main function from __main__.py
```

--------------------------------------------------------------------------------
/.claude/commands/context_prime.md:
--------------------------------------------------------------------------------

```markdown
1 | ## Context
2 | 
3 | READ README.md, THEN run git ls-files and eza --git-ignore --tree to understand the context of the project; don't read any other files.
4 | 
5 | ## Commands & Feedback Loops
6 | 
7 | We're using `uv run pytest` to run tests.
8 | 
9 | You can validate the app works with `uv run aider-mcp-server --help`.
```

--------------------------------------------------------------------------------
/.claude/commands/multi_aider_sub_agent.md:
--------------------------------------------------------------------------------

```markdown
1 | Run multiple aider_ai_code calls via sub-agent calls to fulfill the following tasks back to back in the most sensible order. If the given task(s) can be broken down into smaller tasks, do that. If tasks depend on certain changes being made first, make sure to run those prerequisite tasks first. $ARGUMENTS
```

--------------------------------------------------------------------------------
/.claude/commands/context_prime_w_aider.md:
--------------------------------------------------------------------------------

```markdown
 1 | ## Context
 2 | 
 3 | READ README.md, THEN run git ls-files and eza --git-ignore --tree to understand the context of the project; don't read any other files.
 4 | 
 5 | ## Commands & Feedback Loops
 6 | 
 7 | To validate code use `uv run pytest` to run tests. (don't run this now)
 8 | 
 9 | You can validate the app works with `uv run aider-mcp-server --help`.
10 | 
11 | ## Coding
12 | 
13 | For coding always use the aider_ai_code tool.
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/atoms/tools/aider_list_models.py:
--------------------------------------------------------------------------------

```python
 1 | from typing import List
 2 | from aider.models import fuzzy_match_models
 3 | 
 4 | def list_models(substring: str) -> List[str]:
 5 |     """
 6 |     List available models that match the provided substring.
 7 |     
 8 |     Args:
 9 |         substring (str): Substring to match against available models.
10 |     
11 |     Returns:
12 |         List[str]: List of model names matching the substring.
13 |     """
14 |     return fuzzy_match_models(substring)
```
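`fuzzy_match_models` is imported from the `aider` package, so the function above only runs with `aider-chat` installed. As a stdlib-only illustration of the behavior the tool relies on — assuming it reduces to a case-insensitive substring filter, which simplifies aider's actual matching — with a made-up catalog:

```python
from typing import List

# Hypothetical stand-in catalog; the real list comes from aider.
KNOWN_MODELS = [
    "gemini/gemini-1.5-pro",
    "openai/gpt-4o",
    "anthropic/claude-3-5-sonnet",
]


def list_models_sketch(substring: str, catalog: List[str] = KNOWN_MODELS) -> List[str]:
    """Case-insensitive substring filter, a simplified stand-in for
    aider.models.fuzzy_match_models."""
    needle = substring.lower()
    return [m for m in catalog if needle in m.lower()]
```

An empty substring matches every model, which mirrors the `test_list_models_empty` expectation in the test suite.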

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "aider-mcp-server"
 3 | version = "0.1.0"
 4 | description = "Model Context Protocol server for offloading AI coding work to Aider"
 5 | readme = "README.md"
 6 | authors = [
 7 |     { name = "IndyDevDan", email = "[email protected]" }
 8 | ]
 9 | requires-python = ">=3.12"
10 | dependencies = [
11 |     "aider-chat>=0.81.0",
12 |     "boto3>=1.37.27",
13 |     "mcp>=1.6.0",
14 |     "pydantic>=2.11.2",
15 |     "pytest>=8.3.5",
16 |     "rich>=14.0.0",
17 | ]
18 | 
19 | [project.scripts]
20 | aider-mcp-server = "aider_mcp_server:main"
21 | 
22 | [build-system]
23 | requires = ["hatchling"]
24 | build-backend = "hatchling.build"
25 | 
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/__main__.py:
--------------------------------------------------------------------------------

```python
 1 | import argparse
 2 | import asyncio
 3 | from aider_mcp_server.server import serve
 4 | from aider_mcp_server.atoms.utils import DEFAULT_EDITOR_MODEL
 5 | 
 6 | def main():
 7 |     # Create the argument parser
 8 |     parser = argparse.ArgumentParser(description="Aider MCP Server - Offload AI coding tasks to Aider")
 9 |     
10 |     # Add arguments
11 |     parser.add_argument(
12 |         "--editor-model", 
13 |         type=str, 
14 |         default=DEFAULT_EDITOR_MODEL,
15 |         help=f"Editor model to use (default: {DEFAULT_EDITOR_MODEL})"
16 |     )
17 |     parser.add_argument(
18 |         "--current-working-dir", 
19 |         type=str, 
20 |         required=True,
21 |         help="Current working directory (must be a valid git repository)"
22 |     )
23 |     
24 |     args = parser.parse_args()
25 |     
26 |     # Run the server asynchronously
27 |     asyncio.run(serve(
28 |         editor_model=args.editor_model,
29 |         current_working_dir=args.current_working_dir
30 |     ))
31 | 
32 | if __name__ == "__main__":
33 |     main()
```
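Because the entry point above is plain argparse, its flag handling can be exercised by passing an argv list to `parse_args` — no server start required. A sketch mirroring the two flags (the default value here is illustrative; the real default comes from `DEFAULT_EDITOR_MODEL` in `atoms/utils.py`):

```python
import argparse


def build_parser(default_editor_model: str = "gemini/gemini-2.5-pro-exp-03-25") -> argparse.ArgumentParser:
    """Mirror of the CLI defined in __main__.py: one optional flag with a
    default, one required flag."""
    parser = argparse.ArgumentParser(description="Aider MCP Server")
    parser.add_argument("--editor-model", type=str, default=default_editor_model)
    parser.add_argument("--current-working-dir", type=str, required=True)
    return parser


# Omitting --editor-model falls back to the default.
args = build_parser().parse_args(["--current-working-dir", "."])
```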

--------------------------------------------------------------------------------
/src/aider_mcp_server/tests/atoms/tools/test_aider_list_models.py:
--------------------------------------------------------------------------------

```python
 1 | import pytest
 2 | from aider_mcp_server.atoms.tools.aider_list_models import list_models
 3 | 
 4 | def test_list_models_openai():
 5 |     """Test that list_models returns GPT-4o model when searching for openai."""
 6 |     models = list_models("openai")
 7 |     assert any("gpt-4o" in model for model in models), "Expected to find GPT-4o model in the list"
 8 |     
 9 | def test_list_models_gemini():
10 |     """Test that list_models returns Gemini models when searching for gemini."""
11 |     models = list_models("gemini")
12 |     assert any("gemini" in model.lower() for model in models), "Expected to find Gemini models in the list"
13 |     
14 | def test_list_models_empty():
15 |     """Test that list_models with an empty string returns all models."""
16 |     models = list_models("")
17 |     assert len(models) > 0, "Expected to get at least some models with empty string"
18 |     
19 | def test_list_models_nonexistent():
20 |     """Test that list_models with a nonexistent model returns an empty list."""
21 |     models = list_models("this_model_does_not_exist_12345")
22 |     assert len(models) == 0, "Expected to get no models with a nonexistent model name"
```

--------------------------------------------------------------------------------
/.claude/commands/jprompt_ultra_diff_review.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Ultra Diff Review
 2 | > Execute each task in the order given to conduct a thorough code review.
 3 | 
 4 | ## Task 1: Create diff.md
 5 | 
 6 | Create a new file called diff.md.
 7 | 
 8 | At the top of the file, add the following markdown:
 9 | 
10 | ```md
11 | # Code Review
12 | - Review the diff, report on issues, bugs, and improvements. 
13 | - End with a concise markdown table of any issues found, their solutions, and a risk assessment for each issue if applicable.
14 | - Use emojis to convey the severity of each issue.
15 | 
16 | ## Diff
17 | 
18 | ```
19 | 
20 | ## Task 2: git diff and append
21 | 
22 | Then run git diff and append the output to the file.
23 | 
24 | ## Task 3: just-prompt multi-llm tool call
25 | 
26 | Then use that file as the input to this just-prompt tool call.
27 | 
28 | prompts_from_file_to_file(
29 |     from_file = diff.md,
30 |     models = "openai:o3-mini, anthropic:claude-3-7-sonnet-20250219:4k, gemini:gemini-2.0-flash-thinking-exp",
31 |     output_dir = ultra_diff_review/
32 | )
33 | 
34 | ## Task 4: Read the output files and synthesize
35 | 
36 | Then read the output files and think hard to synthesize the results into a new single file called `ultra_diff_review/fusion_ultra_diff_review.md` following the original instructions plus any additional instructions or callouts you think are needed to create the best possible review.
37 | 
38 | ## Task 5: Present the results
39 | 
40 | Then let me know which issues you think are worth resolving and we'll proceed from there.
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/atoms/data_types.py:
--------------------------------------------------------------------------------

```python
 1 | from typing import List, Optional, Dict, Any, Union
 2 | from pydantic import BaseModel, Field
 3 | 
 4 | # MCP Protocol Base Types
 5 | class MCPRequest(BaseModel):
 6 |     """Base class for MCP protocol requests."""
 7 |     name: str
 8 |     parameters: Dict[str, Any]
 9 | 
10 | class MCPResponse(BaseModel):
11 |     """Base class for MCP protocol responses."""
12 |     pass
13 | 
14 | class MCPErrorResponse(MCPResponse):
15 |     """Error response for MCP protocol."""
16 |     error: str
17 | 
18 | # Tool-specific request parameter models
19 | class AICodeParams(BaseModel):
20 |     """Parameters for the aider_ai_code tool."""
21 |     ai_coding_prompt: str
22 |     relative_editable_files: List[str]
23 |     relative_readonly_files: List[str] = Field(default_factory=list)
24 | 
25 | class ListModelsParams(BaseModel):
26 |     """Parameters for the list_models tool."""
27 |     substring: str = ""
28 | 
29 | # Tool-specific response models
30 | class AICodeResponse(MCPResponse):
31 |     """Response for the aider_ai_code tool."""
32 |     status: str  # 'success' or 'failure'
33 |     message: Optional[str] = None
34 | 
35 | class ListModelsResponse(MCPResponse):
36 |     """Response for the list_models tool."""
37 |     models: List[str]
38 | 
39 | # Specific request types
40 | class AICodeRequest(MCPRequest):
41 |     """Request for the aider_ai_code tool."""
42 |     name: str = "aider_ai_code"
43 |     parameters: AICodeParams
44 | 
45 | class ListModelsRequest(MCPRequest):
46 |     """Request for the list_models tool."""
47 |     name: str = "list_models"
48 |     parameters: ListModelsParams
49 | 
50 | # Union type for all possible MCP responses
51 | MCPToolResponse = Union[AICodeResponse, ListModelsResponse, MCPErrorResponse]
```
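Pydantic enforces these request shapes at runtime. A stdlib-only sketch of the same wire format — field names are taken from `AICodeRequest`/`AICodeParams` above, but `validate_ai_code_request` itself is a hypothetical simplification of what pydantic does:

```python
import json


def validate_ai_code_request(raw: str) -> dict:
    """Minimal shape check for an aider_ai_code request payload."""
    req = json.loads(raw)
    if req.get("name") != "aider_ai_code":
        raise ValueError("unexpected tool name")
    params = req["parameters"]
    if not isinstance(params["ai_coding_prompt"], str):
        raise TypeError("ai_coding_prompt must be a string")
    if not isinstance(params["relative_editable_files"], list):
        raise TypeError("relative_editable_files must be a list")
    # Mirror Field(default_factory=list) for the optional key.
    params.setdefault("relative_readonly_files", [])
    return req


request = validate_ai_code_request(json.dumps({
    "name": "aider_ai_code",
    "parameters": {
        "ai_coding_prompt": "add a docstring",
        "relative_editable_files": ["src/calculator.py"],
    },
}))
```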

--------------------------------------------------------------------------------
/ai_docs/programmable-aider-documentation.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Aider is a programmable AI coding assistant
 2 | 
 3 | Here's how to use it in Python to build tools that let us offload AI coding tasks to Aider.
 4 | 
 5 | ## Code Examples
 6 | 
 7 | ```python
 8 | 
 9 | class AICodeParams(BaseModel):
10 |     architect: bool = True
11 |     prompt: str
12 |     model: str
13 |     editor_model: Optional[str] = None
14 |     editable_context: List[str]
15 |     readonly_context: List[str] = []
16 |     settings: Optional[dict]
17 |     use_git: bool = True
18 | 
19 | 
20 | def build_ai_coding_assistant(params: AICodeParams) -> Coder:
21 |     """Create and configure a Coder instance based on provided parameters"""
22 |     settings = params.settings or {}
23 |     auto_commits = settings.get("auto_commits", False)
24 |     suggest_shell_commands = settings.get("suggest_shell_commands", False)
25 |     detect_urls = settings.get("detect_urls", False)
26 | 
27 |     # Extract budget_tokens setting once for both models
28 |     budget_tokens = settings.get("budget_tokens")
29 | 
30 |     if params.architect:
31 |         model = Model(model=params.model, editor_model=params.editor_model)
32 |         extra_params = {}
33 | 
34 |         # Add reasoning_effort if available
35 |         if settings.get("reasoning_effort"):
36 |             extra_params["reasoning_effort"] = settings["reasoning_effort"]
37 | 
38 |         # Add thinking budget if specified
39 |         if budget_tokens is not None:
40 |             extra_params = add_thinking_budget_to_params(extra_params, budget_tokens)
41 | 
42 |         model.extra_params = extra_params
43 |         return Coder.create(
44 |             main_model=model,
45 |             edit_format="architect",
46 |             io=InputOutput(yes=True),
47 |             fnames=params.editable_context,
48 |             read_only_fnames=params.readonly_context,
49 |             auto_commits=auto_commits,
50 |             suggest_shell_commands=suggest_shell_commands,
51 |             detect_urls=detect_urls,
52 |             use_git=params.use_git,
53 |         )
54 |     else:
55 |         model = Model(params.model)
56 |         extra_params = {}
57 | 
58 |         # Add reasoning_effort if available
59 |         if settings.get("reasoning_effort"):
60 |             extra_params["reasoning_effort"] = settings["reasoning_effort"]
61 | 
62 |         # Add thinking budget if specified (consistent for both modes)
63 |         if budget_tokens is not None:
64 |             extra_params = add_thinking_budget_to_params(extra_params, budget_tokens)
65 | 
66 |         model.extra_params = extra_params
67 |         return Coder.create(
68 |             main_model=model,
69 |             io=InputOutput(yes=True),
70 |             fnames=params.editable_context,
71 |             read_only_fnames=params.readonly_context,
72 |             auto_commits=auto_commits,
73 |             suggest_shell_commands=suggest_shell_commands,
74 |             detect_urls=detect_urls,
75 |             use_git=params.use_git,
76 |         )
77 | 
78 | 
79 | def ai_code(coder: Coder, params: AICodeParams):
80 |     """Execute AI coding using provided coder instance and parameters"""
81 |     # Execute the AI coding with the provided prompt
82 |     coder.run(params.prompt)
83 | 
84 | 
85 | ```
```
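Both branches above extract the same defaults from `settings` before calling `Coder.create`. That step can be isolated as a pure function; the keyword names match the snippet, but `extract_coder_settings` itself is a hypothetical refactoring, not code from aider:

```python
from typing import Optional


def extract_coder_settings(settings: Optional[dict]) -> dict:
    """Collect the Coder.create keyword defaults shared by the architect
    and non-architect branches."""
    settings = settings or {}
    return {
        "auto_commits": settings.get("auto_commits", False),
        "suggest_shell_commands": settings.get("suggest_shell_commands", False),
        "detect_urls": settings.get("detect_urls", False),
    }
```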

--------------------------------------------------------------------------------
/specs/init-aider-mcp-exp.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Aider Model Context Protocol (MCP) Experimental Server
 2 | > Here we detail how we'll build the experimental ai coding aider mcp server.
 3 | 
 4 | ## Why?
 5 | Claude Code is a new, powerful agentic coding tool that is currently in beta. It's great but it's incredibly expensive.
 6 | We can offload some of the work to a simpler AI coding tool: Aider, the original AI coding assistant.
 7 | 
 8 | By discretely offloading work to Aider, we can not only reduce costs but also use Claude Code (with auxiliary LLM calls combined with Aider) to create more reliable code through multiple focused LLM calls.
 9 | 
10 | ## Resources to ingest
11 | > To understand how we'll build this, READ these files
12 | 
13 | ai_docs/just-prompt-example-mcp-server.xml
14 | ai_docs/programmable-aider-documentation.md
15 | 
16 | ## Implementation Notes
17 | 
18 | - We want to mirror the exact structure of the just-prompt codebase as closely as possible. Minus of course the tools that are specific to just-prompt.
19 | - Every atom must be tested in a respective tests/*_test.py file.
20 | - Every atom/tools/*.py must have only a single responsibility: one method.
21 | - When we run Aider in no-commit mode, we must not commit any changes to the codebase.
22 | - If architect_model is not provided, don't use architect mode.
23 | 
24 | ## Application Structure
25 | 
26 | - src/
27 |   - aider_mcp_server/
28 |     - __init__.py
29 |     - __main__.py
30 |     - server.py
31 |       - serve(editor_model: str = DEFAULT_EDITOR_MODEL, current_working_dir: str = ".", architect_model: str = None) -> None
32 |     - atoms/
33 |       - __init__.py
34 |       - tools/
35 |         - __init__.py
36 |         - aider_ai_code.py
37 |           - code_with_aider(ai_coding_prompt: str, relative_editable_files: List[str], relative_readonly_files: List[str] = []) -> str
38 |             - runs one-shot Aider based on ai_docs/programmable-aider-documentation.md
39 |             - outputs 'success' or 'failure'
40 |         - aider_list_models.py
41 |           - list_models(substring: str) -> List[str]
42 |             - calls aider.models.fuzzy_match_models(substr: str) and returns the list of models
43 |       - utils.py
44 |         - DEFAULT_EDITOR_MODEL = "gemini/gemini-2.5-pro-exp-03-25"
45 |         - DEFAULT_ARCHITECT_MODEL = "gemini/gemini-2.5-pro-exp-03-25"
46 |       - data_types.py
47 |     - tests/
48 |       - __init__.py
49 |       - atoms/
50 |         - __init__.py
51 |         - tools/
52 |           - __init__.py
53 |           - test_aider_ai_code.py
54 |             - here create tests for basic 'math' functionality: 'add', 'subtract', 'multiply', 'divide'. Use temp dirs.
55 |           - test_aider_list_models.py
56 |             - here create a real call to list_models("openai") and assert that a model containing the "gpt-4o" substring is in the list.
57 | 
58 | ## Commands
59 | 
60 | - if for whatever reason you need additional python packages use `uv add <package_name>`.
61 | 
62 | ## Validation
63 | - Use `uv run pytest <path_to_test_file.py>` to run tests. Every atom/ must be tested.
64 | - Don't mock any tests - run real LLM calls. Make sure to test for failure paths.
65 | - At the end run `uv run aider-mcp-server --help` to validate the server is working.
66 | 
67 | 
```
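The `list_models` contract above (call `aider.models.fuzzy_match_models` and return the matches) can be illustrated with a minimal stand-in. This sketch only demonstrates the substring-filter behavior the spec expects; it is not aider's actual fuzzy-matching implementation.

```python
# Stand-in for the list_models contract described in the spec: filter an
# available-models list by substring. Aider's real implementation delegates
# to aider.models.fuzzy_match_models.
def list_models_sketch(substring: str, available: list[str]) -> list[str]:
    return [m for m in available if substring in m]

models = list_models_sketch("gpt-4o", ["gpt-4o", "gpt-4o-mini", "o3-mini"])
```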

--------------------------------------------------------------------------------
/src/aider_mcp_server/atoms/logging.py:
--------------------------------------------------------------------------------

```python
  1 | import os
  2 | import logging
  3 | import time
  4 | from pathlib import Path
  5 | from typing import Optional, Union
  6 | 
  7 | 
  8 | class Logger:
  9 |     """Custom logger that writes to both console and file."""
 10 |     
 11 |     def __init__(
 12 |         self,
 13 |         name: str,
 14 |         log_dir: Optional[Union[str, Path]] = None,
 15 |         level: int = logging.INFO,
 16 |     ):
 17 |         """
 18 |         Initialize the logger.
 19 |         
 20 |         Args:
 21 |             name: Logger name
 22 |             log_dir: Directory to store log files (defaults to ./logs)
 23 |             level: Logging level
 24 |         """
 25 |         self.name = name
 26 |         self.level = level
 27 |         
 28 |         # Set up the logger
 29 |         self.logger = logging.getLogger(name)
 30 |         self.logger.setLevel(level)
 31 |         self.logger.propagate = False
 32 |         
 33 |         # Clear any existing handlers
 34 |         if self.logger.handlers:
 35 |             self.logger.handlers.clear()
 36 | 
 37 |         # Define a standard formatter
 38 |         log_formatter = logging.Formatter(
 39 |             '%(asctime)s [%(levelname)s] %(name)s: %(message)s',
 40 |             datefmt='%Y-%m-%d %H:%M:%S'
 41 |         )
 42 | 
 43 |         # Add console handler with standard formatting
 44 |         console_handler = logging.StreamHandler()
 45 |         console_handler.setFormatter(log_formatter)
 46 |         console_handler.setLevel(level)
 47 |         self.logger.addHandler(console_handler)
 48 | 
 49 |         # Add file handler if log_dir is provided
 50 |         if log_dir is not None:
 51 |             # Create log directory if it doesn't exist
 52 |             log_dir = Path(log_dir)
 53 |             log_dir.mkdir(parents=True, exist_ok=True)
 54 |             
 55 |             # Use a fixed log file name
 56 |             log_file_name = "aider_mcp_server.log"
 57 |             log_file_path = log_dir / log_file_name
 58 | 
 59 |             # Set up file handler to append
 60 |             file_handler = logging.FileHandler(log_file_path, mode='a')
 61 |             # Use the same formatter as the console handler
 62 |             file_handler.setFormatter(log_formatter)
 63 |             file_handler.setLevel(level)
 64 |             self.logger.addHandler(file_handler)
 65 | 
 66 |             self.log_file_path = log_file_path
 67 |             self.logger.info(f"Logging to: {log_file_path}")
 68 | 
 69 |     def debug(self, message: str, **kwargs):
 70 |         """Log a debug message."""
 71 |         self.logger.debug(message, **kwargs)
 72 |     
 73 |     def info(self, message: str, **kwargs):
 74 |         """Log an info message."""
 75 |         self.logger.info(message, **kwargs)
 76 |     
 77 |     def warning(self, message: str, **kwargs):
 78 |         """Log a warning message."""
 79 |         self.logger.warning(message, **kwargs)
 80 |     
 81 |     def error(self, message: str, **kwargs):
 82 |         """Log an error message."""
 83 |         self.logger.error(message, **kwargs)
 84 |     
 85 |     def critical(self, message: str, **kwargs):
 86 |         """Log a critical message."""
 87 |         self.logger.critical(message, **kwargs)
 88 |     
 89 |     def exception(self, message: str, **kwargs):
 90 |         """Log an exception message with traceback."""
 91 |         self.logger.exception(message, **kwargs)
 92 | 
 93 | 
 94 | def get_logger(
 95 |     name: str,
 96 |     log_dir: Optional[Union[str, Path]] = None,
 97 |     level: int = logging.INFO,
 98 | ) -> Logger:
 99 |     """
100 |     Get a configured logger instance.
101 |     
102 |     Args:
103 |         name: Logger name
104 |         log_dir: Directory to store log files (defaults to ./logs)
105 |         level: Logging level
106 | 
107 |     Returns:
108 |         Configured Logger instance
109 |     """
110 |     if log_dir is None:
111 |         # Default log directory is ./logs
112 |         log_dir = Path("./logs")
113 |     
114 |     return Logger(
115 |         name=name,
116 |         log_dir=log_dir,
117 |         level=level,
118 |     )
119 | 
```
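The `Logger` class above wires one shared formatter into both a console handler and an appending file handler with a fixed file name. A standalone stdlib-only sketch of the same pattern (no dependency on this package; names like `"sketch"` are arbitrary):

```python
import logging
import tempfile
from pathlib import Path

# Reproduce the dual-handler pattern: one formatter shared by a console
# handler and an appending file handler with a fixed log file name.
log_dir = Path(tempfile.mkdtemp()) / "logs"
log_dir.mkdir(parents=True, exist_ok=True)
log_file = log_dir / "aider_mcp_server.log"

logger = logging.getLogger("sketch")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.handlers.clear()

fmt = logging.Formatter(
    "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)
for handler in (logging.StreamHandler(), logging.FileHandler(log_file, mode="a")):
    handler.setFormatter(fmt)
    handler.setLevel(logging.INFO)
    logger.addHandler(handler)

logger.info("hello from the sketch")
content = log_file.read_text()
```

Because the file handler opens in append mode with a fixed name, a second logger pointed at the same directory adds to the same file rather than replacing it, which is what `test_log_appending` below verifies.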

--------------------------------------------------------------------------------
/src/aider_mcp_server/tests/atoms/test_logging.py:
--------------------------------------------------------------------------------

```python
  1 | import pytest
  2 | import logging
  3 | from pathlib import Path
  4 | 
  5 | from aider_mcp_server.atoms.logging import Logger, get_logger
  6 | 
  7 | 
  8 | def test_logger_creation_and_file_output(tmp_path):
  9 |     """Test Logger instance creation using get_logger and log file existence with fixed name."""
 10 |     log_dir = tmp_path / "logs"
 11 |     logger_name = "test_logger_creation"
 12 |     expected_log_file = log_dir / "aider_mcp_server.log" # Fixed log file name
 13 | 
 14 |     # --- Test get_logger ---
 15 |     logger = get_logger(
 16 |         name=logger_name,
 17 |         log_dir=log_dir,
 18 |         level=logging.INFO,
 19 |     )
 20 |     assert logger is not None, "Logger instance from get_logger should be created"
 21 |     assert logger.name == logger_name
 22 | 
 23 |     # Log a message to ensure file handling is triggered
 24 |     logger.info("Initial log message.")
 25 | 
 26 |     # Verify log directory and file exist
 27 |     assert log_dir.exists(), f"Log directory should be created by get_logger at {log_dir}"
 28 |     assert log_dir.is_dir(), f"Log path created by get_logger should be a directory"
 29 |     assert expected_log_file.exists(), f"Log file should be created by get_logger at {expected_log_file}"
 30 |     assert expected_log_file.is_file(), f"Log path created by get_logger should point to a file"
 31 | 
 32 | 
 33 | def test_log_levels_and_output(tmp_path):
 34 |     """Test logging at different levels to the fixed log file using get_logger."""
 35 |     log_dir = tmp_path / "logs"
 36 |     logger_name = "test_logger_levels"
 37 |     expected_log_file = log_dir / "aider_mcp_server.log" # Fixed log file name
 38 | 
 39 |     # Instantiate our custom logger with DEBUG level using get_logger
 40 |     logger = get_logger(
 41 |         name=logger_name,
 42 |         log_dir=log_dir,
 43 |         level=logging.DEBUG,
 44 |     )
 45 | 
 46 |     # Log messages at different levels
 47 |     messages = {
 48 |         logging.DEBUG: "This is a debug message.",
 49 |         logging.INFO: "This is an info message.",
 50 |         logging.WARNING: "This is a warning message.",
 51 |         logging.ERROR: "This is an error message.",
 52 |         logging.CRITICAL: "This is a critical message.",
 53 |     }
 54 | 
 55 |     logger.debug(messages[logging.DEBUG])
 56 |     logger.info(messages[logging.INFO])
 57 |     logger.warning(messages[logging.WARNING])
 58 |     logger.error(messages[logging.ERROR])
 59 |     logger.critical(messages[logging.CRITICAL])
 60 | 
 61 |     # Verify file output
 62 |     assert expected_log_file.exists(), "Log file should exist for level testing"
 63 | 
 64 |     file_content = expected_log_file.read_text()
 65 | 
 66 |     # Verify file output contains messages and level indicators
 67 |     for level, msg in messages.items():
 68 |         level_name = logging.getLevelName(level)
 69 |         assert msg in file_content, f"Message '{msg}' not found in file content"
 70 |         assert level_name in file_content, f"Level '{level_name}' not found in file content"
 71 |         assert logger_name in file_content, f"Logger name '{logger_name}' not found in file content"
 72 | 
 73 | 
 74 | def test_log_level_filtering(tmp_path):
 75 |     """Test that messages below the set log level are filtered using get_logger."""
 76 |     log_dir = tmp_path / "logs"
 77 |     logger_name = "test_logger_filtering"
 78 |     expected_log_file = log_dir / "aider_mcp_server.log" # Fixed log file name
 79 | 
 80 |     # Instantiate the logger with WARNING level using get_logger
 81 |     logger = get_logger(
 82 |         name=logger_name,
 83 |         log_dir=log_dir,
 84 |         level=logging.WARNING,
 85 |     )
 86 | 
 87 |     # Log messages at different levels
 88 |     debug_msg = "This debug message should NOT appear."
 89 |     info_msg = "This info message should NOT appear."
 90 |     warning_msg = "This warning message SHOULD appear."
 91 |     error_msg = "This error message SHOULD appear."
 92 |     critical_msg = "This critical message SHOULD appear." # Add critical for completeness
 93 | 
 94 |     logger.debug(debug_msg)
 95 |     logger.info(info_msg)
 96 |     logger.warning(warning_msg)
 97 |     logger.error(error_msg)
 98 |     logger.critical(critical_msg)
 99 | 
100 |     # Verify file output filtering
101 |     assert expected_log_file.exists(), "Log file should exist for filtering testing"
102 | 
103 |     file_content = expected_log_file.read_text()
104 | 
105 |     assert debug_msg not in file_content, "Debug message should be filtered from file"
106 |     assert info_msg not in file_content, "Info message should be filtered from file"
107 |     assert warning_msg in file_content, "Warning message should appear in file"
108 |     assert error_msg in file_content, "Error message should appear in file"
109 |     assert critical_msg in file_content, "Critical message should appear in file"
110 |     assert logging.getLevelName(logging.DEBUG) not in file_content, "DEBUG level indicator should be filtered from file"
111 |     assert logging.getLevelName(logging.INFO) not in file_content, "INFO level indicator should be filtered from file"
112 |     assert logging.getLevelName(logging.WARNING) in file_content, "WARNING level indicator should appear in file"
113 |     assert logging.getLevelName(logging.ERROR) in file_content, "ERROR level indicator should appear in file"
114 |     assert logging.getLevelName(logging.CRITICAL) in file_content, "CRITICAL level indicator should appear in file"
115 |     assert logger_name in file_content, f"Logger name '{logger_name}' should appear in file content"
116 | 
117 | 
118 | def test_log_appending(tmp_path):
119 |     """Test that log messages are appended to the existing log file."""
120 |     log_dir = tmp_path / "logs"
121 |     logger_name_1 = "test_logger_append_1"
122 |     logger_name_2 = "test_logger_append_2"
123 |     expected_log_file = log_dir / "aider_mcp_server.log" # Fixed log file name
124 | 
125 |     # First logger instance and message
126 |     logger1 = get_logger(
127 |         name=logger_name_1,
128 |         log_dir=log_dir,
129 |         level=logging.INFO,
130 |     )
131 |     message1 = "First message to append."
132 |     logger1.info(message1)
133 | 
134 |     # Ensure some time passes or context switches if needed, though file handler should manage appending
135 |     # Second logger instance (or could reuse logger1) and message
136 |     logger2 = get_logger(
137 |         name=logger_name_2, # Can use a different name or the same
138 |         log_dir=log_dir,
139 |         level=logging.INFO,
140 |     )
141 |     message2 = "Second message to append."
142 |     logger2.info(message2)
143 | 
144 |     # Verify both messages are in the file
145 |     assert expected_log_file.exists(), "Log file should exist for appending test"
146 |     file_content = expected_log_file.read_text()
147 | 
148 |     assert message1 in file_content, "First message not found in appended log file"
149 |     assert logger_name_1 in file_content, "First logger name not found in appended log file"
150 |     assert message2 in file_content, "Second message not found in appended log file"
151 |     assert logger_name_2 in file_content, "Second logger name not found in appended log file"
152 | 
```

--------------------------------------------------------------------------------
/src/aider_mcp_server/atoms/tools/aider_ai_code.py:
--------------------------------------------------------------------------------

```python
  1 | import json
  2 | from typing import List, Optional, Dict, Any, Union
  3 | import os
  4 | import os.path
  5 | import subprocess
  6 | from aider.models import Model
  7 | from aider.coders import Coder
  8 | from aider.io import InputOutput
  9 | from aider_mcp_server.atoms.logging import get_logger
 10 | from aider_mcp_server.atoms.utils import DEFAULT_EDITOR_MODEL
 11 | 
 12 | # Configure logging for this module
 13 | logger = get_logger(__name__)
 14 | 
 15 | # Type alias for response dictionary
 16 | ResponseDict = Dict[str, Union[bool, str]]
 17 | 
 18 | 
 19 | def _get_changes_diff_or_content(
 20 |     relative_editable_files: List[str], working_dir: Optional[str] = None
 21 | ) -> str:
 22 |     """
 23 |     Get the git diff for the specified files, or their content if git fails.
 24 | 
 25 |     Args:
 26 |         relative_editable_files: List of files to check for changes
 27 |         working_dir: The working directory where the git repo is located
 28 |     """
 29 |     diff = ""
 30 |     # Log current directory for debugging
 31 |     current_dir = os.getcwd()
 32 |     logger.info(f"Current directory during diff: {current_dir}")
 33 |     if working_dir:
 34 |         logger.info(f"Using working directory: {working_dir}")
 35 | 
 36 |     # Always attempt to use git
 37 |     logger.info(f"Attempting to get git diff for: {' '.join(relative_editable_files)}")
 38 | 
 39 |     try:
 40 |         # Use git -C to specify the repository directory. Pass arguments as a
 41 |         # list (no shell) so file names with spaces are handled safely.
 42 |         if working_dir:
 43 |             diff_cmd = ["git", "-C", working_dir, "diff", "--", *relative_editable_files]
 44 |         else:
 45 |             diff_cmd = ["git", "diff", "--", *relative_editable_files]
 46 | 
 47 |         logger.info(f"Running git command: {' '.join(diff_cmd)}")
 48 |         diff = subprocess.check_output(
 49 |             diff_cmd, text=True, stderr=subprocess.PIPE
 50 |         )
 51 |         logger.info("Successfully obtained git diff.")
 52 |     except subprocess.CalledProcessError as e:
 53 |         logger.warning(
 54 |             f"Git diff command failed with exit code {e.returncode}. Error: {e.stderr.strip()}"
 55 |         )
 56 |         logger.warning("Falling back to reading file contents.")
 57 |         diff = "Git diff failed. Current file contents:\n\n"
 58 |         for file_path in relative_editable_files:
 59 |             full_path = (
 60 |                 os.path.join(working_dir, file_path) if working_dir else file_path
 61 |             )
 62 |             if os.path.exists(full_path):
 63 |                 try:
 64 |                     with open(full_path, "r") as f:
 65 |                         content = f.read()
 66 |                         diff += f"--- {file_path} ---\n{content}\n\n"
 67 |                         logger.info(f"Read content for {file_path}")
 68 |                 except Exception as read_e:
 69 |                     logger.error(
 70 |                         f"Failed reading file {full_path} for content fallback: {read_e}"
 71 |                     )
 72 |                     diff += f"--- {file_path} --- (Error reading file)\n\n"
 73 |             else:
 74 |                 logger.warning(f"File {full_path} not found during content fallback.")
 75 |                 diff += f"--- {file_path} --- (File not found)\n\n"
 76 |     except Exception as e:
 77 |         logger.error(f"Unexpected error getting git diff: {str(e)}")
 78 |         diff = f"Error getting git diff: {str(e)}\n\n"  # Provide error in diff string as fallback
 79 |     return diff
 80 | 
 81 | 
 82 | def _check_for_meaningful_changes(
 83 |     relative_editable_files: List[str], working_dir: Optional[str] = None
 84 | ) -> bool:
 85 |     """
 86 |     Check if the edited files contain meaningful content.
 87 | 
 88 |     Args:
 89 |         relative_editable_files: List of files to check
 90 |         working_dir: The working directory where files are located
 91 |     """
 92 |     for file_path in relative_editable_files:
 93 |         # Use the working directory if provided
 94 |         full_path = os.path.join(working_dir, file_path) if working_dir else file_path
 95 |         logger.info(f"Checking for meaningful content in: {full_path}")
 96 | 
 97 |         if os.path.exists(full_path):
 98 |             try:
 99 |                 with open(full_path, "r") as f:
100 |                     content = f.read()
101 |                     # Check if the file has more than just whitespace or a single comment line,
102 |                     # or contains common code keywords. This is a heuristic.
103 |                     stripped_content = content.strip()
104 |                     if stripped_content and (
105 |                         len(stripped_content.split("\n")) > 1
106 |                         or any(
107 |                             kw in content
108 |                             for kw in [
109 |                                 "def ",
110 |                                 "class ",
111 |                                 "import ",
112 |                                 "from ",
113 |                                 "async def",
114 |                             ]
115 |                         )
116 |                     ):
117 |                         logger.info(f"Meaningful content found in: {file_path}")
118 |                         return True
119 |             except Exception as e:
120 |                 logger.error(
121 |                     f"Failed reading file {full_path} during meaningful change check: {e}"
122 |                 )
123 |                 # If we can't read it, we can't confirm meaningful change from this file
124 |                 continue
125 |         else:
126 |             logger.info(
127 |                 f"File not found, skipping meaningful check: {full_path}"
128 |             )
129 | 
130 |     logger.info("No meaningful changes detected in any editable files.")
131 |     return False
132 | 
133 | 
134 | def _process_coder_results(
135 |     relative_editable_files: List[str], working_dir: Optional[str] = None
136 | ) -> ResponseDict:
137 |     """
138 |     Process the results after Aider has run, checking for meaningful changes
139 |     and retrieving the diff or content.
140 | 
141 |     Args:
142 |         relative_editable_files: List of files that were edited
143 |         working_dir: The working directory where the git repo is located
144 | 
145 |     Returns:
146 |         Dictionary with success status and diff output
147 |     """
148 |     diff_output = _get_changes_diff_or_content(relative_editable_files, working_dir)
149 |     logger.info("Checking for meaningful changes in edited files...")
150 |     has_meaningful_content = _check_for_meaningful_changes(
151 |         relative_editable_files, working_dir
152 |     )
153 | 
154 |     if has_meaningful_content:
155 |         logger.info("Meaningful changes found. Processing successful.")
156 |         return {"success": True, "diff": diff_output}
157 |     else:
158 |         logger.warning(
159 |             "No meaningful changes detected. Processing marked as unsuccessful."
160 |         )
161 |         # Even if no meaningful content, provide the diff/content if available
162 |         return {
163 |             "success": False,
164 |             "diff": diff_output
165 |             or "No meaningful changes detected and no diff/content available.",
166 |         }
167 | 
168 | 
169 | def _format_response(response: ResponseDict) -> str:
170 |     """
171 |     Format the response dictionary as a JSON string.
172 | 
173 |     Args:
174 |         response: Dictionary containing success status and diff output
175 | 
176 |     Returns:
177 |         JSON string representation of the response
178 |     """
179 |     return json.dumps(response, indent=4)
180 | 
181 | 
182 | def code_with_aider(
183 |     ai_coding_prompt: str,
184 |     relative_editable_files: List[str],
185 |     relative_readonly_files: List[str] = [],
186 |     model: str = DEFAULT_EDITOR_MODEL,
187 |     working_dir: Optional[str] = None,
188 | ) -> str:
189 |     """
190 |     Run Aider to perform AI coding tasks based on the provided prompt and files.
191 | 
192 |     Args:
193 |         ai_coding_prompt (str): The prompt for the AI to execute.
194 |         relative_editable_files (List[str]): List of files that can be edited.
195 |         relative_readonly_files (List[str], optional): List of files that can be read but not edited. Defaults to [].
196 |         model (str, optional): The model to use. Defaults to DEFAULT_EDITOR_MODEL.
197 |         working_dir (str): Required. The working directory where the git repository is located and files are stored.
198 | 
199 |     Returns:
200 |         str: JSON string of the form {"success": bool, "diff": str}.
201 |     """
202 |     logger.info("Starting code_with_aider process.")
203 |     logger.info(f"Prompt: '{ai_coding_prompt}'")
204 | 
205 |     # Working directory must be provided
206 |     if not working_dir:
207 |         error_msg = "Error: working_dir is required for code_with_aider"
208 |         logger.error(error_msg)
209 |         return json.dumps({"success": False, "diff": error_msg})
210 | 
211 |     logger.info(f"Working directory: {working_dir}")
212 |     logger.info(f"Editable files: {relative_editable_files}")
213 |     logger.info(f"Readonly files: {relative_readonly_files}")
214 |     logger.info(f"Model: {model}")
215 | 
216 |     try:
217 |         # Configure the model
218 |         logger.info("Configuring AI model...")  # Point 1: Before init
219 |         ai_model = Model(model)
220 |         logger.info(f"Configured model: {model}")
221 |         logger.info("AI model configured.")  # Point 2: After init
222 | 
223 |         # Create the coder instance
224 |         logger.info("Creating Aider coder instance...")
225 |         # Use working directory for chat history file if provided
226 |         history_dir = working_dir
227 |         abs_editable_files = [
228 |             os.path.join(working_dir, file) for file in relative_editable_files
229 |         ]
230 |         abs_readonly_files = [
231 |             os.path.join(working_dir, file) for file in relative_readonly_files
232 |         ]
233 |         chat_history_file = os.path.join(history_dir, ".aider.chat.history.md")
234 |         logger.info(f"Using chat history file: {chat_history_file}")
235 | 
236 |         coder = Coder.create(
237 |             main_model=ai_model,
238 |             io=InputOutput(
239 |                 yes=True,
240 |                 chat_history_file=chat_history_file,
241 |             ),
242 |             fnames=abs_editable_files,
243 |             read_only_fnames=abs_readonly_files,
244 |             auto_commits=False,  # We'll handle commits separately
245 |             suggest_shell_commands=False,
246 |             detect_urls=False,
247 |             use_git=True,  # Always use git
248 |         )
249 |         logger.info("Aider coder instance created successfully.")
250 | 
251 |         # Run the coding session
252 |         logger.info("Starting Aider coding session...")  # Point 3: Before run
253 |         result = coder.run(ai_coding_prompt)
254 |         logger.info(f"Aider coding session result: {result}")
255 |         logger.info("Aider coding session finished.")  # Point 4: After run
256 | 
257 |         # Process the results after the coder has run
258 |         logger.info("Processing coder results...")  # Point 5: Processing results
259 |         try:
260 |             response = _process_coder_results(relative_editable_files, working_dir)
261 |             logger.info("Coder results processed.")
262 |         except Exception as e:
263 |             logger.exception(
264 |                 f"Error processing coder results: {str(e)}"
265 |             )  # Point 6: Error
266 |             response = {
267 |                 "success": False,
268 |                 "diff": f"Error processing files after execution: {str(e)}",
269 |             }
270 | 
271 |     except Exception as e:
272 |         logger.exception(
273 |             f"Critical Error in code_with_aider: {str(e)}"
274 |         )  # Point 6: Error
275 |         response = {
276 |             "success": False,
277 |             "diff": f"Unhandled Error during Aider execution: {str(e)}",
278 |         }
279 | 
280 |     formatted_response = _format_response(response)
281 |     logger.info(
282 |         f"code_with_aider process completed. Success: {response.get('success')}"
283 |     )
284 |     logger.info(
285 |         f"Formatted response: {formatted_response}"
286 |     )  # Log complete response for debugging
287 |     return formatted_response
288 | 
```
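`code_with_aider` returns a JSON string (via `_format_response`, which is `json.dumps(response, indent=4)`), so callers parse it back rather than receiving a dict. A minimal sketch of the round-trip contract, using a made-up diff string for illustration:

```python
import json

# Simulate the response contract of code_with_aider: a JSON string encoding
# {"success": bool, "diff": str}, produced with json.dumps(..., indent=4).
raw = json.dumps(
    {"success": True, "diff": "--- a/calc.py\n+++ b/calc.py\n@@ example hunk @@"},
    indent=4,
)

# A consumer (e.g. the MCP server handler) parses it back to a dict.
result = json.loads(raw)
```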

--------------------------------------------------------------------------------
/src/aider_mcp_server/server.py:
--------------------------------------------------------------------------------

```python
  1 | import json
  2 | import sys
  3 | import os
  4 | import asyncio
  5 | import subprocess
  6 | import logging
  7 | from typing import Dict, Any, Optional, List, Tuple, Union
  8 | 
  9 | import mcp
 10 | from mcp.server import Server
 11 | from mcp.server.stdio import stdio_server
 12 | from mcp.types import Tool, TextContent
 13 | 
 14 | from aider_mcp_server.atoms.logging import get_logger
 15 | from aider_mcp_server.atoms.utils import DEFAULT_EDITOR_MODEL
 16 | from aider_mcp_server.atoms.tools.aider_ai_code import code_with_aider
 17 | from aider_mcp_server.atoms.tools.aider_list_models import list_models
 18 | 
 19 | # Configure logging
 20 | logger = get_logger(__name__)
 21 | 
 22 | # Define MCP tools
 23 | AIDER_AI_CODE_TOOL = Tool(
 24 |     name="aider_ai_code",
 25 |     description="Run Aider to perform AI coding tasks based on the provided prompt and files",
 26 |     inputSchema={
 27 |         "type": "object",
 28 |         "properties": {
 29 |             "ai_coding_prompt": {
 30 |                 "type": "string",
 31 |                 "description": "The prompt for the AI to execute",
 32 |             },
 33 |             "relative_editable_files": {
 34 |                 "type": "array",
 35 |                 "description": "LIST of relative paths to files that can be edited",
 36 |                 "items": {"type": "string"},
 37 |             },
 38 |             "relative_readonly_files": {
 39 |                 "type": "array",
 40 |                 "description": "LIST of relative paths to files that can be read but not edited, add files that are not editable but useful for context",
 41 |                 "items": {"type": "string"},
 42 |             },
 43 |             "model": {
 44 |                 "type": "string",
 45 |                 "description": "The primary AI model Aider should use for generating code, leave blank unless model is specified in the request",
 46 |             },
 47 |         },
 48 |         "required": ["ai_coding_prompt", "relative_editable_files"],
 49 |     },
 50 | )
 51 | 
 52 | LIST_MODELS_TOOL = Tool(
 53 |     name="list_models",
 54 |     description="List available models that match the provided substring",
 55 |     inputSchema={
 56 |         "type": "object",
 57 |         "properties": {
 58 |             "substring": {
 59 |                 "type": "string",
 60 |                 "description": "Substring to match against available models",
 61 |             }
 62 |         },
 63 |     },
 64 | )
 65 | 
 66 | 
 67 | def is_git_repository(directory: str) -> Tuple[bool, Union[str, None]]:
 68 |     """
 69 |     Check if the specified directory is a git repository.
 70 | 
 71 |     Args:
 72 |         directory (str): The directory to check.
 73 | 
 74 |     Returns:
 75 |         Tuple[bool, Union[str, None]]: A tuple containing a boolean indicating if it's a git repo,
 76 |                                       and an error message if it's not.
 77 |     """
 78 |     try:
 79 |         # Make sure the directory exists
 80 |         if not os.path.isdir(directory):
 81 |             return False, f"Directory does not exist: {directory}"
 82 | 
 83 |         # Use the git command with -C option to specify the working directory
 84 |         # This way we don't need to change our current directory
 85 |         result = subprocess.run(
 86 |             ["git", "-C", directory, "rev-parse", "--is-inside-work-tree"],
 87 |             capture_output=True,
 88 |             text=True,
 89 |             check=False,
 90 |         )
 91 | 
 92 |         if result.returncode == 0 and result.stdout.strip() == "true":
 93 |             return True, None
 94 |         else:
 95 |             return False, result.stderr.strip() or "Directory is not a git repository"
 96 | 
 97 |     except subprocess.SubprocessError as e:
 98 |         return False, f"Error checking git repository: {str(e)}"
 99 |     except Exception as e:
100 |         return False, f"Unexpected error checking git repository: {str(e)}"
101 | 
102 | 
103 | def process_aider_ai_code_request(
104 |     params: Dict[str, Any],
105 |     editor_model: str,
106 |     current_working_dir: str,
107 | ) -> Dict[str, Any]:
108 |     """
109 |     Process an aider_ai_code request.
110 | 
111 |     Args:
112 |         params (Dict[str, Any]): The request parameters.
113 |         editor_model (str): The editor model to use.
114 |         current_working_dir (str): The current working directory where git repo is located.
115 | 
116 |     Returns:
117 |         Dict[str, Any]: The response data.
118 |     """
119 |     ai_coding_prompt = params.get("ai_coding_prompt", "")
120 |     relative_editable_files = params.get("relative_editable_files", [])
121 |     relative_readonly_files = params.get("relative_readonly_files", [])
122 | 
123 |     # Ensure relative_editable_files is a list
124 |     if isinstance(relative_editable_files, str):
125 |         logger.info(
126 |             f"Converting single editable file string to list: {relative_editable_files}"
127 |         )
128 |         relative_editable_files = [relative_editable_files]
129 | 
130 |     # Ensure relative_readonly_files is a list
131 |     if isinstance(relative_readonly_files, str):
132 |         logger.info(
133 |             f"Converting single readonly file string to list: {relative_readonly_files}"
134 |         )
135 |         relative_readonly_files = [relative_readonly_files]
136 | 
137 |     # Get the model from request parameters if provided
138 |     request_model = params.get("model")
139 | 
140 |     # Log the request details
141 |     logger.info(f"AI Coding Request: Prompt: '{ai_coding_prompt}'")
142 |     logger.info(f"Editable files: {relative_editable_files}")
143 |     logger.info(f"Readonly files: {relative_readonly_files}")
144 |     logger.info(f"Editor model: {editor_model}")
145 |     if request_model:
146 |         logger.info(f"Request-specified model: {request_model}")
147 | 
148 |     # Use the model specified in the request if provided, otherwise use the editor model
149 |     model_to_use = request_model if request_model else editor_model
150 | 
151 |     # Use the passed-in current_working_dir parameter
152 |     logger.info(f"Using working directory for code_with_aider: {current_working_dir}")
153 | 
154 |     result_json = code_with_aider(
155 |         ai_coding_prompt=ai_coding_prompt,
156 |         relative_editable_files=relative_editable_files,
157 |         relative_readonly_files=relative_readonly_files,
158 |         model=model_to_use,
159 |         working_dir=current_working_dir,
160 |     )
161 | 
162 |     # Parse the JSON string result
163 |     try:
164 |         result_dict = json.loads(result_json)
165 |     except json.JSONDecodeError as e:
166 |         logger.error(f"Error: Failed to parse JSON response from code_with_aider: {e}")
167 |         logger.error(f"Received raw response: {result_json}")
168 |         return {"error": "Failed to process AI coding result"}
169 | 
170 |     logger.info(
171 |         f"AI Coding Request Completed. Success: {result_dict.get('success', False)}"
172 |     )
173 |     return {
174 |         "success": result_dict.get("success", False),
175 |         "diff": result_dict.get("diff", "Error retrieving diff"),
176 |     }
177 | 
178 | 
179 | def process_list_models_request(params: Dict[str, Any]) -> Dict[str, Any]:
180 |     """
181 |     Process a list_models request.
182 | 
183 |     Args:
184 |         params (Dict[str, Any]): The request parameters.
185 | 
186 |     Returns:
187 |         Dict[str, Any]: The response data.
188 |     """
189 |     substring = params.get("substring", "")
190 | 
191 |     # Log the request details
192 |     logger.info(f"List Models Request: Substring: '{substring}'")
193 | 
194 |     models = list_models(substring)
195 |     logger.info(f"Found {len(models)} models matching '{substring}'")
196 | 
197 |     return {"models": models}
198 | 
199 | 
200 | def handle_request(
201 |     request: Dict[str, Any],
202 |     current_working_dir: str,
203 |     editor_model: str,
204 | ) -> Dict[str, Any]:
205 |     """
206 |     Handle incoming MCP requests according to the MCP protocol.
207 | 
208 |     Args:
209 |         request (Dict[str, Any]): The request JSON.
210 |         current_working_dir (str): The current working directory. Must be a valid git repository.
211 |         editor_model (str): The editor model to use.
212 | 
213 |     Returns:
214 |         Dict[str, Any]: The response JSON.
215 |     """
216 |     try:
217 |         # Validate current_working_dir is provided and is a git repository
218 |         if not current_working_dir:
219 |             error_msg = "Error: current_working_dir is required. Please provide a valid git repository path."
220 |             logger.error(error_msg)
221 |             return {"error": error_msg}
222 | 
223 |         # MCP protocol requires 'name' and 'parameters' fields
224 |         if "name" not in request:
225 |             logger.error("Error: Received request missing 'name' field.")
226 |             return {"error": "Missing 'name' field in request"}
227 | 
228 |         request_type = request.get("name")
229 |         params = request.get("parameters", {})
230 | 
231 |         logger.info(
232 |             f"Received request: Type='{request_type}', CWD='{current_working_dir}'"
233 |         )
234 | 
235 |         # Validate that the current_working_dir is a git repository before changing to it
236 |         is_git_repo, error_message = is_git_repository(current_working_dir)
237 |         if not is_git_repo:
238 |             error_msg = f"Error: The specified directory '{current_working_dir}' is not a valid git repository: {error_message}"
239 |             logger.error(error_msg)
240 |             return {"error": error_msg}
241 | 
242 |         # Set working directory
243 |         logger.info(f"Changing working directory to: {current_working_dir}")
244 |         os.chdir(current_working_dir)
245 | 
246 |         # Route to the appropriate handler based on request type
247 |         if request_type == "aider_ai_code":
248 |             return process_aider_ai_code_request(
249 |                 params, editor_model, current_working_dir
250 |             )
251 | 
252 |         elif request_type == "list_models":
253 |             return process_list_models_request(params)
254 | 
255 |         else:
256 |             # Unknown request type
257 |             logger.warning(f"Warning: Unknown request type received: {request_type}")
258 |             return {"error": f"Unknown request type: {request_type}"}
259 | 
260 |     except Exception as e:
261 |         # Handle any errors
262 |         logger.exception(
263 |             f"Critical Error: Unhandled exception during request processing: {str(e)}"
264 |         )
265 |         return {"error": f"Internal server error: {str(e)}"}
266 | 
267 | 
268 | async def serve(
269 |     editor_model: str = DEFAULT_EDITOR_MODEL,
 270 |     current_working_dir: Union[str, None] = None,
271 | ) -> None:
272 |     """
273 |     Start the MCP server following the Model Context Protocol.
274 | 
275 |     The server reads JSON requests from stdin and writes JSON responses to stdout.
276 |     Each request should contain a 'name' field indicating the tool to invoke, and
277 |     a 'parameters' field with the tool-specific parameters.
278 | 
279 |     Args:
280 |         editor_model (str, optional): The editor model to use. Defaults to DEFAULT_EDITOR_MODEL.
281 |         current_working_dir (str, required): The current working directory. Must be a valid git repository.
282 | 
283 |     Raises:
284 |         ValueError: If current_working_dir is not provided or is not a git repository.
285 |     """
 286 |     logger.info("Starting Aider MCP Server")
287 |     logger.info(f"Editor Model: {editor_model}")
288 | 
289 |     # Validate current_working_dir is provided
290 |     if not current_working_dir:
291 |         error_msg = "Error: current_working_dir is required. Please provide a valid git repository path."
292 |         logger.error(error_msg)
293 |         raise ValueError(error_msg)
294 | 
295 |     logger.info(f"Initial Working Directory: {current_working_dir}")
296 | 
297 |     # Validate that the current_working_dir is a git repository
298 |     is_git_repo, error_message = is_git_repository(current_working_dir)
299 |     if not is_git_repo:
300 |         error_msg = f"Error: The specified directory '{current_working_dir}' is not a valid git repository: {error_message}"
301 |         logger.error(error_msg)
302 |         raise ValueError(error_msg)
303 | 
304 |     logger.info(f"Validated git repository at: {current_working_dir}")
305 | 
306 |     # Set working directory
307 |     logger.info(f"Setting working directory to: {current_working_dir}")
308 |     os.chdir(current_working_dir)
309 | 
310 |     # Create the MCP server
311 |     server = Server("aider-mcp-server")
312 | 
313 |     @server.list_tools()
314 |     async def list_tools() -> List[Tool]:
315 |         """Register all available tools with the MCP server."""
316 |         return [AIDER_AI_CODE_TOOL, LIST_MODELS_TOOL]
317 | 
318 |     @server.call_tool()
319 |     async def call_tool(name: str, arguments: Dict[str, Any]) -> List[TextContent]:
320 |         """Handle tool calls from the MCP client."""
321 |         logger.info(f"Received Tool Call: Name='{name}'")
322 |         logger.info(f"Arguments: {arguments}")
323 | 
324 |         try:
325 |             if name == "aider_ai_code":
 326 |                 logger.info("Processing 'aider_ai_code' tool call...")
327 |                 result = process_aider_ai_code_request(
328 |                     arguments, editor_model, current_working_dir
329 |                 )
330 |                 return [TextContent(type="text", text=json.dumps(result))]
331 | 
332 |             elif name == "list_models":
 333 |                 logger.info("Processing 'list_models' tool call...")
334 |                 result = process_list_models_request(arguments)
335 |                 return [TextContent(type="text", text=json.dumps(result))]
336 | 
337 |             else:
338 |                 logger.warning(f"Warning: Received call for unknown tool: {name}")
339 |                 return [
340 |                     TextContent(
341 |                         type="text", text=json.dumps({"error": f"Unknown tool: {name}"})
342 |                     )
343 |                 ]
344 | 
345 |         except Exception as e:
346 |             logger.exception(f"Error: Exception during tool call '{name}': {e}")
347 |             return [
348 |                 TextContent(
349 |                     type="text",
350 |                     text=json.dumps(
351 |                         {"error": f"Error processing tool {name}: {str(e)}"}
352 |                     ),
353 |                 )
354 |             ]
355 | 
356 |     # Initialize and run the server
357 |     try:
358 |         options = server.create_initialization_options()
359 |         logger.info("Initializing stdio server connection...")
360 |         async with stdio_server() as (read_stream, write_stream):
361 |             logger.info("Server running. Waiting for requests...")
362 |             await server.run(read_stream, write_stream, options, raise_exceptions=True)
363 |     except Exception as e:
364 |         logger.exception(
365 |             f"Critical Error: Server stopped due to unhandled exception: {e}"
366 |         )
367 |         raise
368 |     finally:
369 |         logger.info("Aider MCP Server shutting down.")
370 | 
```
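The `handle_request` and `call_tool` functions above both reduce to the same pattern: validate the `name` field, look up a handler keyed by tool name, and return an error payload (rather than raising) for malformed or unknown requests. A minimal standalone sketch of that dispatch pattern, using invented stub handlers (`_stub_ai_code`, `_stub_list_models`) and made-up model names in place of the real aider-backed implementations:

```python
import json
from typing import Any, Callable, Dict

# Hypothetical stubs standing in for process_aider_ai_code_request and
# process_list_models_request; the real handlers call into aider.
def _stub_ai_code(params: Dict[str, Any]) -> Dict[str, Any]:
    return {"success": True, "diff": "(diff omitted)"}

def _stub_list_models(params: Dict[str, Any]) -> Dict[str, Any]:
    catalog = ["gpt-4o", "gemini/gemini-2.5-pro"]  # illustrative model names
    return {"models": [m for m in catalog if params.get("substring", "") in m]}

HANDLERS: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]] = {
    "aider_ai_code": _stub_ai_code,
    "list_models": _stub_list_models,
}

def dispatch(request: Dict[str, Any]) -> Dict[str, Any]:
    # Mirror handle_request's validation: 'name' is required, and an
    # unknown tool name yields an error dict instead of an exception.
    if "name" not in request:
        return {"error": "Missing 'name' field in request"}
    handler = HANDLERS.get(request["name"])
    if handler is None:
        return {"error": f"Unknown request type: {request['name']}"}
    return handler(request.get("parameters", {}))

print(json.dumps(dispatch({"name": "list_models",
                           "parameters": {"substring": "gemini"}})))
```

Returning structured error payloads keeps the stdio transport alive: a bad request produces a JSON response the client can surface, instead of tearing down the server loop.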

--------------------------------------------------------------------------------
/src/aider_mcp_server/tests/atoms/tools/test_aider_ai_code.py:
--------------------------------------------------------------------------------

```python
  1 | import os
  2 | import json
  3 | import tempfile
  4 | import pytest
  5 | import shutil
  6 | import subprocess
  7 | from aider_mcp_server.atoms.tools.aider_ai_code import code_with_aider
  8 | from aider_mcp_server.atoms.utils import DEFAULT_TESTING_MODEL
  9 | 
 10 | @pytest.fixture
 11 | def temp_dir():
 12 |     """Create a temporary directory with an initialized Git repository for testing."""
 13 |     tmp_dir = tempfile.mkdtemp()
 14 |     
 15 |     # Initialize git repository in the temp directory
 16 |     subprocess.run(["git", "init"], cwd=tmp_dir, capture_output=True, text=True, check=True)
 17 |     
 18 |     # Configure git user for the repository
 19 |     subprocess.run(["git", "config", "user.name", "Test User"], cwd=tmp_dir, capture_output=True, text=True, check=True)
 20 |     subprocess.run(["git", "config", "user.email", "[email protected]"], cwd=tmp_dir, capture_output=True, text=True, check=True)
 21 |     
 22 |     # Create and commit an initial file to have a valid git history
 23 |     with open(os.path.join(tmp_dir, "README.md"), "w") as f:
 24 |         f.write("# Test Repository\nThis is a test repository for Aider MCP Server tests.")
 25 |     
 26 |     subprocess.run(["git", "add", "README.md"], cwd=tmp_dir, capture_output=True, text=True, check=True)
 27 |     subprocess.run(["git", "commit", "-m", "Initial commit"], cwd=tmp_dir, capture_output=True, text=True, check=True)
 28 |     
 29 |     yield tmp_dir
 30 |     
 31 |     # Clean up
 32 |     shutil.rmtree(tmp_dir)
 33 | 
 34 | def test_addition(temp_dir):
 35 |     """Test that code_with_aider can create a file that adds two numbers."""
 36 |     # Create the test file
 37 |     test_file = os.path.join(temp_dir, "math_add.py")
 38 |     with open(test_file, "w") as f:
 39 |         f.write("# This file should implement addition\n")
 40 |     
 41 |     prompt = "Implement a function add(a, b) that returns the sum of a and b in the math_add.py file."
 42 |     
 43 |     # Run code_with_aider with working_dir
 44 |     result = code_with_aider(
 45 |         ai_coding_prompt=prompt,
 46 |         relative_editable_files=[test_file],
 47 |         working_dir=temp_dir  # Pass the temp directory as working_dir
 48 |     )
 49 |     
 50 |     # Parse the JSON result
 51 |     result_dict = json.loads(result)
 52 |     
 53 |     # Check that it succeeded
 54 |     assert result_dict["success"] is True, "Expected code_with_aider to succeed"
 55 |     assert "diff" in result_dict, "Expected diff to be in result"
 56 |     
 57 |     # Check that the file was modified correctly
 58 |     with open(test_file, "r") as f:
 59 |         content = f.read()
 60 |     
 61 |     assert any(x in content for x in ["def add(a, b):", "def add(a:"]), "Expected to find add function in the file"
 62 |     assert "return a + b" in content, "Expected to find return statement in the file"
 63 |     
 64 |     # Try to import and use the function
 65 |     import sys
 66 |     sys.path.append(temp_dir)
 67 |     from math_add import add
 68 |     assert add(2, 3) == 5, "Expected add(2, 3) to return 5"
 69 | 
 70 | def test_subtraction(temp_dir):
 71 |     """Test that code_with_aider can create a file that subtracts two numbers."""
 72 |     # Create the test file
 73 |     test_file = os.path.join(temp_dir, "math_subtract.py")
 74 |     with open(test_file, "w") as f:
 75 |         f.write("# This file should implement subtraction\n")
 76 |     
 77 |     prompt = "Implement a function subtract(a, b) that returns a minus b in the math_subtract.py file."
 78 |     
 79 |     # Run code_with_aider with working_dir
 80 |     result = code_with_aider(
 81 |         ai_coding_prompt=prompt,
 82 |         relative_editable_files=[test_file],
 83 |         working_dir=temp_dir  # Pass the temp directory as working_dir
 84 |     )
 85 |     
 86 |     # Parse the JSON result
 87 |     result_dict = json.loads(result)
 88 |     
 89 |     # Check that it succeeded
 90 |     assert result_dict["success"] is True, "Expected code_with_aider to succeed"
 91 |     assert "diff" in result_dict, "Expected diff to be in result"
 92 |     
 93 |     # Check that the file was modified correctly
 94 |     with open(test_file, "r") as f:
 95 |         content = f.read()
 96 |     
 97 |     assert any(x in content for x in ["def subtract(a, b):", "def subtract(a:"]), "Expected to find subtract function in the file"
 98 |     assert "return a - b" in content, "Expected to find return statement in the file"
 99 |     
100 |     # Try to import and use the function
101 |     import sys
102 |     sys.path.append(temp_dir)
103 |     from math_subtract import subtract
104 |     assert subtract(5, 3) == 2, "Expected subtract(5, 3) to return 2"
105 | 
106 | def test_multiplication(temp_dir):
107 |     """Test that code_with_aider can create a file that multiplies two numbers."""
108 |     # Create the test file
109 |     test_file = os.path.join(temp_dir, "math_multiply.py")
110 |     with open(test_file, "w") as f:
111 |         f.write("# This file should implement multiplication\n")
112 |     
113 |     prompt = "Implement a function multiply(a, b) that returns the product of a and b in the math_multiply.py file."
114 |     
115 |     # Run code_with_aider with working_dir
116 |     result = code_with_aider(
117 |         ai_coding_prompt=prompt,
118 |         relative_editable_files=[test_file],
119 |         working_dir=temp_dir  # Pass the temp directory as working_dir
120 |     )
121 |     
122 |     # Parse the JSON result
123 |     result_dict = json.loads(result)
124 |     
125 |     # Check that it succeeded
126 |     assert result_dict["success"] is True, "Expected code_with_aider to succeed"
127 |     assert "diff" in result_dict, "Expected diff to be in result"
128 |     
129 |     # Check that the file was modified correctly
130 |     with open(test_file, "r") as f:
131 |         content = f.read()
132 |     
133 |     assert any(x in content for x in ["def multiply(a, b):", "def multiply(a:"]), "Expected to find multiply function in the file"
134 |     assert "return a * b" in content, "Expected to find return statement in the file"
135 |     
136 |     # Try to import and use the function
137 |     import sys
138 |     sys.path.append(temp_dir)
139 |     from math_multiply import multiply
140 |     assert multiply(2, 3) == 6, "Expected multiply(2, 3) to return 6"
141 | 
142 | def test_division(temp_dir):
143 |     """Test that code_with_aider can create a file that divides two numbers."""
144 |     # Create the test file
145 |     test_file = os.path.join(temp_dir, "math_divide.py")
146 |     with open(test_file, "w") as f:
147 |         f.write("# This file should implement division\n")
148 |     
149 |     prompt = "Implement a function divide(a, b) that returns a divided by b in the math_divide.py file. Handle division by zero by returning None."
150 |     
151 |     # Run code_with_aider with working_dir
152 |     result = code_with_aider(
153 |         ai_coding_prompt=prompt,
154 |         relative_editable_files=[test_file],
155 |         working_dir=temp_dir  # Pass the temp directory as working_dir
156 |     )
157 |     
158 |     # Parse the JSON result
159 |     result_dict = json.loads(result)
160 |     
161 |     # Check that it succeeded
162 |     assert result_dict["success"] is True, "Expected code_with_aider to succeed"
163 |     assert "diff" in result_dict, "Expected diff to be in result"
164 |     
165 |     # Check that the file was modified correctly
166 |     with open(test_file, "r") as f:
167 |         content = f.read()
168 |     
169 |     assert any(x in content for x in ["def divide(a, b):", "def divide(a:"]), "Expected to find divide function in the file"
170 |     assert "return" in content, "Expected to find return statement in the file"
171 |     
172 |     # Try to import and use the function
173 |     import sys
174 |     sys.path.append(temp_dir)
175 |     from math_divide import divide
176 |     assert divide(6, 3) == 2, "Expected divide(6, 3) to return 2"
177 |     assert divide(1, 0) is None, "Expected divide(1, 0) to return None"
178 | 
179 | def test_failure_case(temp_dir):
180 |     """Test that code_with_aider returns error information for a failure scenario."""
181 |     
 182 |     original_cwd = os.getcwd()
 183 |     try:
 184 |         os.chdir(temp_dir)  # run this test from inside the temp directory
 185 | 
 186 |         # Create a test file in the temp directory
 187 |         test_file = os.path.join(temp_dir, "failure_test.py")
 188 |         with open(test_file, "w") as f:
 189 |             f.write("# This file should trigger a failure\n")
 190 | 
 191 |         # Use an invalid model name to ensure a failure
 192 |         prompt = "This prompt should fail because we're using a non-existent model."
 193 | 
 194 |         # Run code_with_aider with an invalid model name
 195 |         result = code_with_aider(
 196 |             ai_coding_prompt=prompt,
 197 |             relative_editable_files=[test_file],
 198 |             model="non_existent_model_123456789",  # This model doesn't exist
 199 |             working_dir=temp_dir  # Pass the temp directory as working_dir
 200 |         )
 201 | 
 202 |         # Parse the JSON result
 203 |         result_dict = json.loads(result)
 204 | 
 205 |         # The call is expected to fail (success=False); what matters is that
 206 |         # the diff explains the error, typically noting that no meaningful
 207 |         # changes were made because the model could not be reached or
 208 |         # produced no output.
 209 |         assert "diff" in result_dict, "Expected diff to be in result"
 210 |         diff_content = result_dict["diff"]
 211 |         assert "File contents after editing (git not used):" in diff_content or "No meaningful changes detected" in diff_content, \
 212 |                f"Expected error information like 'File contents after editing' or 'No meaningful changes' in diff, but got: {diff_content}"
 213 |     finally:
 214 |         # Restore the original working directory so later tests are unaffected
 215 |         os.chdir(original_cwd)
216 | 
217 | def test_complex_tasks(temp_dir):
218 |     """Test that code_with_aider correctly implements more complex tasks."""
219 |     # Create the test file for a calculator class
220 |     test_file = os.path.join(temp_dir, "calculator.py")
221 |     with open(test_file, "w") as f:
222 |         f.write("# This file should implement a calculator class\n")
223 |     
224 |     # More complex prompt suitable for architect mode
225 |     prompt = """
226 |     Create a Calculator class with the following features:
227 |     1. Basic operations: add, subtract, multiply, divide methods
228 |     2. Memory functions: memory_store, memory_recall, memory_clear
229 |     3. A history feature that keeps track of operations 
230 |     4. A method to show_history
231 |     5. Error handling for division by zero
232 |     
233 |     All methods should be well-documented with docstrings.
234 |     """
235 |     
236 |     # Run code_with_aider with explicit model
237 |     result = code_with_aider(
238 |         ai_coding_prompt=prompt,
239 |         relative_editable_files=[test_file],
240 |         model=DEFAULT_TESTING_MODEL,  # Main model
241 |         working_dir=temp_dir  # Pass the temp directory as working_dir
242 |     )
243 |     
244 |     # Parse the JSON result
245 |     result_dict = json.loads(result)
246 |     
247 |     # Check that it succeeded
248 |     assert result_dict["success"] is True, "Expected code_with_aider with architect mode to succeed"
249 |     assert "diff" in result_dict, "Expected diff to be in result"
250 |     
251 |     # Check that the file was modified correctly with expected elements
252 |     with open(test_file, "r") as f:
253 |         content = f.read()
254 |     
255 |     # Check for class definition and methods - relaxed assertions to accommodate type hints
256 |     assert "class Calculator" in content, "Expected to find Calculator class definition"
257 |     assert "add" in content, "Expected to find add method"
258 |     assert "subtract" in content, "Expected to find subtract method"
259 |     assert "multiply" in content, "Expected to find multiply method"
260 |     assert "divide" in content, "Expected to find divide method"
261 |     assert "memory_" in content, "Expected to find memory functions"
262 |     assert "history" in content, "Expected to find history functionality"
263 |     
264 |     # Import and test basic calculator functionality
265 |     import sys
266 |     sys.path.append(temp_dir)
267 |     from calculator import Calculator
268 |     
269 |     # Test the calculator
270 |     calc = Calculator()
271 |     
272 |     # Test basic operations
273 |     assert calc.add(2, 3) == 5, "Expected add(2, 3) to return 5"
274 |     assert calc.subtract(5, 3) == 2, "Expected subtract(5, 3) to return 2"
275 |     assert calc.multiply(2, 3) == 6, "Expected multiply(2, 3) to return 6"
276 |     assert calc.divide(6, 3) == 2, "Expected divide(6, 3) to return 2"
277 |     
278 |     # Test division by zero error handling
279 |     try:
280 |         result = calc.divide(5, 0)
281 |         assert result is None or isinstance(result, (str, type(None))), \
282 |             "Expected divide by zero to return None, error message, or raise exception"
283 |     except Exception:
284 |         # It's fine if it raises an exception - that's valid error handling too
285 |         pass
286 |     
287 |     # Test memory functions if implemented as expected
288 |     try:
289 |         calc.memory_store(10)
290 |         assert calc.memory_recall() == 10, "Expected memory_recall() to return stored value"
291 |         calc.memory_clear()
292 |         assert calc.memory_recall() == 0 or calc.memory_recall() is None, \
293 |             "Expected memory_recall() to return 0 or None after clearing"
294 |     except (AttributeError, TypeError):
295 |         # Some implementations might handle memory differently
296 |         pass
297 | 
298 | def test_diff_output(temp_dir):
299 |     """Test that code_with_aider produces proper git diff output when modifying existing files."""
300 |     # Create an initial math file
301 |     test_file = os.path.join(temp_dir, "math_operations.py")
302 |     initial_content = """# Math operations module
303 | def add(a, b):
304 |     return a + b
305 | 
306 | def subtract(a, b):
307 |     return a - b
308 | """
309 |     
310 |     with open(test_file, "w") as f:
311 |         f.write(initial_content)
312 |     
313 |     # Commit the initial file to git
314 |     subprocess.run(["git", "add", "math_operations.py"], cwd=temp_dir, capture_output=True, text=True, check=True)
315 |     subprocess.run(["git", "commit", "-m", "Add initial math operations"], cwd=temp_dir, capture_output=True, text=True, check=True)
316 |     
317 |     # Now modify the file using Aider
318 |     prompt = "Add a multiply function that takes two parameters and returns their product. Also add a docstring to the existing add function."
319 |     
320 |     result = code_with_aider(
321 |         ai_coding_prompt=prompt,
322 |         relative_editable_files=["math_operations.py"],
323 |         model=DEFAULT_TESTING_MODEL,
324 |         working_dir=temp_dir
325 |     )
326 |     
327 |     # Parse the JSON result
328 |     result_dict = json.loads(result)
329 |     
330 |     # Check that it succeeded
331 |     assert result_dict["success"] is True, "Expected code_with_aider to succeed"
332 |     assert "diff" in result_dict, "Expected diff to be in result"
333 |     
334 |     # Verify the diff contains expected git diff markers
335 |     diff_content = result_dict["diff"]
336 |     assert "diff --git" in diff_content, "Expected git diff header in diff output"
337 |     assert "@@" in diff_content, "Expected hunk headers (@@) in diff output"
338 |     assert "+++ b/math_operations.py" in diff_content, "Expected new file marker in diff"
339 |     assert "--- a/math_operations.py" in diff_content, "Expected old file marker in diff"
340 |     
341 |     # Verify the diff shows additions (lines starting with +)
342 |     diff_lines = diff_content.split('\n')
343 |     added_lines = [line for line in diff_lines if line.startswith('+') and not line.startswith('+++')]
344 |     assert len(added_lines) > 0, "Expected to find added lines in diff"
345 |     
346 |     # Check that multiply function was actually added to the file
347 |     with open(test_file, "r") as f:
348 |         final_content = f.read()
349 |     
350 |     assert "def multiply" in final_content, "Expected multiply function to be added"
351 |     assert "docstring" in final_content.lower() or '"""' in final_content, "Expected docstring to be added"
352 | 
```
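The assertions in `test_diff_output` rely on unified-diff conventions: each added line starts with a single `+`, but the `+++ b/...` new-file header also starts with `+` and must be excluded, which is why the test filters on `startswith('+')` and not `startswith('+++')`. A small self-contained sketch of that filter, run against a hypothetical sample diff:

```python
from typing import List

# Invented sample diff for illustration, shaped like the output that
# test_diff_output expects from code_with_aider.
SAMPLE_DIFF = """\
diff --git a/math_operations.py b/math_operations.py
--- a/math_operations.py
+++ b/math_operations.py
@@ -1,3 +1,6 @@
 # Math operations module
 def add(a, b):
     return a + b
+
+def multiply(a, b):
+    return a * b
"""

def added_lines(diff_text: str) -> List[str]:
    # A single '+' prefix marks added content; '+++' is the new-file
    # header, not a content line, so it is filtered out.
    return [line[1:] for line in diff_text.splitlines()
            if line.startswith("+") and not line.startswith("+++")]

print(added_lines(SAMPLE_DIFF))
# → ['', 'def multiply(a, b):', '    return a * b']
```

The same convention applies to removed lines (`-` vs. the `--- a/...` header) if a test ever needs to assert on deletions.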