This is page 3 of 5. Use http://codebase.md/modelcontextprotocol/servers?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .gitattributes
├── .github
│   ├── pull_request_template.md
│   └── workflows
│       ├── claude.yml
│       ├── python.yml
│       ├── release.yml
│       └── typescript.yml
├── .gitignore
├── .npmrc
├── .vscode
│   └── settings.json
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── scripts
│   └── release.py
├── SECURITY.md
├── src
│   ├── everything
│   │   ├── CLAUDE.md
│   │   ├── Dockerfile
│   │   ├── everything.ts
│   │   ├── index.ts
│   │   ├── instructions.md
│   │   ├── package.json
│   │   ├── README.md
│   │   ├── sse.ts
│   │   ├── stdio.ts
│   │   ├── streamableHttp.ts
│   │   └── tsconfig.json
│   ├── fetch
│   │   ├── .python-version
│   │   ├── Dockerfile
│   │   ├── LICENSE
│   │   ├── pyproject.toml
│   │   ├── README.md
│   │   ├── src
│   │   │   └── mcp_server_fetch
│   │   │       ├── __init__.py
│   │   │       ├── __main__.py
│   │   │       └── server.py
│   │   └── uv.lock
│   ├── filesystem
│   │   ├── __tests__
│   │   │   ├── directory-tree.test.ts
│   │   │   ├── lib.test.ts
│   │   │   ├── path-utils.test.ts
│   │   │   ├── path-validation.test.ts
│   │   │   └── roots-utils.test.ts
│   │   ├── Dockerfile
│   │   ├── index.ts
│   │   ├── jest.config.cjs
│   │   ├── lib.ts
│   │   ├── package.json
│   │   ├── path-utils.ts
│   │   ├── path-validation.ts
│   │   ├── README.md
│   │   ├── roots-utils.ts
│   │   └── tsconfig.json
│   ├── git
│   │   ├── .gitignore
│   │   ├── .python-version
│   │   ├── Dockerfile
│   │   ├── LICENSE
│   │   ├── pyproject.toml
│   │   ├── README.md
│   │   ├── src
│   │   │   └── mcp_server_git
│   │   │       ├── __init__.py
│   │   │       ├── __main__.py
│   │   │       ├── py.typed
│   │   │       └── server.py
│   │   ├── tests
│   │   │   └── test_server.py
│   │   └── uv.lock
│   ├── memory
│   │   ├── Dockerfile
│   │   ├── index.ts
│   │   ├── package.json
│   │   ├── README.md
│   │   └── tsconfig.json
│   ├── sequentialthinking
│   │   ├── Dockerfile
│   │   ├── index.ts
│   │   ├── package.json
│   │   ├── README.md
│   │   └── tsconfig.json
│   └── time
│       ├── .python-version
│       ├── Dockerfile
│       ├── pyproject.toml
│       ├── README.md
│       ├── src
│       │   └── mcp_server_time
│       │       ├── __init__.py
│       │       ├── __main__.py
│       │       └── server.py
│       ├── test
│       │   └── time_server_test.py
│       └── uv.lock
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/SECURITY.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Security Policy
 2 | Thank you for helping us keep our MCP servers secure.
 3 | 
 4 | The **reference servers** in this repo are maintained by [Anthropic](https://www.anthropic.com/) as part of the Model Context Protocol project.
 5 | 
 6 | The security of our systems and user data is Anthropic’s top priority. We appreciate the work of security researchers acting in good faith in identifying and reporting potential vulnerabilities.
 7 | 
 8 | ## Vulnerability Disclosure Program
 9 | 
10 | Our Vulnerability Disclosure Program guidelines are defined on our [HackerOne program page](https://hackerone.com/anthropic-vdp). We ask that any validated vulnerability in these reference servers be reported through the [submission form](https://hackerone.com/anthropic-vdp/reports/new?type=team&report_type=vulnerability).
11 | 
```

--------------------------------------------------------------------------------
/src/everything/CLAUDE.md:
--------------------------------------------------------------------------------

```markdown
 1 | # MCP "Everything" Server - Development Guidelines
 2 | 
 3 | ## Build, Test & Run Commands
 4 | - Build: `npm run build` - Compiles TypeScript to JavaScript
 5 | - Watch mode: `npm run watch` - Watches for changes and rebuilds automatically
 6 | - Run server: `npm run start` - Starts the MCP server using stdio transport
 7 | - Run SSE server: `npm run start:sse` - Starts the MCP server with SSE transport
 8 | - Prepare release: `npm run prepare` - Builds the project for publishing
 9 | 
10 | ## Code Style Guidelines
11 | - Use ES modules with `.js` extension in import paths
12 | - Strictly type all functions and variables with TypeScript
13 | - Follow zod schema patterns for tool input validation
14 | - Prefer async/await over callbacks and Promise chains
15 | - Place all imports at top of file, grouped by external then internal
16 | - Use descriptive variable names that clearly indicate purpose
17 | - Implement proper cleanup for timers and resources in server shutdown
18 | - Follow camelCase for variables/functions, PascalCase for types/classes, UPPER_CASE for constants
19 | - Handle errors with try/catch blocks and provide clear error messages
20 | - Use consistent indentation (2 spaces) and trailing commas in multi-line objects
```
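
To make these guidelines concrete, here is a minimal sketch of a tool definition that follows them (zod input validation, strict typing, grouped imports, clear error messages). The `echo` tool and its schema are illustrative, not the server's actual code; `zod` and `zod-to-json-schema` are the dependencies the guidelines refer to:

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// Hypothetical tool following the style guidelines above.
const EchoSchema = z.object({
  message: z.string().describe("Message to echo back"),
});

export const ECHO_TOOL = {
  name: "echo",
  description: "Echoes back the provided message",
  inputSchema: zodToJsonSchema(EchoSchema),
};

export async function handleEcho(args: unknown): Promise<{ content: { type: "text"; text: string }[] }> {
  try {
    // zod validates and narrows the untyped arguments
    const { message } = EchoSchema.parse(args);
    return { content: [{ type: "text", text: `Echo: ${message}` }] };
  } catch (error) {
    throw new Error(`Invalid arguments: ${error instanceof Error ? error.message : String(error)}`);
  }
}
```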

--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Contributing to MCP Servers
 2 | 
 3 | Thanks for your interest in contributing! Here's how you can help make this repo better.
 4 | 
 5 | We accept changes through [the standard GitHub flow model](https://docs.github.com/en/get-started/using-github/github-flow).
 6 | 
 7 | ## Server Listings
 8 | 
 9 | We welcome PRs that add links to your servers in the [README.md](./README.md)!
10 | 
11 | ## Server Implementations
12 | 
13 | We welcome:
14 | - **Bug fixes** — Help us squash those pesky bugs.
15 | - **Usability improvements** — Making servers easier to use for humans and agents.
16 | - **Enhancements that demonstrate MCP protocol features** — We encourage contributions that help reference servers better illustrate underutilized aspects of the MCP protocol beyond just Tools, such as Resources, Prompts, or Roots. For example, adding Roots support to filesystem-server helps showcase this important but lesser-known feature.
17 | 
18 | We're more selective about:
19 | - **Other new features** — Especially if they're not crucial to the server's core purpose or are highly opinionated. The existing servers are reference servers meant to inspire the community. If you need specific features, we encourage you to build enhanced versions! We think a diverse ecosystem of servers is beneficial for everyone, and would love to link to your improved server in our README.
20 | 
21 | We don't accept:
22 | - **New server implementations** — We encourage you to publish them yourself, and link to them from the README.
23 | 
24 | ## Documentation
25 | 
26 | Improvements to existing documentation are welcome - although generally we'd prefer ergonomic improvements over documenting pain points where possible!
27 | 
28 | We're more selective about adding wholly new documentation, especially in ways that aren't vendor neutral (e.g. how to run a particular server with a particular client).
29 | 
30 | ## Community
31 | 
32 | [Learn how the MCP community communicates](https://modelcontextprotocol.io/community/communication).
33 | 
34 | Thank you for helping make MCP servers better for everyone!
```

--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Contributor Covenant Code of Conduct
  2 | 
  3 | ## Our Pledge
  4 | 
  5 | We as members, contributors, and leaders pledge to make participation in our
  6 | community a harassment-free experience for everyone, regardless of age, body
  7 | size, visible or invisible disability, ethnicity, sex characteristics, gender
  8 | identity and expression, level of experience, education, socio-economic status,
  9 | nationality, personal appearance, race, religion, or sexual identity
 10 | and orientation.
 11 | 
 12 | We pledge to act and interact in ways that contribute to an open, welcoming,
 13 | diverse, inclusive, and healthy community.
 14 | 
 15 | ## Our Standards
 16 | 
 17 | Examples of behavior that contributes to a positive environment for our
 18 | community include:
 19 | 
 20 | * Demonstrating empathy and kindness toward other people
 21 | * Being respectful of differing opinions, viewpoints, and experiences
 22 | * Giving and gracefully accepting constructive feedback
 23 | * Accepting responsibility and apologizing to those affected by our mistakes,
 24 |   and learning from the experience
 25 | * Focusing on what is best not just for us as individuals, but for the
 26 |   overall community
 27 | 
 28 | Examples of unacceptable behavior include:
 29 | 
 30 | * The use of sexualized language or imagery, and sexual attention or
 31 |   advances of any kind
 32 | * Trolling, insulting or derogatory comments, and personal or political attacks
 33 | * Public or private harassment
 34 | * Publishing others' private information, such as a physical or email
 35 |   address, without their explicit permission
 36 | * Other conduct which could reasonably be considered inappropriate in a
 37 |   professional setting
 38 | 
 39 | ## Enforcement Responsibilities
 40 | 
 41 | Community leaders are responsible for clarifying and enforcing our standards of
 42 | acceptable behavior and will take appropriate and fair corrective action in
 43 | response to any behavior that they deem inappropriate, threatening, offensive,
 44 | or harmful.
 45 | 
 46 | Community leaders have the right and responsibility to remove, edit, or reject
 47 | comments, commits, code, wiki edits, issues, and other contributions that are
 48 | not aligned to this Code of Conduct, and will communicate reasons for moderation
 49 | decisions when appropriate.
 50 | 
 51 | ## Scope
 52 | 
 53 | This Code of Conduct applies within all community spaces, and also applies when
 54 | an individual is officially representing the community in public spaces.
 55 | Examples of representing our community include using an official e-mail address,
 56 | posting via an official social media account, or acting as an appointed
 57 | representative at an online or offline event.
 58 | 
 59 | ## Enforcement
 60 | 
 61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be
 62 | reported to the community leaders responsible for enforcement at
 63 | [email protected].
 64 | All complaints will be reviewed and investigated promptly and fairly.
 65 | 
 66 | All community leaders are obligated to respect the privacy and security of the
 67 | reporter of any incident.
 68 | 
 69 | ## Enforcement Guidelines
 70 | 
 71 | Community leaders will follow these Community Impact Guidelines in determining
 72 | the consequences for any action they deem in violation of this Code of Conduct:
 73 | 
 74 | ### 1. Correction
 75 | 
 76 | **Community Impact**: Use of inappropriate language or other behavior deemed
 77 | unprofessional or unwelcome in the community.
 78 | 
 79 | **Consequence**: A private, written warning from community leaders, providing
 80 | clarity around the nature of the violation and an explanation of why the
 81 | behavior was inappropriate. A public apology may be requested.
 82 | 
 83 | ### 2. Warning
 84 | 
 85 | **Community Impact**: A violation through a single incident or series
 86 | of actions.
 87 | 
 88 | **Consequence**: A warning with consequences for continued behavior. No
 89 | interaction with the people involved, including unsolicited interaction with
 90 | those enforcing the Code of Conduct, for a specified period of time. This
 91 | includes avoiding interactions in community spaces as well as external channels
 92 | like social media. Violating these terms may lead to a temporary or
 93 | permanent ban.
 94 | 
 95 | ### 3. Temporary Ban
 96 | 
 97 | **Community Impact**: A serious violation of community standards, including
 98 | sustained inappropriate behavior.
 99 | 
100 | **Consequence**: A temporary ban from any sort of interaction or public
101 | communication with the community for a specified period of time. No public or
102 | private interaction with the people involved, including unsolicited interaction
103 | with those enforcing the Code of Conduct, is allowed during this period.
104 | Violating these terms may lead to a permanent ban.
105 | 
106 | ### 4. Permanent Ban
107 | 
108 | **Community Impact**: Demonstrating a pattern of violation of community
109 | standards, including sustained inappropriate behavior, harassment of an
110 | individual, or aggression toward or disparagement of classes of individuals.
111 | 
112 | **Consequence**: A permanent ban from any sort of public interaction within
113 | the community.
114 | 
115 | ## Attribution
116 | 
117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage],
118 | version 2.0, available at
119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
120 | 
121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct
122 | enforcement ladder](https://github.com/mozilla/diversity).
123 | 
124 | [homepage]: https://www.contributor-covenant.org
125 | 
126 | For answers to common questions about this code of conduct, see the FAQ at
127 | https://www.contributor-covenant.org/faq. Translations are available at
128 | https://www.contributor-covenant.org/translations.
129 | 
```

--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------

```json
1 | {}
```

--------------------------------------------------------------------------------
/src/time/src/mcp_server_time/__main__.py:
--------------------------------------------------------------------------------

```python
1 | from mcp_server_time import main
2 | 
3 | main()
4 | 
```

--------------------------------------------------------------------------------
/src/git/src/mcp_server_git/__main__.py:
--------------------------------------------------------------------------------

```python
1 | # __main__.py
2 | 
3 | from mcp_server_git import main
4 | 
5 | main()
6 | 
```

--------------------------------------------------------------------------------
/src/fetch/src/mcp_server_fetch/__main__.py:
--------------------------------------------------------------------------------

```python
1 | # __main__.py
2 | 
3 | from mcp_server_fetch import main
4 | 
5 | main()
6 | 
```

--------------------------------------------------------------------------------
/src/everything/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "extends": "../../tsconfig.json",
 3 |   "compilerOptions": {
 4 |     "outDir": "./dist",
 5 |     "rootDir": "."
 6 |   },
 7 |   "include": [
 8 |     "./**/*.ts"
 9 |   ]
10 | }
11 | 
```

--------------------------------------------------------------------------------
/src/memory/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "extends": "../../tsconfig.json",
 3 |   "compilerOptions": {
 4 |     "outDir": "./dist",
 5 |     "rootDir": "."
 6 |   },
 7 |   "include": [
 8 |     "./**/*.ts"
 9 |   ]
10 | }
11 | 
```

--------------------------------------------------------------------------------
/src/sequentialthinking/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "extends": "../../tsconfig.json",
 3 |   "compilerOptions": {
 4 |     "outDir": "./dist",
 5 |     "rootDir": ".",
 6 |     "moduleResolution": "NodeNext",
 7 |     "module": "NodeNext"
 8 |   },
 9 |   "include": ["./**/*.ts"]
10 | }
11 | 
```

--------------------------------------------------------------------------------
/src/filesystem/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "extends": "../../tsconfig.json",
 3 |   "compilerOptions": {
 4 |     "outDir": "./dist",
 5 |     "rootDir": ".",
 6 |     "moduleResolution": "NodeNext",
 7 |     "module": "NodeNext"
 8 |   },
 9 |   "include": [
10 |     "./**/*.ts"
11 |   ]
12 | }
13 | 
```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "compilerOptions": {
 3 |     "target": "ES2022",
 4 |     "module": "Node16",
 5 |     "moduleResolution": "Node16",
 6 |     "strict": true,
 7 |     "esModuleInterop": true,
 8 |     "skipLibCheck": true,
 9 |     "forceConsistentCasingInFileNames": true,
10 |     "resolveJsonModule": true
11 |   },
12 |   "include": ["src/**/*"],
13 |   "exclude": ["node_modules"]
14 | }
15 | 
```

--------------------------------------------------------------------------------
/src/filesystem/jest.config.cjs:
--------------------------------------------------------------------------------

```
 1 | /** @type {import('ts-jest').JestConfigWithTsJest} */
 2 | module.exports = {
 3 |   preset: 'ts-jest',
 4 |   testEnvironment: 'node',
 5 |   extensionsToTreatAsEsm: ['.ts'],
 6 |   moduleNameMapper: {
 7 |     '^(\\.{1,2}/.*)\\.js$': '$1',
 8 |   },
 9 |   transform: {
10 |     '^.+\\.tsx?$': [
11 |       'ts-jest',
12 |       {
13 |         useESM: true,
14 |       },
15 |     ],
16 |   },
17 |   testMatch: ['**/__tests__/**/*.test.ts'],
18 |   collectCoverageFrom: [
19 |     '**/*.ts',
20 |     '!**/__tests__/**',
21 |     '!**/dist/**',
22 |   ],
23 | }
24 | 
```
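
The `moduleNameMapper` entry above is what lets Jest run this ESM TypeScript package: source files import relative modules with a `.js` extension (matching the compiled output), and the mapper strips that extension so ts-jest can resolve the `.ts` source instead. A sketch of the effect (the imported name is illustrative):

```typescript
// A test file imports the compiled name, as the ESM sources do:
import { someHelper } from '../lib.js';
// moduleNameMapper rewrites '../lib.js' to '../lib', so ts-jest resolves
// and compiles ../lib.ts on the fly instead of looking for built output.
```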

--------------------------------------------------------------------------------
/src/everything/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | FROM node:22.12-alpine AS builder
 2 | 
 3 | COPY src/everything /app
 4 | COPY tsconfig.json /tsconfig.json
 5 | 
 6 | WORKDIR /app
 7 | 
 8 | RUN --mount=type=cache,target=/root/.npm npm install
 9 | 
10 | FROM node:22-alpine AS release
11 | 
12 | WORKDIR /app
13 | 
14 | COPY --from=builder /app/dist /app/dist
15 | COPY --from=builder /app/package.json /app/package.json
16 | COPY --from=builder /app/package-lock.json /app/package-lock.json
17 | 
18 | ENV NODE_ENV=production
19 | 
20 | RUN npm ci --ignore-scripts --omit=dev
21 | 
22 | CMD ["node", "dist/index.js"]
```

--------------------------------------------------------------------------------
/src/time/src/mcp_server_time/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | from .server import serve
 2 | 
 3 | 
 4 | def main():
 5 |     """MCP Time Server - Time and timezone conversion functionality for MCP"""
 6 |     import argparse
 7 |     import asyncio
 8 | 
 9 |     parser = argparse.ArgumentParser(
10 |         description="give a model the ability to handle time queries and timezone conversions"
11 |     )
12 |     parser.add_argument("--local-timezone", type=str, help="Override local timezone")
13 | 
14 |     args = parser.parse_args()
15 |     asyncio.run(serve(args.local_timezone))
16 | 
17 | 
18 | if __name__ == "__main__":
19 |     main()
20 | 
```

--------------------------------------------------------------------------------
/src/memory/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | FROM node:22.12-alpine AS builder
 2 | 
 3 | COPY src/memory /app
 4 | COPY tsconfig.json /tsconfig.json
 5 | 
 6 | WORKDIR /app
 7 | 
 8 | RUN --mount=type=cache,target=/root/.npm npm install
 9 | 
10 | RUN --mount=type=cache,target=/root/.npm-production npm ci --ignore-scripts --omit=dev
11 | 
12 | FROM node:22-alpine AS release
13 | 
14 | COPY --from=builder /app/dist /app/dist
15 | COPY --from=builder /app/package.json /app/package.json
16 | COPY --from=builder /app/package-lock.json /app/package-lock.json
17 | 
18 | ENV NODE_ENV=production
19 | 
20 | WORKDIR /app
21 | 
22 | RUN npm ci --ignore-scripts --omit=dev
23 | 
24 | ENTRYPOINT ["node", "dist/index.js"]
```

--------------------------------------------------------------------------------
/src/filesystem/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | FROM node:22.12-alpine AS builder
 2 | 
 3 | WORKDIR /app
 4 | 
 5 | COPY src/filesystem /app
 6 | COPY tsconfig.json /tsconfig.json
 7 | 
 8 | RUN --mount=type=cache,target=/root/.npm npm install
 9 | 
10 | RUN --mount=type=cache,target=/root/.npm-production npm ci --ignore-scripts --omit=dev
11 | 
12 | 
13 | FROM node:22-alpine AS release
14 | 
15 | WORKDIR /app
16 | 
17 | COPY --from=builder /app/dist /app/dist
18 | COPY --from=builder /app/package.json /app/package.json
19 | COPY --from=builder /app/package-lock.json /app/package-lock.json
20 | 
21 | ENV NODE_ENV=production
22 | 
23 | RUN npm ci --ignore-scripts --omit=dev
24 | 
25 | ENTRYPOINT ["node", "/app/dist/index.js"]
```

--------------------------------------------------------------------------------
/src/sequentialthinking/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | FROM node:22.12-alpine AS builder
 2 | 
 3 | COPY src/sequentialthinking /app
 4 | COPY tsconfig.json /tsconfig.json
 5 | 
 6 | WORKDIR /app
 7 | 
 8 | RUN --mount=type=cache,target=/root/.npm npm install
 9 | 
10 | RUN --mount=type=cache,target=/root/.npm-production npm ci --ignore-scripts --omit=dev
11 | 
12 | FROM node:22-alpine AS release
13 | 
14 | COPY --from=builder /app/dist /app/dist
15 | COPY --from=builder /app/package.json /app/package.json
16 | COPY --from=builder /app/package-lock.json /app/package-lock.json
17 | 
18 | ENV NODE_ENV=production
19 | 
20 | WORKDIR /app
21 | 
22 | RUN npm ci --ignore-scripts --omit=dev
23 | 
24 | ENTRYPOINT ["node", "dist/index.js"]
25 | 
```

--------------------------------------------------------------------------------
/src/git/src/mcp_server_git/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | import click
 2 | from pathlib import Path
 3 | import logging
 4 | import sys
 5 | from .server import serve
 6 | 
 7 | @click.command()
 8 | @click.option("--repository", "-r", type=Path, help="Git repository path")
 9 | @click.option("-v", "--verbose", count=True)
10 | def main(repository: Path | None, verbose: int) -> None:
11 |     """MCP Git Server - Git functionality for MCP"""
12 |     import asyncio
13 | 
14 |     logging_level = logging.WARN
15 |     if verbose == 1:
16 |         logging_level = logging.INFO
17 |     elif verbose >= 2:
18 |         logging_level = logging.DEBUG
19 | 
20 |     logging.basicConfig(level=logging_level, stream=sys.stderr)
21 |     asyncio.run(serve(repository))
22 | 
23 | if __name__ == "__main__":
24 |     main()
25 | 
```

--------------------------------------------------------------------------------
/src/everything/stdio.ts:
--------------------------------------------------------------------------------

```typescript
 1 | #!/usr/bin/env node
 2 | 
 3 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
 4 | import { createServer } from "./everything.js";
 5 | 
 6 | console.error('Starting default (STDIO) server...');
 7 | 
 8 | async function main() {
 9 |     const transport = new StdioServerTransport();
10 |     const {server, cleanup, startNotificationIntervals} = createServer();
11 | 
12 |     await server.connect(transport);
13 |     startNotificationIntervals();
14 | 
15 |     // Cleanup on exit
16 |     process.on("SIGINT", async () => {
17 |         await cleanup();
18 |         await server.close();
19 |         process.exit(0);
20 |     });
21 | }
22 | 
23 | main().catch((error) => {
24 |   console.error("Server error:", error);
25 |   process.exit(1);
26 | });
27 | 
28 | 
```

--------------------------------------------------------------------------------
/src/fetch/src/mcp_server_fetch/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | from .server import serve
 2 | 
 3 | 
 4 | def main():
 5 |     """MCP Fetch Server - HTTP fetching functionality for MCP"""
 6 |     import argparse
 7 |     import asyncio
 8 | 
 9 |     parser = argparse.ArgumentParser(
10 |         description="give a model the ability to make web requests"
11 |     )
12 |     parser.add_argument("--user-agent", type=str, help="Custom User-Agent string")
13 |     parser.add_argument(
14 |         "--ignore-robots-txt",
15 |         action="store_true",
16 |         help="Ignore robots.txt restrictions",
17 |     )
18 |     parser.add_argument("--proxy-url", type=str, help="Proxy URL to use for requests")
19 | 
20 |     args = parser.parse_args()
21 |     asyncio.run(serve(args.user_agent, args.ignore_robots_txt, args.proxy_url))
22 | 
23 | 
24 | if __name__ == "__main__":
25 |     main()
26 | 
```

--------------------------------------------------------------------------------
/src/memory/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "@modelcontextprotocol/server-memory",
 3 |   "version": "0.6.3",
 4 |   "description": "MCP server for enabling memory for Claude through a knowledge graph",
 5 |   "license": "MIT",
 6 |   "author": "Anthropic, PBC (https://anthropic.com)",
 7 |   "homepage": "https://modelcontextprotocol.io",
 8 |   "bugs": "https://github.com/modelcontextprotocol/servers/issues",
 9 |   "type": "module",
10 |   "bin": {
11 |     "mcp-server-memory": "dist/index.js"
12 |   },
13 |   "files": [
14 |     "dist"
15 |   ],
16 |   "scripts": {
17 |     "build": "tsc && shx chmod +x dist/*.js",
18 |     "prepare": "npm run build",
19 |     "watch": "tsc --watch"
20 |   },
21 |   "dependencies": {
22 |     "@modelcontextprotocol/sdk": "1.0.1"
23 |   },
24 |   "devDependencies": {
25 |     "@types/node": "^22",
26 |     "shx": "^0.3.4",
27 |     "typescript": "^5.6.2"
28 |   }
29 | }
```

--------------------------------------------------------------------------------
/src/sequentialthinking/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "@modelcontextprotocol/server-sequential-thinking",
 3 |   "version": "0.6.2",
 4 |   "description": "MCP server for sequential thinking and problem solving",
 5 |   "license": "MIT",
 6 |   "author": "Anthropic, PBC (https://anthropic.com)",
 7 |   "homepage": "https://modelcontextprotocol.io",
 8 |   "bugs": "https://github.com/modelcontextprotocol/servers/issues",
 9 |   "type": "module",
10 |   "bin": {
11 |     "mcp-server-sequential-thinking": "dist/index.js"
12 |   },
13 |   "files": [
14 |     "dist"
15 |   ],
16 |   "scripts": {
17 |     "build": "tsc && shx chmod +x dist/*.js",
18 |     "prepare": "npm run build",
19 |     "watch": "tsc --watch"
20 |   },
21 |   "dependencies": {
22 |     "@modelcontextprotocol/sdk": "0.5.0",
23 |     "chalk": "^5.3.0",
24 |     "yargs": "^17.7.2"
25 |   },
26 |   "devDependencies": {
27 |     "@types/node": "^22",
28 |     "@types/yargs": "^17.0.32",
29 |     "shx": "^0.3.4",
30 |     "typescript": "^5.3.3"
31 |   }
32 | }
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "@modelcontextprotocol/servers",
 3 |   "private": true,
 4 |   "version": "0.6.2",
 5 |   "description": "Model Context Protocol servers",
 6 |   "license": "MIT",
 7 |   "author": "Anthropic, PBC (https://anthropic.com)",
 8 |   "homepage": "https://modelcontextprotocol.io",
 9 |   "bugs": "https://github.com/modelcontextprotocol/servers/issues",
10 |   "type": "module",
11 |   "workspaces": [
12 |     "src/*"
13 |   ],
14 |   "files": [],
15 |   "scripts": {
16 |     "build": "npm run build --workspaces",
17 |     "watch": "npm run watch --workspaces",
18 |     "publish-all": "npm publish --workspaces --access public",
19 |     "link-all": "npm link --workspaces"
20 |   },
21 |   "dependencies": {
22 |     "@modelcontextprotocol/server-everything": "*",
23 |     "@modelcontextprotocol/server-memory": "*",
24 |     "@modelcontextprotocol/server-filesystem": "*",
25 |     "@modelcontextprotocol/server-sequential-thinking": "*"
26 |   }
27 | }
28 | 
```

--------------------------------------------------------------------------------
/src/time/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "mcp-server-time"
 3 | version = "0.6.2"
 4 | description = "A Model Context Protocol server providing tools for time queries and timezone conversions for LLMs"
 5 | readme = "README.md"
 6 | requires-python = ">=3.10"
 7 | authors = [
 8 |     { name = "Mariusz 'maledorak' Korzekwa", email = "[email protected]" },
 9 | ]
10 | keywords = ["time", "timezone", "mcp", "llm"]
11 | license = { text = "MIT" }
12 | classifiers = [
13 |     "Development Status :: 4 - Beta",
14 |     "Intended Audience :: Developers",
15 |     "License :: OSI Approved :: MIT License",
16 |     "Programming Language :: Python :: 3",
17 |     "Programming Language :: Python :: 3.10",
18 | ]
19 | dependencies = [
20 |     "mcp>=1.0.0",
21 |     "pydantic>=2.0.0",
22 |     "tzdata>=2024.2",
23 |     "tzlocal>=5.3.1"
24 | ]
25 | 
26 | [project.scripts]
27 | mcp-server-time = "mcp_server_time:main"
28 | 
29 | [build-system]
30 | requires = ["hatchling"]
31 | build-backend = "hatchling.build"
32 | 
33 | [tool.uv]
34 | dev-dependencies = [
35 |     "freezegun>=1.5.1",
36 |     "pyright>=1.1.389",
37 |     "pytest>=8.3.3",
38 |     "ruff>=0.8.1",
39 | ]
40 | 
```

--------------------------------------------------------------------------------
/src/fetch/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "mcp-server-fetch"
 3 | version = "0.6.3"
 4 | description = "A Model Context Protocol server providing tools to fetch and convert web content for usage by LLMs"
 5 | readme = "README.md"
 6 | requires-python = ">=3.10"
 7 | authors = [{ name = "Anthropic, PBC." }]
 8 | maintainers = [{ name = "Jack Adamson", email = "[email protected]" }]
 9 | keywords = ["http", "mcp", "llm", "automation"]
10 | license = { text = "MIT" }
11 | classifiers = [
12 |     "Development Status :: 4 - Beta",
13 |     "Intended Audience :: Developers",
14 |     "License :: OSI Approved :: MIT License",
15 |     "Programming Language :: Python :: 3",
16 |     "Programming Language :: Python :: 3.10",
17 | ]
18 | dependencies = [
19 |     "httpx<0.28",
20 |     "markdownify>=0.13.1",
21 |     "mcp>=1.1.3",
22 |     "protego>=0.3.1",
23 |     "pydantic>=2.0.0",
24 |     "readabilipy>=0.2.0",
25 |     "requests>=2.32.3",
26 | ]
27 | 
28 | [project.scripts]
29 | mcp-server-fetch = "mcp_server_fetch:main"
30 | 
31 | [build-system]
32 | requires = ["hatchling"]
33 | build-backend = "hatchling.build"
34 | 
35 | [tool.uv]
36 | dev-dependencies = ["pyright>=1.1.389", "ruff>=0.7.3"]
37 | 
```

--------------------------------------------------------------------------------
/src/everything/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "@modelcontextprotocol/server-everything",
 3 |   "version": "0.6.2",
 4 |   "description": "MCP server that exercises all the features of the MCP protocol",
 5 |   "license": "MIT",
 6 |   "author": "Anthropic, PBC (https://anthropic.com)",
 7 |   "homepage": "https://modelcontextprotocol.io",
 8 |   "bugs": "https://github.com/modelcontextprotocol/servers/issues",
 9 |   "type": "module",
10 |   "bin": {
11 |     "mcp-server-everything": "dist/index.js"
12 |   },
13 |   "files": [
14 |     "dist"
15 |   ],
16 |   "scripts": {
17 |     "build": "tsc && shx cp instructions.md dist/ && shx chmod +x dist/*.js",
18 |     "prepare": "npm run build",
19 |     "watch": "tsc --watch",
20 |     "start": "node dist/index.js",
21 |     "start:sse": "node dist/sse.js",
22 |     "start:streamableHttp": "node dist/streamableHttp.js"
23 |   },
24 |   "dependencies": {
25 |     "@modelcontextprotocol/sdk": "^1.18.0",
26 |     "cors": "^2.8.5",
27 |     "express": "^4.21.1",
28 |     "jszip": "^3.10.1",
29 |     "zod": "^3.23.8",
30 |     "zod-to-json-schema": "^3.23.5"
31 |   },
32 |   "devDependencies": {
33 |     "@types/cors": "^2.8.19",
34 |     "@types/express": "^5.0.0",
35 |     "shx": "^0.3.4",
36 |     "typescript": "^5.6.2"
37 |   }
38 | }
39 | 
```

--------------------------------------------------------------------------------
/src/filesystem/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "@modelcontextprotocol/server-filesystem",
 3 |   "version": "0.6.3",
 4 |   "description": "MCP server for filesystem access",
 5 |   "license": "MIT",
 6 |   "author": "Anthropic, PBC (https://anthropic.com)",
 7 |   "homepage": "https://modelcontextprotocol.io",
 8 |   "bugs": "https://github.com/modelcontextprotocol/servers/issues",
 9 |   "type": "module",
10 |   "bin": {
11 |     "mcp-server-filesystem": "dist/index.js"
12 |   },
13 |   "files": [
14 |     "dist"
15 |   ],
16 |   "scripts": {
17 |     "build": "tsc && shx chmod +x dist/*.js",
18 |     "prepare": "npm run build",
19 |     "watch": "tsc --watch",
20 |     "test": "jest --config=jest.config.cjs --coverage"
21 |   },
22 |   "dependencies": {
23 |     "@modelcontextprotocol/sdk": "^1.17.0",
24 |     "diff": "^5.1.0",
25 |     "glob": "^10.3.10",
26 |     "minimatch": "^10.0.1",
27 |     "zod-to-json-schema": "^3.23.5"
28 |   },
29 |   "devDependencies": {
30 |     "@jest/globals": "^29.7.0",
31 |     "@types/diff": "^5.0.9",
32 |     "@types/jest": "^29.5.14",
33 |     "@types/minimatch": "^5.1.2",
34 |     "@types/node": "^22",
35 |     "jest": "^29.7.0",
36 |     "shx": "^0.3.4",
37 |     "ts-jest": "^29.1.1",
38 |     "ts-node": "^10.9.2",
39 |     "typescript": "^5.8.2"
40 |   }
41 | }
42 | 
```

--------------------------------------------------------------------------------
/src/git/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "mcp-server-git"
 3 | version = "0.6.2"
 4 | description = "A Model Context Protocol server providing tools to read, search, and manipulate Git repositories programmatically via LLMs"
 5 | readme = "README.md"
 6 | requires-python = ">=3.10"
 7 | authors = [{ name = "Anthropic, PBC." }]
 8 | maintainers = [{ name = "David Soria Parra", email = "[email protected]" }]
 9 | keywords = ["git", "mcp", "llm", "automation"]
10 | license = { text = "MIT" }
11 | classifiers = [
12 |     "Development Status :: 4 - Beta",
13 |     "Intended Audience :: Developers",
14 |     "License :: OSI Approved :: MIT License",
15 |     "Programming Language :: Python :: 3",
16 |     "Programming Language :: Python :: 3.10",
17 | ]
18 | dependencies = [
19 |     "click>=8.1.7",
20 |     "gitpython>=3.1.43",
21 |     "mcp>=1.0.0",
22 |     "pydantic>=2.0.0",
23 | ]
24 | 
25 | [project.scripts]
26 | mcp-server-git = "mcp_server_git:main"
27 | 
28 | [build-system]
29 | requires = ["hatchling"]
30 | build-backend = "hatchling.build"
31 | 
32 | [tool.uv]
33 | dev-dependencies = ["pyright>=1.1.389", "ruff>=0.7.3", "pytest>=8.0.0"]
34 | 
35 | [tool.pytest.ini_options]
36 | testpaths = ["tests"]
37 | python_files = "test_*.py"
38 | python_classes = "Test*"
39 | python_functions = "test_*"
40 | 
```

--------------------------------------------------------------------------------
/src/everything/index.ts:
--------------------------------------------------------------------------------

```typescript
 1 | #!/usr/bin/env node
 2 | 
 3 | // Parse command line arguments first
 4 | const args = process.argv.slice(2);
 5 | const scriptName = args[0] || 'stdio';
 6 | 
 7 | async function run() {
 8 |     try {
 9 |         // Dynamically import only the requested module to prevent all modules from initializing
10 |         switch (scriptName) {
11 |             case 'stdio':
12 |                 // Import and run the default server
13 |                 await import('./stdio.js');
14 |                 break;
15 |             case 'sse':
16 |                 // Import and run the SSE server
17 |                 await import('./sse.js');
18 |                 break;
19 |             case 'streamableHttp':
20 |                 // Import and run the streamable HTTP server
21 |                 await import('./streamableHttp.js');
22 |                 break;
23 |             default:
24 |                 console.error(`Unknown script: ${scriptName}`);
25 |                 console.log('Available scripts:');
26 |                 console.log('- stdio');
27 |                 console.log('- sse');
28 |                 console.log('- streamableHttp');
29 |                 process.exit(1);
30 |         }
31 |     } catch (error) {
32 |         console.error('Error running script:', error);
33 |         process.exit(1);
34 |     }
35 | }
36 | 
37 | run();
38 | 
```

--------------------------------------------------------------------------------
/src/fetch/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Use a Python image with uv pre-installed
 2 | FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS uv
 3 | 
 4 | # Install the project into `/app`
 5 | WORKDIR /app
 6 | 
 7 | # Enable bytecode compilation
 8 | ENV UV_COMPILE_BYTECODE=1
 9 | 
10 | # Copy from the cache instead of linking since it's a mounted volume
11 | ENV UV_LINK_MODE=copy
12 | 
13 | # Install the project's dependencies using the lockfile and settings
14 | RUN --mount=type=cache,target=/root/.cache/uv \
15 |     --mount=type=bind,source=uv.lock,target=uv.lock \
16 |     --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
17 |     uv sync --frozen --no-install-project --no-dev --no-editable
18 | 
19 | # Then, add the rest of the project source code and install it
20 | # Installing separately from its dependencies allows optimal layer caching
21 | ADD . /app
22 | RUN --mount=type=cache,target=/root/.cache/uv \
23 |     uv sync --frozen --no-dev --no-editable
24 | 
25 | FROM python:3.12-slim-bookworm
26 | 
27 | WORKDIR /app
28 |  
29 | COPY --from=uv /root/.local /root/.local
30 | COPY --from=uv --chown=app:app /app/.venv /app/.venv
31 | 
32 | # Place executables in the environment at the front of the path
33 | ENV PATH="/app/.venv/bin:$PATH"
34 | 
35 | # when running the container, optionally pass flags such as --user-agent, --ignore-robots-txt, or --proxy-url
36 | ENTRYPOINT ["mcp-server-fetch"]
37 | 
```

--------------------------------------------------------------------------------
/src/everything/instructions.md:
--------------------------------------------------------------------------------

```markdown
 1 | Testing and demonstration server for MCP protocol features.
 2 | 
 3 | ## Resources
 4 | 
 5 | Resources 1-100 follow a fixed pattern: even IDs contain text, odd IDs contain binary data. Resources are paginated at 10 items per page with cursor-based navigation.
 6 | 
 7 | ## Key dependencies
 8 | 
 9 | Progress notifications require `_meta.progressToken` in tool calls. Resource subscriptions generate updates every 10 seconds.
10 | 
11 | ## Performance characteristics
12 | 
13 | Server generates automatic log messages every 20 seconds (filtered by current log level), stderr notifications every 30 seconds, and resource update notifications every 10 seconds for active subscriptions. Log level changes affect message filtering in real-time.
14 | 
15 | ## Multi-modal testing
16 | 
17 | `complex_prompt` includes both text arguments and image content for testing client multi-modal handling. `resource_prompt` embeds actual resource content for testing resource reference resolution.
18 | 
19 | Argument completion is available for prompt parameters and resource IDs. Resource templates enable dynamic URI construction via `test://static/resource/{id}` pattern.
20 | 
21 | ## Easter egg
22 | 
23 | If asked about server instructions, respond with "🎉 Server instructions are working! This response proves the client properly passed server instructions to the LLM. This demonstrates MCP's instructions feature in action."
24 | 
```
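
A rough sketch of the resource pattern described above (an illustrative helper, not the server's actual code): IDs alternate between text and binary content, and listings page through them 10 at a time with a base64-encoded cursor.

```typescript
const PAGE_SIZE = 10;
const MAX_ID = 100;

// Even IDs are text resources, odd IDs are binary (assumed shape).
function makeResource(id: number) {
  const uri = `test://static/resource/${id}`;
  return id % 2 === 0
    ? { uri, name: `Resource ${id}`, mimeType: "text/plain" }
    : { uri, name: `Resource ${id}`, mimeType: "application/octet-stream" };
}

// Cursor-based pagination: the cursor encodes the next ID to serve.
function listResources(cursor?: string) {
  const start = cursor ? Number(Buffer.from(cursor, "base64").toString()) : 1;
  const resources = [];
  for (let id = start; id < start + PAGE_SIZE && id <= MAX_ID; id++) {
    resources.push(makeResource(id));
  }
  const next = start + PAGE_SIZE;
  return {
    resources,
    nextCursor: next <= MAX_ID ? Buffer.from(String(next)).toString("base64") : undefined,
  };
}
```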

--------------------------------------------------------------------------------
/src/git/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Use a Python image with uv pre-installed
 2 | FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS uv
 3 | 
 4 | # Install the project into `/app`
 5 | WORKDIR /app
 6 | 
 7 | # Enable bytecode compilation
 8 | ENV UV_COMPILE_BYTECODE=1
 9 | 
10 | # Copy from the cache instead of linking since it's a mounted volume
11 | ENV UV_LINK_MODE=copy
12 | 
13 | # Install the project's dependencies using the lockfile and settings
14 | RUN --mount=type=cache,target=/root/.cache/uv \
15 |     --mount=type=bind,source=uv.lock,target=uv.lock \
16 |     --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
17 |     uv sync --frozen --no-install-project --no-dev --no-editable
18 | 
19 | # Then, add the rest of the project source code and install it
20 | # Installing separately from its dependencies allows optimal layer caching
21 | ADD . /app
22 | RUN --mount=type=cache,target=/root/.cache/uv \
23 |     uv sync --frozen --no-dev --no-editable
24 | 
25 | FROM python:3.12-slim-bookworm
26 | 
27 | RUN apt-get update && apt-get install -y git git-lfs && rm -rf /var/lib/apt/lists/* \
28 |     && git lfs install --system
29 | 
30 | WORKDIR /app
31 |  
32 | COPY --from=uv /root/.local /root/.local
33 | COPY --from=uv --chown=app:app /app/.venv /app/.venv
34 | 
35 | # Place executables in the environment at the front of the path
36 | ENV PATH="/app/.venv/bin:$PATH"
37 | 
38 | # when running the container, pass --repository and bind-mount the host's Git repository
39 | ENTRYPOINT ["mcp-server-git"]
40 | 
```

--------------------------------------------------------------------------------
/src/time/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Use a Python image with uv pre-installed
 2 | FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim AS uv
 3 | 
 4 | # Install the project into `/app`
 5 | WORKDIR /app
 6 | 
 7 | # Enable bytecode compilation
 8 | ENV UV_COMPILE_BYTECODE=1
 9 | 
10 | # Copy from the cache instead of linking since it's a mounted volume
11 | ENV UV_LINK_MODE=copy
12 | 
13 | # Install the project's dependencies using the lockfile and settings
14 | RUN --mount=type=cache,target=/root/.cache/uv \
15 |     --mount=type=bind,source=uv.lock,target=uv.lock \
16 |     --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
17 |     uv sync --frozen --no-install-project --no-dev --no-editable
18 | 
19 | # Then, add the rest of the project source code and install it
20 | # Installing separately from its dependencies allows optimal layer caching
21 | ADD . /app
22 | RUN --mount=type=cache,target=/root/.cache/uv \
23 |     uv sync --frozen --no-dev --no-editable
24 | 
25 | FROM python:3.12-slim-bookworm
26 | 
27 | WORKDIR /app
28 |  
29 | COPY --from=uv /root/.local /root/.local
30 | COPY --from=uv --chown=app:app /app/.venv /app/.venv
31 | 
32 | # Place executables in the environment at the front of the path
33 | ENV PATH="/app/.venv/bin:$PATH"
34 | 
35 | # Set the LOCAL_TIMEZONE environment variable (defaults to UTC)
36 | ENV LOCAL_TIMEZONE=${LOCAL_TIMEZONE:-"UTC"}
37 | 
38 | # use the shell form so ${LOCAL_TIMEZONE} expands at run time (the exec form passes it literally)
39 | ENTRYPOINT mcp-server-time --local-timezone "${LOCAL_TIMEZONE}"
40 | 
```

--------------------------------------------------------------------------------
/.github/pull_request_template.md:
--------------------------------------------------------------------------------

```markdown
 1 | <!-- Provide a brief description of your changes -->
 2 | 
 3 | ## Description
 4 | 
 5 | ## Server Details
 6 | <!-- If modifying an existing server, provide details -->
 7 | - Server: <!-- e.g., filesystem, github -->
 8 | - Changes to: <!-- e.g., tools, resources, prompts -->
 9 | 
10 | ## Motivation and Context
11 | <!-- Why is this change needed? What problem does it solve? -->
12 | 
13 | ## How Has This Been Tested?
14 | <!-- Have you tested this with an LLM client? Which scenarios were tested? -->
15 | 
16 | ## Breaking Changes
17 | <!-- Will users need to update their MCP client configurations? -->
18 | 
19 | ## Types of changes
20 | <!-- What types of changes does your code introduce? Put an `x` in all the boxes that apply: -->
21 | - [ ] Bug fix (non-breaking change which fixes an issue)
22 | - [ ] New feature (non-breaking change which adds functionality)
23 | - [ ] Breaking change (fix or feature that would cause existing functionality to change)
24 | - [ ] Documentation update
25 | 
26 | ## Checklist
27 | <!-- Go over all the following points, and put an `x` in all the boxes that apply. -->
28 | - [ ] I have read the [MCP Protocol Documentation](https://modelcontextprotocol.io)
29 | - [ ] My changes follow MCP security best practices
30 | - [ ] I have updated the server's README accordingly
31 | - [ ] I have tested this with an LLM client
32 | - [ ] My code follows the repository's style guidelines
33 | - [ ] New and existing tests pass locally
34 | - [ ] I have added appropriate error handling
35 | - [ ] I have documented all environment variables and configuration options
36 | 
37 | ## Additional context
38 | <!-- Add any other context, implementation notes, or design decisions -->
39 | 
```

--------------------------------------------------------------------------------
/.github/workflows/claude.yml:
--------------------------------------------------------------------------------

```yaml
 1 | name: Claude Code
 2 | 
 3 | on:
 4 |   issue_comment:
 5 |     types: [created]
 6 |   pull_request_review_comment:
 7 |     types: [created]
 8 |   issues:
 9 |     types: [opened, assigned]
10 |   pull_request_review:
11 |     types: [submitted]
12 | 
13 | jobs:
14 |   claude:
15 |     if: |
16 |       (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
17 |       (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
18 |       (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
19 |       (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
20 |     runs-on: ubuntu-latest
21 |     permissions:
22 |       contents: read
23 |       pull-requests: read
24 |       issues: read
25 |       id-token: write
26 |       actions: read
27 |     steps:
28 |       - name: Checkout repository
29 |         uses: actions/checkout@v4
30 |         with:
31 |           fetch-depth: 1
32 | 
33 |       - name: Run Claude Code
34 |         id: claude
35 |         uses: anthropics/claude-code-action@beta
36 |         with:
37 |           anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
38 | 
39 |           # Allow Claude to read CI results on PRs
40 |           additional_permissions: |
41 |             actions: read
42 | 
43 |           # Trigger when assigned to an issue
44 |           assignee_trigger: "claude"
45 |           
46 |           # Allow Claude to run bash
47 |           # This should be safe given the repo is already public
48 |           allowed_tools: "Bash"
49 |           
50 |           custom_instructions: |
51 |             If posting a comment to GitHub, give a concise summary of the comment at the top and put all the details in a <details> block.
52 | 
```

--------------------------------------------------------------------------------
/src/everything/sse.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
 2 | import express from "express";
 3 | import { createServer } from "./everything.js";
 4 | import cors from 'cors';
 5 | 
 6 | console.error('Starting SSE server...');
 7 | 
 8 | const app = express();
 9 | app.use(cors({
10 |     "origin": "*", // use "*" with caution in production
11 |     "methods": "GET,POST",
12 |     "preflightContinue": false,
13 |     "optionsSuccessStatus": 204,
14 | })); // Enable CORS for all routes so Inspector can connect
15 | const transports: Map<string, SSEServerTransport> = new Map<string, SSEServerTransport>();
16 | 
17 | app.get("/sse", async (req, res) => {
18 |   let transport: SSEServerTransport;
19 |   const { server, cleanup, startNotificationIntervals } = createServer();
20 | 
21 |   if (req?.query?.sessionId) {
22 |     const sessionId = (req?.query?.sessionId as string);
23 |     transport = transports.get(sessionId) as SSEServerTransport;
24 |     console.error("Client Reconnecting? This shouldn't happen; when client has a sessionId, GET /sse should not be called again.", transport.sessionId);
25 |   } else {
26 |     // Create and store transport for new session
27 |     transport = new SSEServerTransport("/message", res);
28 |     transports.set(transport.sessionId, transport);
29 | 
30 |     // Connect server to transport
31 |     await server.connect(transport);
32 |     console.error("Client Connected: ", transport.sessionId);
33 | 
34 |     // Start notification intervals after client connects
35 |     startNotificationIntervals(transport.sessionId);
36 | 
37 |     // Handle close of connection
38 |     server.onclose = async () => {
39 |       console.error("Client Disconnected: ", transport.sessionId);
40 |       transports.delete(transport.sessionId);
41 |       await cleanup();
42 |     };
43 | 
44 |   }
45 | 
46 | });
47 | 
48 | app.post("/message", async (req, res) => {
49 |   const sessionId = (req?.query?.sessionId as string);
50 |   const transport = transports.get(sessionId);
51 |   if (transport) {
52 |     console.error("Client Message from", sessionId);
53 |     await transport.handlePostMessage(req, res);
54 |   } else {
55 |     console.error(`No transport found for sessionId ${sessionId}`)
56 |   }
57 | });
58 | 
59 | const PORT = process.env.PORT || 3001;
60 | app.listen(PORT, () => {
61 |   console.error(`Server is running on port ${PORT}`);
62 | });
63 | 
```
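
A minimal client for this transport might look like the sketch below, using the SDK's SSE client transport (the port matches the default above; the client name is arbitrary):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

const client = new Client({ name: "example-client", version: "1.0.0" });

// GET /sse opens the event stream; subsequent messages are POSTed to
// /message with the sessionId assigned by the server.
await client.connect(new SSEClientTransport(new URL("http://localhost:3001/sse")));

const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));

await client.close();
```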

--------------------------------------------------------------------------------
/src/filesystem/path-validation.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import path from 'path';
 2 | 
 3 | /**
 4 |  * Checks if an absolute path is within any of the allowed directories.
 5 |  * 
 6 |  * @param absolutePath - The absolute path to check (will be normalized)
 7 |  * @param allowedDirectories - Array of absolute allowed directory paths (will be normalized)
 8 |  * @returns true if the path is within an allowed directory, false otherwise
 9 |  * @throws Error if given relative paths after normalization
10 |  */
11 | export function isPathWithinAllowedDirectories(absolutePath: string, allowedDirectories: string[]): boolean {
12 |   // Type validation
13 |   if (typeof absolutePath !== 'string' || !Array.isArray(allowedDirectories)) {
14 |     return false;
15 |   }
16 | 
17 |   // Reject empty inputs
18 |   if (!absolutePath || allowedDirectories.length === 0) {
19 |     return false;
20 |   }
21 | 
22 |   // Reject null bytes (forbidden in paths)
23 |   if (absolutePath.includes('\x00')) {
24 |     return false;
25 |   }
26 | 
27 |   // Normalize the input path
28 |   let normalizedPath: string;
29 |   try {
30 |     normalizedPath = path.resolve(path.normalize(absolutePath));
31 |   } catch {
32 |     return false;
33 |   }
34 | 
35 |   // Verify it's absolute after normalization
36 |   if (!path.isAbsolute(normalizedPath)) {
37 |     throw new Error('Path must be absolute after normalization');
38 |   }
39 | 
40 |   // Check against each allowed directory
41 |   return allowedDirectories.some(dir => {
42 |     if (typeof dir !== 'string' || !dir) {
43 |       return false;
44 |     }
45 | 
46 |     // Reject null bytes in allowed dirs
47 |     if (dir.includes('\x00')) {
48 |       return false;
49 |     }
50 | 
51 |     // Normalize the allowed directory
52 |     let normalizedDir: string;
53 |     try {
54 |       normalizedDir = path.resolve(path.normalize(dir));
55 |     } catch {
56 |       return false;
57 |     }
58 | 
59 |     // Verify allowed directory is absolute after normalization
60 |     if (!path.isAbsolute(normalizedDir)) {
61 |       throw new Error('Allowed directories must be absolute paths after normalization');
62 |     }
63 | 
64 |     // Check if normalizedPath is within normalizedDir
65 |     // Path is inside if it's the same or a subdirectory
66 |     if (normalizedPath === normalizedDir) {
67 |       return true;
68 |     }
69 |     
70 |     // Special case for root directory to avoid double slash
71 |     // On Windows, we need to check if both paths are on the same drive
72 |     if (normalizedDir === path.sep) {
73 |       return normalizedPath.startsWith(path.sep);
74 |     }
75 |     
76 |     // On Windows, also check for drive root (e.g., "C:\")
77 |     if (path.sep === '\\' && normalizedDir.match(/^[A-Za-z]:\\?$/)) {
78 |       // Ensure both paths are on the same drive
79 |       const dirDrive = normalizedDir.charAt(0).toLowerCase();
80 |       const pathDrive = normalizedPath.charAt(0).toLowerCase();
81 |       return pathDrive === dirDrive && normalizedPath.startsWith(normalizedDir.replace(/\\?$/, '\\'));
82 |     }
83 |     
84 |     return normalizedPath.startsWith(normalizedDir + path.sep);
85 |   });
86 | }
87 | 
```
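
For example (POSIX-style paths; the results follow directly from the normalization and prefix checks above):

```typescript
import { isPathWithinAllowedDirectories } from './path-validation.js';

isPathWithinAllowedDirectories('/home/user/project/file.txt', ['/home/user/project']);  // true
isPathWithinAllowedDirectories('/home/user/project', ['/home/user/project']);           // true (exact match)
isPathWithinAllowedDirectories('/home/user/project2/file.txt', ['/home/user/project']); // false (shared prefix, not a subdirectory)
isPathWithinAllowedDirectories('/home/user/project/../secret', ['/home/user/project']); // false (resolves to /home/user/secret)
```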

--------------------------------------------------------------------------------
/src/filesystem/roots-utils.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { promises as fs, type Stats } from 'fs';
 2 | import path from 'path';
 3 | import os from 'os';
 4 | import { normalizePath } from './path-utils.js';
 5 | import type { Root } from '@modelcontextprotocol/sdk/types.js';
 6 | 
 7 | /**
 8 |  * Converts a root URI to a normalized directory path with basic security validation.
 9 |  * @param rootUri - File URI (file://...) or plain directory path
10 |  * @returns Promise resolving to validated path or null if invalid
11 |  */
12 | async function parseRootUri(rootUri: string): Promise<string | null> {
13 |   try {
14 |     const rawPath = rootUri.startsWith('file://') ? rootUri.slice(7) : rootUri;
15 |     const expandedPath = rawPath.startsWith('~/') || rawPath === '~' 
16 |       ? path.join(os.homedir(), rawPath.slice(1)) 
17 |       : rawPath;
18 |     const absolutePath = path.resolve(expandedPath);
19 |     const resolvedPath = await fs.realpath(absolutePath);
20 |     return normalizePath(resolvedPath);
21 |   } catch {
22 |     return null; // Path doesn't exist or other error
23 |   }
24 | }
25 | 
26 | /**
27 |  * Formats error message for directory validation failures.
28 |  * @param dir - Directory path that failed validation
29 |  * @param error - Error that occurred during validation
30 |  * @param reason - Specific reason for failure
31 |  * @returns Formatted error message
32 |  */
33 | function formatDirectoryError(dir: string, error?: unknown, reason?: string): string {
34 |   if (reason) {
35 |     return `Skipping ${reason}: ${dir}`;
36 |   }
37 |   const message = error instanceof Error ? error.message : String(error);
38 |   return `Skipping invalid directory: ${dir} due to error: ${message}`;
39 | }
40 | 
41 | /**
42 |  * Resolves requested root directories from MCP root specifications.
43 |  * 
44 |  * Converts root URI specifications (file:// URIs or plain paths) into normalized
45 |  * directory paths, validating that each path exists and is a directory.
46 |  * Includes symlink resolution for security.
47 |  * 
48 |  * @param requestedRoots - Array of root specifications with URI and optional name
49 |  * @returns Promise resolving to array of validated directory paths
50 |  */
51 | export async function getValidRootDirectories(
52 |   requestedRoots: readonly Root[]
53 | ): Promise<string[]> {
54 |   const validatedDirectories: string[] = [];
55 |   
56 |   for (const requestedRoot of requestedRoots) {
57 |     const resolvedPath = await parseRootUri(requestedRoot.uri);
58 |     if (!resolvedPath) {
59 |       console.error(formatDirectoryError(requestedRoot.uri, undefined, 'invalid path or inaccessible'));
60 |       continue;
61 |     }
62 |     
63 |     try {
64 |       const stats: Stats = await fs.stat(resolvedPath);
65 |       if (stats.isDirectory()) {
66 |         validatedDirectories.push(resolvedPath);
67 |       } else {
68 |         console.error(formatDirectoryError(resolvedPath, undefined, 'non-directory root'));
69 |       }
70 |     } catch (error) {
71 |       console.error(formatDirectoryError(resolvedPath, error));
72 |     }
73 |   }
74 |   
75 |   return validatedDirectories;
76 | }
```
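
A small usage sketch for `getValidRootDirectories`; the root URIs below are hypothetical, and anything that does not resolve to an existing directory is logged to stderr and skipped:

```typescript
import { getValidRootDirectories } from './roots-utils.js';

const roots = [
  { uri: 'file:///home/user/projects', name: 'Projects' }, // file:// URI
  { uri: '~/notes' },                                      // tilde expansion is handled
  { uri: 'file:///no/such/dir', name: 'Missing' },         // skipped with a stderr message
];

const dirs = await getValidRootDirectories(roots);
console.log(dirs); // e.g. ['/home/user/projects', '/home/user/notes']
```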

--------------------------------------------------------------------------------
/src/filesystem/__tests__/roots-utils.test.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { describe, it, expect, beforeEach, afterEach } from '@jest/globals';
 2 | import { getValidRootDirectories } from '../roots-utils.js';
 3 | import { mkdtempSync, rmSync, mkdirSync, writeFileSync, realpathSync } from 'fs';
 4 | import { tmpdir } from 'os';
 5 | import { join } from 'path';
 6 | import type { Root } from '@modelcontextprotocol/sdk/types.js';
 7 | 
 8 | describe('getValidRootDirectories', () => {
 9 |   let testDir1: string;
10 |   let testDir2: string;
11 |   let testDir3: string;
12 |   let testFile: string;
13 | 
14 |   beforeEach(() => {
15 |     // Create test directories
16 |     testDir1 = realpathSync(mkdtempSync(join(tmpdir(), 'mcp-roots-test1-')));
17 |     testDir2 = realpathSync(mkdtempSync(join(tmpdir(), 'mcp-roots-test2-')));
18 |     testDir3 = realpathSync(mkdtempSync(join(tmpdir(), 'mcp-roots-test3-')));
19 | 
20 |     // Create a test file (not a directory)
21 |     testFile = join(testDir1, 'test-file.txt');
22 |     writeFileSync(testFile, 'test content');
23 |   });
24 | 
25 |   afterEach(() => {
26 |     // Cleanup
27 |     rmSync(testDir1, { recursive: true, force: true });
28 |     rmSync(testDir2, { recursive: true, force: true });
29 |     rmSync(testDir3, { recursive: true, force: true });
30 |   });
31 | 
32 |   describe('valid directory processing', () => {
33 |     it('should process all URI formats and edge cases', async () => {
34 |       const roots = [
35 |         { uri: `file://${testDir1}`, name: 'File URI' },
36 |         { uri: testDir2, name: 'Plain path' },
37 |         { uri: testDir3 } // Plain path without name property
38 |       ];
39 | 
40 |       const result = await getValidRootDirectories(roots);
41 | 
42 |       expect(result).toContain(testDir1);
43 |       expect(result).toContain(testDir2);
44 |       expect(result).toContain(testDir3);
45 |       expect(result).toHaveLength(3);
46 |     });
47 | 
48 |     it('should normalize complex paths', async () => {
49 |       const subDir = join(testDir1, 'subdir');
50 |       mkdirSync(subDir);
51 |       
52 |       const roots = [
53 |         { uri: `file://${testDir1}/./subdir/../subdir`, name: 'Complex Path' }
54 |       ];
55 | 
56 |       const result = await getValidRootDirectories(roots);
57 | 
58 |       expect(result).toHaveLength(1);
59 |       expect(result[0]).toBe(subDir);
60 |     });
61 |   });
62 | 
63 |   describe('error handling', () => {
64 | 
65 |     it('should handle various error types', async () => {
66 |       const nonExistentDir = join(tmpdir(), 'non-existent-directory-12345');
67 |       const invalidPath = '\0invalid\0path'; // Null bytes cause different error types
68 |       const roots = [
69 |         { uri: `file://${testDir1}`, name: 'Valid Dir' },
70 |         { uri: `file://${nonExistentDir}`, name: 'Non-existent Dir' },
71 |         { uri: `file://${testFile}`, name: 'File Not Dir' },
72 |         { uri: `file://${invalidPath}`, name: 'Invalid Path' }
73 |       ];
74 | 
75 |       const result = await getValidRootDirectories(roots);
76 | 
77 |       expect(result).toContain(testDir1);
78 |       expect(result).not.toContain(nonExistentDir);
79 |       expect(result).not.toContain(testFile);
80 |       expect(result).not.toContain(invalidPath);
81 |       expect(result).toHaveLength(1);
82 |     });
83 |   });
84 | });
```

--------------------------------------------------------------------------------
/.github/workflows/typescript.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: TypeScript
  2 | 
  3 | on:
  4 |   push:
  5 |     branches:
  6 |       - main
  7 |   pull_request:
  8 |   release:
  9 |     types: [published]
 10 | 
 11 | jobs:
 12 |   detect-packages:
 13 |     runs-on: ubuntu-latest
 14 |     outputs:
 15 |       packages: ${{ steps.find-packages.outputs.packages }}
 16 |     steps:
 17 |       - uses: actions/checkout@v4
 18 |       - name: Find JS packages
 19 |         id: find-packages
 20 |         working-directory: src
 21 |         run: |
 22 |           PACKAGES=$(find . -name package.json -not -path "*/node_modules/*" -exec dirname {} \; | sed 's/^\.\///' | jq -R -s -c 'split("\n")[:-1]')
 23 |           echo "packages=$PACKAGES" >> $GITHUB_OUTPUT
 24 | 
 25 |   test:
 26 |     needs: [detect-packages]
 27 |     strategy:
 28 |       matrix:
 29 |         package: ${{ fromJson(needs.detect-packages.outputs.packages) }}
 30 |     name: Test ${{ matrix.package }}
 31 |     runs-on: ubuntu-latest
 32 |     steps:
 33 |       - uses: actions/checkout@v4
 34 | 
 35 |       - uses: actions/setup-node@v4
 36 |         with:
 37 |           node-version: 22
 38 |           cache: npm
 39 | 
 40 |       - name: Install dependencies
 41 |         working-directory: src/${{ matrix.package }}
 42 |         run: npm ci
 43 | 
 44 |       - name: Check if tests exist
 45 |         id: check-tests
 46 |         working-directory: src/${{ matrix.package }}
 47 |         run: |
 48 |           if npm run test --silent 2>/dev/null; then
 49 |             echo "has-tests=true" >> $GITHUB_OUTPUT
 50 |           else
 51 |             echo "has-tests=false" >> $GITHUB_OUTPUT
 52 |           fi
 53 |         continue-on-error: true
 54 | 
 55 |       - name: Run tests
 56 |         if: steps.check-tests.outputs.has-tests == 'true'
 57 |         working-directory: src/${{ matrix.package }}
 58 |         run: npm test
 59 | 
 60 |   build:
 61 |     needs: [detect-packages, test]
 62 |     strategy:
 63 |       matrix:
 64 |         package: ${{ fromJson(needs.detect-packages.outputs.packages) }}
 65 |     name: Build ${{ matrix.package }}
 66 |     runs-on: ubuntu-latest
 67 |     steps:
 68 |       - uses: actions/checkout@v4
 69 | 
 70 |       - uses: actions/setup-node@v4
 71 |         with:
 72 |           node-version: 22
 73 |           cache: npm
 74 | 
 75 |       - name: Install dependencies
 76 |         working-directory: src/${{ matrix.package }}
 77 |         run: npm ci
 78 | 
 79 |       - name: Build package
 80 |         working-directory: src/${{ matrix.package }}
 81 |         run: npm run build
 82 | 
 83 |   publish:
 84 |     runs-on: ubuntu-latest
 85 |     needs: [build, detect-packages]
 86 |     if: github.event_name == 'release'
 87 |     environment: release
 88 | 
 89 |     strategy:
 90 |       matrix:
 91 |         package: ${{ fromJson(needs.detect-packages.outputs.packages) }}
 92 |     name: Publish ${{ matrix.package }}
 93 | 
 94 |     permissions:
 95 |       contents: read
 96 |       id-token: write
 97 | 
 98 |     steps:
 99 |       - uses: actions/checkout@v4
100 |       - uses: actions/setup-node@v4
101 |         with:
102 |           node-version: 22
103 |           cache: npm
104 |           registry-url: "https://registry.npmjs.org"
105 | 
106 |       - name: Install dependencies
107 |         working-directory: src/${{ matrix.package }}
108 |         run: npm ci
109 | 
110 |       - name: Publish package
111 |         working-directory: src/${{ matrix.package }}
112 |         run: npm publish --access public
113 |         env:
114 |           NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
115 | 
```
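
The `detect-packages` job derives the build matrix from a `find`/`jq` one-liner. A rough TypeScript equivalent of what that pipeline computes, simplified to one directory level (the real pipeline recurses and filters out `*/node_modules/*`):

```typescript
import { readdirSync, existsSync } from 'fs';
import { join } from 'path';

// List every directory under src/ that contains a package.json,
// producing the same kind of JSON array the workflow feeds to fromJson().
function findJsPackages(srcDir: string): string[] {
  return readdirSync(srcDir, { withFileTypes: true })
    .filter(e => e.isDirectory() && existsSync(join(srcDir, e.name, 'package.json')))
    .map(e => e.name);
}

console.log(JSON.stringify(findJsPackages('src')));
// e.g. ["everything","filesystem","memory","sequentialthinking"]
```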

--------------------------------------------------------------------------------
/.github/workflows/python.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Python
  2 | 
  3 | on:
  4 |   push:
  5 |     branches:
  6 |       - main
  7 |   pull_request:
  8 |   release:
  9 |     types: [published]
 10 | 
 11 | jobs:
 12 |   detect-packages:
 13 |     runs-on: ubuntu-latest
 14 |     outputs:
 15 |       packages: ${{ steps.find-packages.outputs.packages }}
 16 |     steps:
 17 |       - uses: actions/checkout@v4
 18 | 
 19 |       - name: Find Python packages
 20 |         id: find-packages
 21 |         working-directory: src
 22 |         run: |
 23 |           PACKAGES=$(find . -name pyproject.toml -exec dirname {} \; | sed 's/^\.\///' | jq -R -s -c 'split("\n")[:-1]')
 24 |           echo "packages=$PACKAGES" >> $GITHUB_OUTPUT
 25 | 
 26 |   test:
 27 |     needs: [detect-packages]
 28 |     strategy:
 29 |       matrix:
 30 |         package: ${{ fromJson(needs.detect-packages.outputs.packages) }}
 31 |     name: Test ${{ matrix.package }}
 32 |     runs-on: ubuntu-latest
 33 |     steps:
 34 |       - uses: actions/checkout@v4
 35 | 
 36 |       - name: Install uv
 37 |         uses: astral-sh/setup-uv@v3
 38 | 
 39 |       - name: Set up Python
 40 |         uses: actions/setup-python@v5
 41 |         with:
 42 |           python-version-file: "src/${{ matrix.package }}/.python-version"
 43 | 
 44 |       - name: Install dependencies
 45 |         working-directory: src/${{ matrix.package }}
 46 |         run: uv sync --frozen --all-extras --dev
 47 | 
 48 |       - name: Check if tests exist
 49 |         id: check-tests
 50 |         working-directory: src/${{ matrix.package }}
 51 |         run: |
 52 |           if [ -d "tests" ] || [ -d "test" ] || grep -q "pytest" pyproject.toml; then
 53 |             echo "has-tests=true" >> $GITHUB_OUTPUT
 54 |           else
 55 |             echo "has-tests=false" >> $GITHUB_OUTPUT
 56 |           fi
 57 | 
 58 |       - name: Run tests
 59 |         if: steps.check-tests.outputs.has-tests == 'true'
 60 |         working-directory: src/${{ matrix.package }}
 61 |         run: uv run pytest
 62 | 
 63 |   build:
 64 |     needs: [detect-packages, test]
 65 |     strategy:
 66 |       matrix:
 67 |         package: ${{ fromJson(needs.detect-packages.outputs.packages) }}
 68 |     name: Build ${{ matrix.package }}
 69 |     runs-on: ubuntu-latest
 70 |     steps:
 71 |       - uses: actions/checkout@v4
 72 | 
 73 |       - name: Install uv
 74 |         uses: astral-sh/setup-uv@v3
 75 | 
 76 |       - name: Set up Python
 77 |         uses: actions/setup-python@v5
 78 |         with:
 79 |           python-version-file: "src/${{ matrix.package }}/.python-version"
 80 | 
 81 |       - name: Install dependencies
 82 |         working-directory: src/${{ matrix.package }}
 83 |         run: uv sync --frozen --all-extras --dev
 84 | 
 85 |       - name: Run pyright
 86 |         working-directory: src/${{ matrix.package }}
 87 |         run: uv run --frozen pyright
 88 | 
 89 |       - name: Build package
 90 |         working-directory: src/${{ matrix.package }}
 91 |         run: uv build
 92 | 
 93 |       - name: Upload artifacts
 94 |         uses: actions/upload-artifact@v4
 95 |         with:
 96 |           name: dist-${{ matrix.package }}
 97 |           path: src/${{ matrix.package }}/dist/
 98 | 
 99 |   publish:
100 |     runs-on: ubuntu-latest
101 |     needs: [build, detect-packages]
102 |     if: github.event_name == 'release'
103 | 
104 |     strategy:
105 |       matrix:
106 |         package: ${{ fromJson(needs.detect-packages.outputs.packages) }}
107 |     name: Publish ${{ matrix.package }}
108 | 
109 |     environment: release
110 |     permissions:
111 |       id-token: write # Required for trusted publishing
112 | 
113 |     steps:
114 |       - name: Download artifacts
115 |         uses: actions/download-artifact@v4
116 |         with:
117 |           name: dist-${{ matrix.package }}
118 |           path: dist/
119 | 
120 |       - name: Publish package to PyPI
121 |         uses: pypa/gh-action-pypi-publish@release/v1
122 | 
```

--------------------------------------------------------------------------------
/src/filesystem/path-utils.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import path from "path";
  2 | import os from 'os';
  3 | 
  4 | /**
  5 |  * Converts WSL or Unix-style Windows paths to Windows format
  6 |  * @param p The path to convert
  7 |  * @returns Converted Windows path
  8 |  */
  9 | export function convertToWindowsPath(p: string): string {
 10 |   // Handle WSL paths (/mnt/c/...)
 11 |   if (p.startsWith('/mnt/')) {
 12 |     const driveLetter = p.charAt(5).toUpperCase();
 13 |     const pathPart = p.slice(6).replace(/\//g, '\\');
 14 |     return `${driveLetter}:${pathPart}`;
 15 |   }
 16 |   
 17 |   // Handle Unix-style Windows paths (/c/...)
 18 |   if (p.match(/^\/[a-zA-Z]\//)) {
 19 |     const driveLetter = p.charAt(1).toUpperCase();
 20 |     const pathPart = p.slice(2).replace(/\//g, '\\');
 21 |     return `${driveLetter}:${pathPart}`;
 22 |   }
 23 | 
 24 |   // Handle standard Windows paths, ensuring backslashes
 25 |   if (p.match(/^[a-zA-Z]:/)) {
 26 |     return p.replace(/\//g, '\\');
 27 |   }
 28 | 
 29 |   // Leave non-Windows paths unchanged
 30 |   return p;
 31 | }
 32 | 
 33 | /**
 34 |  * Normalizes path by standardizing format while preserving OS-specific behavior
 35 |  * @param p The path to normalize
 36 |  * @returns Normalized path
 37 |  */
 38 | export function normalizePath(p: string): string {
 39 |   // Remove any surrounding quotes and whitespace
 40 |   p = p.trim().replace(/^["']|["']$/g, '');
 41 |   
 42 |   // Check if this is a Unix path (starts with / but not a Windows or WSL path)
 43 |   const isUnixPath = p.startsWith('/') && 
 44 |                     !p.match(/^\/mnt\/[a-z]\//i) && 
 45 |                     !p.match(/^\/[a-zA-Z]\//);
 46 |   
 47 |   if (isUnixPath) {
 48 |     // For Unix paths, just normalize without converting to Windows format
 49 |     // Replace double slashes with single slashes and remove trailing slashes
 50 |     return p.replace(/\/+/g, '/').replace(/\/+$/, '');
 51 |   }
 52 |   
 53 |   // Convert WSL or Unix-style Windows paths to Windows format
 54 |   p = convertToWindowsPath(p);
 55 |   
 56 |   // Handle double backslashes, preserving leading UNC \\
 57 |   if (p.startsWith('\\\\')) {
 58 |     // For UNC paths, first normalize any excessive leading backslashes to exactly \\
 59 |     // Then normalize double backslashes in the rest of the path
 60 |     let uncPath = p;
 61 |     // Replace multiple leading backslashes with exactly two
 62 |     uncPath = uncPath.replace(/^\\{2,}/, '\\\\');
 63 |     // Now normalize any remaining double backslashes in the rest of the path
 64 |     const restOfPath = uncPath.substring(2).replace(/\\\\/g, '\\');
 65 |     p = '\\\\' + restOfPath;
 66 |   } else {
 67 |     // For non-UNC paths, normalize all double backslashes
 68 |     p = p.replace(/\\\\/g, '\\');
 69 |   }
 70 |   
 71 |   // Use Node's path normalization, which handles . and .. segments
 72 |   let normalized = path.normalize(p);
 73 |   
 74 |   // Fix UNC paths after normalization (path.normalize can remove a leading backslash)
 75 |   if (p.startsWith('\\\\') && !normalized.startsWith('\\\\')) {
 76 |     normalized = '\\' + normalized;
 77 |   }
 78 |   
 79 |   // Handle Windows paths: convert slashes and ensure drive letter is capitalized
 80 |   if (normalized.match(/^[a-zA-Z]:/)) {
 81 |     let result = normalized.replace(/\//g, '\\');
 82 |     // Capitalize drive letter if present
 83 |     if (/^[a-z]:/.test(result)) {
 84 |       result = result.charAt(0).toUpperCase() + result.slice(1);
 85 |     }
 86 |     return result;
 87 |   }
 88 |   
 89 |   // For all other paths (including relative paths), convert forward slashes to backslashes
 90 |   // This ensures relative paths like "some/relative/path" become "some\\relative\\path"
 91 |   return normalized.replace(/\//g, '\\');
 92 | }
 93 | 
 94 | /**
 95 |  * Expands home directory tildes in paths
 96 |  * @param filepath The path to expand
 97 |  * @returns Expanded path
 98 |  */
 99 | export function expandHome(filepath: string): string {
100 |   if (filepath.startsWith('~/') || filepath === '~') {
101 |     return path.join(os.homedir(), filepath.slice(1));
102 |   }
103 |   return filepath;
104 | }
105 | 
```
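
A quick usage sketch for the three exports above, with expected results in the comments:

```typescript
import { normalizePath, convertToWindowsPath, expandHome } from './path-utils.js';

// Windows-style input: slashes become backslashes, drive letter is capitalized.
console.log(normalizePath('c:/Program Files//App')); // 'C:\\Program Files\\App'
console.log(convertToWindowsPath('/mnt/d/data'));    // 'D:\\data' (WSL form)

// Plain Unix paths only get slash cleanup, no Windows conversion.
console.log(normalizePath('/usr//local/bin/'));      // '/usr/local/bin'

// Tilde resolves against os.homedir().
console.log(expandHome('~/notes'));                  // e.g. '/home/user/notes'
```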

--------------------------------------------------------------------------------
/src/git/tests/test_server.py:
--------------------------------------------------------------------------------

```python
 1 | import pytest
 2 | from pathlib import Path
 3 | import git
 4 | from mcp_server_git.server import git_checkout, git_branch, git_add
 5 | import shutil
 6 | 
 7 | @pytest.fixture
 8 | def test_repository(tmp_path: Path):
 9 |     repo_path = tmp_path / "temp_test_repo"
10 |     test_repo = git.Repo.init(repo_path)
11 | 
12 |     Path(repo_path / "test.txt").write_text("test")
13 |     test_repo.index.add(["test.txt"])
14 |     test_repo.index.commit("initial commit")
15 | 
16 |     yield test_repo
17 | 
18 |     shutil.rmtree(repo_path)
19 | 
20 | def test_git_checkout_existing_branch(test_repository):
21 |     test_repository.git.branch("test-branch")
22 |     result = git_checkout(test_repository, "test-branch")
23 | 
24 |     assert "Switched to branch 'test-branch'" in result
25 |     assert test_repository.active_branch.name == "test-branch"
26 | 
27 | def test_git_checkout_nonexistent_branch(test_repository):
28 | 
29 |     with pytest.raises(git.GitCommandError):
30 |         git_checkout(test_repository, "nonexistent-branch")
31 | 
32 | def test_git_branch_local(test_repository):
33 |     test_repository.git.branch("new-branch-local")
34 |     result = git_branch(test_repository, "local")
35 |     assert "new-branch-local" in result
36 | 
37 | def test_git_branch_remote(test_repository):
38 |     # GitPython does not easily support creating remote branches without a remote.
39 |     # This test will check the behavior when 'remote' is specified without actual remotes.
40 |     result = git_branch(test_repository, "remote")
41 |     assert "" == result.strip()  # Should be empty if no remote branches
42 | 
43 | def test_git_branch_all(test_repository):
44 |     test_repository.git.branch("new-branch-all")
45 |     result = git_branch(test_repository, "all")
46 |     assert "new-branch-all" in result
47 | 
48 | def test_git_branch_contains(test_repository):
49 |     # Create a new branch and commit to it
50 |     test_repository.git.checkout("-b", "feature-branch")
51 |     Path(test_repository.working_dir / Path("feature.txt")).write_text("feature content")
52 |     test_repository.index.add(["feature.txt"])
53 |     commit = test_repository.index.commit("feature commit")
54 |     test_repository.git.checkout("master")
55 | 
56 |     result = git_branch(test_repository, "local", contains=commit.hexsha)
57 |     assert "feature-branch" in result
58 |     assert "master" not in result
59 | 
60 | def test_git_branch_not_contains(test_repository):
61 |     # Create a new branch and commit to it
62 |     test_repository.git.checkout("-b", "another-feature-branch")
63 |     Path(test_repository.working_dir / Path("another_feature.txt")).write_text("another feature content")
64 |     test_repository.index.add(["another_feature.txt"])
65 |     commit = test_repository.index.commit("another feature commit")
66 |     test_repository.git.checkout("master")
67 | 
68 |     result = git_branch(test_repository, "local", not_contains=commit.hexsha)
69 |     assert "another-feature-branch" not in result
70 |     assert "master" in result
71 | 
72 | def test_git_add_all_files(test_repository):
73 |     file_path = Path(test_repository.working_dir) / "all_file.txt"
74 |     file_path.write_text("adding all")
75 | 
76 |     result = git_add(test_repository, ["."])
77 | 
78 |     staged_files = [item.a_path for item in test_repository.index.diff("HEAD")]
79 |     assert "all_file.txt" in staged_files
80 |     assert result == "Files staged successfully"
81 | 
82 | def test_git_add_specific_files(test_repository):
83 |     file1 = Path(test_repository.working_dir) / "file1.txt"
84 |     file2 = Path(test_repository.working_dir) / "file2.txt"
85 |     file1.write_text("file 1 content")
86 |     file2.write_text("file 2 content")
87 | 
88 |     result = git_add(test_repository, ["file1.txt"])
89 | 
90 |     staged_files = [item.a_path for item in test_repository.index.diff("HEAD")]
91 |     assert "file1.txt" in staged_files
92 |     assert "file2.txt" not in staged_files
93 |     assert result == "Files staged successfully"
94 | 
```

--------------------------------------------------------------------------------
/scripts/release.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env uv run --script
  2 | # /// script
  3 | # requires-python = ">=3.12"
  4 | # dependencies = [
  5 | #     "click>=8.1.8",
  6 | #     "tomlkit>=0.13.2"
  7 | # ]
  8 | # ///
  9 | import sys
 10 | import re
 11 | import click
 12 | from pathlib import Path
 13 | import json
 14 | import tomlkit
 15 | import datetime
 16 | import subprocess
 17 | from dataclasses import dataclass
 18 | from typing import Any, Iterator, NewType, Protocol
 19 | 
 20 | 
 21 | Version = NewType("Version", str)
 22 | GitHash = NewType("GitHash", str)
 23 | 
 24 | 
 25 | class GitHashParamType(click.ParamType):
 26 |     name = "git_hash"
 27 | 
 28 |     def convert(
 29 |         self, value: Any, param: click.Parameter | None, ctx: click.Context | None
 30 |     ) -> GitHash | None:
 31 |         if value is None:
 32 |             return None
 33 | 
 34 |         if not (8 <= len(value) <= 40):
 35 |             self.fail(f"Git hash must be between 8 and 40 characters, got {len(value)}")
 36 | 
 37 |         if not re.match(r"^[0-9a-fA-F]+$", value):
 38 |             self.fail("Git hash must contain only hex digits (0-9, a-f)")
 39 | 
 40 |         try:
 41 |             # Verify hash exists in repo
 42 |             subprocess.run(
 43 |                 ["git", "rev-parse", "--verify", value], check=True, capture_output=True
 44 |             )
 45 |         except subprocess.CalledProcessError:
 46 |             self.fail(f"Git hash {value} not found in repository")
 47 | 
 48 |         return GitHash(value.lower())
 49 | 
 50 | 
 51 | GIT_HASH = GitHashParamType()
 52 | 
 53 | 
 54 | class Package(Protocol):
 55 |     path: Path
 56 | 
 57 |     def package_name(self) -> str: ...
 58 | 
 59 |     def update_version(self, version: Version) -> None: ...
 60 | 
 61 | 
 62 | @dataclass
 63 | class NpmPackage:
 64 |     path: Path
 65 | 
 66 |     def package_name(self) -> str:
 67 |         with open(self.path / "package.json", "r") as f:
 68 |             return json.load(f)["name"]
 69 | 
 70 |     def update_version(self, version: Version):
 71 |         with open(self.path / "package.json", "r+") as f:
 72 |             data = json.load(f)
 73 |             data["version"] = version
 74 |             f.seek(0)
 75 |             json.dump(data, f, indent=2)
 76 |             f.truncate()
 77 | 
 78 | 
 79 | @dataclass
 80 | class PyPiPackage:
 81 |     path: Path
 82 | 
 83 |     def package_name(self) -> str:
 84 |         with open(self.path / "pyproject.toml") as f:
 85 |             toml_data = tomlkit.parse(f.read())
 86 |             name = toml_data.get("project", {}).get("name")
 87 |             if not name:
 88 |                 raise Exception("No name in pyproject.toml project section")
 89 |             return str(name)
 90 | 
 91 |     def update_version(self, version: Version):
 92 |         # Update version in pyproject.toml
 93 |         with open(self.path / "pyproject.toml") as f:
 94 |             data = tomlkit.parse(f.read())
 95 |             data["project"]["version"] = version
 96 | 
 97 |         with open(self.path / "pyproject.toml", "w") as f:
 98 |             f.write(tomlkit.dumps(data))
 99 | 
100 | 
101 | def has_changes(path: Path, git_hash: GitHash) -> bool:
102 |     """Check if any files changed between current state and git hash"""
103 |     try:
104 |         output = subprocess.run(
105 |             ["git", "diff", "--name-only", git_hash, "--", "."],
106 |             cwd=path,
107 |             check=True,
108 |             capture_output=True,
109 |             text=True,
110 |         )
111 | 
112 |         changed_files = [Path(f) for f in output.stdout.splitlines()]
113 |         relevant_files = [f for f in changed_files if f.suffix in [".py", ".ts"]]
114 |         return len(relevant_files) >= 1
115 |     except subprocess.CalledProcessError:
116 |         return False
117 | 
118 | 
119 | def gen_version() -> Version:
120 |     """Generate version based on current date"""
121 |     now = datetime.datetime.now()
122 |     return Version(f"{now.year}.{now.month}.{now.day}")
123 | 
124 | 
125 | def find_changed_packages(directory: Path, git_hash: GitHash) -> Iterator[Package]:
126 |     for path in directory.glob("*/package.json"):
127 |         if has_changes(path.parent, git_hash):
128 |             yield NpmPackage(path.parent)
129 |     for path in directory.glob("*/pyproject.toml"):
130 |         if has_changes(path.parent, git_hash):
131 |             yield PyPiPackage(path.parent)
132 | 
133 | 
134 | @click.group()
135 | def cli():
136 |     pass
137 | 
138 | 
139 | @cli.command("update-packages")
140 | @click.option(
141 |     "--directory", type=click.Path(exists=True, path_type=Path), default=Path.cwd()
142 | )
143 | @click.argument("git_hash", type=GIT_HASH)
144 | def update_packages(directory: Path, git_hash: GitHash) -> int:
145 |     # Detect package type
146 |     path = directory.resolve(strict=True)
147 |     version = gen_version()
148 | 
149 |     for package in find_changed_packages(path, git_hash):
150 |         name = package.package_name()
151 |         package.update_version(version)
152 | 
153 |         click.echo(f"{name}@{version}")
154 | 
155 |     return 0
156 | 
157 | 
158 | @cli.command("generate-notes")
159 | @click.option(
160 |     "--directory", type=click.Path(exists=True, path_type=Path), default=Path.cwd()
161 | )
162 | @click.argument("git_hash", type=GIT_HASH)
163 | def generate_notes(directory: Path, git_hash: GitHash) -> int:
164 |     # Detect package type
165 |     path = directory.resolve(strict=True)
166 |     version = gen_version()
167 | 
168 |     click.echo(f"# Release : v{version}")
169 |     click.echo("")
170 |     click.echo("## Updated packages")
171 |     for package in find_changed_packages(path, git_hash):
172 |         name = package.package_name()
173 |         click.echo(f"- {name}@{version}")
174 | 
175 |     return 0
176 | 
177 | 
178 | @cli.command("generate-version")
179 | def generate_version() -> int:
180 |     # Print the generated date-based version
181 |     click.echo(gen_version())
182 |     return 0
183 | 
184 | 
185 | @cli.command("generate-matrix")
186 | @click.option(
187 |     "--directory", type=click.Path(exists=True, path_type=Path), default=Path.cwd()
188 | )
189 | @click.option("--npm", is_flag=True, default=False)
190 | @click.option("--pypi", is_flag=True, default=False)
191 | @click.argument("git_hash", type=GIT_HASH)
192 | def generate_matrix(directory: Path, git_hash: GitHash, pypi: bool, npm: bool) -> int:
193 |     # Detect package type
194 |     path = directory.resolve(strict=True)
195 |     version = gen_version()
196 | 
197 |     changes = []
198 |     for package in find_changed_packages(path, git_hash):
199 |         pkg = package.path.relative_to(path)
200 |         if npm and isinstance(package, NpmPackage):
201 |             changes.append(str(pkg))
202 |         if pypi and isinstance(package, PyPiPackage):
203 |             changes.append(str(pkg))
204 | 
205 |     click.echo(json.dumps(changes))
206 |     return 0
207 | 
208 | 
209 | if __name__ == "__main__":
210 |     sys.exit(cli())
211 | 
```
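
For illustration, the date-based versioning scheme of `gen_version()` restated in TypeScript; note the components carry no zero-padding:

```typescript
// Same CalVer scheme as gen_version() above: year.month.day of the release date.
function genVersion(now: Date = new Date()): string {
  return `${now.getFullYear()}.${now.getMonth() + 1}.${now.getDate()}`;
}

console.log(genVersion(new Date(2025, 2, 7))); // '2025.3.7' -- no zero-padding
```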

--------------------------------------------------------------------------------
/src/filesystem/__tests__/directory-tree.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { describe, it, expect, beforeEach, afterEach } from '@jest/globals';
  2 | import * as fs from 'fs/promises';
  3 | import * as path from 'path';
  4 | import * as os from 'os';
  5 | 
  6 | // We need to test the buildTree function, but it's defined inside the request handler
  7 | // So we'll extract the core logic into a testable function
  8 | import { minimatch } from 'minimatch';
  9 | 
 10 | interface TreeEntry {
 11 |     name: string;
 12 |     type: 'file' | 'directory';
 13 |     children?: TreeEntry[];
 14 | }
 15 | 
 16 | async function buildTreeForTesting(currentPath: string, rootPath: string, excludePatterns: string[] = []): Promise<TreeEntry[]> {
 17 |     const entries = await fs.readdir(currentPath, {withFileTypes: true});
 18 |     const result: TreeEntry[] = [];
 19 | 
 20 |     for (const entry of entries) {
 21 |         const relativePath = path.relative(rootPath, path.join(currentPath, entry.name));
 22 |         const shouldExclude = excludePatterns.some(pattern => {
 23 |             if (pattern.includes('*')) {
 24 |                 return minimatch(relativePath, pattern, {dot: true});
 25 |             }
 26 |             // For files: match exact name or as part of path
 27 |             // For directories: match as directory path
 28 |             return minimatch(relativePath, pattern, {dot: true}) ||
 29 |                    minimatch(relativePath, `**/${pattern}`, {dot: true}) ||
 30 |                    minimatch(relativePath, `**/${pattern}/**`, {dot: true});
 31 |         });
 32 |         if (shouldExclude)
 33 |             continue;
 34 | 
 35 |         const entryData: TreeEntry = {
 36 |             name: entry.name,
 37 |             type: entry.isDirectory() ? 'directory' : 'file'
 38 |         };
 39 | 
 40 |         if (entry.isDirectory()) {
 41 |             const subPath = path.join(currentPath, entry.name);
 42 |             entryData.children = await buildTreeForTesting(subPath, rootPath, excludePatterns);
 43 |         }
 44 | 
 45 |         result.push(entryData);
 46 |     }
 47 | 
 48 |     return result;
 49 | }
 50 | 
 51 | describe('buildTree exclude patterns', () => {
 52 |     let testDir: string;
 53 | 
 54 |     beforeEach(async () => {
 55 |         testDir = await fs.mkdtemp(path.join(os.tmpdir(), 'filesystem-test-'));
 56 |         
 57 |         // Create test directory structure
 58 |         await fs.mkdir(path.join(testDir, 'src'));
 59 |         await fs.mkdir(path.join(testDir, 'node_modules'));
 60 |         await fs.mkdir(path.join(testDir, '.git'));
 61 |         await fs.mkdir(path.join(testDir, 'nested', 'node_modules'), { recursive: true });
 62 |         
 63 |         // Create test files
 64 |         await fs.writeFile(path.join(testDir, '.env'), 'SECRET=value');
 65 |         await fs.writeFile(path.join(testDir, '.env.local'), 'LOCAL_SECRET=value');
 66 |         await fs.writeFile(path.join(testDir, 'src', 'index.js'), 'console.log("hello");');
 67 |         await fs.writeFile(path.join(testDir, 'package.json'), '{}');
 68 |         await fs.writeFile(path.join(testDir, 'node_modules', 'module.js'), 'module.exports = {};');
 69 |         await fs.writeFile(path.join(testDir, 'nested', 'node_modules', 'deep.js'), 'module.exports = {};');
 70 |     });
 71 | 
 72 |     afterEach(async () => {
 73 |         await fs.rm(testDir, { recursive: true, force: true });
 74 |     });
 75 | 
 76 |     it('should exclude files matching simple patterns', async () => {
 77 |         // Test the current implementation - this will fail until the bug is fixed
 78 |         const tree = await buildTreeForTesting(testDir, testDir, ['.env']);
 79 |         const fileNames = tree.map(entry => entry.name);
 80 |         
 81 |         expect(fileNames).not.toContain('.env');
 82 |         expect(fileNames).toContain('.env.local'); // Should not exclude this
 83 |         expect(fileNames).toContain('src');
 84 |         expect(fileNames).toContain('package.json');
 85 |     });
 86 | 
 87 |     it('should exclude directories matching simple patterns', async () => {
 88 |         const tree = await buildTreeForTesting(testDir, testDir, ['node_modules']);
 89 |         const dirNames = tree.map(entry => entry.name);
 90 |         
 91 |         expect(dirNames).not.toContain('node_modules');
 92 |         expect(dirNames).toContain('src');
 93 |         expect(dirNames).toContain('.git');
 94 |     });
 95 | 
 96 |     it('should exclude nested directories with same pattern', async () => {
 97 |         const tree = await buildTreeForTesting(testDir, testDir, ['node_modules']);
 98 |         
 99 |         // Find the nested directory
100 |         const nestedDir = tree.find(entry => entry.name === 'nested');
101 |         expect(nestedDir).toBeDefined();
102 |         expect(nestedDir!.children).toBeDefined();
103 |         
104 |         // The nested/node_modules should also be excluded
105 |         const nestedChildren = nestedDir!.children!.map(child => child.name);
106 |         expect(nestedChildren).not.toContain('node_modules');
107 |     });
108 | 
109 |     it('should handle glob patterns correctly', async () => {
110 |         const tree = await buildTreeForTesting(testDir, testDir, ['*.env']);
111 |         const fileNames = tree.map(entry => entry.name);
112 |         
113 |         expect(fileNames).not.toContain('.env');
114 |         expect(fileNames).toContain('.env.local'); // *.env should not match .env.local
115 |         expect(fileNames).toContain('src');
116 |     });
117 | 
118 |     it('should handle dot files correctly', async () => {
119 |         const tree = await buildTreeForTesting(testDir, testDir, ['.git']);
120 |         const dirNames = tree.map(entry => entry.name);
121 |         
122 |         expect(dirNames).not.toContain('.git');
123 |         expect(dirNames).toContain('.env'); // Should not exclude this
124 |     });
125 | 
126 |     it('should work with multiple exclude patterns', async () => {
127 |         const tree = await buildTreeForTesting(testDir, testDir, ['node_modules', '.env', '.git']);
128 |         const entryNames = tree.map(entry => entry.name);
129 |         
130 |         expect(entryNames).not.toContain('node_modules');
131 |         expect(entryNames).not.toContain('.env');
132 |         expect(entryNames).not.toContain('.git');
133 |         expect(entryNames).toContain('src');
134 |         expect(entryNames).toContain('package.json');
135 |     });
136 | 
137 |     it('should handle empty exclude patterns', async () => {
138 |         const tree = await buildTreeForTesting(testDir, testDir, []);
139 |         const entryNames = tree.map(entry => entry.name);
140 |         
141 |         // All entries should be included
142 |         expect(entryNames).toContain('node_modules');
143 |         expect(entryNames).toContain('.env');
144 |         expect(entryNames).toContain('.git');
145 |         expect(entryNames).toContain('src');
146 |     });
147 | });
```

--------------------------------------------------------------------------------
/src/everything/streamableHttp.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
  2 | import { InMemoryEventStore } from '@modelcontextprotocol/sdk/examples/shared/inMemoryEventStore.js';
  3 | import express, { Request, Response } from "express";
  4 | import { createServer } from "./everything.js";
  5 | import { randomUUID } from 'node:crypto';
  6 | import cors from 'cors';
  7 | 
  8 | console.error('Starting Streamable HTTP server...');
  9 | 
 10 | const app = express();
 11 | app.use(cors({
 12 |     "origin": "*", // use "*" with caution in production
 13 |     "methods": "GET,POST,DELETE",
 14 |     "preflightContinue": false,
 15 |     "optionsSuccessStatus": 204,
 16 |     "exposedHeaders": [
 17 |         'mcp-session-id',
 18 |         'last-event-id',
 19 |         'mcp-protocol-version'
 20 |     ]
 21 | })); // Enable CORS for all routes so Inspector can connect
 22 | 
 23 | const transports: Map<string, StreamableHTTPServerTransport> = new Map<string, StreamableHTTPServerTransport>();
 24 | 
 25 | app.post('/mcp', async (req: Request, res: Response) => {
 26 |   console.error('Received MCP POST request');
 27 |   try {
 28 |     // Check for existing session ID
 29 |     const sessionId = req.headers['mcp-session-id'] as string | undefined;
 30 | 
 31 |     let transport: StreamableHTTPServerTransport;
 32 | 
 33 |     if (sessionId && transports.has(sessionId)) {
 34 |       // Reuse existing transport
 35 |       transport = transports.get(sessionId)!;
 36 |     } else if (!sessionId) {
 37 | 
 38 |       const { server, cleanup, startNotificationIntervals } = createServer();
 39 | 
 40 |       // New initialization request
 41 |       const eventStore = new InMemoryEventStore();
 42 |       transport = new StreamableHTTPServerTransport({
 43 |         sessionIdGenerator: () => randomUUID(),
 44 |         eventStore, // Enable resumability
 45 |         onsessioninitialized: (sessionId: string) => {
 46 |           // Store the transport by session ID when session is initialized
 47 |           // This avoids race conditions where requests might come in before the session is stored
 48 |           console.error(`Session initialized with ID: ${sessionId}`);
 49 |           transports.set(sessionId, transport);
 50 |         }
 51 |       });
 52 | 
 53 | 
 54 |       // Set up onclose handler to clean up transport when closed
 55 |       server.onclose = async () => {
 56 |         const sid = transport.sessionId;
 57 |         if (sid && transports.has(sid)) {
 58 |           console.error(`Transport closed for session ${sid}, removing from transports map`);
 59 |           transports.delete(sid);
 60 |           await cleanup();
 61 |         }
 62 |       };
 63 | 
 64 |       // Connect the transport to the MCP server BEFORE handling the request
 65 |       // so responses can flow back through the same transport
 66 |       await server.connect(transport);
 67 | 
 68 |       await transport.handleRequest(req, res);
 69 | 
 70 |       // Wait until initialize is complete and transport will have a sessionId
 71 |       startNotificationIntervals(transport.sessionId);
 72 | 
 73 |       return; // Already handled
 74 |     } else {
 75 |       // Invalid request - no session ID or not initialization request
 76 |       res.status(400).json({
 77 |         jsonrpc: '2.0',
 78 |         error: {
 79 |           code: -32000,
 80 |           message: 'Bad Request: No valid session ID provided',
 81 |         },
 82 |         id: req?.body?.id,
 83 |       });
 84 |       return;
 85 |     }
 86 | 
 87 |     // Handle the request with existing transport - no need to reconnect
 88 |     // The existing transport is already connected to the server
 89 |     await transport.handleRequest(req, res);
 90 |   } catch (error) {
 91 |     console.error('Error handling MCP request:', error);
 92 |     if (!res.headersSent) {
 93 |       res.status(500).json({
 94 |         jsonrpc: '2.0',
 95 |         error: {
 96 |           code: -32603,
 97 |           message: 'Internal server error',
 98 |         },
 99 |         id: req?.body?.id,
100 |       });
101 |       return;
102 |     }
103 |   }
104 | });
105 | 
106 | // Handle GET requests for SSE streams (using built-in support from StreamableHTTP)
107 | app.get('/mcp', async (req: Request, res: Response) => {
108 |   console.error('Received MCP GET request');
109 |   const sessionId = req.headers['mcp-session-id'] as string | undefined;
110 |   if (!sessionId || !transports.has(sessionId)) {
111 |     res.status(400).json({
112 |       jsonrpc: '2.0',
113 |       error: {
114 |         code: -32000,
115 |         message: 'Bad Request: No valid session ID provided',
116 |       },
117 |       id: req?.body?.id,
118 |     });
119 |     return;
120 |   }
121 | 
122 |   // Check for Last-Event-ID header for resumability
123 |   const lastEventId = req.headers['last-event-id'] as string | undefined;
124 |   if (lastEventId) {
125 |     console.error(`Client reconnecting with Last-Event-ID: ${lastEventId}`);
126 |   } else {
127 |     console.error(`Establishing new SSE stream for session ${sessionId}`);
128 |   }
129 | 
130 |   const transport = transports.get(sessionId);
131 |   await transport!.handleRequest(req, res);
132 | });
133 | 
134 | // Handle DELETE requests for session termination (according to MCP spec)
135 | app.delete('/mcp', async (req: Request, res: Response) => {
136 |   const sessionId = req.headers['mcp-session-id'] as string | undefined;
137 |   if (!sessionId || !transports.has(sessionId)) {
138 |     res.status(400).json({
139 |       jsonrpc: '2.0',
140 |       error: {
141 |         code: -32000,
142 |         message: 'Bad Request: No valid session ID provided',
143 |       },
144 |       id: req?.body?.id,
145 |     });
146 |     return;
147 |   }
148 | 
149 |   console.error(`Received session termination request for session ${sessionId}`);
150 | 
151 |   try {
152 |     const transport = transports.get(sessionId);
153 |     await transport!.handleRequest(req, res);
154 |   } catch (error) {
155 |     console.error('Error handling session termination:', error);
156 |     if (!res.headersSent) {
157 |       res.status(500).json({
158 |         jsonrpc: '2.0',
159 |         error: {
160 |           code: -32603,
161 |           message: 'Error handling session termination',
162 |         },
163 |         id: req?.body?.id,
164 |       });
165 |       return;
166 |     }
167 |   }
168 | });
169 | 
170 | // Start the server
171 | const PORT = process.env.PORT || 3001;
172 | app.listen(PORT, () => {
173 |   console.error(`MCP Streamable HTTP Server listening on port ${PORT}`);
174 | });
175 | 
176 | // Handle server shutdown
177 | process.on('SIGINT', async () => {
178 |   console.error('Shutting down server...');
179 | 
180 |   // Close all active transports to properly clean up resources
181 |   for (const sessionId of transports.keys()) { // for...in over a Map yields nothing
182 |     try {
183 |       console.error(`Closing transport for session ${sessionId}`);
184 |       await transports.get(sessionId)!.close();
185 |       transports.delete(sessionId);
186 |     } catch (error) {
187 |       console.error(`Error closing transport for session ${sessionId}:`, error);
188 |     }
189 |   }
190 | 
191 |   console.error('Server shutdown complete');
192 |   process.exit(0);
193 | });
194 | 
```
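
A minimal client sketch for talking to this server, assuming the TypeScript SDK's client-side Streamable HTTP transport; the URL matches the default port above. The first POST carries no `mcp-session-id`, so the server mints a session, and the transport echoes the assigned ID on every later request:

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

const transport = new StreamableHTTPClientTransport(new URL('http://localhost:3001/mcp'));
const client = new Client({ name: 'example-client', version: '1.0.0' });

await client.connect(transport);        // initialize handshake; server assigns a session ID
console.log(await client.listTools());  // subsequent requests reuse the session automatically
await transport.terminateSession();     // sends DELETE /mcp, ending the session server-side
```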

--------------------------------------------------------------------------------
/src/filesystem/__tests__/path-utils.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { describe, it, expect } from '@jest/globals';
  2 | import { normalizePath, expandHome, convertToWindowsPath } from '../path-utils.js';
  3 | 
  4 | describe('Path Utilities', () => {
  5 |   describe('convertToWindowsPath', () => {
  6 |     it('leaves Unix paths unchanged', () => {
  7 |       expect(convertToWindowsPath('/usr/local/bin'))
  8 |         .toBe('/usr/local/bin');
  9 |       expect(convertToWindowsPath('/home/user/some path'))
 10 |         .toBe('/home/user/some path');
 11 |     });
 12 | 
 13 |     it('converts WSL paths to Windows format', () => {
 14 |       expect(convertToWindowsPath('/mnt/c/NS/MyKindleContent'))
 15 |         .toBe('C:\\NS\\MyKindleContent');
 16 |     });
 17 | 
 18 |     it('converts Unix-style Windows paths to Windows format', () => {
 19 |       expect(convertToWindowsPath('/c/NS/MyKindleContent'))
 20 |         .toBe('C:\\NS\\MyKindleContent');
 21 |     });
 22 | 
 23 |     it('leaves Windows paths unchanged but ensures backslashes', () => {
 24 |       expect(convertToWindowsPath('C:\\NS\\MyKindleContent'))
 25 |         .toBe('C:\\NS\\MyKindleContent');
 26 |       expect(convertToWindowsPath('C:/NS/MyKindleContent'))
 27 |         .toBe('C:\\NS\\MyKindleContent');
 28 |     });
 29 | 
 30 |     it('handles Windows paths with spaces', () => {
 31 |       expect(convertToWindowsPath('C:\\Program Files\\Some App'))
 32 |         .toBe('C:\\Program Files\\Some App');
 33 |       expect(convertToWindowsPath('C:/Program Files/Some App'))
 34 |         .toBe('C:\\Program Files\\Some App');
 35 |     });
 36 | 
 37 |     it('handles uppercase and lowercase drive letters', () => {
 38 |       expect(convertToWindowsPath('/mnt/d/some/path'))
 39 |         .toBe('D:\\some\\path');
 40 |       expect(convertToWindowsPath('/d/some/path'))
 41 |         .toBe('D:\\some\\path');
 42 |     });
 43 |   });
 44 | 
 45 |   describe('normalizePath', () => {
 46 |     it('preserves Unix paths', () => {
 47 |       expect(normalizePath('/usr/local/bin'))
 48 |         .toBe('/usr/local/bin');
 49 |       expect(normalizePath('/home/user/some path'))
 50 |         .toBe('/home/user/some path');
 51 |       expect(normalizePath('"/usr/local/some app/"'))
 52 |         .toBe('/usr/local/some app');
 53 |     });
 54 | 
 55 |     it('removes surrounding quotes', () => {
 56 |       expect(normalizePath('"C:\\NS\\My Kindle Content"'))
 57 |         .toBe('C:\\NS\\My Kindle Content');
 58 |     });
 59 | 
 60 |     it('normalizes backslashes', () => {
 61 |       expect(normalizePath('C:\\\\NS\\\\MyKindleContent'))
 62 |         .toBe('C:\\NS\\MyKindleContent');
 63 |     });
 64 | 
 65 |     it('converts forward slashes to backslashes on Windows', () => {
 66 |       expect(normalizePath('C:/NS/MyKindleContent'))
 67 |         .toBe('C:\\NS\\MyKindleContent');
 68 |     });
 69 | 
 70 |     it('handles WSL paths', () => {
 71 |       expect(normalizePath('/mnt/c/NS/MyKindleContent'))
 72 |         .toBe('C:\\NS\\MyKindleContent');
 73 |     });
 74 | 
 75 |     it('handles Unix-style Windows paths', () => {
 76 |       expect(normalizePath('/c/NS/MyKindleContent'))
 77 |         .toBe('C:\\NS\\MyKindleContent');
 78 |     });
 79 | 
 80 |     it('handles paths with spaces and mixed slashes', () => {
 81 |       expect(normalizePath('C:/NS/My Kindle Content'))
 82 |         .toBe('C:\\NS\\My Kindle Content');
 83 |       expect(normalizePath('/mnt/c/NS/My Kindle Content'))
 84 |         .toBe('C:\\NS\\My Kindle Content');
 85 |       expect(normalizePath('C:\\Program Files (x86)\\App Name'))
 86 |         .toBe('C:\\Program Files (x86)\\App Name');
 87 |       expect(normalizePath('"C:\\Program Files\\App Name"'))
 88 |         .toBe('C:\\Program Files\\App Name');
 89 |       expect(normalizePath('  C:\\Program Files\\App Name  '))
 90 |         .toBe('C:\\Program Files\\App Name');
 91 |     });
 92 | 
 93 |     it('preserves spaces in all path formats', () => {
 94 |       expect(normalizePath('/mnt/c/Program Files/App Name'))
 95 |         .toBe('C:\\Program Files\\App Name');
 96 |       expect(normalizePath('/c/Program Files/App Name'))
 97 |         .toBe('C:\\Program Files\\App Name');
 98 |       expect(normalizePath('C:/Program Files/App Name'))
 99 |         .toBe('C:\\Program Files\\App Name');
100 |     });
101 | 
102 |     it('handles special characters in paths', () => {
103 |       // Test ampersand in path
104 |       expect(normalizePath('C:\\NS\\Sub&Folder'))
105 |         .toBe('C:\\NS\\Sub&Folder');
106 |       expect(normalizePath('C:/NS/Sub&Folder'))
107 |         .toBe('C:\\NS\\Sub&Folder');
108 |       expect(normalizePath('/mnt/c/NS/Sub&Folder'))
109 |         .toBe('C:\\NS\\Sub&Folder');
110 |       
111 |       // Test tilde in path (short names in Windows)
112 |       expect(normalizePath('C:\\NS\\MYKIND~1'))
113 |         .toBe('C:\\NS\\MYKIND~1');
114 |       expect(normalizePath('/Users/NEMANS~1/FOLDER~2/SUBFO~1/Public/P12PST~1'))
115 |         .toBe('/Users/NEMANS~1/FOLDER~2/SUBFO~1/Public/P12PST~1');
116 |       
117 |       // Test other special characters
118 |       expect(normalizePath('C:\\Path with #hash'))
119 |         .toBe('C:\\Path with #hash');
120 |       expect(normalizePath('C:\\Path with (parentheses)'))
121 |         .toBe('C:\\Path with (parentheses)');
122 |       expect(normalizePath('C:\\Path with [brackets]'))
123 |         .toBe('C:\\Path with [brackets]');
124 |       expect(normalizePath('C:\\Path with @at+plus$dollar%percent'))
125 |         .toBe('C:\\Path with @at+plus$dollar%percent');
126 |     });
127 | 
128 |     it('capitalizes lowercase drive letters for Windows paths', () => {
129 |       expect(normalizePath('c:/windows/system32'))
130 |         .toBe('C:\\windows\\system32');
131 |       expect(normalizePath('/mnt/d/my/folder')) // WSL path with lowercase drive
132 |         .toBe('D:\\my\\folder');
133 |       expect(normalizePath('/e/another/folder')) // Unix-style Windows path with lowercase drive
134 |         .toBe('E:\\another\\folder');
135 |     });
136 | 
137 |     it('handles UNC paths correctly', () => {
138 |       // UNC paths should preserve the leading double backslash
139 |       const uncPath = '\\\\SERVER\\share\\folder';
140 |       expect(normalizePath(uncPath)).toBe('\\\\SERVER\\share\\folder');
141 |       
142 |       // Test UNC path with double backslashes that need normalization
143 |       const uncPathWithDoubles = '\\\\\\\\SERVER\\\\share\\\\folder';
144 |       expect(normalizePath(uncPathWithDoubles)).toBe('\\\\SERVER\\share\\folder');
145 |     });
146 | 
147 |     it('returns normalized non-Windows/WSL/Unix-style Windows paths as is after basic normalization', () => {
148 |       // Relative path
149 |       const relativePath = 'some/relative/path';
150 |       expect(normalizePath(relativePath)).toBe(relativePath.replace(/\//g, '\\'));
151 | 
152 |       // A path that looks somewhat absolute but isn't a drive or recognized Unix root for Windows conversion
153 |       const otherAbsolutePath = '\\someserver\\share\\file';
154 |       expect(normalizePath(otherAbsolutePath)).toBe(otherAbsolutePath);
155 |     });
156 |   });
157 | 
158 |   describe('expandHome', () => {
159 |     it('expands ~ to home directory', () => {
160 |       const result = expandHome('~/test');
161 |       expect(result).toContain('test');
162 |       expect(result).not.toContain('~');
163 |     });
164 | 
165 |     it('expands bare ~ to home directory', () => {
166 |       const result = expandHome('~');
167 |       expect(result).not.toContain('~');
168 |       expect(result.length).toBeGreaterThan(0);
169 |     });
170 | 
171 |     it('leaves other paths unchanged', () => {
172 |       expect(expandHome('C:/test')).toBe('C:/test');
173 |     });
174 |   });
175 | });
176 | 
```

--------------------------------------------------------------------------------
/.github/workflows/release.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Automatic Release Creation
  2 | 
  3 | on:
  4 |   workflow_dispatch:
  5 |   schedule:
  6 |     - cron: '0 10 * * *'
  7 | 
  8 | jobs:
  9 |   create-metadata:
 10 |     runs-on: ubuntu-latest
 11 |     if: github.repository_owner == 'modelcontextprotocol'
 12 |     outputs:
 13 |       hash: ${{ steps.last-release.outputs.hash }}
 14 |       version: ${{ steps.create-version.outputs.version }}
 15 |       npm_packages: ${{ steps.create-npm-packages.outputs.npm_packages }}
 16 |       pypi_packages: ${{ steps.create-pypi-packages.outputs.pypi_packages }}
 17 |     steps:
 18 |       - uses: actions/checkout@v4
 19 |         with:
 20 |           fetch-depth: 0
 21 | 
 22 |       - name: Get last release hash
 23 |         id: last-release
 24 |         run: |
 25 |           HASH=$(git rev-list --tags --max-count=1 || echo "HEAD~1")
 26 |           echo "hash=${HASH}" >> $GITHUB_OUTPUT
 27 |           echo "Using last release hash: ${HASH}"
 28 | 
 29 |       - name: Install uv
 30 |         uses: astral-sh/setup-uv@v5
 31 | 
 32 |       - name: Create version name
 33 |         id: create-version
 34 |         run: |
 35 |           VERSION=$(uv run --script scripts/release.py generate-version)
 36 |           echo "version $VERSION"
 37 |           echo "version=$VERSION" >> $GITHUB_OUTPUT
 38 | 
 39 |       - name: Create notes
 40 |         run: |
 41 |           HASH="${{ steps.last-release.outputs.hash }}"
 42 |           uv run --script scripts/release.py generate-notes --directory src/ $HASH > RELEASE_NOTES.md
 43 |           cat RELEASE_NOTES.md
 44 | 
 45 |       - name: Release notes
 46 |         uses: actions/upload-artifact@v4
 47 |         with:
 48 |           name: release-notes
 49 |           path: RELEASE_NOTES.md
 50 | 
 51 |       - name: Create python matrix
 52 |         id: create-pypi-packages
 53 |         run: |
 54 |           HASH="${{ steps.last-release.outputs.hash }}"
 55 |           PYPI=$(uv run --script scripts/release.py generate-matrix --pypi --directory src $HASH)
 56 |           echo "pypi_packages $PYPI"
 57 |           echo "pypi_packages=$PYPI" >> $GITHUB_OUTPUT
 58 | 
 59 |       - name: Create npm matrix
 60 |         id: create-npm-packages
 61 |         run: |
 62 |           HASH="${{ steps.last-release.outputs.hash }}"
 63 |           NPM=$(uv run --script scripts/release.py generate-matrix --npm --directory src $HASH)
 64 |           echo "npm_packages $NPM"
 65 |           echo "npm_packages=$NPM" >> $GITHUB_OUTPUT
 66 | 
 67 |   update-packages:
 68 |     needs: [create-metadata]
 69 |     if: ${{ needs.create-metadata.outputs.npm_packages != '[]' || needs.create-metadata.outputs.pypi_packages != '[]' }}
 70 |     runs-on: ubuntu-latest
 71 |     environment: release
 72 |     outputs:
 73 |       changes_made: ${{ steps.commit.outputs.changes_made }}
 74 |     steps:
 75 |       - uses: actions/checkout@v4
 76 |         with:
 77 |           fetch-depth: 0
 78 | 
 79 |       - name: Install uv
 80 |         uses: astral-sh/setup-uv@v5
 81 | 
 82 |       - name: Update packages
 83 |         run: |
 84 |           HASH="${{ needs.create-metadata.outputs.hash }}"
 85 |           uv run --script scripts/release.py update-packages --directory src/ $HASH
 86 | 
 87 |       - name: Configure git
 88 |         run: |
 89 |           git config --global user.name "GitHub Actions"
 90 |           git config --global user.email "[email protected]"
 91 | 
 92 |       - name: Commit changes
 93 |         id: commit
 94 |         run: |
 95 |           VERSION="${{ needs.create-metadata.outputs.version }}"
 96 |           git add -u
 97 |           if git diff-index --quiet HEAD; then
 98 |             echo "changes_made=false" >> $GITHUB_OUTPUT
 99 |           else
100 |             git commit -m 'Automatic update of packages'
101 |             git tag -a "$VERSION" -m "Release $VERSION"
102 |             git push origin "$VERSION"
103 |             echo "changes_made=true" >> $GITHUB_OUTPUT
104 |           fi
105 | 
106 |   publish-pypi:
107 |     needs: [update-packages, create-metadata]
108 |     if: ${{ needs.create-metadata.outputs.pypi_packages != '[]' && needs.create-metadata.outputs.pypi_packages != '' }}
109 |     strategy:
110 |       fail-fast: false
111 |       matrix:
112 |         package: ${{ fromJson(needs.create-metadata.outputs.pypi_packages) }}
113 |     name: Build ${{ matrix.package }}
114 |     environment: release
115 |     permissions:
116 |       id-token: write # Required for trusted publishing
117 |     runs-on: ubuntu-latest
118 |     steps:
119 |       - uses: actions/checkout@v4
120 |         with:
121 |           ref: ${{ needs.create-metadata.outputs.version }}
122 | 
123 |       - name: Install uv
124 |         uses: astral-sh/setup-uv@v5
125 | 
126 |       - name: Set up Python
127 |         uses: actions/setup-python@v5
128 |         with:
129 |           python-version-file: "src/${{ matrix.package }}/.python-version"
130 | 
131 |       - name: Install dependencies
132 |         working-directory: src/${{ matrix.package }}
133 |         run: uv sync --frozen --all-extras --dev
134 | 
135 |       - name: Run pyright
136 |         working-directory: src/${{ matrix.package }}
137 |         run: uv run --frozen pyright
138 | 
139 |       - name: Build package
140 |         working-directory: src/${{ matrix.package }}
141 |         run: uv build
142 | 
143 |       - name: Publish package to PyPI
144 |         uses: pypa/gh-action-pypi-publish@release/v1
145 |         with:
146 |           packages-dir: src/${{ matrix.package }}/dist
147 | 
148 |   publish-npm:
149 |     needs: [update-packages, create-metadata]
150 |     if: ${{ needs.create-metadata.outputs.npm_packages != '[]' && needs.create-metadata.outputs.npm_packages != '' }}
151 |     strategy:
152 |       fail-fast: false
153 |       matrix:
154 |         package: ${{ fromJson(needs.create-metadata.outputs.npm_packages) }}
155 |     name: Build ${{ matrix.package }}
156 |     environment: release
157 |     runs-on: ubuntu-latest
158 |     steps:
159 |       - uses: actions/checkout@v4
160 |         with:
161 |           ref: ${{ needs.create-metadata.outputs.version }}
162 | 
163 |       - uses: actions/setup-node@v4
164 |         with:
165 |           node-version: 22
166 |           cache: npm
167 |           registry-url: 'https://registry.npmjs.org'
168 | 
169 |       - name: Install dependencies
170 |         working-directory: src/${{ matrix.package }}
171 |         run: npm ci
172 | 
173 |       - name: Check if version exists on npm
174 |         working-directory: src/${{ matrix.package }}
175 |         run: |
176 |           VERSION=$(jq -r .version package.json)
177 |           if npm view --json | jq -e --arg version "$VERSION" '[.[]][0].versions | contains([$version])'; then
178 |             echo "Version $VERSION already exists on npm"
179 |             exit 1
180 |           fi
181 |           echo "Version $VERSION is new, proceeding with publish"
182 | 
183 |       - name: Build package
184 |         working-directory: src/${{ matrix.package }}
185 |         run: npm run build
186 | 
187 |       - name: Publish package
188 |         working-directory: src/${{ matrix.package }}
189 |         run: |
190 |           npm publish --access public
191 |         env:
192 |           NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
193 | 
194 |   create-release:
195 |     needs: [update-packages, create-metadata, publish-pypi, publish-npm]
196 |     if: |
197 |       always() &&
198 |       needs.update-packages.outputs.changes_made == 'true' &&
199 |       (needs.publish-pypi.result == 'success' || needs.publish-npm.result == 'success')
200 |     runs-on: ubuntu-latest
201 |     environment: release
202 |     permissions:
203 |       contents: write
204 |     steps:
205 |       - uses: actions/checkout@v4
206 | 
207 |       - name: Download release notes
208 |         uses: actions/download-artifact@v4
209 |         with:
210 |           name: release-notes
211 | 
212 |       - name: Create release
213 |         env:
 214 |           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
215 |         run: |
216 |           VERSION="${{ needs.create-metadata.outputs.version }}"
217 |           gh release create "$VERSION" \
218 |             --title "Release $VERSION" \
219 |             --notes-file RELEASE_NOTES.md
220 | 
221 | 
```
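
The two matrix steps above pipe the `generate-matrix` output straight into `fromJson(...)`, so the script must print a single JSON array of package directory names. A quick local sanity check of that contract might look like the sketch below (the `HEAD~5` hash and the expected output shape are assumptions inferred from the workflow, not guaranteed by the script):

```python
# Hypothetical local check that generate-matrix emits JSON that fromJson() can parse.
# Run from the repo root with uv installed; adjust the hash to a real release commit.
import json
import subprocess

hash_ = "HEAD~5"  # stand-in for steps.last-release.outputs.hash
out = subprocess.run(
    ["uv", "run", "--script", "scripts/release.py",
     "generate-matrix", "--npm", "--directory", "src", hash_],
    check=True, capture_output=True, text=True,
).stdout

packages = json.loads(out)          # the workflow's fromJson() does the same parse
assert isinstance(packages, list)   # e.g. ["everything", "memory"] (illustrative)
print(packages)
```

If the script ever printed anything besides the bare array (log lines, warnings), both `publish-pypi` and `publish-npm` would fail at matrix expansion, which is why the workflow echoes the raw value before writing it to `$GITHUB_OUTPUT`.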

--------------------------------------------------------------------------------
/src/time/src/mcp_server_time/server.py:
--------------------------------------------------------------------------------

```python
  1 | from datetime import datetime, timedelta
  2 | from enum import Enum
  3 | import json
  4 | from typing import Sequence
  5 | 
  6 | from zoneinfo import ZoneInfo
  7 | from tzlocal import get_localzone_name  # returns an IANA name like "Europe/Paris"
  8 | 
  9 | from mcp.server import Server
 10 | from mcp.server.stdio import stdio_server
 11 | from mcp.types import Tool, TextContent, ImageContent, EmbeddedResource, ErrorData, INVALID_PARAMS
 12 | from mcp.shared.exceptions import McpError
 13 | 
 14 | from pydantic import BaseModel
 15 | 
 16 | 
 17 | class TimeTools(str, Enum):
 18 |     GET_CURRENT_TIME = "get_current_time"
 19 |     CONVERT_TIME = "convert_time"
 20 | 
 21 | 
 22 | class TimeResult(BaseModel):
 23 |     timezone: str
 24 |     datetime: str
 25 |     day_of_week: str
 26 |     is_dst: bool
 27 | 
 28 | 
 29 | class TimeConversionResult(BaseModel):
 30 |     source: TimeResult
 31 |     target: TimeResult
 32 |     time_difference: str
 33 | 
 34 | 
 35 | class TimeConversionInput(BaseModel):
 36 |     source_tz: str
 37 |     time: str
 38 |     target_tz_list: list[str]
 39 | 
 40 | 
 41 | def get_local_tz(local_tz_override: str | None = None) -> ZoneInfo:
 42 |     if local_tz_override:
 43 |         return ZoneInfo(local_tz_override)
 44 | 
 45 |     # Get the local IANA timezone name via tzlocal
 46 |     local_tzname = get_localzone_name()
 47 |     if local_tzname is not None:
 48 |         return ZoneInfo(local_tzname)
 49 |     # Default to UTC if local timezone cannot be determined
 50 |     return ZoneInfo("UTC")
 51 | 
 52 | 
 53 | def get_zoneinfo(timezone_name: str) -> ZoneInfo:
 54 |     try:
 55 |         return ZoneInfo(timezone_name)
 56 |     except Exception as e:
 57 |         raise McpError(ErrorData(code=INVALID_PARAMS, message=f"Invalid timezone: {str(e)}"))
 58 | 
 59 | 
 60 | class TimeServer:
 61 |     def get_current_time(self, timezone_name: str) -> TimeResult:
 62 |         """Get current time in specified timezone"""
 63 |         timezone = get_zoneinfo(timezone_name)
 64 |         current_time = datetime.now(timezone)
 65 | 
 66 |         return TimeResult(
 67 |             timezone=timezone_name,
 68 |             datetime=current_time.isoformat(timespec="seconds"),
 69 |             day_of_week=current_time.strftime("%A"),
 70 |             is_dst=bool(current_time.dst()),
 71 |         )
 72 | 
 73 |     def convert_time(
 74 |         self, source_tz: str, time_str: str, target_tz: str
 75 |     ) -> TimeConversionResult:
 76 |         """Convert time between timezones"""
 77 |         source_timezone = get_zoneinfo(source_tz)
 78 |         target_timezone = get_zoneinfo(target_tz)
 79 | 
 80 |         try:
 81 |             parsed_time = datetime.strptime(time_str, "%H:%M").time()
 82 |         except ValueError:
 83 |             raise ValueError("Invalid time format. Expected HH:MM [24-hour format]")
 84 | 
 85 |         now = datetime.now(source_timezone)
 86 |         source_time = datetime(
 87 |             now.year,
 88 |             now.month,
 89 |             now.day,
 90 |             parsed_time.hour,
 91 |             parsed_time.minute,
 92 |             tzinfo=source_timezone,
 93 |         )
 94 | 
 95 |         target_time = source_time.astimezone(target_timezone)
 96 |         source_offset = source_time.utcoffset() or timedelta()
 97 |         target_offset = target_time.utcoffset() or timedelta()
 98 |         hours_difference = (target_offset - source_offset).total_seconds() / 3600
 99 | 
100 |         if hours_difference.is_integer():
101 |             time_diff_str = f"{hours_difference:+.1f}h"
102 |         else:
103 |             # For fractional hours like Nepal's UTC+5:45
104 |             time_diff_str = f"{hours_difference:+.2f}".rstrip("0").rstrip(".") + "h"
105 | 
106 |         return TimeConversionResult(
107 |             source=TimeResult(
108 |                 timezone=source_tz,
109 |                 datetime=source_time.isoformat(timespec="seconds"),
110 |                 day_of_week=source_time.strftime("%A"),
111 |                 is_dst=bool(source_time.dst()),
112 |             ),
113 |             target=TimeResult(
114 |                 timezone=target_tz,
115 |                 datetime=target_time.isoformat(timespec="seconds"),
116 |                 day_of_week=target_time.strftime("%A"),
117 |                 is_dst=bool(target_time.dst()),
118 |             ),
119 |             time_difference=time_diff_str,
120 |         )
121 | 
122 | 
123 | async def serve(local_timezone: str | None = None) -> None:
124 |     server = Server("mcp-time")
125 |     time_server = TimeServer()
126 |     local_tz = str(get_local_tz(local_timezone))
127 | 
128 |     @server.list_tools()
129 |     async def list_tools() -> list[Tool]:
130 |         """List available time tools."""
131 |         return [
132 |             Tool(
133 |                 name=TimeTools.GET_CURRENT_TIME.value,
 134 |                 description="Get current time in a specific timezone",
135 |                 inputSchema={
136 |                     "type": "object",
137 |                     "properties": {
138 |                         "timezone": {
139 |                             "type": "string",
140 |                             "description": f"IANA timezone name (e.g., 'America/New_York', 'Europe/London'). Use '{local_tz}' as local timezone if no timezone provided by the user.",
141 |                         }
142 |                     },
143 |                     "required": ["timezone"],
144 |                 },
145 |             ),
146 |             Tool(
147 |                 name=TimeTools.CONVERT_TIME.value,
148 |                 description="Convert time between timezones",
149 |                 inputSchema={
150 |                     "type": "object",
151 |                     "properties": {
152 |                         "source_timezone": {
153 |                             "type": "string",
154 |                             "description": f"Source IANA timezone name (e.g., 'America/New_York', 'Europe/London'). Use '{local_tz}' as local timezone if no source timezone provided by the user.",
155 |                         },
156 |                         "time": {
157 |                             "type": "string",
158 |                             "description": "Time to convert in 24-hour format (HH:MM)",
159 |                         },
160 |                         "target_timezone": {
161 |                             "type": "string",
162 |                             "description": f"Target IANA timezone name (e.g., 'Asia/Tokyo', 'America/San_Francisco'). Use '{local_tz}' as local timezone if no target timezone provided by the user.",
163 |                         },
164 |                     },
165 |                     "required": ["source_timezone", "time", "target_timezone"],
166 |                 },
167 |             ),
168 |         ]
169 | 
170 |     @server.call_tool()
171 |     async def call_tool(
172 |         name: str, arguments: dict
173 |     ) -> Sequence[TextContent | ImageContent | EmbeddedResource]:
174 |         """Handle tool calls for time queries."""
175 |         try:
176 |             match name:
177 |                 case TimeTools.GET_CURRENT_TIME.value:
178 |                     timezone = arguments.get("timezone")
179 |                     if not timezone:
180 |                         raise ValueError("Missing required argument: timezone")
181 | 
182 |                     result = time_server.get_current_time(timezone)
183 | 
184 |                 case TimeTools.CONVERT_TIME.value:
185 |                     if not all(
186 |                         k in arguments
187 |                         for k in ["source_timezone", "time", "target_timezone"]
188 |                     ):
189 |                         raise ValueError("Missing required arguments")
190 | 
191 |                     result = time_server.convert_time(
192 |                         arguments["source_timezone"],
193 |                         arguments["time"],
194 |                         arguments["target_timezone"],
195 |                     )
196 |                 case _:
197 |                     raise ValueError(f"Unknown tool: {name}")
198 | 
199 |             return [
200 |                 TextContent(type="text", text=json.dumps(result.model_dump(), indent=2))
201 |             ]
202 | 
203 |         except Exception as e:
204 |             raise ValueError(f"Error processing mcp-server-time query: {str(e)}")
205 | 
206 |     options = server.create_initialization_options()
207 |     async with stdio_server() as (read_stream, write_stream):
208 |         await server.run(read_stream, write_stream, options)
209 | 
```
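
Because `TimeServer` keeps no state, its two methods can be exercised directly without an MCP transport. A minimal sketch (the timezone names are illustrative; the import path assumes the package layout shown above):

```python
# Direct use of the conversion logic defined above, outside of MCP.
from mcp_server_time.server import TimeServer

ts = TimeServer()

now_in_tokyo = ts.get_current_time("Asia/Tokyo")
print(now_in_tokyo.model_dump())  # {'timezone': 'Asia/Tokyo', 'datetime': '...', ...}

# 09:00 in New York converted to Kathmandu exercises the fractional-offset
# branch of the formatter, yielding e.g. "+9.75h" during daylight saving time.
result = ts.convert_time("America/New_York", "09:00", "Asia/Kathmandu")
print(result.time_difference)
```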

--------------------------------------------------------------------------------
/src/sequentialthinking/index.ts:
--------------------------------------------------------------------------------

```typescript
  1 | #!/usr/bin/env node
  2 | 
  3 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
  4 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  5 | import {
  6 |   CallToolRequestSchema,
  7 |   ListToolsRequestSchema,
  8 |   Tool,
  9 | } from "@modelcontextprotocol/sdk/types.js";
 10 | // chalk v5 is ESM-only, so it is loaded via a default import
 11 | import chalk from 'chalk';
 12 | 
 13 | interface ThoughtData {
 14 |   thought: string;
 15 |   thoughtNumber: number;
 16 |   totalThoughts: number;
 17 |   isRevision?: boolean;
 18 |   revisesThought?: number;
 19 |   branchFromThought?: number;
 20 |   branchId?: string;
 21 |   needsMoreThoughts?: boolean;
 22 |   nextThoughtNeeded: boolean;
 23 | }
 24 | 
 25 | class SequentialThinkingServer {
 26 |   private thoughtHistory: ThoughtData[] = [];
 27 |   private branches: Record<string, ThoughtData[]> = {};
 28 |   private disableThoughtLogging: boolean;
 29 | 
 30 |   constructor() {
 31 |     this.disableThoughtLogging = (process.env.DISABLE_THOUGHT_LOGGING || "").toLowerCase() === "true";
 32 |   }
 33 | 
 34 |   private validateThoughtData(input: unknown): ThoughtData {
 35 |     const data = input as Record<string, unknown>;
 36 | 
 37 |     if (!data.thought || typeof data.thought !== 'string') {
 38 |       throw new Error('Invalid thought: must be a string');
 39 |     }
 40 |     if (!data.thoughtNumber || typeof data.thoughtNumber !== 'number') {
 41 |       throw new Error('Invalid thoughtNumber: must be a number');
 42 |     }
 43 |     if (!data.totalThoughts || typeof data.totalThoughts !== 'number') {
 44 |       throw new Error('Invalid totalThoughts: must be a number');
 45 |     }
 46 |     if (typeof data.nextThoughtNeeded !== 'boolean') {
 47 |       throw new Error('Invalid nextThoughtNeeded: must be a boolean');
 48 |     }
 49 | 
 50 |     return {
 51 |       thought: data.thought,
 52 |       thoughtNumber: data.thoughtNumber,
 53 |       totalThoughts: data.totalThoughts,
 54 |       nextThoughtNeeded: data.nextThoughtNeeded,
 55 |       isRevision: data.isRevision as boolean | undefined,
 56 |       revisesThought: data.revisesThought as number | undefined,
 57 |       branchFromThought: data.branchFromThought as number | undefined,
 58 |       branchId: data.branchId as string | undefined,
 59 |       needsMoreThoughts: data.needsMoreThoughts as boolean | undefined,
 60 |     };
 61 |   }
 62 | 
 63 |   private formatThought(thoughtData: ThoughtData): string {
 64 |     const { thoughtNumber, totalThoughts, thought, isRevision, revisesThought, branchFromThought, branchId } = thoughtData;
 65 | 
 66 |     let prefix = '';
 67 |     let context = '';
 68 | 
 69 |     if (isRevision) {
 70 |       prefix = chalk.yellow('🔄 Revision');
 71 |       context = ` (revising thought ${revisesThought})`;
 72 |     } else if (branchFromThought) {
 73 |       prefix = chalk.green('🌿 Branch');
 74 |       context = ` (from thought ${branchFromThought}, ID: ${branchId})`;
 75 |     } else {
 76 |       prefix = chalk.blue('💭 Thought');
 77 |       context = '';
 78 |     }
 79 | 
 80 |     const header = `${prefix} ${thoughtNumber}/${totalThoughts}${context}`;
 81 |     const border = '─'.repeat(Math.max(header.length, thought.length) + 4);
 82 | 
 83 |     return `
 84 | ┌${border}┐
 85 | │ ${header} │
 86 | ├${border}┤
 87 | │ ${thought.padEnd(border.length - 2)} │
 88 | └${border}┘`;
 89 |   }
 90 | 
 91 |   public processThought(input: unknown): { content: Array<{ type: string; text: string }>; isError?: boolean } {
 92 |     try {
 93 |       const validatedInput = this.validateThoughtData(input);
 94 | 
 95 |       if (validatedInput.thoughtNumber > validatedInput.totalThoughts) {
 96 |         validatedInput.totalThoughts = validatedInput.thoughtNumber;
 97 |       }
 98 | 
 99 |       this.thoughtHistory.push(validatedInput);
100 | 
101 |       if (validatedInput.branchFromThought && validatedInput.branchId) {
102 |         if (!this.branches[validatedInput.branchId]) {
103 |           this.branches[validatedInput.branchId] = [];
104 |         }
105 |         this.branches[validatedInput.branchId].push(validatedInput);
106 |       }
107 | 
108 |       if (!this.disableThoughtLogging) {
109 |         const formattedThought = this.formatThought(validatedInput);
110 |         console.error(formattedThought);
111 |       }
112 | 
113 |       return {
114 |         content: [{
115 |           type: "text",
116 |           text: JSON.stringify({
117 |             thoughtNumber: validatedInput.thoughtNumber,
118 |             totalThoughts: validatedInput.totalThoughts,
119 |             nextThoughtNeeded: validatedInput.nextThoughtNeeded,
120 |             branches: Object.keys(this.branches),
121 |             thoughtHistoryLength: this.thoughtHistory.length
122 |           }, null, 2)
123 |         }]
124 |       };
125 |     } catch (error) {
126 |       return {
127 |         content: [{
128 |           type: "text",
129 |           text: JSON.stringify({
130 |             error: error instanceof Error ? error.message : String(error),
131 |             status: 'failed'
132 |           }, null, 2)
133 |         }],
134 |         isError: true
135 |       };
136 |     }
137 |   }
138 | }
139 | 
140 | const SEQUENTIAL_THINKING_TOOL: Tool = {
141 |   name: "sequentialthinking",
142 |   description: `A detailed tool for dynamic and reflective problem-solving through thoughts.
143 | This tool helps analyze problems through a flexible thinking process that can adapt and evolve.
144 | Each thought can build on, question, or revise previous insights as understanding deepens.
145 | 
146 | When to use this tool:
147 | - Breaking down complex problems into steps
148 | - Planning and design with room for revision
149 | - Analysis that might need course correction
150 | - Problems where the full scope might not be clear initially
151 | - Problems that require a multi-step solution
152 | - Tasks that need to maintain context over multiple steps
153 | - Situations where irrelevant information needs to be filtered out
154 | 
155 | Key features:
156 | - You can adjust total_thoughts up or down as you progress
157 | - You can question or revise previous thoughts
158 | - You can add more thoughts even after reaching what seemed like the end
159 | - You can express uncertainty and explore alternative approaches
160 | - Not every thought needs to build linearly - you can branch or backtrack
161 | - Generates a solution hypothesis
162 | - Verifies the hypothesis based on the Chain of Thought steps
163 | - Repeats the process until satisfied
164 | - Provides a correct answer
165 | 
166 | Parameters explained:
167 | - thought: Your current thinking step, which can include:
168 | * Regular analytical steps
169 | * Revisions of previous thoughts
170 | * Questions about previous decisions
171 | * Realizations about needing more analysis
172 | * Changes in approach
173 | * Hypothesis generation
174 | * Hypothesis verification
175 | - next_thought_needed: True if you need more thinking, even if at what seemed like the end
176 | - thought_number: Current number in sequence (can go beyond initial total if needed)
177 | - total_thoughts: Current estimate of thoughts needed (can be adjusted up/down)
178 | - is_revision: A boolean indicating if this thought revises previous thinking
179 | - revises_thought: If is_revision is true, which thought number is being reconsidered
180 | - branch_from_thought: If branching, which thought number is the branching point
181 | - branch_id: Identifier for the current branch (if any)
182 | - needs_more_thoughts: If reaching end but realizing more thoughts needed
183 | 
184 | You should:
185 | 1. Start with an initial estimate of needed thoughts, but be ready to adjust
186 | 2. Feel free to question or revise previous thoughts
187 | 3. Don't hesitate to add more thoughts if needed, even at the "end"
188 | 4. Express uncertainty when present
189 | 5. Mark thoughts that revise previous thinking or branch into new paths
190 | 6. Ignore information that is irrelevant to the current step
191 | 7. Generate a solution hypothesis when appropriate
192 | 8. Verify the hypothesis based on the Chain of Thought steps
193 | 9. Repeat the process until satisfied with the solution
194 | 10. Provide a single, ideally correct answer as the final output
195 | 11. Only set next_thought_needed to false when truly done and a satisfactory answer is reached`,
196 |   inputSchema: {
197 |     type: "object",
198 |     properties: {
199 |       thought: {
200 |         type: "string",
201 |         description: "Your current thinking step"
202 |       },
203 |       nextThoughtNeeded: {
204 |         type: "boolean",
205 |         description: "Whether another thought step is needed"
206 |       },
207 |       thoughtNumber: {
208 |         type: "integer",
209 |         description: "Current thought number (numeric value, e.g., 1, 2, 3)",
210 |         minimum: 1
211 |       },
212 |       totalThoughts: {
213 |         type: "integer",
214 |         description: "Estimated total thoughts needed (numeric value, e.g., 5, 10)",
215 |         minimum: 1
216 |       },
217 |       isRevision: {
218 |         type: "boolean",
219 |         description: "Whether this revises previous thinking"
220 |       },
221 |       revisesThought: {
222 |         type: "integer",
223 |         description: "Which thought is being reconsidered",
224 |         minimum: 1
225 |       },
226 |       branchFromThought: {
227 |         type: "integer",
228 |         description: "Branching point thought number",
229 |         minimum: 1
230 |       },
231 |       branchId: {
232 |         type: "string",
233 |         description: "Branch identifier"
234 |       },
235 |       needsMoreThoughts: {
236 |         type: "boolean",
237 |         description: "If more thoughts are needed"
238 |       }
239 |     },
240 |     required: ["thought", "nextThoughtNeeded", "thoughtNumber", "totalThoughts"]
241 |   }
242 | };
243 | 
244 | const server = new Server(
245 |   {
246 |     name: "sequential-thinking-server",
247 |     version: "0.2.0",
248 |   },
249 |   {
250 |     capabilities: {
251 |       tools: {},
252 |     },
253 |   }
254 | );
255 | 
256 | const thinkingServer = new SequentialThinkingServer();
257 | 
258 | server.setRequestHandler(ListToolsRequestSchema, async () => ({
259 |   tools: [SEQUENTIAL_THINKING_TOOL],
260 | }));
261 | 
262 | server.setRequestHandler(CallToolRequestSchema, async (request) => {
263 |   if (request.params.name === "sequentialthinking") {
264 |     return thinkingServer.processThought(request.params.arguments);
265 |   }
266 | 
267 |   return {
268 |     content: [{
269 |       type: "text",
270 |       text: `Unknown tool: ${request.params.name}`
271 |     }],
272 |     isError: true
273 |   };
274 | });
275 | 
276 | async function runServer() {
277 |   const transport = new StdioServerTransport();
278 |   await server.connect(transport);
279 |   console.error("Sequential Thinking MCP Server running on stdio");
280 | }
281 | 
282 | runServer().catch((error) => {
283 |   console.error("Fatal error running server:", error);
284 |   process.exit(1);
285 | });
286 | 
```
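
Since the tool's input schema is the whole interface, a client only needs to send the four required fields. Below is a hedged sketch of driving the server from Python with the `mcp` client SDK, assuming the package is published as `@modelcontextprotocol/server-sequential-thinking` (per the repo README); the thought text is illustrative:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(
        command="npx", args=["-y", "@modelcontextprotocol/server-sequential-thinking"]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "sequentialthinking",
                arguments={
                    "thought": "Break the problem into sub-goals",
                    "thoughtNumber": 1,
                    "totalThoughts": 3,
                    "nextThoughtNeeded": True,
                },
            )
            # The server echoes thoughtNumber, totalThoughts, nextThoughtNeeded,
            # the branch IDs seen so far, and thoughtHistoryLength as JSON text.
            print(result.content[0].text)

asyncio.run(main())
```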

--------------------------------------------------------------------------------
/src/fetch/src/mcp_server_fetch/server.py:
--------------------------------------------------------------------------------

```python
  1 | from typing import Annotated, Tuple
  2 | from urllib.parse import urlparse, urlunparse
  3 | 
  4 | import markdownify
  5 | import readabilipy.simple_json
  6 | from mcp.shared.exceptions import McpError
  7 | from mcp.server import Server
  8 | from mcp.server.stdio import stdio_server
  9 | from mcp.types import (
 10 |     ErrorData,
 11 |     GetPromptResult,
 12 |     Prompt,
 13 |     PromptArgument,
 14 |     PromptMessage,
 15 |     TextContent,
 16 |     Tool,
 17 |     INVALID_PARAMS,
 18 |     INTERNAL_ERROR,
 19 | )
 20 | from protego import Protego
 21 | from pydantic import BaseModel, Field, AnyUrl
 22 | 
 23 | DEFAULT_USER_AGENT_AUTONOMOUS = "ModelContextProtocol/1.0 (Autonomous; +https://github.com/modelcontextprotocol/servers)"
 24 | DEFAULT_USER_AGENT_MANUAL = "ModelContextProtocol/1.0 (User-Specified; +https://github.com/modelcontextprotocol/servers)"
 25 | 
 26 | 
 27 | def extract_content_from_html(html: str) -> str:
 28 |     """Extract and convert HTML content to Markdown format.
 29 | 
 30 |     Args:
 31 |         html: Raw HTML content to process
 32 | 
 33 |     Returns:
 34 |         Simplified markdown version of the content
 35 |     """
 36 |     ret = readabilipy.simple_json.simple_json_from_html_string(
 37 |         html, use_readability=True
 38 |     )
 39 |     if not ret["content"]:
 40 |         return "<error>Page failed to be simplified from HTML</error>"
 41 |     content = markdownify.markdownify(
 42 |         ret["content"],
 43 |         heading_style=markdownify.ATX,
 44 |     )
 45 |     return content
 46 | 
 47 | 
 48 | def get_robots_txt_url(url: str) -> str:
 49 |     """Get the robots.txt URL for a given website URL.
 50 | 
 51 |     Args:
 52 |         url: Website URL to get robots.txt for
 53 | 
 54 |     Returns:
 55 |         URL of the robots.txt file
 56 |     """
 57 |     # Parse the URL into components
 58 |     parsed = urlparse(url)
 59 | 
 60 |     # Reconstruct the base URL with just scheme, netloc, and /robots.txt path
 61 |     robots_url = urlunparse((parsed.scheme, parsed.netloc, "/robots.txt", "", "", ""))
 62 | 
 63 |     return robots_url
 64 | 
 65 | 
 66 | async def check_may_autonomously_fetch_url(url: str, user_agent: str, proxy_url: str | None = None) -> None:
 67 |     """
 68 |     Check if the URL can be fetched by the user agent according to the robots.txt file.
 69 |     Raises a McpError if not.
 70 |     """
 71 |     from httpx import AsyncClient, HTTPError
 72 | 
 73 |     robot_txt_url = get_robots_txt_url(url)
 74 | 
 75 |     async with AsyncClient(proxies=proxy_url) as client:
 76 |         try:
 77 |             response = await client.get(
 78 |                 robot_txt_url,
 79 |                 follow_redirects=True,
 80 |                 headers={"User-Agent": user_agent},
 81 |             )
 82 |         except HTTPError:
 83 |             raise McpError(ErrorData(
 84 |                 code=INTERNAL_ERROR,
 85 |                 message=f"Failed to fetch robots.txt {robot_txt_url} due to a connection issue",
 86 |             ))
 87 |         if response.status_code in (401, 403):
 88 |             raise McpError(ErrorData(
 89 |                 code=INTERNAL_ERROR,
 90 |                 message=f"When fetching robots.txt ({robot_txt_url}), received status {response.status_code} so assuming that autonomous fetching is not allowed, the user can try manually fetching by using the fetch prompt",
 91 |             ))
 92 |         elif 400 <= response.status_code < 500:
 93 |             return
 94 |         robot_txt = response.text
 95 |     processed_robot_txt = "\n".join(
 96 |         line for line in robot_txt.splitlines() if not line.strip().startswith("#")
 97 |     )
 98 |     robot_parser = Protego.parse(processed_robot_txt)
 99 |     if not robot_parser.can_fetch(str(url), user_agent):
100 |         raise McpError(ErrorData(
101 |             code=INTERNAL_ERROR,
 102 |             message=f"The site's robots.txt ({robot_txt_url}) specifies that autonomous fetching of this page is not allowed, "
103 |             f"<useragent>{user_agent}</useragent>\n"
104 |             f"<url>{url}</url>"
105 |             f"<robots>\n{robot_txt}\n</robots>\n"
106 |             f"The assistant must let the user know that it failed to view the page. The assistant may provide further guidance based on the above information.\n"
107 |             f"The assistant can tell the user that they can try manually fetching the page by using the fetch prompt within their UI.",
108 |         ))
109 | 
110 | 
111 | async def fetch_url(
112 |     url: str, user_agent: str, force_raw: bool = False, proxy_url: str | None = None
113 | ) -> Tuple[str, str]:
114 |     """
115 |     Fetch the URL and return the content in a form ready for the LLM, as well as a prefix string with status information.
116 |     """
117 |     from httpx import AsyncClient, HTTPError
118 | 
119 |     async with AsyncClient(proxies=proxy_url) as client:
120 |         try:
121 |             response = await client.get(
122 |                 url,
123 |                 follow_redirects=True,
124 |                 headers={"User-Agent": user_agent},
125 |                 timeout=30,
126 |             )
127 |         except HTTPError as e:
128 |             raise McpError(ErrorData(code=INTERNAL_ERROR, message=f"Failed to fetch {url}: {e!r}"))
129 |         if response.status_code >= 400:
130 |             raise McpError(ErrorData(
131 |                 code=INTERNAL_ERROR,
132 |                 message=f"Failed to fetch {url} - status code {response.status_code}",
133 |             ))
134 | 
135 |         page_raw = response.text
136 | 
137 |     content_type = response.headers.get("content-type", "")
138 |     is_page_html = (
139 |         "<html" in page_raw[:100] or "text/html" in content_type or not content_type
140 |     )
141 | 
142 |     if is_page_html and not force_raw:
143 |         return extract_content_from_html(page_raw), ""
144 | 
145 |     return (
146 |         page_raw,
147 |         f"Content type {content_type} cannot be simplified to markdown, but here is the raw content:\n",
148 |     )
149 | 
150 | 
151 | class Fetch(BaseModel):
152 |     """Parameters for fetching a URL."""
153 | 
154 |     url: Annotated[AnyUrl, Field(description="URL to fetch")]
155 |     max_length: Annotated[
156 |         int,
157 |         Field(
158 |             default=5000,
159 |             description="Maximum number of characters to return.",
160 |             gt=0,
161 |             lt=1000000,
162 |         ),
163 |     ]
164 |     start_index: Annotated[
165 |         int,
166 |         Field(
167 |             default=0,
 168 |             description="Start returning output from this character index; useful if a previous fetch was truncated and more content is required.",
169 |             ge=0,
170 |         ),
171 |     ]
172 |     raw: Annotated[
173 |         bool,
174 |         Field(
175 |             default=False,
176 |             description="Get the actual HTML content of the requested page, without simplification.",
177 |         ),
178 |     ]
179 | 
180 | 
181 | async def serve(
182 |     custom_user_agent: str | None = None,
183 |     ignore_robots_txt: bool = False,
184 |     proxy_url: str | None = None,
185 | ) -> None:
186 |     """Run the fetch MCP server.
187 | 
188 |     Args:
189 |         custom_user_agent: Optional custom User-Agent string to use for requests
190 |         ignore_robots_txt: Whether to ignore robots.txt restrictions
191 |         proxy_url: Optional proxy URL to use for requests
192 |     """
193 |     server = Server("mcp-fetch")
194 |     user_agent_autonomous = custom_user_agent or DEFAULT_USER_AGENT_AUTONOMOUS
195 |     user_agent_manual = custom_user_agent or DEFAULT_USER_AGENT_MANUAL
196 | 
197 |     @server.list_tools()
198 |     async def list_tools() -> list[Tool]:
199 |         return [
200 |             Tool(
201 |                 name="fetch",
202 |                 description="""Fetches a URL from the internet and optionally extracts its contents as markdown.
203 | 
204 | Although originally you did not have internet access, and were advised to refuse and tell the user this, this tool now grants you internet access. Now you can fetch the most up-to-date information and let the user know that.""",
205 |                 inputSchema=Fetch.model_json_schema(),
206 |             )
207 |         ]
208 | 
209 |     @server.list_prompts()
210 |     async def list_prompts() -> list[Prompt]:
211 |         return [
212 |             Prompt(
213 |                 name="fetch",
214 |                 description="Fetch a URL and extract its contents as markdown",
215 |                 arguments=[
216 |                     PromptArgument(
217 |                         name="url", description="URL to fetch", required=True
218 |                     )
219 |                 ],
220 |             )
221 |         ]
222 | 
223 |     @server.call_tool()
224 |     async def call_tool(name, arguments: dict) -> list[TextContent]:
225 |         try:
226 |             args = Fetch(**arguments)
227 |         except ValueError as e:
228 |             raise McpError(ErrorData(code=INVALID_PARAMS, message=str(e)))
229 | 
230 |         url = str(args.url)
231 |         if not url:
232 |             raise McpError(ErrorData(code=INVALID_PARAMS, message="URL is required"))
233 | 
234 |         if not ignore_robots_txt:
235 |             await check_may_autonomously_fetch_url(url, user_agent_autonomous, proxy_url)
236 | 
237 |         content, prefix = await fetch_url(
238 |             url, user_agent_autonomous, force_raw=args.raw, proxy_url=proxy_url
239 |         )
240 |         original_length = len(content)
241 |         if args.start_index >= original_length:
242 |             content = "<error>No more content available.</error>"
243 |         else:
244 |             truncated_content = content[args.start_index : args.start_index + args.max_length]
245 |             if not truncated_content:
246 |                 content = "<error>No more content available.</error>"
247 |             else:
248 |                 content = truncated_content
249 |                 actual_content_length = len(truncated_content)
250 |                 remaining_content = original_length - (args.start_index + actual_content_length)
251 |                 # Only add the prompt to continue fetching if there is still remaining content
252 |                 if actual_content_length == args.max_length and remaining_content > 0:
253 |                     next_start = args.start_index + actual_content_length
254 |                     content += f"\n\n<error>Content truncated. Call the fetch tool with a start_index of {next_start} to get more content.</error>"
255 |         return [TextContent(type="text", text=f"{prefix}Contents of {url}:\n{content}")]
256 | 
257 |     @server.get_prompt()
258 |     async def get_prompt(name: str, arguments: dict | None) -> GetPromptResult:
259 |         if not arguments or "url" not in arguments:
260 |             raise McpError(ErrorData(code=INVALID_PARAMS, message="URL is required"))
261 | 
262 |         url = arguments["url"]
263 | 
264 |         try:
265 |             content, prefix = await fetch_url(url, user_agent_manual, proxy_url=proxy_url)
266 |             # TODO: after SDK bug is addressed, don't catch the exception
267 |         except McpError as e:
268 |             return GetPromptResult(
269 |                 description=f"Failed to fetch {url}",
270 |                 messages=[
271 |                     PromptMessage(
272 |                         role="user",
273 |                         content=TextContent(type="text", text=str(e)),
274 |                     )
275 |                 ],
276 |             )
277 |         return GetPromptResult(
278 |             description=f"Contents of {url}",
279 |             messages=[
280 |                 PromptMessage(
281 |                     role="user", content=TextContent(type="text", text=prefix + content)
282 |                 )
283 |             ],
284 |         )
285 | 
286 |     options = server.create_initialization_options()
287 |     async with stdio_server() as (read_stream, write_stream):
288 |         await server.run(read_stream, write_stream, options, raise_exceptions=True)
289 | 
```
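
The `start_index`/`max_length` handling in `call_tool` defines a simple pagination contract: the caller re-issues the fetch with the `start_index` suggested in the truncation notice until a short (or empty) window comes back. The loop below replays that contract on a plain string, with no network involved (the sizes are arbitrary):

```python
# Illustration of the pagination logic in call_tool above, using a dummy string.
content = "x" * 12_000            # stands in for the simplified page text
max_length, start_index = 5_000, 0

while True:
    window = content[start_index : start_index + max_length]
    if not window:
        break                     # matches the "<error>No more content available.</error>" case
    remaining = len(content) - (start_index + len(window))
    print(f"chunk of {len(window)} chars at offset {start_index}, {remaining} left")
    if len(window) == max_length and remaining > 0:
        start_index += len(window)  # the next start_index the truncation notice suggests
    else:
        break
```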

--------------------------------------------------------------------------------
/src/filesystem/lib.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import fs from "fs/promises";
  2 | import path from "path";
  3 | import os from 'os';
  4 | import { randomBytes } from 'crypto';
  5 | import { diffLines, createTwoFilesPatch } from 'diff';
  6 | import { minimatch } from 'minimatch';
  7 | import { normalizePath, expandHome } from './path-utils.js';
  8 | import { isPathWithinAllowedDirectories } from './path-validation.js';
  9 | 
 10 | // Global allowed directories - set by the main module
 11 | let allowedDirectories: string[] = [];
 12 | 
 13 | // Function to set allowed directories from the main module
 14 | export function setAllowedDirectories(directories: string[]): void {
 15 |   allowedDirectories = [...directories];
 16 | }
 17 | 
 18 | // Function to get current allowed directories
 19 | export function getAllowedDirectories(): string[] {
 20 |   return [...allowedDirectories];
 21 | }
 22 | 
 23 | // Type definitions
 24 | interface FileInfo {
 25 |   size: number;
 26 |   created: Date;
 27 |   modified: Date;
 28 |   accessed: Date;
 29 |   isDirectory: boolean;
 30 |   isFile: boolean;
 31 |   permissions: string;
 32 | }
 33 | 
 34 | export interface SearchOptions {
 35 |   excludePatterns?: string[];
 36 | }
 37 | 
 38 | export interface SearchResult {
 39 |   path: string;
 40 |   isDirectory: boolean;
 41 | }
 42 | 
 43 | // Pure Utility Functions
 44 | export function formatSize(bytes: number): string {
 45 |   const units = ['B', 'KB', 'MB', 'GB', 'TB'];
 46 |   if (bytes === 0) return '0 B';
 47 |   
 48 |   const i = Math.floor(Math.log(bytes) / Math.log(1024));
 49 |   
 50 |   if (i <= 0) return `${bytes} ${units[0]}`;
 51 |   
 52 |   const unitIndex = Math.min(i, units.length - 1);
 53 |   return `${(bytes / Math.pow(1024, unitIndex)).toFixed(2)} ${units[unitIndex]}`;
 54 | }
 55 | 
 56 | export function normalizeLineEndings(text: string): string {
 57 |   return text.replace(/\r\n/g, '\n');
 58 | }
 59 | 
 60 | export function createUnifiedDiff(originalContent: string, newContent: string, filepath: string = 'file'): string {
 61 |   // Ensure consistent line endings for diff
 62 |   const normalizedOriginal = normalizeLineEndings(originalContent);
 63 |   const normalizedNew = normalizeLineEndings(newContent);
 64 | 
 65 |   return createTwoFilesPatch(
 66 |     filepath,
 67 |     filepath,
 68 |     normalizedOriginal,
 69 |     normalizedNew,
 70 |     'original',
 71 |     'modified'
 72 |   );
 73 | }
 74 | 
 75 | // Security & Validation Functions
 76 | export async function validatePath(requestedPath: string): Promise<string> {
 77 |   const expandedPath = expandHome(requestedPath);
 78 |   const absolute = path.isAbsolute(expandedPath)
 79 |     ? path.resolve(expandedPath)
 80 |     : path.resolve(process.cwd(), expandedPath);
 81 | 
 82 |   const normalizedRequested = normalizePath(absolute);
 83 | 
 84 |   // Security: Check if path is within allowed directories before any file operations
 85 |   const isAllowed = isPathWithinAllowedDirectories(normalizedRequested, allowedDirectories);
 86 |   if (!isAllowed) {
 87 |     throw new Error(`Access denied - path outside allowed directories: ${absolute} not in ${allowedDirectories.join(', ')}`);
 88 |   }
 89 | 
 90 |   // Security: Handle symlinks by checking their real path to prevent symlink attacks
 91 |   // This prevents attackers from creating symlinks that point outside allowed directories
 92 |   try {
 93 |     const realPath = await fs.realpath(absolute);
 94 |     const normalizedReal = normalizePath(realPath);
 95 |     if (!isPathWithinAllowedDirectories(normalizedReal, allowedDirectories)) {
 96 |       throw new Error(`Access denied - symlink target outside allowed directories: ${realPath} not in ${allowedDirectories.join(', ')}`);
 97 |     }
 98 |     return realPath;
 99 |   } catch (error) {
100 |     // Security: For new files that don't exist yet, verify parent directory
101 |     // This ensures we can't create files in unauthorized locations
102 |     if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
103 |       const parentDir = path.dirname(absolute);
104 |       try {
105 |         const realParentPath = await fs.realpath(parentDir);
106 |         const normalizedParent = normalizePath(realParentPath);
107 |         if (!isPathWithinAllowedDirectories(normalizedParent, allowedDirectories)) {
108 |           throw new Error(`Access denied - parent directory outside allowed directories: ${realParentPath} not in ${allowedDirectories.join(', ')}`);
109 |         }
110 |         return absolute;
111 |       } catch {
112 |         throw new Error(`Parent directory does not exist: ${parentDir}`);
113 |       }
114 |     }
115 |     throw error;
116 |   }
117 | }
118 | 
119 | 
120 | // File Operations
121 | export async function getFileStats(filePath: string): Promise<FileInfo> {
122 |   const stats = await fs.stat(filePath);
123 |   return {
124 |     size: stats.size,
125 |     created: stats.birthtime,
126 |     modified: stats.mtime,
127 |     accessed: stats.atime,
128 |     isDirectory: stats.isDirectory(),
129 |     isFile: stats.isFile(),
130 |     permissions: stats.mode.toString(8).slice(-3),
131 |   };
132 | }
133 | 
134 | export async function readFileContent(filePath: string, encoding: string = 'utf-8'): Promise<string> {
135 |   return await fs.readFile(filePath, encoding as BufferEncoding);
136 | }
137 | 
138 | export async function writeFileContent(filePath: string, content: string): Promise<void> {
139 |   try {
140 |     // Security: 'wx' flag ensures exclusive creation - fails if file/symlink exists,
141 |     // preventing writes through pre-existing symlinks
142 |     await fs.writeFile(filePath, content, { encoding: "utf-8", flag: 'wx' });
143 |   } catch (error) {
144 |     if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
145 |       // Security: Use atomic rename to prevent race conditions where symlinks
146 |       // could be created between validation and write. Rename operations
147 |       // replace the target file atomically and don't follow symlinks.
148 |       const tempPath = `${filePath}.${randomBytes(16).toString('hex')}.tmp`;
149 |       try {
150 |         await fs.writeFile(tempPath, content, 'utf-8');
151 |         await fs.rename(tempPath, filePath);
152 |       } catch (renameError) {
153 |         try {
154 |           await fs.unlink(tempPath);
155 |         } catch {}
156 |         throw renameError;
157 |       }
158 |     } else {
159 |       throw error;
160 |     }
161 |   }
162 | }
163 | 
164 | 
165 | // File Editing Functions
166 | interface FileEdit {
167 |   oldText: string;
168 |   newText: string;
169 | }
170 | 
171 | export async function applyFileEdits(
172 |   filePath: string,
173 |   edits: FileEdit[],
174 |   dryRun: boolean = false
175 | ): Promise<string> {
176 |   // Read file content and normalize line endings
177 |   const content = normalizeLineEndings(await fs.readFile(filePath, 'utf-8'));
178 | 
179 |   // Apply edits sequentially
180 |   let modifiedContent = content;
181 |   for (const edit of edits) {
182 |     const normalizedOld = normalizeLineEndings(edit.oldText);
183 |     const normalizedNew = normalizeLineEndings(edit.newText);
184 | 
185 |     // If exact match exists, use it
186 |     if (modifiedContent.includes(normalizedOld)) {
187 |       modifiedContent = modifiedContent.replace(normalizedOld, normalizedNew);
188 |       continue;
189 |     }
190 | 
191 |     // Otherwise, try line-by-line matching with flexibility for whitespace
192 |     const oldLines = normalizedOld.split('\n');
193 |     const contentLines = modifiedContent.split('\n');
194 |     let matchFound = false;
195 | 
196 |     for (let i = 0; i <= contentLines.length - oldLines.length; i++) {
197 |       const potentialMatch = contentLines.slice(i, i + oldLines.length);
198 | 
199 |       // Compare lines with normalized whitespace
200 |       const isMatch = oldLines.every((oldLine, j) => {
201 |         const contentLine = potentialMatch[j];
202 |         return oldLine.trim() === contentLine.trim();
203 |       });
204 | 
205 |       if (isMatch) {
206 |         // Preserve original indentation of first line
207 |         const originalIndent = contentLines[i].match(/^\s*/)?.[0] || '';
208 |         const newLines = normalizedNew.split('\n').map((line, j) => {
209 |           if (j === 0) return originalIndent + line.trimStart();
210 |           // For subsequent lines, try to preserve relative indentation
211 |           const oldIndent = oldLines[j]?.match(/^\s*/)?.[0] || '';
212 |           const newIndent = line.match(/^\s*/)?.[0] || '';
213 |           if (oldIndent && newIndent) {
214 |             const relativeIndent = newIndent.length - oldIndent.length;
215 |             return originalIndent + ' '.repeat(Math.max(0, relativeIndent)) + line.trimStart();
216 |           }
217 |           return line;
218 |         });
219 | 
220 |         contentLines.splice(i, oldLines.length, ...newLines);
221 |         modifiedContent = contentLines.join('\n');
222 |         matchFound = true;
223 |         break;
224 |       }
225 |     }
226 | 
227 |     if (!matchFound) {
228 |       throw new Error(`Could not find exact match for edit:\n${edit.oldText}`);
229 |     }
230 |   }
231 | 
232 |   // Create unified diff
233 |   const diff = createUnifiedDiff(content, modifiedContent, filePath);
234 | 
235 |   // Format diff with appropriate number of backticks
236 |   let numBackticks = 3;
237 |   while (diff.includes('`'.repeat(numBackticks))) {
238 |     numBackticks++;
239 |   }
240 |   const formattedDiff = `${'`'.repeat(numBackticks)}diff\n${diff}${'`'.repeat(numBackticks)}\n\n`;
241 | 
242 |   if (!dryRun) {
243 |     // Security: Use atomic rename to prevent race conditions where symlinks
244 |     // could be created between validation and write. Rename operations
245 |     // replace the target file atomically and don't follow symlinks.
246 |     const tempPath = `${filePath}.${randomBytes(16).toString('hex')}.tmp`;
247 |     try {
248 |       await fs.writeFile(tempPath, modifiedContent, 'utf-8');
249 |       await fs.rename(tempPath, filePath);
250 |     } catch (error) {
251 |       try {
252 |         await fs.unlink(tempPath);
253 |       } catch {}
254 |       throw error;
255 |     }
256 |   }
257 | 
258 |   return formattedDiff;
259 | }
260 | 
261 | // Memory-efficient implementation to get the last N lines of a file
262 | export async function tailFile(filePath: string, numLines: number): Promise<string> {
263 |   const CHUNK_SIZE = 1024; // Read 1KB at a time
264 |   const stats = await fs.stat(filePath);
265 |   const fileSize = stats.size;
266 |   
267 |   if (fileSize === 0) return '';
268 |   
269 |   // Open file for reading
270 |   const fileHandle = await fs.open(filePath, 'r');
271 |   try {
272 |     const lines: string[] = [];
273 |     let position = fileSize;
274 |     let chunk = Buffer.alloc(CHUNK_SIZE);
275 |     let linesFound = 0;
276 |     let remainingText = '';
277 |     
278 |     // Read chunks from the end of the file until we have enough lines
279 |     while (position > 0 && linesFound < numLines) {
280 |       const size = Math.min(CHUNK_SIZE, position);
281 |       position -= size;
282 |       
283 |       const { bytesRead } = await fileHandle.read(chunk, 0, size, position);
284 |       if (!bytesRead) break;
285 |       
286 |       // Get the chunk as a string and prepend any remaining text from previous iteration
287 |       const readData = chunk.slice(0, bytesRead).toString('utf-8');
288 |       const chunkText = readData + remainingText;
289 |       
290 |       // Split by newlines and count
291 |       const chunkLines = normalizeLineEndings(chunkText).split('\n');
292 |       
293 |       // If this isn't the end of the file, the first line is likely incomplete
294 |       // Save it to prepend to the next chunk
295 |       if (position > 0) {
296 |         remainingText = chunkLines[0];
297 |         chunkLines.shift(); // Remove the first (incomplete) line
298 |       }
299 |       
300 |       // Add lines to our result (up to the number we need)
301 |       for (let i = chunkLines.length - 1; i >= 0 && linesFound < numLines; i--) {
302 |         lines.unshift(chunkLines[i]);
303 |         linesFound++;
304 |       }
305 |     }
306 |     
307 |     return lines.join('\n');
308 |   } finally {
309 |     await fileHandle.close();
310 |   }
311 | }
312 | 
313 | // New function to get the first N lines of a file
314 | export async function headFile(filePath: string, numLines: number): Promise<string> {
315 |   const fileHandle = await fs.open(filePath, 'r');
316 |   try {
317 |     const lines: string[] = [];
318 |     let buffer = '';
319 |     let bytesRead = 0;
320 |     const chunk = Buffer.alloc(1024); // 1KB buffer
321 |     
322 |     // Read chunks and count lines until we have enough or reach EOF
323 |     while (lines.length < numLines) {
324 |       const result = await fileHandle.read(chunk, 0, chunk.length, bytesRead);
325 |       if (result.bytesRead === 0) break; // End of file
326 |       bytesRead += result.bytesRead;
327 |       buffer += chunk.slice(0, result.bytesRead).toString('utf-8');
328 |       
329 |       const newLineIndex = buffer.lastIndexOf('\n');
330 |       if (newLineIndex !== -1) {
331 |         const completeLines = buffer.slice(0, newLineIndex).split('\n');
332 |         buffer = buffer.slice(newLineIndex + 1);
333 |         for (const line of completeLines) {
334 |           lines.push(line);
335 |           if (lines.length >= numLines) break;
336 |         }
337 |       }
338 |     }
339 |     
340 |     // If there is leftover content and we still need lines, add it
341 |     if (buffer.length > 0 && lines.length < numLines) {
342 |       lines.push(buffer);
343 |     }
344 |     
345 |     return lines.join('\n');
346 |   } finally {
347 |     await fileHandle.close();
348 |   }
349 | }
350 | 
351 | export async function searchFilesWithValidation(
352 |   rootPath: string,
353 |   pattern: string,
354 |   allowedDirectories: string[],
355 |   options: SearchOptions = {}
356 | ): Promise<string[]> {
357 |   const { excludePatterns = [] } = options;
358 |   const results: string[] = [];
359 | 
360 |   async function search(currentPath: string) {
361 |     const entries = await fs.readdir(currentPath, { withFileTypes: true });
362 | 
363 |     for (const entry of entries) {
364 |       const fullPath = path.join(currentPath, entry.name);
365 | 
366 |       try {
367 |         await validatePath(fullPath);
368 | 
369 |         const relativePath = path.relative(rootPath, fullPath);
370 |         const shouldExclude = excludePatterns.some(excludePattern =>
371 |           minimatch(relativePath, excludePattern, { dot: true })
372 |         );
373 | 
374 |         if (shouldExclude) continue;
375 | 
376 |         // Use glob matching for the search pattern
377 |         if (minimatch(relativePath, pattern, { dot: true })) {
378 |           results.push(fullPath);
379 |         }
380 | 
381 |         if (entry.isDirectory()) {
382 |           await search(fullPath);
383 |         }
384 |       } catch {
385 |         continue;
386 |       }
387 |     }
388 |   }
389 | 
390 |   await search(rootPath);
391 |   return results;
392 | }
393 | 
```
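
Both `writeFileContent` and `applyFileEdits` rely on the same defensive pattern: write to a randomly named temp file, then rename over the target, so a symlink planted between validation and write is replaced rather than followed. The same idea in Python, as a sketch (`write_atomically` is a hypothetical helper, not part of this package):

```python
# Write-then-rename, mirroring the temp-file pattern used in lib.ts above.
import os
import secrets

def write_atomically(path: str, content: str) -> None:
    tmp = f"{path}.{secrets.token_hex(16)}.tmp"
    try:
        # 'x' fails if the temp name already exists, like the 'wx' flag in Node.
        with open(tmp, "x", encoding="utf-8") as f:
            f.write(content)
        # os.replace is atomic on POSIX and replaces a symlink at the target
        # instead of writing through it.
        os.replace(tmp, path)
    except BaseException:
        try:
            os.unlink(tmp)
        except FileNotFoundError:
            pass
        raise
```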

--------------------------------------------------------------------------------
/src/git/src/mcp_server_git/server.py:
--------------------------------------------------------------------------------

```python
  1 | import logging
  2 | from pathlib import Path
  3 | from typing import Sequence, Optional
  4 | from mcp.server import Server
  5 | from mcp.server.session import ServerSession
  6 | from mcp.server.stdio import stdio_server
  7 | from mcp.types import (
  8 |     ClientCapabilities,
  9 |     TextContent,
 10 |     Tool,
 11 |     ListRootsResult,
 12 |     RootsCapability,
 13 | )
 14 | from enum import Enum
 15 | import git
 16 | from pydantic import BaseModel, Field
 17 | 
 18 | # Default number of context lines to show in diff output
 19 | DEFAULT_CONTEXT_LINES = 3
 20 | 
 21 | class GitStatus(BaseModel):
 22 |     repo_path: str
 23 | 
 24 | class GitDiffUnstaged(BaseModel):
 25 |     repo_path: str
 26 |     context_lines: int = DEFAULT_CONTEXT_LINES
 27 | 
 28 | class GitDiffStaged(BaseModel):
 29 |     repo_path: str
 30 |     context_lines: int = DEFAULT_CONTEXT_LINES
 31 | 
 32 | class GitDiff(BaseModel):
 33 |     repo_path: str
 34 |     target: str
 35 |     context_lines: int = DEFAULT_CONTEXT_LINES
 36 | 
 37 | class GitCommit(BaseModel):
 38 |     repo_path: str
 39 |     message: str
 40 | 
 41 | class GitAdd(BaseModel):
 42 |     repo_path: str
 43 |     files: list[str]
 44 | 
 45 | class GitReset(BaseModel):
 46 |     repo_path: str
 47 | 
 48 | class GitLog(BaseModel):
 49 |     repo_path: str
 50 |     max_count: int = 10
 51 |     start_timestamp: Optional[str] = Field(
 52 |         None,
 53 |         description="Start timestamp for filtering commits. Accepts: ISO 8601 format (e.g., '2024-01-15T14:30:25'), relative dates (e.g., '2 weeks ago', 'yesterday'), or absolute dates (e.g., '2024-01-15', 'Jan 15 2024')"
 54 |     )
 55 |     end_timestamp: Optional[str] = Field(
 56 |         None,
 57 |         description="End timestamp for filtering commits. Accepts: ISO 8601 format (e.g., '2024-01-15T14:30:25'), relative dates (e.g., '2 weeks ago', 'yesterday'), or absolute dates (e.g., '2024-01-15', 'Jan 15 2024')"
 58 |     )
 59 | 
 60 | class GitCreateBranch(BaseModel):
 61 |     repo_path: str
 62 |     branch_name: str
 63 |     base_branch: str | None = None
 64 | 
 65 | class GitCheckout(BaseModel):
 66 |     repo_path: str
 67 |     branch_name: str
 68 | 
 69 | class GitShow(BaseModel):
 70 |     repo_path: str
 71 |     revision: str
 72 | 
 73 | 
 74 | 
 75 | class GitBranch(BaseModel):
 76 |     repo_path: str = Field(
 77 |         ...,
 78 |         description="The path to the Git repository.",
 79 |     )
 80 |     branch_type: str = Field(
 81 |         ...,
 82 |         description="Whether to list local branches ('local'), remote branches ('remote'), or all branches ('all').",
 83 |     )
 84 |     contains: Optional[str] = Field(
 85 |         None,
 86 |         description="The commit SHA that the branch should contain. Do not pass anything to this param if no commit SHA is specified.",
 87 |     )
 88 |     not_contains: Optional[str] = Field(
 89 |         None,
 90 |         description="The commit SHA that the branch should NOT contain. Do not pass anything to this param if no commit SHA is specified.",
 91 |     )
 92 | 
 93 | 
 94 | class GitTools(str, Enum):
 95 |     STATUS = "git_status"
 96 |     DIFF_UNSTAGED = "git_diff_unstaged"
 97 |     DIFF_STAGED = "git_diff_staged"
 98 |     DIFF = "git_diff"
 99 |     COMMIT = "git_commit"
100 |     ADD = "git_add"
101 |     RESET = "git_reset"
102 |     LOG = "git_log"
103 |     CREATE_BRANCH = "git_create_branch"
104 |     CHECKOUT = "git_checkout"
105 |     SHOW = "git_show"
106 | 
107 |     BRANCH = "git_branch"
108 | 
109 | def git_status(repo: git.Repo) -> str:
110 |     return repo.git.status()
111 | 
112 | def git_diff_unstaged(repo: git.Repo, context_lines: int = DEFAULT_CONTEXT_LINES) -> str:
113 |     return repo.git.diff(f"--unified={context_lines}")
114 | 
115 | def git_diff_staged(repo: git.Repo, context_lines: int = DEFAULT_CONTEXT_LINES) -> str:
116 |     return repo.git.diff(f"--unified={context_lines}", "--cached")
117 | 
118 | def git_diff(repo: git.Repo, target: str, context_lines: int = DEFAULT_CONTEXT_LINES) -> str:
119 |     return repo.git.diff(f"--unified={context_lines}", target)
120 | 
121 | def git_commit(repo: git.Repo, message: str) -> str:
122 |     commit = repo.index.commit(message)
123 |     return f"Changes committed successfully with hash {commit.hexsha}"
124 | 
125 | def git_add(repo: git.Repo, files: list[str]) -> str:
126 |     if files == ["."]:
127 |         repo.git.add(".")
128 |     else:
129 |         repo.index.add(files)
130 |     return "Files staged successfully"
131 | 
132 | def git_reset(repo: git.Repo) -> str:
133 |     repo.index.reset()
134 |     return "All staged changes reset"
135 | 
136 | def git_log(repo: git.Repo, max_count: int = 10, start_timestamp: Optional[str] = None, end_timestamp: Optional[str] = None) -> list[str]:
137 |     if start_timestamp or end_timestamp:
138 |         # Use git log command with date filtering
 139 |         args = [f'--max-count={max_count}']  # let git cap the output itself
140 |         if start_timestamp:
141 |             args.extend(['--since', start_timestamp])
142 |         if end_timestamp:
143 |             args.extend(['--until', end_timestamp])
 144 |         args.extend(['--format=%H%n%an%n%ad%n%s'])  # 4 lines per commit; a trailing %n would add a 5th, blank, line and break the grouping below
145 |         
146 |         log_output = repo.git.log(*args).split('\n')
147 |         
148 |         log = []
149 |         # Process commits in groups of 4 (hash, author, date, message)
150 |         for i in range(0, len(log_output), 4):
151 |             if i + 3 < len(log_output) and len(log) < max_count:
152 |                 log.append(
153 |                     f"Commit: {log_output[i]}\n"
154 |                     f"Author: {log_output[i+1]}\n"
155 |                     f"Date: {log_output[i+2]}\n"
156 |                     f"Message: {log_output[i+3]}\n"
157 |                 )
158 |         return log
159 |     else:
160 |         # Use existing logic for simple log without date filtering
161 |         commits = list(repo.iter_commits(max_count=max_count))
162 |         log = []
163 |         for commit in commits:
164 |             log.append(
165 |                 f"Commit: {commit.hexsha!r}\n"
166 |                 f"Author: {commit.author!r}\n"
167 |                 f"Date: {commit.authored_datetime}\n"
168 |                 f"Message: {commit.message!r}\n"
169 |             )
170 |         return log
171 | 
172 | def git_create_branch(repo: git.Repo, branch_name: str, base_branch: str | None = None) -> str:
173 |     if base_branch:
174 |         base = repo.references[base_branch]
175 |     else:
176 |         base = repo.active_branch
177 | 
178 |     repo.create_head(branch_name, base)
179 |     return f"Created branch '{branch_name}' from '{base.name}'"
180 | 
181 | def git_checkout(repo: git.Repo, branch_name: str) -> str:
182 |     repo.git.checkout(branch_name)
183 |     return f"Switched to branch '{branch_name}'"
184 | 
185 | 
186 | 
187 | def git_show(repo: git.Repo, revision: str) -> str:
188 |     commit = repo.commit(revision)
189 |     output = [
190 |         f"Commit: {commit.hexsha!r}\n"
191 |         f"Author: {commit.author!r}\n"
192 |         f"Date: {commit.authored_datetime!r}\n"
193 |         f"Message: {commit.message!r}\n"
194 |     ]
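    |     # Diff against the first parent, or against the empty tree for a root commit.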
195 |     if commit.parents:
196 |         parent = commit.parents[0]
197 |         diff = parent.diff(commit, create_patch=True)
198 |     else:
199 |         diff = commit.diff(git.NULL_TREE, create_patch=True)
200 |     for d in diff:
201 |         output.append(f"\n--- {d.a_path}\n+++ {d.b_path}\n")
202 |         output.append(d.diff.decode('utf-8'))
203 |     return "".join(output)
204 | 
205 | def git_branch(repo: git.Repo, branch_type: str, contains: str | None = None, not_contains: str | None = None) -> str:
206 |     match contains:
207 |         case None:
208 |             contains_sha = (None,)
209 |         case _:
210 |             contains_sha = ("--contains", contains)
211 | 
212 |     match not_contains:
213 |         case None:
214 |             not_contains_sha = (None,)
215 |         case _:
216 |             not_contains_sha = ("--no-contains", not_contains)
217 | 
218 |     match branch_type:
219 |         case 'local':
220 |             b_type = None
221 |         case 'remote':
222 |             b_type = "-r"
223 |         case 'all':
224 |             b_type = "-a"
225 |         case _:
226 |             return f"Invalid branch type: {branch_type}"
227 | 
 228 |     # GitPython drops None arguments automatically, so unused filters vanish
229 |     branch_info = repo.git.branch(b_type, *contains_sha, *not_contains_sha)
230 | 
231 |     return branch_info
232 | 
233 | 
234 | async def serve(repository: Path | None) -> None:
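    |     # Validate the optional preconfigured repository, register the MCP tool
    |     # handlers, then serve over stdio.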
235 |     logger = logging.getLogger(__name__)
236 | 
237 |     if repository is not None:
238 |         try:
239 |             git.Repo(repository)
240 |             logger.info(f"Using repository at {repository}")
241 |         except git.InvalidGitRepositoryError:
242 |             logger.error(f"{repository} is not a valid Git repository")
243 |             return
244 | 
245 |     server = Server("mcp-git")
246 | 
247 |     @server.list_tools()
248 |     async def list_tools() -> list[Tool]:
249 |         return [
250 |             Tool(
251 |                 name=GitTools.STATUS,
252 |                 description="Shows the working tree status",
253 |                 inputSchema=GitStatus.model_json_schema(),
254 |             ),
255 |             Tool(
256 |                 name=GitTools.DIFF_UNSTAGED,
257 |                 description="Shows changes in the working directory that are not yet staged",
258 |                 inputSchema=GitDiffUnstaged.model_json_schema(),
259 |             ),
260 |             Tool(
261 |                 name=GitTools.DIFF_STAGED,
262 |                 description="Shows changes that are staged for commit",
263 |                 inputSchema=GitDiffStaged.model_json_schema(),
264 |             ),
265 |             Tool(
266 |                 name=GitTools.DIFF,
267 |                 description="Shows differences between branches or commits",
268 |                 inputSchema=GitDiff.model_json_schema(),
269 |             ),
270 |             Tool(
271 |                 name=GitTools.COMMIT,
272 |                 description="Records changes to the repository",
273 |                 inputSchema=GitCommit.model_json_schema(),
274 |             ),
275 |             Tool(
276 |                 name=GitTools.ADD,
277 |                 description="Adds file contents to the staging area",
278 |                 inputSchema=GitAdd.model_json_schema(),
279 |             ),
280 |             Tool(
281 |                 name=GitTools.RESET,
282 |                 description="Unstages all staged changes",
283 |                 inputSchema=GitReset.model_json_schema(),
284 |             ),
285 |             Tool(
286 |                 name=GitTools.LOG,
287 |                 description="Shows the commit logs",
288 |                 inputSchema=GitLog.model_json_schema(),
289 |             ),
290 |             Tool(
291 |                 name=GitTools.CREATE_BRANCH,
292 |                 description="Creates a new branch from an optional base branch",
293 |                 inputSchema=GitCreateBranch.model_json_schema(),
294 |             ),
295 |             Tool(
296 |                 name=GitTools.CHECKOUT,
297 |                 description="Switches branches",
298 |                 inputSchema=GitCheckout.model_json_schema(),
299 |             ),
300 |             Tool(
301 |                 name=GitTools.SHOW,
302 |                 description="Shows the contents of a commit",
303 |                 inputSchema=GitShow.model_json_schema(),
304 |             ),
305 | 
306 |             Tool(
307 |                 name=GitTools.BRANCH,
308 |                 description="List Git branches",
309 |                 inputSchema=GitBranch.model_json_schema(),
310 | 
 311 |             ),
312 |         ]
313 | 
314 |     async def list_repos() -> Sequence[str]:
315 |         async def by_roots() -> Sequence[str]:
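    |             # Ask the client for its workspace roots (when it advertises the
    |             # capability) and keep the ones that are valid Git repositories.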
316 |             if not isinstance(server.request_context.session, ServerSession):
317 |                 raise TypeError("server.request_context.session must be a ServerSession")
318 | 
319 |             if not server.request_context.session.check_client_capability(
320 |                 ClientCapabilities(roots=RootsCapability())
321 |             ):
322 |                 return []
323 | 
324 |             roots_result: ListRootsResult = await server.request_context.session.list_roots()
325 |             logger.debug(f"Roots result: {roots_result}")
326 |             repo_paths = []
327 |             for root in roots_result.roots:
328 |                 path = root.uri.path
329 |                 try:
330 |                     git.Repo(path)
331 |                     repo_paths.append(str(path))
332 |                 except git.InvalidGitRepositoryError:
333 |                     pass
334 |             return repo_paths
335 | 
336 |         def by_commandline() -> Sequence[str]:
337 |             return [str(repository)] if repository is not None else []
338 | 
339 |         cmd_repos = by_commandline()
340 |         root_repos = await by_roots()
341 |         return [*root_repos, *cmd_repos]
342 | 
343 |     @server.call_tool()
344 |     async def call_tool(name: str, arguments: dict) -> list[TextContent]:
345 |         repo_path = Path(arguments["repo_path"])
346 |         
 347 |         # Every tool operates on an existing repository; git.Repo raises if repo_path is not one
348 |         repo = git.Repo(repo_path)
349 | 
350 |         match name:
351 |             case GitTools.STATUS:
352 |                 status = git_status(repo)
353 |                 return [TextContent(
354 |                     type="text",
355 |                     text=f"Repository status:\n{status}"
356 |                 )]
357 | 
358 |             case GitTools.DIFF_UNSTAGED:
359 |                 diff = git_diff_unstaged(repo, arguments.get("context_lines", DEFAULT_CONTEXT_LINES))
360 |                 return [TextContent(
361 |                     type="text",
362 |                     text=f"Unstaged changes:\n{diff}"
363 |                 )]
364 | 
365 |             case GitTools.DIFF_STAGED:
366 |                 diff = git_diff_staged(repo, arguments.get("context_lines", DEFAULT_CONTEXT_LINES))
367 |                 return [TextContent(
368 |                     type="text",
369 |                     text=f"Staged changes:\n{diff}"
370 |                 )]
371 | 
372 |             case GitTools.DIFF:
373 |                 diff = git_diff(repo, arguments["target"], arguments.get("context_lines", DEFAULT_CONTEXT_LINES))
374 |                 return [TextContent(
375 |                     type="text",
376 |                     text=f"Diff with {arguments['target']}:\n{diff}"
377 |                 )]
378 | 
379 |             case GitTools.COMMIT:
380 |                 result = git_commit(repo, arguments["message"])
381 |                 return [TextContent(
382 |                     type="text",
383 |                     text=result
384 |                 )]
385 | 
386 |             case GitTools.ADD:
387 |                 result = git_add(repo, arguments["files"])
388 |                 return [TextContent(
389 |                     type="text",
390 |                     text=result
391 |                 )]
392 | 
393 |             case GitTools.RESET:
394 |                 result = git_reset(repo)
395 |                 return [TextContent(
396 |                     type="text",
397 |                     text=result
398 |                 )]
399 | 
 400 |             # LOG supports optional date-range filtering via start/end timestamps
401 |             case GitTools.LOG:
402 |                 log = git_log(
403 |                     repo, 
404 |                     arguments.get("max_count", 10),
405 |                     arguments.get("start_timestamp"),
406 |                     arguments.get("end_timestamp")
407 |                 )
408 |                 return [TextContent(
409 |                     type="text",
410 |                     text="Commit history:\n" + "\n".join(log)
411 |                 )]
412 |             
413 |             case GitTools.CREATE_BRANCH:
414 |                 result = git_create_branch(
415 |                     repo,
416 |                     arguments["branch_name"],
417 |                     arguments.get("base_branch")
418 |                 )
419 |                 return [TextContent(
420 |                     type="text",
421 |                     text=result
422 |                 )]
423 | 
424 |             case GitTools.CHECKOUT:
425 |                 result = git_checkout(repo, arguments["branch_name"])
426 |                 return [TextContent(
427 |                     type="text",
428 |                     text=result
429 |                 )]
430 | 
431 |             case GitTools.SHOW:
432 |                 result = git_show(repo, arguments["revision"])
433 |                 return [TextContent(
434 |                     type="text",
435 |                     text=result
436 |                 )]
437 | 
438 |             case GitTools.BRANCH:
439 |                 result = git_branch(
440 |                     repo,
441 |                     arguments.get("branch_type", 'local'),
442 |                     arguments.get("contains", None),
443 |                     arguments.get("not_contains", None),
444 |                 )
445 |                 return [TextContent(
446 |                     type="text",
447 |                     text=result
448 |                 )]
449 |             
450 |             case _:
451 |                 raise ValueError(f"Unknown tool: {name}")
452 | 
453 |     options = server.create_initialization_options()
454 |     async with stdio_server() as (read_stream, write_stream):
455 |         await server.run(read_stream, write_stream, options, raise_exceptions=True)
456 | 
```
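A quick way to sanity-check the helpers above is to exercise them against a throwaway repository. The sketch below is illustrative rather than part of the server: it assumes the package is importable as `mcp_server_git` (matching this repo's `src/mcp_server_git` layout) and that `git` is available on PATH.

```python
import tempfile
from pathlib import Path

import git

from mcp_server_git.server import git_add, git_commit, git_log

with tempfile.TemporaryDirectory() as tmp:
    repo = git.Repo.init(tmp)
    # Commit identity, in case none is configured globally
    with repo.config_writer() as cw:
        cw.set_value("user", "name", "Example")
        cw.set_value("user", "email", "example@example.com")
    (Path(tmp) / "hello.txt").write_text("hi\n")
    git_add(repo, ["."])                # stages everything via `git add .`
    git_commit(repo, "initial commit")  # commits the index
    print(git_log(repo, max_count=5))                               # iter_commits path
    print(git_log(repo, max_count=5, start_timestamp="yesterday"))  # `git log --since` path
```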
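End to end, the server speaks MCP over stdio, so any MCP client can drive it. Below is a minimal sketch using the official `mcp` Python SDK; the `uvx mcp-server-git` launch command and the `/path/to/repo` path are placeholder assumptions to adapt.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server as a stdio subprocess
    params = StdioServerParameters(
        command="uvx",
        args=["mcp-server-git", "--repository", "/path/to/repo"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("git_status", {"repo_path": "/path/to/repo"})
            print(result.content[0].text)


asyncio.run(main())
```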