This is page 1 of 2. Use http://codebase.md/rawr-ai/mcp-filesystem?page={x} to view the full context.

# Directory Structure

```
├── .ai
│   └── rules
│       └── filesystem-mcp-server-usage.md
├── .cursor
│   └── rules
│       ├── creating-cursor-rules.mdc
│       ├── filesystem-mcp-tools-guide.mdc
│       └── graphiti
│           ├── graphiti-filesystem-schema.mdc
│           ├── graphiti-knowledge-graph-maintenance.mdc
│           └── graphiti-mcp-core-rules.mdc
├── .early.coverage
│   └── v8
│       └── coverage-final.json
├── .github
│   └── workflows
│       └── ci.yml
├── .gitignore
├── .repomixignore
├── ai
│   ├── graph
│   │   ├── entities
│   │   │   ├── .gitkeep
│   │   │   └── Tool.py
│   │   ├── mcp-config.yaml
│   │   └── rools
│   │       ├── orchestrator_SOPs.md
│   │       └── playbooks
│   │           ├── pb_development_logging.md
│   │           ├── pb_discovery_driven_execution.md
│   │           ├── pb_iterative_execution_verification.md
│   │           └── pb_registry.md
│   └── logs
│       ├── dev
│       │   └── 2025-04-06-regex-content-search.md
│       └── introduce_test_suite
│           └── workflow_diagram.md
├── bun.lock
├── bunfig.toml
├── demo
│   ├── archive
│   │   ├── log.txt
│   │   ├── readme.md
│   │   └── subdir
│   │       └── old_data.txt
│   ├── data.json
│   ├── info.txt
│   ├── nested
│   │   ├── deep
│   │   │   └── hidden.json
│   │   └── info.md
│   ├── README.md
│   └── sample.xml
├── Dockerfile
├── examples
│   ├── mcp_cursor.json
│   ├── mcp_docker.json
│   ├── mcp_glama.json
│   ├── mcp_http.json
│   ├── mcp_permissions.json
│   ├── mcp_roo.json
│   ├── mcp_sse.json
│   └── mcp_stdio.json
├── glama.json
├── index.ts
├── package.json
├── README.md
├── repomix.config.json
├── scripts
│   └── run-docker-demo.sh
├── src
│   ├── config
│   │   └── permissions.ts
│   ├── handlers
│   │   ├── directory-handlers.ts
│   │   ├── file-handlers.ts
│   │   ├── index.ts
│   │   ├── json-handlers.ts
│   │   ├── utility-handlers.ts
│   │   └── xml-handlers.ts
│   ├── schemas
│   │   ├── directory-operations.ts
│   │   ├── file-operations.ts
│   │   ├── index.ts
│   │   ├── json-operations.ts
│   │   └── utility-operations.ts
│   └── utils
│       ├── data-utils.ts
│       ├── file-utils.ts
│       ├── path-utils.ts
│       ├── schema-utils.ts
│       └── typebox-zod.ts
├── test
│   ├── json
│   │   └── users.json
│   ├── sample.xml
│   ├── suites
│   │   ├── regex_search_content
│   │   │   ├── basic_search.test.ts
│   │   │   ├── depth_limiting.test.ts
│   │   │   ├── error_handling.test.ts
│   │   │   ├── file_pattern.test.ts
│   │   │   ├── max_filesize.test.ts
│   │   │   ├── max_results.test.ts
│   │   │   ├── path_usage.test.ts
│   │   │   ├── regex_flags.test.ts
│   │   │   └── spec.md
│   │   └── xml_tools
│   │       └── xml_tools.test.ts
│   ├── transports
│   │   ├── network.test.ts
│   │   └── stdio.test.ts
│   └── utils
│       ├── pathUtils.test.ts
│       └── regexUtils.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/ai/graph/entities/.gitkeep:
--------------------------------------------------------------------------------

```

```

--------------------------------------------------------------------------------
/.repomixignore:
--------------------------------------------------------------------------------

```
# Add patterns to ignore here, one per line
# Example:
# *.log
# tmp/

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Build output
dist/

# Dependencies
node_modules/
node-compile-cache/

# Environment files
.env
.env.local
.env.test

# IDE files
.vscode/

# Logs and temporary files
*.log
*.tmp

# MCP client configs
.cursor/
!.cursor/rules/
!.cursor/rules/**
.roo/

```

--------------------------------------------------------------------------------
/demo/archive/readme.md:
--------------------------------------------------------------------------------

```markdown
This directory contains example files for testing file operations such as copy, move, list, and delete within the MCP Filesystem Server demo.

Files:
- `subdir/old_data.txt` — An archived file deep in a subdirectory, for recursive and nested operation testing.
- `log.txt` — A placeholder for recent archive logs and simple text operation tests; consider creating or manipulating this file.

Feel free to add, read, move, or delete files in this directory as part of your MCP server demonstrations.
```

--------------------------------------------------------------------------------
/demo/README.md:
--------------------------------------------------------------------------------

```markdown
# Demo Directory

This directory provides a sample filesystem structure to demonstrate and test the capabilities of the MCP Filesystem Server. It includes a variety of files and directories that cover common use cases such as:

- Nested directories
- JSON files for read/write/modify operations
- XML files for structure and conversion utilities
- Example files and directories for searching/moving/listing/deletion tests

## Structure

- `nested/`
  - Contains multiple levels of nested folders and a couple of files at deep levels.
- `data.json`
  - A sample JSON file for edit/read/modify operations.
- `sample.xml`
  - An example XML file for conversion and structure queries.
- `info.txt`
  - A plain text file for basic file operations.
- `emptyfolder/`
  - An empty directory to test directory creation and deletion.
- `archive/`
  - Contains archived files and subfolders for operations like moving and recursive listing.

Feel free to add, modify, or delete items in this directory to experiment with and verify server behaviors.
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Filesystem MCP Server

Bun-based server implementing Model Context Protocol (MCP) for filesystem operations with comprehensive permission controls and enhanced functionality.

Development uses [Bun](https://bun.sh/) and the server can run directly from TypeScript with `bun`, but most MCP clients execute Node-compatible JavaScript. Use `node dist/index.js` in configs unless you're intentionally running the TypeScript entry with Bun.

<a href="https://glama.ai/mcp/servers/@rawr-ai/mcp-filesystem">
  <img width="380" height="200" src="https://glama.ai/mcp/servers/@rawr-ai/mcp-filesystem/badge" alt="Filesystem Server MCP server" />
</a>

## Features

- Granular permission controls (read-only, full access, or specific operation permissions)
- Secure file operations within allowed directories
- File operations:
  - Read/write/modify files
  - Create/list/delete directories
  - Move files/directories
  - Search files by name or extension
  - Get file metadata
- Directory operations:
  - Tree view of directory structures
  - Recursive operations with exclusion patterns
- Utility functions:
  - XML to JSON conversion
  - Multiple file operations in one call
  - Advanced file editing with pattern matching
- Security features:
  - Symlink control
  - Path validation
  - Sandboxed operations

**Note**: The server will only allow operations within directories specified via `args` and according to the configured permissions.

## Installation

1. **Install Bun** (requires Bun v1.0 or later)
   ```bash
   curl -fsSL https://bun.sh/install | bash
   ```
2. **Install dependencies**
   ```bash
   bun install
   ```
3. **Build the project** (required for Node runtimes)
   ```bash
   bun run build
   ```
4. **Run tests**
   ```bash
   bun test
   ```

## Configuration options

Paths may include environment variables like `$HOME`, `${CUSTOM}`, or `%USERPROFILE%`. Choose the modality that fits your setup:

### Local (Node or Bun)
Use Node for built JavaScript or Bun to run TypeScript directly.
```json
{ "command": "node", "args": ["/path/to/mcp-filesystem/dist/index.js", "$HOME/allowed-directory"] }
```
```json
{ "command": "bun",  "args": ["/path/to/mcp-filesystem/index.ts", "$HOME/allowed-directory"] }
```

### Git hosted
Run straight from the public repo without cloning.
```json
{ "command": "bunx", "args": ["github:rawr-ai/mcp-filesystem", "$HOME/allowed-directory"] }
```
```json
{ "command": "npx",  "args": ["github:rawr-ai/mcp-filesystem", "$HOME/allowed-directory"] }
```

### NPM package (coming soon)
Planned publication as the scoped package `@rawr-ai/mcp-filesystem` (a bare `rawr-ai/mcp-filesystem` would resolve as a GitHub shorthand, as in the section above).
```json
{ "command": "bunx", "args": ["@rawr-ai/mcp-filesystem", "$HOME/allowed-directory"] }
```
```json
{ "command": "npx",  "args": ["@rawr-ai/mcp-filesystem", "$HOME/allowed-directory"] }
```

### Docker
Isolated container environment.
```json
{ "command": "docker", "args": ["run", "--rm", "-v", "$HOME/allowed-directory:/data", "mcp/filesystem", "/data"] }
```

### Hosted service
For managed MCP hosts like glama.ai.
```json
{ "mcpServers": { "filesystem": { "url": "https://glama.ai/rawr-ai/mcp-filesystem" } } }
```

See the `examples/` directory for platform-specific configs (Cursor, Roo, etc.) and additional path variants.

## API

### Resources

- `file://system`: File system operations interface

### Tools

All tool argument schemas are defined with [TypeBox](https://github.com/sinclairzx81/typebox) and registered via the `toolSchemas` map in `src/schemas`. This ensures every tool shares a consistent schema that handlers can reference.
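
For reference, schemas follow the pattern below (mirroring `src/schemas/directory-operations.ts`, which appears later on this page; the exact shape of the `toolSchemas` map is an assumption, since `src/schemas/index.ts` is not shown here):

```typescript
import { Type, Static } from "@sinclair/typebox";

export const ListDirectoryArgsSchema = Type.Object({
  path: Type.String(),
});
export type ListDirectoryArgs = Static<typeof ListDirectoryArgsSchema>;

// Assumed registration shape: one TypeBox schema per tool name.
export const toolSchemas = {
  list_directory: ListDirectoryArgsSchema,
};
```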

- **read_file**
  - Read contents of a file (response-capped)
  - Inputs:
    - `path` (string)
    - `maxBytes` (number): Maximum bytes to return
  - Returns at most `maxBytes` bytes to protect downstream consumers

- **read_multiple_files**
  - Read multiple files simultaneously
  - Inputs:
    - `paths` (string[])
    - `maxBytesPerFile` (number): Maximum bytes to return per file
  - Failed reads won't stop the entire operation

- **create_file**
  - Create a new file with content
  - Inputs:
    - `path` (string): File location
    - `content` (string): File content
  - Fails if file already exists
  - Requires `create` permission

- **modify_file**
  - Modify an existing file with new content
  - Inputs:
    - `path` (string): File location
    - `content` (string): New file content
  - Fails if file doesn't exist
  - Requires `edit` permission

- **edit_file**
  - Make selective edits using pattern matching and formatting
  - Features:
    - Line-based and multi-line content matching
    - Whitespace normalization with indentation preservation
    - Multiple simultaneous edits with correct positioning
    - Indentation style detection and preservation
    - Git-style diff output with context
    - Preview changes with dry run mode
  - Inputs:
    - `path` (string): File to edit
    - `edits` (array): List of edit operations
      - `oldText` (string): Text to search for (exact match)
      - `newText` (string): Text to replace with
    - `dryRun` (boolean): Preview changes without applying (default: false)
    - `maxBytes` (number): Maximum bytes to read before editing
  - Returns detailed diff for dry runs, otherwise applies changes
  - Requires `edit` permission
  - Best Practice: Always use dryRun first to preview changes

- **create_directory**
  - Create new directory or ensure it exists
  - Input: `path` (string)
  - Creates parent directories if needed
  - Succeeds silently if directory exists
  - Requires `create` permission

- **list_directory**
  - List directory contents with [FILE] or [DIR] prefixes
  - Input: `path` (string)
  - Returns detailed listing of files and directories

- **directory_tree**
  - Get recursive tree view of directory structure
  - Input: `path` (string)
  - Returns JSON structure with files and directories
  - Each entry includes name, type, and children (for directories)

- **move_file**
  - Move or rename files and directories
  - Inputs:
    - `source` (string): Source path
    - `destination` (string): Destination path
  - Fails if destination exists
  - Works for both files and directories
  - Requires `move` permission

- **delete_file**
  - Delete a file
  - Input: `path` (string)
  - Fails if file doesn't exist
  - Requires `delete` permission

- **delete_directory**
  - Delete a directory
  - Inputs:
    - `path` (string): Directory to delete
    - `recursive` (boolean): Whether to delete contents (default: false)
  - Fails if directory is not empty and recursive is false
  - Requires `delete` permission

- **search_files**
  - Recursively search for files/directories
  - Inputs:
    - `path` (string): Starting directory
    - `pattern` (string): Search pattern
    - `excludePatterns` (string[]): Exclude patterns (glob format supported)
  - Case-insensitive matching
  - Returns full paths to matches

- **find_files_by_extension**
  - Find all files with specific extension
  - Inputs:
    - `path` (string): Starting directory
    - `extension` (string): File extension to find
    - `excludePatterns` (string[]): Optional exclude patterns
  - Case-insensitive extension matching
  - Returns full paths to matching files

- **get_file_info**
  - Get detailed file/directory metadata
  - Input: `path` (string)
  - Returns:
    - Size
    - Creation time
    - Modified time
    - Access time
    - Type (file/directory)
    - Permissions

- **get_permissions**
  - Get current server permissions
  - No input required
  - Returns:
    - Permission flags (readonly, fullAccess, create, edit, move, delete)
    - Symlink following status
    - Number of allowed directories

- **list_allowed_directories**
  - List all directories the server is allowed to access
  - No input required
  - Returns array of allowed directory paths

- **xml_to_json**
  - Convert XML file to JSON format
  - Inputs:
    - `xmlPath` (string): Source XML file
    - `jsonPath` (string): Destination JSON file
    - `maxResponseBytes` (number, optional): Maximum size of written JSON; large outputs are summarized
    - `options` (object, optional):
      - `ignoreAttributes` (boolean): Skip XML attributes (default: false)
      - `preserveOrder` (boolean): Maintain property order (default: true)
      - `format` (boolean): Pretty print JSON (default: true)
      - `indentSize` (number): JSON indentation (default: 2)
  - Requires `read` permission for XML file
  - Requires `create` or `edit` permission for JSON file

- **xml_to_json_string**
  - Convert XML file to JSON string
  - Inputs:
    - `xmlPath` (string): Source XML file
    - `maxResponseBytes` (number, optional): Maximum size of returned JSON string; large outputs are summarized
    - `options` (object, optional):
      - `ignoreAttributes` (boolean): Skip XML attributes (default: false)
      - `preserveOrder` (boolean): Maintain property order (default: true)
  - Requires `read` permission for XML file
  - Returns JSON string representation (response-capped)

- **xml_query**
  - Query XML file using XPath expressions
  - Inputs:
    - `path` (string): Path to the XML file
    - `query` (string, optional): XPath query to execute
    - `structureOnly` (boolean, optional): Return only tag structure
    - `includeAttributes` (boolean, optional): Include attribute info (default: true)
    - `maxResponseBytes` (number, optional): Maximum size of returned JSON; defaults to 200KB
      - Legacy `maxBytes` is still accepted and treated as the response cap
  - XPath examples:
    - Get all elements: `//tagname`
    - Get elements with specific attribute: `//tagname[@attr="value"]`
    - Get text content: `//tagname/text()`
  - Parses full file; response is truncated to fit limits as needed

- **xml_structure**
  - Analyze XML structure
  - Inputs:
    - `path` (string): Path to the XML file
    - `maxDepth` (number, optional): How deep to analyze (default: 2)
    - `includeAttributes` (boolean, optional): Include attribute analysis (default: true)
    - `maxResponseBytes` (number, optional): Maximum size of returned JSON; defaults to 200KB
      - Legacy `maxBytes` is still accepted and treated as the response cap
  - Returns statistical information about elements, attributes, namespaces, and hierarchy
  - Parses full file; returns a summarized structure if response exceeds limit

- **regex_search_content**
  - Search file contents with a regular expression
  - Inputs:
    - `path` (string): Root directory to search
    - `regex` (string): Regular expression pattern
    - `filePattern` (string, optional): Glob to limit files (default: `*`)
    - `maxDepth` (number, optional): Directory depth (default: 2)
    - `maxFileSize` (number, optional): Maximum file size in bytes (default: 10MB)
    - `maxResults` (number, optional): Maximum number of files with matches (default: 50)
  - Returns a human-readable summary of files and matching lines (see the example call below)
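
As an end-to-end illustration, here is a hedged sketch of calling `regex_search_content` from an MCP client over stdio (mirroring the client setup in `test/transports/stdio.test.ts`; the directory and pattern are placeholders):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "readme-example", version: "1.0" });

// Launch the built server against one allowed directory, read-only.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js", "/path/to/allowed-directory", "--readonly"],
});
await client.connect(transport);

// Find TODO markers in TypeScript files, up to two directory levels deep.
const result = await client.callTool({
  name: "regex_search_content",
  arguments: {
    path: "/path/to/allowed-directory",
    regex: "TODO:",
    filePattern: "*.ts",
    maxDepth: 2,
    maxResults: 10,
  },
});

await transport.close();
```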

### Argument Validation

The server validates all tool inputs using the `parseArgs` helper. `parseArgs` parses incoming data against the appropriate TypeBox schema and throws an error when the arguments do not match the expected structure.
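
For example, a handler can validate `read_file` arguments like this (a minimal sketch: the schema mirrors the documented `read_file` inputs, and the import path is illustrative):

```typescript
import { Type } from "@sinclair/typebox";
import { parseArgs } from "../utils/schema-utils.js";

// Mirrors the documented read_file inputs: path and maxBytes.
const ReadFileArgsSchema = Type.Object({
  path: Type.String(),
  maxBytes: Type.Number(),
});

const rawInput: unknown = { path: "demo/info.txt", maxBytes: 1024 };

// Throws "Invalid arguments for read_file: ..." if the shape is wrong.
const args = parseArgs(ReadFileArgsSchema, rawInput, "read_file");
```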

## Permissions & Security

The server implements a comprehensive security model with granular permission controls:

### Directory Access Control
- Operations are strictly limited to directories specified during startup via `args`
- All operations (including symlink targets) must remain within allowed directories
- Path validation ensures no directory traversal or access outside allowed paths (sketched below)
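
A simplified sketch of the allowed-directory check (illustrative only; the real logic, including symlink resolution and environment-variable expansion, lives in `src/utils/path-utils.ts`, which is not shown in full on this page):

```typescript
import path from "path";

// Illustrative only: the server's real validation also resolves symlinks
// and expands variables like $HOME before comparing paths.
function isPathAllowed(requested: string, allowedDirs: string[]): boolean {
  const resolved = path.resolve(requested);
  return allowedDirs.some((dir) => {
    const root = path.resolve(dir);
    // Allowed when the path is the root itself or nested beneath it.
    return resolved === root || resolved.startsWith(root + path.sep);
  });
}
```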

### Permission Flags
- **--readonly**: Enforces read-only mode, overriding all other permission flags
- **--full-access**: Enables all operations (create, edit, move, delete)
- Individual permission flags (require explicit enabling unless --full-access is set):
  - **--allow-create**: Allow creation of new files and directories
  - **--allow-edit**: Allow modification of existing files
  - **--allow-move**: Allow moving/renaming files and directories
  - **--allow-delete**: Allow deletion of files and directories

**Default Behavior**: If no permission flags are specified, the server runs in read-only mode. To enable any write operations, you must use either `--full-access` or specific `--allow-*` flags.
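
A simplified sketch of how these flags might resolve to a permissions object (illustrative only; the actual parsing lives in `src/config/permissions.ts`, of which this page shows only the opening lines):

```typescript
interface Permissions {
  readonly: boolean;
  fullAccess: boolean;
  create: boolean;
  edit: boolean;
  move: boolean;
  delete: boolean;
}

// Mirrors the documented semantics: --readonly overrides everything,
// and with no flags at all the server stays read-only.
function resolvePermissions(flags: string[]): Permissions {
  const isReadonly = flags.includes("--readonly");
  const fullAccess = !isReadonly && flags.includes("--full-access");
  const allow = (flag: string) => !isReadonly && (fullAccess || flags.includes(flag));
  return {
    readonly: isReadonly,
    fullAccess,
    create: allow("--allow-create"),
    edit: allow("--allow-edit"),
    move: allow("--allow-move"),
    delete: allow("--allow-delete"),
  };
}
```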

### Symlink Handling
- By default, symlinks are followed when both the link and target are within allowed directories
- **--no-follow-symlinks**: Treats symlinks as regular files and refuses to traverse their targets, preventing escapes via linked paths

See `examples/mcp_permissions.json` for sample configurations using these flags.

## Build

To compile the project locally run:

```bash
bun run build
```

Run the test suite with:

```bash
bun test
```

Docker build:

```bash
docker build -t mcp/filesystem -f Dockerfile .
```

## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
```

--------------------------------------------------------------------------------
/.early.coverage/v8/coverage-final.json:
--------------------------------------------------------------------------------

```json
{}

```

--------------------------------------------------------------------------------
/bunfig.toml:
--------------------------------------------------------------------------------

```toml
[test]
root = "test"

[bundle]
entryPoints = ["index.ts"]
outdir = "dist"

```

--------------------------------------------------------------------------------
/glama.json:
--------------------------------------------------------------------------------

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "mateicanavra"
  ]
}

```

--------------------------------------------------------------------------------
/examples/mcp_glama.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "filesystem": {
      "url": "https://glama.ai/rawr-ai/mcp-filesystem"
    }
  }
}

```

--------------------------------------------------------------------------------
/examples/mcp_sse.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "filesystem": {
      "transport": "sse",
      "url": "http://localhost:8080/sse"
    }
  }
}

```

--------------------------------------------------------------------------------
/demo/nested/deep/hidden.json:
--------------------------------------------------------------------------------

```json
{
  "secret": true,
  "level": "deep",
  "note": "This file is intentionally placed deep to test recursive listing and access."
}

```

--------------------------------------------------------------------------------
/src/handlers/index.ts:
--------------------------------------------------------------------------------

```typescript
export * from './file-handlers.js';
export * from './directory-handlers.js';
export * from './utility-handlers.js';
export * from './xml-handlers.js';
export * from './json-handlers.js'; 
```

--------------------------------------------------------------------------------
/demo/info.txt:
--------------------------------------------------------------------------------

```
This is a basic text file for demo purposes.

You can use this file to test read, write, modify, or delete operations within the MCP Filesystem Server.

Feel free to append, overwrite, or search for text in this file!

```

--------------------------------------------------------------------------------
/test/json/users.json:
--------------------------------------------------------------------------------

```json
{"users": [{"id": 1, "name": "John", "age": 30, "address": {"city": "New York"}}, {"id": 2, "name": "Jane", "age": 25, "address": {"city": "Boston"}}, {"id": 3, "name": "Bob", "age": 35, "address": {"city": "New York"}}]}

```

--------------------------------------------------------------------------------
/examples/mcp_roo.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "filesystem-home": {
      "command": "node",
      "args": ["dist/index.js", "$HOME/allowed/path"]
    },
    "filesystem-env": {
      "command": "node",
      "args": ["dist/index.js", "${ALLOWED_PATH}"]
    }
  }
}

```

--------------------------------------------------------------------------------
/examples/mcp_stdio.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "filesystem-home": {
      "command": "node",
      "args": ["dist/index.js", "$HOME/allowed/path"]
    },
    "filesystem-env": {
      "command": "node",
      "args": ["dist/index.js", "${ALLOWED_PATH}"]
    }
  }
}

```

--------------------------------------------------------------------------------
/demo/data.json:
--------------------------------------------------------------------------------

```json
{
  "message": "Hello, MCP Filesystem Server!",
  "version": 1,
  "features": [
    "read",
    "write",
    "modify",
    "search"
  ],
  "active": true,
  "nested": {
    "level": 1,
    "description": "For testing JSON structure traversal"
  }
}

```

--------------------------------------------------------------------------------
/demo/archive/subdir/old_data.txt:
--------------------------------------------------------------------------------

```
This is an old archive file.

* Path: archive/subdir/old_data.txt
* Purpose: Test file for demonstrating the server's ability to list, move, or delete files in nested/archive directories.

Contents:
- Created for MCP Filesystem Server demo.
- Safe to manipulate or remove for testing purposes.

```

--------------------------------------------------------------------------------
/examples/mcp_http.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "filesystem-home": {
      "command": "node",
      "args": ["dist/index.js", "--http", "--port", "8080", "$HOME/allowed/path"]
    },
    "filesystem-env": {
      "command": "node",
      "args": ["dist/index.js", "--http", "--port", "8080", "${ALLOWED_PATH}"]
    }
  }
}

```

--------------------------------------------------------------------------------
/demo/archive/log.txt:
--------------------------------------------------------------------------------

```
This is a placeholder log file for the MCP Filesystem Server demo.

You can use this file to test operations such as reading, listing, archiving, deleting, or modifying files in the `archive` directory.

Log entries can be appended here during tests, or you may clear/overwrite the content while experimenting.

```

--------------------------------------------------------------------------------
/examples/mcp_cursor.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "filesystem-home": {
      "command": "node",
      "args": ["dist/index.js", "--no-follow-symlinks", "--readonly", "$HOME/allowed/path"]
    },
    "filesystem-env": {
      "command": "node",
      "args": ["dist/index.js", "--no-follow-symlinks", "--readonly", "${ALLOWED_PATH}"]
    }
  }
}

```

--------------------------------------------------------------------------------
/demo/nested/info.md:
--------------------------------------------------------------------------------

```markdown
# Nested Directory — Info

This file is located in `demo/nested/` and exists to test directory operation scenarios such as:

- Listing files in nested folders
- Renaming or searching for files in deeper paths
- Modifying content in files not at the root level

Feel free to move, delete, or modify this file as part of your MCP Filesystem Server demonstrations.
```

--------------------------------------------------------------------------------
/examples/mcp_docker.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "filesystem-home": {
      "command": "docker",
      "args": ["run", "--rm", "-v", "$HOME/allowed:/data", "mcp/filesystem", "/data"]
    },
    "filesystem-env": {
      "command": "docker",
      "args": ["run", "--rm", "-v", "${ALLOWED_PATH}:/data", "mcp/filesystem", "/data"]
    },
    "filesystem-container": {
      "command": "docker",
      "args": ["run", "--rm", "mcp/filesystem", "/data"]
    }
  }
}

```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "declaration": true,
    "sourceMap": true,
    "outDir": "./dist",
    "removeComments": true,
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "types": ["bun-types"],
    "rootDir": "."
  },
  "include": ["./**/*.ts"],
  "exclude": ["node_modules", "dist"]
}

```

--------------------------------------------------------------------------------
/examples/mcp_permissions.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "filesystem-readonly": {
      "command": "node",
      "args": ["dist/index.js", "$HOME/allowed/path", "--readonly"]
    },
    "filesystem-full": {
      "command": "node",
      "args": ["dist/index.js", "$HOME/allowed/path", "--full-access", "--no-follow-symlinks"]
    },
    "filesystem-selective": {
      "command": "node",
      "args": ["dist/index.js", "$HOME/allowed/path", "--allow-create", "--allow-edit"]
    }
  }
}

```

--------------------------------------------------------------------------------
/ai/graph/mcp-config.yaml:
--------------------------------------------------------------------------------

```yaml
# Configuration for project: filesystem
services:
  - id: filesystem  # Service ID (used for default naming)
    # container_name: "custom-name"  # Optional: Specify custom container name
    # port_default: 8001             # Optional: Specify custom host port
    group_id: "filesystem"       # Graph group ID
    entity_dir: "entities"           # Relative path to entity definitions within ai/graph
    # environment:                   # Optional: Add non-secret env vars here
    #   GRAPHITI_LOG_LEVEL: "debug"

```

--------------------------------------------------------------------------------
/src/utils/schema-utils.ts:
--------------------------------------------------------------------------------

```typescript
import { Value } from '@sinclair/typebox/value';
import type { Static, TSchema } from '@sinclair/typebox';

export function parseArgs<T extends TSchema>(schema: T, args: unknown, context: string): Static<T> {
  try {
    // Use only the Assert step to ensure strict validation
    return Value.Parse(['Assert'], schema, args);
  } catch {
    const errors = [...Value.Errors(schema, args)]
      .map(e => `${e.path}: ${e.message}`)
      .join('; ');
    throw new Error(`Invalid arguments for ${context}: ${errors}`);
  }
}

```

--------------------------------------------------------------------------------
/ai/graph/rools/playbooks/pb_registry.md:
--------------------------------------------------------------------------------

```markdown
## Initial Playbook Examples

### Head Coach

-   Game plan/strategy: (Test Driven Development) Design, Build, Iterate <-handoff-> Review, Test, Debug, Report <-handoff-> Deploy, Publish, Update

### Offensive Coordinator

-   (No examples listed in the source file)

### Defensive Coordinator

-   (QA Loop) Review, Test, Debug, Report
-   (Security Audit) Security Review, Security Test, Security Debug, Security Report

### Special Teams Coordinator

-   (Publish Loop) Build, Deploy, Package, CI/CD
-   (Feedback Loop) Ingest, Process, Analyze, Suggest
-   (Documentation Loop) Review, Document, Publish, Distribute
```

--------------------------------------------------------------------------------
/ai/logs/dev/2025-04-06-regex-content-search.md:
--------------------------------------------------------------------------------

```markdown
# Development Log: 2025-04-06

## Task: Add Regex File Content Search Feature

**Summary:**
Implemented a new feature allowing users to search file contents using regular expressions. This enhances the file system interaction capabilities by providing more powerful and flexible search options.

**Details:**
*   **Feature Branch:** `feature/regex-content-search`
*   **Merged To:** `main`
*   **Key Changes:**
    *   Added necessary handler logic for regex search.
    *   Defined corresponding schemas for the operation.
    *   Integrated the feature into the existing file system server.
*   **Status:** Merged and completed.
```

--------------------------------------------------------------------------------
/.github/workflows/ci.yml:
--------------------------------------------------------------------------------

```yaml
name: CI

on:
  push:
    branches: ["main"]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: oven-sh/setup-bun@v1
        with:
          bun-version: latest
      - name: Cache Bun
        uses: actions/cache@v3
        with:
          path: |
            ~/.bun
            node_modules
          key: bun-${{ runner.os }}-${{ hashFiles('**/bun.lock') }}
          restore-keys: |
            bun-${{ runner.os }}-
      - name: Install dependencies
        run: bun install
      - name: Build
        run: bun run build
      - name: Test
        run: bun test

```

--------------------------------------------------------------------------------
/repomix.config.json:
--------------------------------------------------------------------------------

```json
{
  "output": {
    "filePath": "mcp-filesystem-repo.md",
    "style": "markdown",
    "parsableStyle": false,
    "fileSummary": true,
    "directoryStructure": true,
    "removeComments": false,
    "removeEmptyLines": false,
    "compress": false,
    "topFilesLength": 5,
    "showLineNumbers": false,
    "copyToClipboard": false,
    "git": {
      "sortByChanges": true,
      "sortByChangesMaxCommits": 100
    }
  },
  "include": [],
  "ignore": {
    "useGitignore": true,
    "useDefaultPatterns": true,
    "customPatterns": [
      "**/dist/**",
      "**/node_modules/**"
    ]
  },
  "security": {
    "enableSecurityCheck": true
  },
  "tokenCount": {
    "encoding": "o200k_base"
  }
}
```

--------------------------------------------------------------------------------
/demo/sample.xml:
--------------------------------------------------------------------------------

```
<?xml version="1.0" encoding="UTF-8"?>
<library>
    <name>MCP Filesystem Demo</name>
    <description>
        Example XML document to demonstrate conversion, structure parsing, and querying features of the MCP server.
    </description>
    <books>
        <book id="1" genre="sci-fi">
            <title>Hyperion</title>
            <author>Dan Simmons</author>
            <published>1989</published>
            <checkedOut>false</checkedOut>
        </book>
        <book id="2" genre="fantasy">
            <title>The Hobbit</title>
            <author>J.R.R. Tolkien</author>
            <published>1937</published>
            <checkedOut>true</checkedOut>
        </book>
    </books>
    <users>
        <user>
            <username>alice</username>
            <permissions>read,write</permissions>
        </user>
        <user>
            <username>bob</username>
            <permissions>read</permissions>
        </user>
    </users>
</library>

```

--------------------------------------------------------------------------------
/src/schemas/directory-operations.ts:
--------------------------------------------------------------------------------

```typescript
import { Type, Static } from "@sinclair/typebox";

export const CreateDirectoryArgsSchema = Type.Object({
  path: Type.String(),
});
export type CreateDirectoryArgs = Static<typeof CreateDirectoryArgsSchema>;

export const ListDirectoryArgsSchema = Type.Object({
  path: Type.String(),
});
export type ListDirectoryArgs = Static<typeof ListDirectoryArgsSchema>;

export const DirectoryTreeArgsSchema = Type.Object({
  path: Type.String(),
  maxDepth: Type.Integer({
    minimum: 1,
    description: 'Maximum depth to traverse. Must be a positive integer. Handler default: 2.'
  }),
  excludePatterns: Type.Optional(
    Type.Array(Type.String(), {
      default: [],
      description: 'Glob patterns for files/directories to exclude (e.g., "*.log", "node_modules").'
    })
  )
});
export type DirectoryTreeArgs = Static<typeof DirectoryTreeArgsSchema>;

export const DeleteDirectoryArgsSchema = Type.Object({
  path: Type.String(),
  recursive: Type.Boolean({
    default: false,
    description: 'Whether to recursively delete the directory and all contents'
  })
});
export type DeleteDirectoryArgs = Static<typeof DeleteDirectoryArgsSchema>;

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
FROM oven/bun

# Set the application directory
WORKDIR /app

# Copy application source code and configuration files
# Using absolute paths for clarity and to avoid issues if WORKDIR changes.
COPY src /app/src
COPY index.ts /app/index.ts
COPY package.json /app/package.json
COPY bun.lock /app/bun.lock
COPY tsconfig.json /app/tsconfig.json

# Set environment to production
ENV NODE_ENV=production

# Install production dependencies using the lockfile for reproducible builds.
# The --production flag ensures devDependencies are not installed.
RUN bun install --production --frozen-lockfile

# Define the entrypoint for the container.
# This specifies the base command to run, which is the bun executable
# followed by the path to our main script. Using an absolute path is crucial
# because the container's working directory will be changed at runtime.
ENTRYPOINT ["bun", "/app/index.ts"]

# Define the default command arguments.
# These will be appended to the ENTRYPOINT. The user can override these
# arguments in the `docker run` command. Providing `--help` as the default
# is a good practice, as it makes the container's usage self-documenting.
CMD ["--help"]

```

--------------------------------------------------------------------------------
/ai/logs/introduce_test_suite/workflow_diagram.md:
--------------------------------------------------------------------------------

```markdown
```mermaid
sequenceDiagram
    participant Orchestrator
    participant Diagram
    participant Git
    participant Analyze
    participant Test
    participant Code

    Orchestrator ->> Diagram: 1. Request diagram generation
    Diagram -->> Orchestrator: 2. Return Mermaid syntax

    Orchestrator ->> Git: 3. Request Git environment preparation (stash, branch, apply stash)
    Git -->> Orchestrator: 4. Confirm Git preparation

    Orchestrator ->> Analyze: 5. Request test context analysis (command, readiness)
    Analyze -->> Orchestrator: 6. Return test command & readiness assessment

    Orchestrator ->> Test: 7. Request test execution
    Test -->> Orchestrator: 8. Return test results summary

    Orchestrator ->> Code: 9. Request log directory/file creation
    Code -->> Orchestrator: 10. Return log file path

    Orchestrator ->> Analyze: 11. Request log content formatting
    Analyze -->> Orchestrator: 12. Return formatted Markdown content

    Orchestrator ->> Code: 13. Request writing log content to file
    Code -->> Orchestrator: 14. Confirm file write

    Orchestrator ->> Git: 15. Request log file commit
    Git -->> Orchestrator: 16. Confirm commit
```
```

--------------------------------------------------------------------------------
/test/utils/regexUtils.ts:
--------------------------------------------------------------------------------

```typescript
import path from 'path';
import { CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import type { CallToolResult, TextContent } from '@modelcontextprotocol/sdk/types.js';

export interface RegexMatch { line: number; text: string; }
export interface FileResult { file: string; matches: RegexMatch[]; }

// Parse the human-readable output of regex_search_content into structured
// per-file match lists ("File: ..." header followed by "Line N: ..." entries).
export function parseRegexSearchOutput(text: string): FileResult[] {
  const blocks = text.trim().split(/\n\n+/).filter(Boolean);
  return blocks.map(block => {
    const lines = block.split(/\n/);
    const fileLine = lines.shift() || '';
    const file = fileLine.replace(/^File:\s*/, '');
    const matches = lines.map(l => {
      const m = l.match(/Line\s+(\d+):\s*(.*)/);
      return m ? { line: parseInt(m[1], 10), text: m[2] } : { line: 0, text: l };
    });
    return { file: path.normalize(file), matches };
  });
}

// Helper to safely extract text content from a CallToolResult.

export function getTextContent(result: unknown): string {
  const parsed = CallToolResultSchema.parse(result) as CallToolResult;
  const first = parsed.content[0];
  if (!first || first.type !== 'text') {
    throw new Error('Expected first content element to be text');
  }
  return (first as TextContent).text;
}

```

--------------------------------------------------------------------------------
/src/utils/typebox-zod.ts:
--------------------------------------------------------------------------------

```typescript
import { ZodFromTypeBox } from "@sinclair/typemap";
import type { TSchema } from "@sinclair/typebox";
import type { FastMCP, Tool } from "fastmcp";

/**
 * Convert a TypeBox schema to a Zod schema compatible with FastMCP.
 * Returns undefined when no schema is provided to match FastMCP API shape.
 */
export function toZodParameters(schema?: TSchema) {
  return schema ? (ZodFromTypeBox(schema) as unknown) : undefined;
}

/**
 * Convenience helper to register a tool defined with TypeBox parameters.
 * This ensures parameters are converted to Zod so MCP clients (Cursor/Claude)
 * recognize the schema without xsschema vendor issues.
 */
// FastMCP's generic constraint is FastMCPSessionAuth = Record<string, unknown> | undefined.
// Mirror that here to avoid importing non-exported types from fastmcp.
export function addTypeBoxTool<TSession extends Record<string, unknown> | undefined = Record<string, unknown> | undefined>(
  server: FastMCP<TSession>,
  tool: {
    name: string;
    description: string;
    parameters?: TSchema;
    execute: Tool<TSession>["execute"];
  },
) {
  server.addTool({
    name: tool.name,
    description: tool.description,
    parameters: toZodParameters(tool.parameters) as any,
    execute: tool.execute as any,
  } as unknown as Tool<TSession>);
}



```
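
For context, a hedged usage sketch of `addTypeBoxTool` (the tool registered here is hypothetical and the server options are illustrative; only the helper itself comes from the file above):

```typescript
import { FastMCP } from "fastmcp";
import { Type } from "@sinclair/typebox";
import { addTypeBoxTool } from "./src/utils/typebox-zod.js";

const server = new FastMCP({ name: "filesystem", version: "0.6.2" });

// Hypothetical demo tool: parameters are declared with TypeBox and
// converted to Zod during registration.
addTypeBoxTool(server, {
  name: "echo_path",
  description: "Echo back the path argument (illustrative only).",
  parameters: Type.Object({ path: Type.String() }),
  execute: async (args) => (args as { path: string }).path,
});
```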

--------------------------------------------------------------------------------
/ai/graph/rools/playbooks/pb_iterative_execution_verification.md:
--------------------------------------------------------------------------------

```markdown
# Playbook: Iterative Execution & Verification

**Purpose:** To reliably execute complex tasks involving modifications by incorporating structured planning, execution, verification, and feedback-driven iteration loops.

**Key Roles (Generic):**
*   **Orchestrator:** Manages the process, dispatches agents, interprets results, guides iteration.
*   **Planner (Optional/Implicit):** Defines the initial strategy.
*   **Executor:** Performs the core modification tasks *as defined by the plan or instructions*.
*   **Verifier:** Assesses the Executor's work against *the defined objective, requirements, and/or quality standards*.
*   **Feedback Source (Optional):** Provides input on plans or results.

**Workflow Steps:**
1.  **Initiation & Planning:** Define objective, formulate plan (optional plan review for robustness).
2.  **Execution:** Dispatch Executor agent to perform planned actions.
3.  **Verification:** Dispatch Verifier agent to assess results *against defined criteria* and report findings.
4.  **Evaluation & Decision:** Orchestrator/User evaluates verification report.
    *   If Success -> Proceed to Step 6 (Completion).
    *   If Issues -> Proceed to Step 5 (Iteration).
5.  **Iteration Loop:**
    *   Synthesize feedback *and verification findings* into corrective instructions.
    *   Dispatch Executor for revision.
    *   Return to Step 3 (Verification).
6.  **Completion:** Orchestrator confirms successful task completion.

This pattern provides a structured approach for tasks requiring modification and quality assurance through iterative refinement.
```

--------------------------------------------------------------------------------
/test/sample.xml:
--------------------------------------------------------------------------------

```
<?xml version="1.0" encoding="UTF-8"?>
<catalog xmlns="http://example.org/catalog">
  <book id="bk101" category="fiction">
    <author>Gambardella, Matthew</author>
    <title>XML Developer's Guide</title>
    <genre>Computer</genre>
    <price>44.95</price>
    <publish_date>2000-10-01</publish_date>
    <description>An in-depth look at creating applications with XML.</description>
  </book>
  <book id="bk102" category="fiction">
    <author>Ralls, Kim</author>
    <title>Midnight Rain</title>
    <genre>Fantasy</genre>
    <price>5.95</price>
    <publish_date>2000-12-16</publish_date>
    <description>A former architect battles corporate zombies, an evil sorceress, and her own childhood to become queen of the world.</description>
  </book>
  <book id="bk103" category="non-fiction">
    <author>Corets, Eva</author>
    <title>Maeve Ascendant</title>
    <genre>Fantasy</genre>
    <price>5.95</price>
    <publish_date>2000-11-17</publish_date>
    <description>After the collapse of a nanotechnology society, the young survivors lay the foundation for a new society.</description>
  </book>
  <magazine id="mg101" frequency="monthly">
    <title>Programming Today</title>
    <publisher>Tech Media</publisher>
    <price>6.50</price>
    <issue>125</issue>
    <publish_date>2023-01-15</publish_date>
    <articles>
      <article>
        <author>Jane Smith</author>
        <title>Modern XML Processing</title>
      </article>
      <article>
        <author>John Doe</author>
        <title>XPath Deep Dive</title>
      </article>
    </articles>
  </magazine>
</catalog> 
```

--------------------------------------------------------------------------------
/ai/graph/rools/playbooks/pb_development_logging.md:
--------------------------------------------------------------------------------

```markdown
# Playbook: Development Logging

## Purpose

This playbook outlines the standard process for generating development logs, ensuring consistency and clarity in documenting development tasks.

## Key Roles/Modes

*   **Architect:** Defines log storage path and filename conventions.
*   **Analyze:** Reviews conversation history and generates a detailed summary of the development task, then formats the summary into a structured Markdown log entry.
*   **Code:** Writes the formatted log entry to the file system according to the defined conventions.
*   **Git:** Commits the newly created log file to the repository.

## Workflow Steps

1.  **Define Log Convention:**
    *   Use `architect` mode to determine the log storage path and filename convention. This ensures logs are stored in a consistent and easily accessible manner.
2.  **Summarize History:**
    *   Use `analyze` mode to review the conversation history related to the development task.
    *   Generate a detailed summary of the task, including the problem addressed, the solution implemented, and any challenges encountered.
3.  **Format Log Entry:**
    *   Use `analyze` mode to format the summary into a structured Markdown log entry.
    *   Include relevant details such as the date, task description, and key steps taken.
4.  **Write Log File:**
    *   Use `code` mode to write the formatted log entry to the file system.
    *   Adhere to the log storage path and filename convention defined in step 1.
5.  **Commit Log File:**
    *   Use `git` mode to commit the newly created log file to the repository.
    *   Include a descriptive commit message that clearly identifies the task being logged.
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
{
  "name": "@modelcontextprotocol/server-filesystem",
  "version": "0.6.2",
  "description": "MCP server for filesystem access",
  "license": "MIT",
  "author": "Anthropic, PBC (https://anthropic.com)",
  "homepage": "https://modelcontextprotocol.io",
  "bugs": "https://github.com/modelcontextprotocol/servers/issues",
  "type": "module",
  "bin": {
    "mcp-server-filesystem": "dist/index.js"
  },
  "files": [
    "dist"
  ],
  "scripts": {
    "build": "bun run tsc && chmod +x dist/*.js",
    "watch": "bun run tsc --watch",
    "test": "bun test",
    "inspect": "bun run build && npx -y fastmcp@latest inspect dist/index.js -- --cwd --readonly",
    "demo:node": "bun run demo:node:sse",
    "demo:node:sse": "bun run build && (cd demo && node ../dist/index.js --cwd --http --port 8090)",
    "demo:node:http": "bun run build && (cd demo && node ../dist/index.js --cwd --http --port 8090)",
    "demo:bun": "bun run demo:bun:sse",
    "demo:bun:sse": "cd demo && bun ../index.ts --cwd --http --port 8090",
    "demo:bun:http": "cd demo && bun ../index.ts --cwd --http --port 8090",
    "demo:docker": "./scripts/run-docker-demo.sh --cwd --http"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "1.12.1",
    "@sinclair/typebox": "^0.34.33",
    "@sinclair/typemap": "^0.10.1",
    "@xmldom/xmldom": "^0.9.8",
    "ajv": "^8.17.1",
    "diff": "^8.0.2",
    "fast-xml-parser": "^5.2.5",
    "fastmcp": "^3.14.1",
    "jsonata": "^2.0.6",
    "jsonpath-plus": "^10.3.0",
    "minimatch": "^10.0.1",
    "type-fest": "^4.41.0",
    "xpath": "^0.0.34"
  },
  "devDependencies": {
    "@types/cross-spawn": "^6.0.6",
    "@types/diff": "^8.0.0",
    "@types/jsonpath-plus": "^5.0.5",
    "@types/minimatch": "^5.1.2",
    "bun-types": "^1.2.15",
    "typescript": "^5.8.3"
  }
}

```

--------------------------------------------------------------------------------
/ai/graph/rools/orchestrator_SOPs.md:
--------------------------------------------------------------------------------

```markdown
# Standard Operating Procedures

## Git Workflow

All development work (features, fixes, refactoring) MUST be done on a dedicated feature branch created from the `main` branch.

Work MUST be committed incrementally to the feature branch.

Before merging, the work SHOULD be reviewed/verified (details may depend on the task).

Once complete and verified, the feature branch MUST be merged back into the `main` branch.

## Development Logging

Upon successful completion and merging of any significant development task, a development log entry MUST be created.

The process outlined in `agents/orchestrate/playbooks/playbook_development_logging.md` MUST be followed to generate and commit this log entry to the `main` branch.

## Plan Review

For complex or large-scale plans involving multiple agents or significant modifications, the Orchestrator SHOULD first submit the proposed plan to an `analyze` or `ask` agent for review and feedback before presenting it to the user or initiating the first step. The Orchestrator MUST incorporate feedback before finalizing the plan.

## General Workflow Principles

1.  **Define Conventions:** Before generating artifacts (logs, code, documentation), establish and adhere to clear conventions (e.g., naming, storage paths, formats).
2.  **Specify Before Execution:** Synthesize research findings or plans into a clear specification or set of instructions before initiating the main execution step.
3.  **Verify & Iterate:** Verify task outputs against defined objectives, requirements, or specifications. Iterate based on verification results and feedback, refining the approach or output until criteria are met.
4.  **Mode Switching for Content Generation:** Agents generating substantial content (e.g., Markdown, code) SHOULD switch to an appropriate mode (like `code` or `document`) within their task loop. After successful generation, they MUST return only the path to the created file.
```

--------------------------------------------------------------------------------
/test/transports/network.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from "bun:test";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import spawn from "cross-spawn";
import type { ChildProcess } from "child_process";
import fs from "fs/promises";
import path from "path";
import { fileURLToPath } from "url";
import { getTextContent } from "../utils/regexUtils.js";

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, "../fs_root");
const serverCommand = "bun";
const port = 8091;
const serverArgs = [
  "dist/index.js",
  serverRoot,
  "--full-access",
  "--http",
  "--port",
  String(port),
];
let proc: ChildProcess;

describe("transport", () => {
  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    proc = spawn(serverCommand, serverArgs, { stdio: "inherit" });
    await new Promise((r) => setTimeout(r, 1000));
  });

  afterAll(async () => {
    proc.kill();
  });

  it("supports SSE", async () => {
    const client = new Client({ name: "sse-test", version: "1.0" });
    const transport = new SSEClientTransport(
      new URL(`http://localhost:${port}/sse`),
    );
    await client.connect(transport);
    const res = await client.callTool({
      name: "list_allowed_directories",
      arguments: {},
    });
    expect(getTextContent(res)).toContain(serverRoot);
    await transport.close();
  });

  it("supports HTTP streaming", async () => {
    const client = new Client({ name: "http-test", version: "1.0" });
    const transport = new StreamableHTTPClientTransport(
      new URL(`http://localhost:${port}/mcp`),
    );
    await client.connect(transport);
    const res = await client.callTool({
      name: "list_allowed_directories",
      arguments: {},
    });
    expect(getTextContent(res)).toContain(serverRoot);
    await transport.close();
  });
});

```

--------------------------------------------------------------------------------
/ai/graph/entities/Tool.py:
--------------------------------------------------------------------------------

```python
"""Example of how to create a custom entity type for Graphiti MCP Server."""

from pydantic import BaseModel, Field


class Tool(BaseModel):
    """
    **AI Persona:** You are an expert entity extraction assistant.
    
    **Task:** Identify and extract information about Tool entities mentioned in the provided text context.
    A Tool represents a specific good or service that a company offers.

    **Context:** The user will provide text containing potential mentions of products.

    **Extraction Instructions:**
    Your goal is to accurately populate the fields (`name`, `description`, `category`) 
    based *only* on information explicitly or implicitly stated in the text.

    1.  **Identify Core Mentions:** Look for explicit mentions of commercial goods or services.
    2.  **Extract Name:** Identify Tool names, especially proper nouns, capitalized words, or terms near trademark symbols (™, ®).
    3.  **Extract Description:** Synthesize a concise description using details about features, purpose, pricing, or availability found *only* in the text.
    4.  **Extract Category:** Determine the product category (e.g., "Software", "Hardware", "Service") based on the description or explicit mentions.
    5.  **Refine Details:** Pay attention to specifications, technical details, stated benefits, unique selling points, variations, or models mentioned, and incorporate relevant details into the description.
    6.  **Handle Ambiguity:** If information for a field is missing or unclear in the text, indicate that rather than making assumptions.

    **Output Format:** Respond with the extracted data structured according to this Pydantic model.
    """

    name: str = Field(
        ...,
        description='The specific name of the product as mentioned in the text.',
    )
    description: str = Field(
        ...,
        description='A concise description of the Tool, synthesized *only* from information present in the provided text context.',
    )
    category: str = Field(
        ...,
        description='The category the Tool belongs to (e.g., "Electronics", "Software", "Service") based on the text.',
    ) 
```

--------------------------------------------------------------------------------
/ai/graph/rools/playbooks/pb_discovery_driven_execution.md:
--------------------------------------------------------------------------------

```markdown
# Playbook: Discovery-Driven Execution

## Purpose

This playbook outlines a generic workflow for tasks where execution depends on first understanding external systems, APIs, file formats, or conventions. It emphasizes a research-driven approach to ensure successful task completion when faced with initial unknowns.

## Key Roles

*   **Researcher:** Gathers information about unknown conventions/constraints. Employs `search`, `read_file`, etc.
*   **Analyzer:** Synthesizes research findings into a clear execution specification.
*   **Executor:** Performs the task according to the derived specification. Employs `code`, `implement`, etc.
*   **Verifier:** Assesses results against the objective *and* the derived specification. Employs `review`, `test`.

## Workflow Steps

1.  **Initiation & Planning:**
    *   Define the objective of the task.
    *   Identify potential unknowns regarding the execution method or conventions.
2.  **Research/Discovery:**
    *   Dispatch Researcher agent(s) to gather information about the unknown conventions/constraints.
    *   Utilize tools like `search`, `read_file`, etc., to explore external systems, APIs, file formats, or conventions.
3.  **Analysis & Specification:**
    *   Dispatch Analyzer agent(s) to synthesize research findings into a clear execution specification.
    *   Define the required format, API calls, file paths, or any other relevant details for successful execution.
4.  **Execution:**
    *   Dispatch Executor agent(s) to perform the task *according to the derived specification*.
    *   Ensure the execution adheres to the identified conventions and constraints.
5.  **Verification:**
    *   Dispatch Verifier agent(s) to assess the results against the objective *and* the derived specification.
    *   Check for adherence to the defined format, API calls, file paths, etc.
6.  **Iteration Loop:**
    *   If verification fails, analyze the reasons for failure.
    *   Refine understanding/specification (back to Research or Analysis) or execution.
    *   Re-verify the results.
7.  **Completion:**
    *   Confirm successful task completion based on the objective and the derived specification.
```
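
As a minimal control-flow sketch of the loop above, assuming stubbed role dispatchers (none of these functions exist in this repository):

```typescript
type Spec = { format: string; notes: string[] };

// Stubbed role dispatchers; a real orchestrator would call agents or tools here.
const research = async (objective: string): Promise<string[]> => [`finding about ${objective}`];
const analyze = async (findings: string[]): Promise<Spec> => ({ format: 'derived', notes: findings });
const execute = async (spec: Spec): Promise<string> => `result in ${spec.format} format`;
const verify = async (result: string, spec: Spec): Promise<boolean> => result.includes(spec.format);

async function discoveryDrivenExecution(objective: string, maxIterations = 3): Promise<string> {
  let spec = await analyze(await research(objective));  // Steps 2-3: discover, then specify
  for (let i = 0; i < maxIterations; i++) {
    const result = await execute(spec);                 // Step 4: execute per spec
    if (await verify(result, spec)) return result;      // Steps 5 & 7: verify, complete
    spec = await analyze(await research(objective));    // Step 6: refine and retry
  }
  throw new Error('Verification failed after maximum iterations');
}
```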

--------------------------------------------------------------------------------
/test/transports/stdio.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from "bun:test";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { ListToolsResultSchema } from "@modelcontextprotocol/sdk/types.js";
import fs from "fs/promises";
import path from "path";
import { fileURLToPath } from "url";
import { getTextContent } from "../utils/regexUtils.js";

const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, "../fs_root");

describe("stdio transport", () => {
  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
  });

  afterAll(async () => {
    // Intentionally empty: beforeAll only ensures the shared fs_root exists,
    // so there is nothing to tear down here.
  });

  it("announces tools over stdio", async () => {
    const client = new Client({ name: "stdio-test", version: "1.0" });
    const transport = new StdioClientTransport({
      command: "node",
      args: [
        path.resolve(__dirname, "../../dist/index.js"),
        serverRoot,
        "--readonly",
      ],
    });
    await client.connect(transport as any);
    const res = await client.callTool({
      name: "list_allowed_directories",
      arguments: {},
    });
    expect(getTextContent(res)).toContain(serverRoot);
    await transport.close();
  });

  it("lists tools with parameter schemas", async () => {
    const client = new Client({ name: "stdio-list-tools", version: "1.0" });
    const transport = new StdioClientTransport({
      command: "node",
      args: [
        path.resolve(__dirname, "../../dist/index.js"),
        serverRoot,
        "--readonly",
      ],
    });
    await client.connect(transport as any);
    const list = await client.listTools();
    const parsed = ListToolsResultSchema.parse(list);
    const sample = parsed.tools.find((t) => t.name === "list_directory" || t.name === "read_file");
    expect(sample).toBeTruthy();
    expect(sample?.inputSchema).toBeTruthy();
    // Sanity-check the advertised input schema: it should expose at least one JSON Schema property.
    const schemaKeys = Object.keys(sample!.inputSchema as Record<string, unknown>);
    expect(schemaKeys.length).toBeGreaterThan(0);
    await transport.close();
  });
});



```

--------------------------------------------------------------------------------
/src/config/permissions.ts:
--------------------------------------------------------------------------------

```typescript
import path from 'path';
import { expandHome, normalizePath } from '../utils/path-utils.js';

export interface Permissions {
  create: boolean;
  edit: boolean;
  move: boolean;
  delete: boolean;
  rename: boolean;
  fullAccess: boolean;
}

export interface ServerConfig {
  readonlyFlag: boolean;
  noFollowSymlinks: boolean;
  permissions: Permissions;
  allowedDirectories: string[];
}

export function parseCommandLineArgs(args: string[]): ServerConfig {
  // Remove flags from args and store them
  const readonlyFlag = args.includes('--readonly');
  const noFollowSymlinks = args.includes('--no-follow-symlinks');
  const fullAccessFlag = args.includes('--full-access');
  
  // Granular permission flags
  const allowCreate = args.includes('--allow-create');
  const allowEdit = args.includes('--allow-edit');
  const allowMove = args.includes('--allow-move');
  const allowDelete = args.includes('--allow-delete');
  const allowRename = args.includes('--allow-rename');

  // Permission calculation
  // readonly flag overrides all other permissions as a safety mechanism
  // fullAccess enables all permissions unless readonly is set
  // individual allow flags enable specific permissions unless readonly is set
  const permissions: Permissions = {
    create: !readonlyFlag && (fullAccessFlag || allowCreate),
    edit: !readonlyFlag && (fullAccessFlag || allowEdit),
    move: !readonlyFlag && (fullAccessFlag || allowMove),
    delete: !readonlyFlag && (fullAccessFlag || allowDelete),
    rename: !readonlyFlag && (fullAccessFlag || allowRename),
    // fullAccess is true only if the flag is explicitly set and not in readonly mode
    fullAccess: !readonlyFlag && fullAccessFlag
  };

  // Remove flags from args
  const cleanArgs = args.filter(arg => !arg.startsWith('--'));

  if (cleanArgs.length === 0) {
    throw new Error(
      "Usage: mcp-server-filesystem [--full-access] [--readonly] [--no-follow-symlinks] " +
      "[--allow-create] [--allow-edit] [--allow-move] [--allow-delete] [--allow-rename] " +
      "<allowed-directory> [additional-directories...]"
    );
  }

  return {
    readonlyFlag,
    noFollowSymlinks,
    permissions,
    allowedDirectories: cleanArgs.map(dir =>
      normalizePath(path.resolve(expandHome(dir)))
    )
  };
}
```
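
A short usage sketch of the precedence rules above; the directory argument is hypothetical:

```typescript
import { parseCommandLineArgs } from './src/config/permissions.js';

// --readonly wins over everything: all write permissions come out false.
const locked = parseCommandLineArgs(['--readonly', '--full-access', '/tmp/data']);
console.log(locked.permissions.fullAccess); // false
console.log(locked.permissions.edit);       // false

// Granular flags grant only what they name.
const editOnly = parseCommandLineArgs(['--allow-edit', '/tmp/data']);
console.log(editOnly.permissions.edit);     // true
console.log(editOnly.permissions.create);   // false
console.log(editOnly.allowedDirectories);   // ['/tmp/data'] (resolved and normalized)
```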

--------------------------------------------------------------------------------
/test/utils/pathUtils.test.ts:
--------------------------------------------------------------------------------

```typescript
import os from 'os';
import path from 'path';
import { test, expect } from 'bun:test';
import fs from 'fs/promises';
import { expandHome, validatePath } from '../../src/utils/path-utils.js';

test('expands tilde to home directory', () => {
  const result = expandHome('~/example');
  expect(result).toBe(path.join(os.homedir(), 'example'));
});

test('expands $VAR environment variables', () => {
  process.env.TEST_VAR = '/tmp/test';
  expect(expandHome('$TEST_VAR/file.txt')).toBe('/tmp/test/file.txt');
});

test('expands %VAR% environment variables', () => {
  process.env.TEST_VAR = '/tmp/test';
  expect(expandHome('%TEST_VAR%/file.txt')).toBe('/tmp/test/file.txt');
});

test('expands ${VAR} environment variables', () => {
  process.env.BRACED = '/var/tmp';
  expect(expandHome('${BRACED}/file.txt')).toBe('/var/tmp/file.txt');
});

test('throws on undefined environment variables', () => {
  delete process.env.UNDEFINED_VAR;
  expect(() => expandHome('$UNDEFINED_VAR/file.txt')).toThrow('Environment variable UNDEFINED_VAR is not defined');
});

test('environment variables cannot bypass symlink restrictions', async () => {
  const allowed = await fs.mkdtemp(path.join(os.tmpdir(), 'allowed-'));
  const outside = await fs.mkdtemp(path.join(os.tmpdir(), 'outside-'));
  const linkPath = path.join(allowed, 'link');
  await fs.symlink(outside, linkPath);
  process.env.LINK_VAR = linkPath;
  await expect(
    validatePath('$LINK_VAR/secret.txt', [allowed], new Map(), false)
  ).rejects.toThrow(/outside allowed directories/);
});

test('expands $CWD to process.cwd()', () => {
  const cwd = process.cwd();
  const result = expandHome('$CWD/subdir');
  expect(result).toBe(path.join(cwd, 'subdir'));
});

test('expands $PWD when set, falls back to process.cwd() when not set', () => {
  const originalPwd = process.env.PWD;
  try {
    process.env.PWD = '/tmp/pwd-test';
    expect(expandHome('$PWD/file.txt')).toBe('/tmp/pwd-test/file.txt');
  } finally {
    // restore first
    if (originalPwd === undefined) {
      delete process.env.PWD;
    } else {
      process.env.PWD = originalPwd;
    }
  }

  // Now unset and verify fallback
  const current = process.cwd();
  delete process.env.PWD;
  expect(expandHome('$PWD/other')).toBe(path.join(current, 'other'));
});

```

--------------------------------------------------------------------------------
/scripts/run-docker-demo.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash

# This script builds and runs the MCP Filesystem Server demo in a Docker container.
# It includes robust checks for port conflicts and cleans up previous container instances.

# --- Configuration ---
PORT=8090
CONTAINER_NAME="mcpfs-demo"
IMAGE_NAME="mcpfs-demo"

# Exit immediately if any command fails
set -e

# --- Cleanup Function ---
# This function will be called when the script exits (either normally or via signal)
cleanup() {
  if [ -n "$CONTAINER_PID" ]; then
    echo ""
    echo "Stopping container '$CONTAINER_NAME'..."
    docker stop $CONTAINER_NAME >/dev/null 2>&1 || true
    echo "Container stopped."
  fi
}

# Register cleanup to run on script exit and common signals
trap cleanup EXIT SIGINT SIGTERM

# --- Pre-flight Checks & Setup ---

# 1. Check if our container is already running
if docker ps --format '{{.Names}}' | grep -q "^${CONTAINER_NAME}$"; then
  echo "Found existing container '$CONTAINER_NAME' running. Stopping it..."
  docker stop $CONTAINER_NAME >/dev/null 2>&1 || true
  docker rm $CONTAINER_NAME >/dev/null 2>&1 || true
  sleep 1  # Give it a moment to release the port
fi

# 2. Clean up any stopped containers with the same name
docker rm $CONTAINER_NAME >/dev/null 2>&1 || true

# 3. Check if the required port is still in use (by something else)
if lsof -i :$PORT >/dev/null 2>&1; then
  echo "Error: Port $PORT is already in use by another process:"
  echo ""
  lsof -i :$PORT | grep LISTEN || lsof -i :$PORT
  echo ""
  echo "Please stop the conflicting process and try again."
  exit 1
fi

# --- Docker Build ---

echo "Building Docker image '$IMAGE_NAME'..."
docker build -t $IMAGE_NAME .

# --- Docker Run ---

echo "Starting container '$CONTAINER_NAME'..."
echo "The server will be accessible at http://localhost:$PORT"
echo "Press Ctrl+C to stop the server."
echo ""

# Run Docker container in detached mode to maintain control in the script
docker run -d \
  --rm \
  --name $CONTAINER_NAME \
  -p ${PORT}:${PORT} \
  -v "$(pwd)/demo:/data" \
  -w /data \
  $IMAGE_NAME "$@" > /dev/null

# Flag that a container was started so the cleanup trap knows to stop it
CONTAINER_STARTED=1

# Follow the container logs
# This will block until the container stops or we receive a signal
docker logs -f $CONTAINER_NAME 2>&1 || true

# Wait for any background processes
wait

```

--------------------------------------------------------------------------------
/src/schemas/file-operations.ts:
--------------------------------------------------------------------------------

```typescript
import { Type, Static } from "@sinclair/typebox";

// Schema definitions moved from index.ts

export const ReadFileArgsSchema = Type.Object({
  path: Type.String(),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
  })
});
export type ReadFileArgs = Static<typeof ReadFileArgsSchema>;

export const ReadMultipleFilesArgsSchema = Type.Object({
  paths: Type.Array(Type.String()),
  maxBytesPerFile: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read per file. Must be a positive integer. Handler default: 10KB.'
  })
});
export type ReadMultipleFilesArgs = Static<typeof ReadMultipleFilesArgsSchema>;

// Note: WriteFileArgsSchema is used by both create_file and modify_file
export const WriteFileArgsSchema = Type.Object({
  path: Type.String(),
  content: Type.String(),
  // No maxBytes here as it's about writing, not reading limit
});
export type WriteFileArgs = Static<typeof WriteFileArgsSchema>;

export const EditOperation = Type.Object({
  oldText: Type.String({ description: 'Text to search for - must match exactly' }),
  newText: Type.String({ description: 'Text to replace with' })
});
export type EditOperationType = Static<typeof EditOperation>;

export const EditFileArgsSchema = Type.Object({
  path: Type.String(),
  edits: Type.Array(EditOperation),
  dryRun: Type.Boolean({
    default: false,
    description: 'Preview changes using git-style diff format'
  }),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file before editing. Must be a positive integer. Handler default: 10KB.'
  })
});
export type EditFileArgs = Static<typeof EditFileArgsSchema>;

export const GetFileInfoArgsSchema = Type.Object({
  path: Type.String(),
});
export type GetFileInfoArgs = Static<typeof GetFileInfoArgsSchema>;

export const MoveFileArgsSchema = Type.Object({
  source: Type.String(),
  destination: Type.String(),
});
export type MoveFileArgs = Static<typeof MoveFileArgsSchema>;

export const DeleteFileArgsSchema = Type.Object({
  path: Type.String(),
});
export type DeleteFileArgs = Static<typeof DeleteFileArgsSchema>;

export const RenameFileArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the file to be renamed' }),
  newName: Type.String({ description: 'New name for the file (without path)' })
});
export type RenameFileArgs = Static<typeof RenameFileArgsSchema>;


```
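
These TypeBox schemas double as runtime validators. The server funnels tool arguments through its own `parseArgs` helper, but a direct sketch with TypeBox's `Value` module shows the mechanics (the sample arguments are hypothetical):

```typescript
import { Value } from '@sinclair/typebox/value';
import { ReadFileArgsSchema, type ReadFileArgs } from './src/schemas/file-operations.js';

const args: unknown = { path: 'demo/info.txt', maxBytes: 1024 };

if (Value.Check(ReadFileArgsSchema, args)) {
  // Value.Check is a type guard, so args is narrowed to ReadFileArgs here.
  const ok: ReadFileArgs = args;
  console.log(ok.path, ok.maxBytes);
} else {
  // Enumerate each validation failure for diagnostics.
  for (const err of Value.Errors(ReadFileArgsSchema, args)) {
    console.error(err.path, err.message);
  }
}
```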

--------------------------------------------------------------------------------
/test/suites/regex_search_content/regex_flags.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from 'bun:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ClientCapabilities, CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import { parseRegexSearchOutput, getTextContent } from '../../utils/regexUtils.js';
import path from 'path';
import fs from 'fs/promises';
import { fileURLToPath } from 'url';

const clientInfo = { name: 'regex-search-flags-test-suite', version: '0.1.0' };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, '../../fs_root');
const serverCommand = 'bun';
const serverArgs = ['dist/index.js', serverRoot, '--full-access'];
const testBasePath = 'regex_search_content_flags/';

describe('test-filesystem::regex_search_content - Regex Flags', () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({ command: serverCommand, args: serverArgs });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({ name: 'create_directory', arguments: { path: testBasePath } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}case.txt`, content: 'CaseSensitivePattern' } });
  });

  afterAll(async () => {
    await client.callTool({ name: 'delete_directory', arguments: { path: testBasePath, recursive: true } });
    await transport.close();
  });

  it('performs case-sensitive search by default', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'CaseSensitivePattern' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const parsed = parseRegexSearchOutput(getTextContent(res));
    expect(parsed[0].file).toBe(path.join(serverRoot, `${testBasePath}case.txt`));
  });

  it('returns an error for unsupported (?i) flag', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: '(?i)casesensitivepattern' } }, CallToolResultSchema);
    expect(res.isError).toBe(true);
    expect(getTextContent(res)).toMatch(/Invalid regex pattern/);
  });
});

```

--------------------------------------------------------------------------------
/test/suites/regex_search_content/max_results.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from 'bun:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ClientCapabilities, CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import { parseRegexSearchOutput, getTextContent } from '../../utils/regexUtils.js';
import path from 'path';
import fs from 'fs/promises';
import { fileURLToPath } from 'url';

const clientInfo = { name: 'regex-search-maxresults-test-suite', version: '0.1.0' };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, '../../fs_root');
const serverCommand = 'bun';
const serverArgs = ['dist/index.js', serverRoot, '--full-access'];
const testBasePath = 'regex_search_content_maxresults/';

describe('test-filesystem::regex_search_content - Max Results Limiting', () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({ command: serverCommand, args: serverArgs });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({ name: 'create_directory', arguments: { path: testBasePath } });
    for (let i = 1; i <= 5; i++) {
      await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}file_${i}.txt`, content: 'max_results_pattern' } });
    }
  });

  afterAll(async () => {
    await client.callTool({ name: 'delete_directory', arguments: { path: testBasePath, recursive: true } });
    await transport.close();
  });

  it('limits number of files returned', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'max_results_pattern', maxResults: 2 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const parsed = parseRegexSearchOutput(getTextContent(res));
    expect(parsed.length).toBe(2);
  });

  it('returns all matches when limit higher than count', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'max_results_pattern', maxResults: 10 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const parsed = parseRegexSearchOutput(getTextContent(res));
    expect(parsed.length).toBe(5);
  });
});

```

--------------------------------------------------------------------------------
/test/suites/regex_search_content/error_handling.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from 'bun:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ClientCapabilities, CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import path from 'path';
import fs from 'fs/promises';
import { fileURLToPath } from 'url';
import { getTextContent } from '../../utils/regexUtils.js';

const clientInfo = { name: 'regex-search-error-test-suite', version: '0.1.0' };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, '../../fs_root');
const serverCommand = 'bun';
const serverArgs = ['dist/index.js', serverRoot, '--full-access'];
const testBasePath = 'regex_search_content_errors/';
const nonExistentPath = 'regex_search_content_nonexistent/';

describe('test-filesystem::regex_search_content - Error Handling', () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({ command: serverCommand, args: serverArgs });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({ name: 'create_directory', arguments: { path: testBasePath } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}a_file.txt`, content: 'content' } });
  });

  afterAll(async () => {
    await client.callTool({ name: 'delete_directory', arguments: { path: testBasePath, recursive: true } });
    await transport.close();
  });

  it('returns error for invalid regex', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: '[invalid' } }, CallToolResultSchema);
    expect(res.isError).toBe(true);
    expect(getTextContent(res)).toMatch(/Invalid regex pattern/);
  });

  it('returns no matches for non-existent path', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: nonExistentPath, regex: 'x' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    expect(getTextContent(res)).toBe('No matches found for the given regex pattern.');
  });

  it('returns no matches when path is a file', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: `${testBasePath}a_file.txt`, regex: 'x' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    expect(getTextContent(res)).toBe('No matches found for the given regex pattern.');
  });
});

```

--------------------------------------------------------------------------------
/test/suites/regex_search_content/max_filesize.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from 'bun:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ClientCapabilities, CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import { parseRegexSearchOutput, getTextContent } from '../../utils/regexUtils.js';
import path from 'path';
import fs from 'fs/promises';
import { fileURLToPath } from 'url';

const clientInfo = { name: 'regex-search-filesize-test-suite', version: '0.1.0' };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, '../../fs_root');
const serverCommand = 'bun';
const serverArgs = ['dist/index.js', serverRoot, '--full-access'];
const testBasePath = 'regex_search_content_filesize/';

describe('test-filesystem::regex_search_content - Max File Size Limiting', () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({ command: serverCommand, args: serverArgs });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({ name: 'create_directory', arguments: { path: testBasePath } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}small.txt`, content: 'filesize_pattern small' } });
    const bigContent = 'filesize_pattern '.padEnd(2000, 'x');
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}large.txt`, content: bigContent } });
  });

  afterAll(async () => {
    await client.callTool({ name: 'delete_directory', arguments: { path: testBasePath, recursive: true } });
    await transport.close();
  });

  it('skips files larger than limit', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'filesize_pattern', maxFileSize: 100 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const files = parseRegexSearchOutput(getTextContent(res)).map(r => r.file);
    expect(files).toEqual([path.join(serverRoot, `${testBasePath}small.txt`)]);
  });

  it('searches all when limit high', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'filesize_pattern', maxFileSize: 5000 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const files = parseRegexSearchOutput(getTextContent(res)).map(r => r.file);
    expect(files).toEqual(expect.arrayContaining([
      path.join(serverRoot, `${testBasePath}small.txt`),
      path.join(serverRoot, `${testBasePath}large.txt`)
    ]));
  });
});

```

--------------------------------------------------------------------------------
/src/schemas/index.ts:
--------------------------------------------------------------------------------

```typescript
import {
  ReadFileArgsSchema,
  ReadMultipleFilesArgsSchema,
  WriteFileArgsSchema,
  EditFileArgsSchema,
  EditOperationType,
  GetFileInfoArgsSchema,
  MoveFileArgsSchema,
  DeleteFileArgsSchema,
  RenameFileArgsSchema,
  ReadFileArgs,
  ReadMultipleFilesArgs,
  WriteFileArgs,
  EditFileArgs,
  GetFileInfoArgs,
  MoveFileArgs,
  DeleteFileArgs,
  RenameFileArgs,
} from './file-operations.js';

import {
  CreateDirectoryArgsSchema,
  ListDirectoryArgsSchema,
  DirectoryTreeArgsSchema,
  DeleteDirectoryArgsSchema,
  CreateDirectoryArgs,
  ListDirectoryArgs,
  DirectoryTreeArgs,
  DeleteDirectoryArgs,
} from './directory-operations.js';

import {
  SearchFilesArgsSchema,
  FindFilesByExtensionArgsSchema,
  GetPermissionsArgsSchema,
  XmlToJsonArgsSchema,
  XmlToJsonStringArgsSchema,
  RegexSearchContentArgsSchema,
  XmlQueryArgsSchema,
  XmlStructureArgsSchema,
  SearchFilesArgs,
  FindFilesByExtensionArgs,
  GetPermissionsArgs,
  XmlToJsonArgs,
  XmlToJsonStringArgs,
  RegexSearchContentArgs,
  XmlQueryArgs,
  XmlStructureArgs,
} from './utility-operations.js';

import {
  JsonQueryArgsSchema,
  JsonFilterArgsSchema,
  JsonGetValueArgsSchema,
  JsonTransformArgsSchema,
  JsonStructureArgsSchema,
  JsonSampleArgsSchema,
  JsonValidateArgsSchema,
  JsonSearchKvArgsSchema,
  JsonQueryArgs,
  JsonFilterArgs,
  JsonGetValueArgs,
  JsonTransformArgs,
  JsonStructureArgs,
  JsonSampleArgs,
  JsonValidateArgs,
  JsonSearchKvArgs,
} from './json-operations.js';

export const toolSchemas = {
  read_file: ReadFileArgsSchema,
  read_multiple_files: ReadMultipleFilesArgsSchema,
  create_file: WriteFileArgsSchema,
  modify_file: WriteFileArgsSchema,
  edit_file: EditFileArgsSchema,
  create_directory: CreateDirectoryArgsSchema,
  list_directory: ListDirectoryArgsSchema,
  directory_tree: DirectoryTreeArgsSchema,
  delete_directory: DeleteDirectoryArgsSchema,
  search_files: SearchFilesArgsSchema,
  find_files_by_extension: FindFilesByExtensionArgsSchema,
  move_file: MoveFileArgsSchema,
  delete_file: DeleteFileArgsSchema,
  rename_file: RenameFileArgsSchema,
  get_file_info: GetFileInfoArgsSchema,
  get_permissions: GetPermissionsArgsSchema,
  xml_query: XmlQueryArgsSchema,
  xml_structure: XmlStructureArgsSchema,
  xml_to_json: XmlToJsonArgsSchema,
  xml_to_json_string: XmlToJsonStringArgsSchema,
  json_query: JsonQueryArgsSchema,
  json_structure: JsonStructureArgsSchema,
  json_filter: JsonFilterArgsSchema,
  json_get_value: JsonGetValueArgsSchema,
  json_transform: JsonTransformArgsSchema,
  json_sample: JsonSampleArgsSchema,
  json_validate: JsonValidateArgsSchema,
  json_search_kv: JsonSearchKvArgsSchema,
  regex_search_content: RegexSearchContentArgsSchema,
} as const;

export type {
  ReadFileArgs,
  ReadMultipleFilesArgs,
  WriteFileArgs,
  EditFileArgs,
  EditOperationType,
  GetFileInfoArgs,
  MoveFileArgs,
  DeleteFileArgs,
  RenameFileArgs,
  CreateDirectoryArgs,
  ListDirectoryArgs,
  DirectoryTreeArgs,
  DeleteDirectoryArgs,
  SearchFilesArgs,
  FindFilesByExtensionArgs,
  GetPermissionsArgs,
  XmlToJsonArgs,
  XmlToJsonStringArgs,
  RegexSearchContentArgs,
  XmlQueryArgs,
  XmlStructureArgs,
  JsonQueryArgs,
  JsonFilterArgs,
  JsonGetValueArgs,
  JsonTransformArgs,
  JsonStructureArgs,
  JsonSampleArgs,
  JsonValidateArgs,
  JsonSearchKvArgs,
};

```
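
Because `toolSchemas` is exported `as const`, a request dispatcher can resolve a tool's schema with a narrowed key type. A minimal sketch (`schemaFor` is illustrative, not part of this codebase):

```typescript
import { toolSchemas } from './src/schemas/index.js';

type ToolName = keyof typeof toolSchemas;

// Resolve the TypeBox schema for an incoming tool call, rejecting unknown names.
function schemaFor(name: string) {
  if (!(name in toolSchemas)) {
    throw new Error(`Unknown tool: ${name}`);
  }
  return toolSchemas[name as ToolName];
}

schemaFor('read_file'); // -> ReadFileArgsSchema
```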

--------------------------------------------------------------------------------
/test/suites/regex_search_content/path_usage.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from "bun:test";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import {
  ClientCapabilities,
  CallToolResultSchema,
} from "@modelcontextprotocol/sdk/types.js";
import path from "path";
import { getTextContent } from "../../utils/regexUtils.js";
import fs from "fs/promises";
import { fileURLToPath } from "url";

const clientInfo = { name: "regex-search-path-test-suite", version: "0.1.0" };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, "../../fs_root");
const serverCommand = "bun";
const serverArgs = ["dist/index.js", serverRoot, "--full-access"];
const testRelativeBasePath = "regex_search_content_paths/";
const absoluteBasePath = path.join(serverRoot, testRelativeBasePath);

describe("test-filesystem::regex_search_content - Path Usage", () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({
      command: serverCommand,
      args: serverArgs,
    });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({
      name: "create_directory",
      arguments: { path: testRelativeBasePath },
    });
    await client.callTool({
      name: "create_file",
      arguments: {
        path: `${testRelativeBasePath}file_in_root.txt`,
        content: "Path pattern",
      },
    });
    await client.callTool({
      name: "create_directory",
      arguments: { path: `${testRelativeBasePath}sub/` },
    });
    await client.callTool({
      name: "create_file",
      arguments: {
        path: `${testRelativeBasePath}sub/file_in_subdir.txt`,
        content: "Path pattern",
      },
    });
  });

  afterAll(async () => {
    await client.callTool({
      name: "delete_directory",
      arguments: { path: testRelativeBasePath, recursive: true },
    });
    await transport.close();
  });

  it("works with relative path", async () => {
    const res = await client.callTool(
      {
        name: "regex_search_content",
        arguments: { path: testRelativeBasePath, regex: "Path pattern" },
      },
      CallToolResultSchema,
    );
    expect(res.isError).not.toBe(true);
    expect(getTextContent(res)).toMatch("file_in_root.txt");
  });

  it("works with absolute path within root", async () => {
    const res = await client.callTool(
      {
        name: "regex_search_content",
        arguments: { path: absoluteBasePath, regex: "Path pattern" },
      },
      CallToolResultSchema,
    );
    expect(res.isError).not.toBe(true);
    expect(getTextContent(res)).toMatch("file_in_root.txt");
  });

  it("errors for path outside root", async () => {
    const outside = path.dirname(serverRoot);
    const res = await client.callTool(
      {
        name: "regex_search_content",
        arguments: { path: outside, regex: "x" },
      },
      CallToolResultSchema,
    );
    expect(res.isError).toBe(true);
    expect(getTextContent(res)).toMatch(/Access denied/);
  });
});

```

--------------------------------------------------------------------------------
/test/suites/regex_search_content/file_pattern.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from 'bun:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ClientCapabilities, CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import { parseRegexSearchOutput, getTextContent } from '../../utils/regexUtils.js';
import path from 'path';
import fs from 'fs/promises';
import { fileURLToPath } from 'url';

const clientInfo = { name: 'regex-search-pattern-test-suite', version: '0.1.0' };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, '../../fs_root');
const serverCommand = 'bun';
const serverArgs = ['dist/index.js', serverRoot, '--full-access'];
const testBasePath = 'regex_search_content_pattern/';

describe('test-filesystem::regex_search_content - File Pattern Matching', () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({ command: serverCommand, args: serverArgs });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({ name: 'create_directory', arguments: { path: testBasePath } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}a.txt`, content: 'pattern_here' } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}b.log`, content: 'pattern_here' } });
    await client.callTool({ name: 'create_directory', arguments: { path: `${testBasePath}sub/` } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}sub/c.txt`, content: 'pattern_here' } });
  });

  afterAll(async () => {
    await client.callTool({ name: 'delete_directory', arguments: { path: testBasePath, recursive: true } });
    await transport.close();
  });

  it('limits search using *.txt glob', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'pattern_here', filePattern: '*.txt' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const files = parseRegexSearchOutput(getTextContent(res)).map(r => r.file);
    expect(files).toEqual([path.join(serverRoot, `${testBasePath}a.txt`)]);
  });

  it('searches recursively with **/*.txt', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'pattern_here', filePattern: '**/*.txt' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const files = parseRegexSearchOutput(getTextContent(res)).map(r => r.file);
    expect(files).toEqual(expect.arrayContaining([
      path.join(serverRoot, `${testBasePath}a.txt`),
      path.join(serverRoot, `${testBasePath}sub/c.txt`)
    ]));
  });

  it('returns empty when glob matches nothing', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'pattern_here', filePattern: '*.none' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    expect(getTextContent(res)).toBe('No matches found for the given regex pattern.');
  });
});

```

--------------------------------------------------------------------------------
/test/suites/regex_search_content/basic_search.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from 'bun:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ClientCapabilities, CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import { parseRegexSearchOutput, getTextContent } from '../../utils/regexUtils.js';
import path from 'path';
import fs from 'fs/promises';
import { fileURLToPath } from 'url';

const clientInfo = { name: 'regex-search-test-suite', version: '0.1.0' };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, '../../fs_root');
const serverCommand = 'bun';
const serverArgs = ['dist/index.js', serverRoot, '--full-access'];
const testBasePath = 'regex_search_content/';

describe('test-filesystem::regex_search_content - Basic Search', () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({ command: serverCommand, args: serverArgs });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({ name: 'create_directory', arguments: { path: testBasePath } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}file1.txt`, content: 'A unique_pattern_123 here' } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}file2.log`, content: 'Another unique_pattern_123 again' } });
    await client.callTool({ name: 'create_directory', arguments: { path: `${testBasePath}sub/` } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}sub/subfile.txt`, content: 'unique_pattern_123 in sub' } });
  });

  afterAll(async () => {
    await client.callTool({ name: 'delete_directory', arguments: { path: testBasePath, recursive: true } });
    await transport.close();
  });

  it('finds a pattern in a single file', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'unique_pattern_123', filePattern: 'file1.txt' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const parsed = parseRegexSearchOutput(getTextContent(res));
    expect(parsed).toHaveLength(1);
    expect(parsed[0].file).toBe(path.join(serverRoot, `${testBasePath}file1.txt`));
  });

  it('returns multiple files when pattern exists in them', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'unique_pattern_123', filePattern: '**/*' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const parsed = parseRegexSearchOutput(getTextContent(res));
    const files = parsed.map(p => p.file);
    expect(files).toEqual(expect.arrayContaining([
      path.join(serverRoot, `${testBasePath}file1.txt`),
      path.join(serverRoot, `${testBasePath}file2.log`),
      path.join(serverRoot, `${testBasePath}sub/subfile.txt`)
    ]));
  });

  it('returns no matches when pattern does not exist', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'does_not_exist' } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    expect(getTextContent(res)).toBe('No matches found for the given regex pattern.');
  });
});

```

--------------------------------------------------------------------------------
/test/suites/regex_search_content/depth_limiting.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from 'bun:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ClientCapabilities, CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import { parseRegexSearchOutput, getTextContent } from '../../utils/regexUtils.js';
import path from 'path';
import fs from 'fs/promises';
import { fileURLToPath } from 'url';

const clientInfo = { name: 'regex-search-depth-test-suite', version: '0.1.0' };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, '../../fs_root');
const serverCommand = 'bun';
const serverArgs = ['dist/index.js', serverRoot, '--full-access'];
const testBasePath = 'regex_search_content_depth/';

describe('test-filesystem::regex_search_content - Depth Limiting', () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({ command: serverCommand, args: serverArgs });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({ name: 'create_directory', arguments: { path: testBasePath } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}file_root.txt`, content: 'depth_pattern' } });
    await client.callTool({ name: 'create_directory', arguments: { path: `${testBasePath}sub1/` } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}sub1/file1.txt`, content: 'depth_pattern' } });
    await client.callTool({ name: 'create_directory', arguments: { path: `${testBasePath}sub1/sub2/` } });
    await client.callTool({ name: 'create_file', arguments: { path: `${testBasePath}sub1/sub2/file2.txt`, content: 'depth_pattern' } });
  });

  afterAll(async () => {
    await client.callTool({ name: 'delete_directory', arguments: { path: testBasePath, recursive: true } });
    await transport.close();
  });

  it('searches only root when maxDepth is 1', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'depth_pattern', filePattern: '**/*', maxDepth: 1 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const parsed = parseRegexSearchOutput(getTextContent(res));
    expect(parsed.map(p => p.file)).toEqual([path.join(serverRoot, `${testBasePath}file_root.txt`)]);
  });

  it('searches up to depth 2', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'depth_pattern', filePattern: '**/*', maxDepth: 2 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const files = parseRegexSearchOutput(getTextContent(res)).map(p => p.file);
    expect(files).toEqual(expect.arrayContaining([
      path.join(serverRoot, `${testBasePath}file_root.txt`),
      path.join(serverRoot, `${testBasePath}sub1/file1.txt`)
    ]));
  });

  it('searches all when maxDepth large', async () => {
    const res = await client.callTool({ name: 'regex_search_content', arguments: { path: testBasePath, regex: 'depth_pattern', filePattern: '**/*', maxDepth: 5 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const files = parseRegexSearchOutput(getTextContent(res)).map(p => p.file);
    expect(files).toEqual(expect.arrayContaining([
      path.join(serverRoot, `${testBasePath}file_root.txt`),
      path.join(serverRoot, `${testBasePath}sub1/file1.txt`),
      path.join(serverRoot, `${testBasePath}sub1/sub2/file2.txt`)
    ]));
  });
});

```

--------------------------------------------------------------------------------
/src/utils/data-utils.ts:
--------------------------------------------------------------------------------

```typescript
export function isPlainObject(value: any): value is Record<string, any> {
  return Object.prototype.toString.call(value) === '[object Object]';
}

export function pickBy<T extends Record<string, any>>(obj: T, predicate: (value: any, key: string) => boolean): Partial<T> {
  const result: Partial<T> = {};
  for (const [key, val] of Object.entries(obj)) {
    if (predicate(val, key)) {
      (result as any)[key] = val;
    }
  }
  return result;
}

export function size(value: any): number {
  if (Array.isArray(value) || typeof value === 'string') return value.length;
  if (isPlainObject(value)) return Object.keys(value).length;
  return 0;
}

export function values<T>(obj: Record<string, T>): T[] {
  return Object.values(obj);
}

export function get(obj: any, path: string | Array<string | number>): any {
  const parts = Array.isArray(path) ? path : String(path).split('.');
  let current = obj;
  for (const part of parts) {
    if (current == null) return undefined;
    current = current[part as any];
  }
  return current;
}

export function isEqual(a: any, b: any): boolean {
  return JSON.stringify(a) === JSON.stringify(b);
}

export function groupBy<T>(array: T[], iteratee: string | ((item: T) => string | number)): Record<string, T[]> {
  const getKey = typeof iteratee === 'function' ? iteratee : (item: T) => String(get(item as any, iteratee));
  return array.reduce<Record<string, T[]>>((acc, item) => {
    const key = String(getKey(item));
    (acc[key] ||= []).push(item);
    return acc;
  }, {});
}

export function orderBy<T>(array: T[], iteratee: string | ((item: T) => any), orders: ('asc' | 'desc')[] = ['asc']): T[] {
  const getValue = typeof iteratee === 'function' ? iteratee : (item: T) => get(item as any, iteratee);
  const order = orders[0] ?? 'asc';
  return [...array].sort((a, b) => {
    const va = getValue(a);
    const vb = getValue(b);
    if (va < vb) return order === 'asc' ? -1 : 1;
    if (va > vb) return order === 'asc' ? 1 : -1;
    return 0;
  });
}

export function flattenDeep(arr: any[]): any[] {
  const result: any[] = [];
  for (const item of arr) {
    if (Array.isArray(item)) {
      result.push(...flattenDeep(item));
    } else {
      result.push(item);
    }
  }
  return result;
}

export function pick<T extends Record<string, any>, K extends keyof T>(obj: T, keys: readonly K[]): Pick<T, K> {
  const result: Partial<T> = {};
  for (const key of keys) {
    if (key in obj) {
      (result as any)[key] = obj[key];
    }
  }
  return result as Pick<T, K>;
}

export function omit<T extends Record<string, any>, K extends keyof T>(obj: T, keys: readonly K[]): Omit<T, K> {
  const result: Record<string, any> = { ...obj };
  for (const key of keys) {
    delete result[key as string];
  }
  return result as Omit<T, K>;
}

export function isEmpty(value: any): boolean {
  if (Array.isArray(value) || typeof value === 'string') return value.length === 0;
  if (isPlainObject(value)) return Object.keys(value).length === 0;
  return !value;
}

export function every<T>(arr: T[], predicate: (item: T) => boolean): boolean {
  return arr.every(predicate);
}

export function some<T>(arr: T[], predicate: (item: T) => boolean): boolean {
  return arr.some(predicate);
}

export function map<T, U>(arr: T[], iteratee: (item: T) => U): U[] {
  return arr.map(iteratee);
}

export function filter<T>(arr: T[], predicate: (item: T) => boolean): T[] {
  return arr.filter(predicate);
}

export function sampleSize<T>(arr: T[], n: number): T[] {
  const copy = [...arr];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, n);
}

export function take<T>(arr: T[], n: number): T[] {
  return arr.slice(0, n);
}

export function transform<T extends Record<string, any>, R>(obj: T, iteratee: (result: R, value: any, key: string) => void, accumulator: R): R {
  for (const [key, val] of Object.entries(obj)) {
    iteratee(accumulator, val, key);
  }
  return accumulator;
}

```
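
A quick tour of the lodash-style helpers above, using invented sample data:

```typescript
import { get, groupBy, orderBy, pick } from './src/utils/data-utils.js';

const users = [
  { name: 'Ada', role: 'admin', meta: { age: 36 } },
  { name: 'Linus', role: 'dev', meta: { age: 54 } },
  { name: 'Grace', role: 'admin', meta: { age: 85 } },
];

get(users[0], 'meta.age');                     // 36 (dot-path lookup)
groupBy(users, 'role');                        // { admin: [Ada, Grace], dev: [Linus] }
orderBy(users, 'meta.age', ['desc'])[0].name;  // 'Grace'
pick(users[0], ['name', 'role']);              // { name: 'Ada', role: 'admin' }
```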

--------------------------------------------------------------------------------
/src/utils/path-utils.ts:
--------------------------------------------------------------------------------

```typescript
import path from 'path';
import os from 'os';
import fs from 'fs/promises';
import type { ReadonlyDeep } from 'type-fest';

// Normalize all paths consistently
export function normalizePath(p: string): string {
  return path.normalize(p);
}

export function expandHome(filepath: string): string {
  // Expand $VAR, ${VAR}, and %VAR% environment variables
  let expanded = filepath.replace(/\$(?:\{([A-Za-z_][A-Za-z0-9_]*)\}|([A-Za-z_][A-Za-z0-9_]*))|%([A-Za-z_][A-Za-z0-9_]*)%/g, (match, braced, unixVar, winVar) => {
    const envVar = (braced || unixVar || winVar) as string;

    // Built-in fallbacks for common CWD variables
    if (envVar === 'CWD') {
      return process.cwd();
    }
    if (envVar === 'PWD') {
      return process.env.PWD ?? process.cwd();
    }

    const value = process.env[envVar];
    if (value === undefined) {
      throw new Error(`Environment variable ${envVar} is not defined`);
    }
    return value;
  });

  // Expand ~ to home directory
  if (expanded.startsWith('~/') || expanded === '~') {
    expanded = path.join(os.homedir(), expanded.slice(1));
  }

  // Ensure no unresolved variables remain
  if (/\$\{?[A-Za-z_][A-Za-z0-9_]*\}?|%[A-Za-z_][A-Za-z0-9_]*%/.test(expanded)) {
    throw new Error('Unresolved environment variables in path');
  }

  return expanded;
}
export type ValidatePathOptions = ReadonlyDeep<{
  checkParentExists?: boolean;
}>;


export async function validatePath(
  requestedPath: string,
  allowedDirectories: ReadonlyArray<string>,
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean,
  options?: ValidatePathOptions
): Promise<string> {
  // Default checkParentExists to true if not provided
  const checkParentExists = options?.checkParentExists ?? true;
  const expandedPath = expandHome(requestedPath);
  // Resolve absolute paths directly, resolve relative paths against the first allowed directory
  const absolute = path.isAbsolute(expandedPath)
    ? path.resolve(expandedPath)
    : path.resolve(allowedDirectories[0], expandedPath);

  const normalizedRequested = normalizePath(absolute);

  // Check if path is within allowed directories. Match on a full path-segment
  // boundary so that an allowed /foo does not also admit /foobar.
  const isAllowed = allowedDirectories.some(dir =>
    normalizedRequested === dir || normalizedRequested.startsWith(dir + path.sep)
  );
  if (!isAllowed) {
    // Check if it's a real path that matches a symlink we know about
    const matchingSymlink = Array.from(symlinksMap.entries()).find(([realPath]) =>
      normalizedRequested === realPath || normalizedRequested.startsWith(realPath + path.sep)
    );
    
    if (matchingSymlink) {
      const [realPath, symlinkPath] = matchingSymlink;
      // Convert the path from real path to symlink path
      const relativePath = normalizedRequested.substring(realPath.length);
      const symlinkEquivalent = path.join(symlinkPath, relativePath);
      
      // Return the symlink path instead
      return symlinkEquivalent;
    }
    
    throw new Error(`Access denied - path outside allowed directories: ${absolute} not in ${allowedDirectories.join(', ')}`);
  }

  // Handle symlinks by checking their real path
  try {
    const realPath = await fs.realpath(absolute);
    const normalizedReal = normalizePath(realPath);
    
    // If the real path is different from the requested path, it's a symlink
    if (normalizedReal !== normalizedRequested) {
      // Store this mapping for future reference
      symlinksMap.set(normalizedReal, normalizedRequested);
      
      // Make sure the real path is also allowed
      const isRealPathAllowed = allowedDirectories.some(dir => normalizedReal === dir || normalizedReal.startsWith(dir + path.sep));
      if (!isRealPathAllowed) {
        throw new Error("Access denied - symlink target outside allowed directories");
      }
      
      // If no-follow-symlinks is true, return the original path
      if (noFollowSymlinks) {
        return absolute;
      }
    }
    
    return realPath;
  } catch (error) {
    // For new files/dirs that don't exist yet, verify parent directory *if requested*
    if (checkParentExists) { // Add this condition
      const parentDir = path.dirname(absolute);
      try {
        const realParentPath = await fs.realpath(parentDir);
        const normalizedParent = normalizePath(realParentPath);
        const isParentAllowed = allowedDirectories.some(dir => normalizedParent === dir || normalizedParent.startsWith(dir + path.sep));
        if (!isParentAllowed) {
          throw new Error("Access denied - parent directory outside allowed directories");
        }
        // If parent exists and is allowed, return the original absolute path for creation
        return absolute;
      } catch (parentError) {
         // If parent check fails, throw specific error
         // Check if parent doesn't exist specifically using the error code
         if ((parentError as NodeJS.ErrnoException)?.code === 'ENOENT') {
            throw new Error(`Parent directory does not exist: ${parentDir}`);
         }
         // Rethrow other parent errors
         throw parentError;
      }
    } else {
      // If checkParentExists is false, just return the absolute path
      // The initial isAllowed check already confirmed it's within bounds
      return absolute;
    }
  }
} 
```
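
A sketch of how `validatePath` behaves for existing and not-yet-existing paths; the directories are hypothetical and `/srv/fs_root` stands in for an allowed root:

```typescript
import { validatePath } from './src/utils/path-utils.js';

async function demo() {
  const allowed = ['/srv/fs_root'];            // hypothetical allowed root
  const symlinks = new Map<string, string>();

  // Existing path: resolved through realpath and checked for containment.
  const real = await validatePath('/srv/fs_root/notes.txt', allowed, symlinks, false);

  // Not-yet-existing path: by default the parent directory must already
  // exist inside an allowed root, otherwise this throws.
  const toCreate = await validatePath('/srv/fs_root/new.txt', allowed, symlinks, false);

  // Opt out of the parent-existence check, e.g. ahead of a recursive mkdir.
  const deep = await validatePath(
    '/srv/fs_root/a/b/c/file.txt', allowed, symlinks, false,
    { checkParentExists: false },
  );

  return { real, toCreate, deep };
}
```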

--------------------------------------------------------------------------------
/test/suites/xml_tools/xml_tools.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, beforeAll, afterAll } from 'bun:test';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { ClientCapabilities, CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';
import path from 'path';
import fs from 'fs/promises';
import { fileURLToPath } from 'url';

const clientInfo = { name: 'xml-tools-test-suite', version: '0.1.0' };
const clientCapabilities: ClientCapabilities = { toolUse: { enabled: true } };
const __dirname = path.dirname(fileURLToPath(import.meta.url));
const serverRoot = path.resolve(__dirname, '../../fs_root');
const serverCommand = 'bun';
const serverArgs = ['dist/index.js', serverRoot, '--full-access'];
const basePath = 'xml_tools/';

function getTextContent(result: unknown): string {
  const parsed = CallToolResultSchema.parse(result);
  const first = parsed.content[0];
  if (!first || first.type !== 'text') throw new Error('Expected text content');
  return (first as any).text as string;
}

describe('test-filesystem::xml_tools', () => {
  let client: Client;
  let transport: StdioClientTransport;

  beforeAll(async () => {
    await fs.mkdir(serverRoot, { recursive: true });
    transport = new StdioClientTransport({ command: serverCommand, args: serverArgs });
    client = new Client(clientInfo, { capabilities: clientCapabilities });
    await client.connect(transport);

    await client.callTool({ name: 'create_directory', arguments: { path: basePath } });

    const xml = `<?xml version="1.0" encoding="UTF-8"?>\n` +
      `<catalog xmlns="http://example.org/catalog">\n` +
      `  <book id="bk101" category="fiction">\n` +
      `    <author>Gambardella, Matthew</author>\n` +
      `    <title>XML Developer's Guide</title>\n` +
      `  </book>\n` +
      `  <book id="bk102" category="fiction">\n` +
      `    <author>Ralls, Kim</author>\n` +
      `    <title>Midnight Rain</title>\n` +
      `  </book>\n` +
      `</catalog>\n`;

    await client.callTool({ name: 'create_file', arguments: { path: `${basePath}basic.xml`, content: xml } });

    // Create a larger XML to exercise truncation
    const manyBooks = Array.from({ length: 200 }, (_, i) => `  <book id="bk${1000 + i}"><title>T${i}</title></book>`).join('\n');
    const bigXml = `<?xml version="1.0"?><catalog xmlns="http://example.org/catalog">\n${manyBooks}\n</catalog>\n`;
    await client.callTool({ name: 'create_file', arguments: { path: `${basePath}big.xml`, content: bigXml } });
  });

  afterAll(async () => {
    await client.callTool({ name: 'delete_directory', arguments: { path: basePath, recursive: true } });
    await transport.close();
  });

  it('xml_structure returns structure for a basic XML', async () => {
    const res = await client.callTool({ name: 'xml_structure', arguments: { path: `${basePath}basic.xml`, maxDepth: 2, includeAttributes: true, maxResponseBytes: 1024 * 1024 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const text = getTextContent(res);
    const obj = JSON.parse(text);
    expect(obj.rootElement).toBe('catalog');
    expect(typeof obj.elements).toBe('object');
    expect(obj.namespaces).toBeDefined();
  });

  it('xml_query supports XPath with local-name() (namespace-agnostic)', async () => {
    const res = await client.callTool({ name: 'xml_query', arguments: { path: `${basePath}basic.xml`, query: "//*[local-name()='title']/text()", includeAttributes: true, maxResponseBytes: 50 * 1024 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const text = getTextContent(res);
    const arr = JSON.parse(text);
    expect(Array.isArray(arr)).toBe(true);
    // Expect at least two titles
    expect(arr.length).toBeGreaterThanOrEqual(2);
    expect(arr[0].type).toBeDefined();
  });

  it('xml_structure truncates output when exceeding maxResponseBytes', async () => {
    const res = await client.callTool({ name: 'xml_structure', arguments: { path: `${basePath}big.xml`, maxDepth: 2, includeAttributes: false, maxResponseBytes: 300 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const text = getTextContent(res);
    const obj = JSON.parse(text);
    expect(obj._meta?.truncated).toBe(true);
  });

  it('xml_query truncates output when exceeding maxResponseBytes', async () => {
    const res = await client.callTool({ name: 'xml_query', arguments: { path: `${basePath}big.xml`, query: "//*[local-name()='book']", includeAttributes: true, maxResponseBytes: 400 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const text = getTextContent(res);
    // Either includes meta or a small array; ensure length is not huge
    expect(text.length).toBeLessThanOrEqual(400 + 200); // allow small overhead
  });

  it('xml_to_json_string returns JSON and applies response cap when small', async () => {
    const res = await client.callTool({ name: 'xml_to_json_string', arguments: { xmlPath: `${basePath}big.xml`, maxResponseBytes: 500 } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const jsonText = getTextContent(res);
    const parsed = JSON.parse(jsonText);
    expect(parsed._meta?.truncated).toBe(true);
  });

  it('xml_to_json writes a file and applies response cap when small', async () => {
    const outPath = `${basePath}out.json`;
    const res = await client.callTool({ name: 'xml_to_json', arguments: { xmlPath: `${basePath}big.xml`, jsonPath: outPath, maxResponseBytes: 600, options: { format: true } } }, CallToolResultSchema);
    expect(res.isError).not.toBe(true);
    const read = await client.callTool({ name: 'read_file', arguments: { path: outPath, maxBytes: 100000 } }, CallToolResultSchema);
    const jsonText = getTextContent(read);
    const parsed = JSON.parse(jsonText);
    expect(parsed._meta?.truncated).toBe(true);
  });
});



```

--------------------------------------------------------------------------------
/src/handlers/directory-handlers.ts:
--------------------------------------------------------------------------------

```typescript
import fs from 'fs/promises';
import path from 'path';
import { minimatch } from 'minimatch';
import { Permissions } from '../config/permissions.js';
import { validatePath } from '../utils/path-utils.js';
import { parseArgs } from '../utils/schema-utils.js';
import {
  CreateDirectoryArgsSchema,
  ListDirectoryArgsSchema,
  DirectoryTreeArgsSchema,
  DeleteDirectoryArgsSchema,
  type CreateDirectoryArgs,
  type ListDirectoryArgs,
  type DirectoryTreeArgs,
  type DeleteDirectoryArgs
} from '../schemas/directory-operations.js';

interface TreeEntry {
  name: string;
  type: 'file' | 'directory';
  children?: TreeEntry[];
}

export async function handleCreateDirectory(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(CreateDirectoryArgsSchema, args, 'create_directory');
  
  // Enforce permission checks
  if (!permissions.create && !permissions.fullAccess) {
    throw new Error('Cannot create directory: create permission not granted (requires --allow-create)');
  }
  
  const validPath = await validatePath(
    parsed.path,
    allowedDirectories,
    symlinksMap,
    noFollowSymlinks,
    { checkParentExists: false } // Target (and parents) may not exist yet; fs.mkdir below creates them recursively
  );
  await fs.mkdir(validPath, { recursive: true });
  return {
    content: [{ type: "text", text: `Successfully created directory ${parsed.path}` }],
  };
}

export async function handleListDirectory(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(ListDirectoryArgsSchema, args, 'list_directory');
  const validPath = await validatePath(parsed.path, allowedDirectories, symlinksMap, noFollowSymlinks);
  const entries = await fs.readdir(validPath, { withFileTypes: true });
  const formatted = entries
    .map((entry) => `${entry.isDirectory() ? "[DIR]" : "[FILE]"} ${entry.name}`)
    .join("\n");
  return {
    content: [{ type: "text", text: formatted }],
  };
}

export async function handleDirectoryTree(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(DirectoryTreeArgsSchema, args, 'directory_tree');

  const { path: startPath, maxDepth, excludePatterns } = parsed; // maxDepth is required by the schema (documented handler default: 2)
  const validatedStartPath = await validatePath(startPath, allowedDirectories, symlinksMap, noFollowSymlinks);

  async function buildTree(
    currentPath: string,
    basePath: string,
    currentDepth: number,
    maxDepth?: number,
    excludePatterns?: string[]
  ): Promise<TreeEntry[]> {
    // Depth check
    if (maxDepth !== undefined && currentDepth >= maxDepth) {
      return []; // Stop traversal if max depth is reached
    }

    const validPath = await validatePath(currentPath, allowedDirectories, symlinksMap, noFollowSymlinks);
    
    let entries;
    try {
      entries = await fs.readdir(validPath, { withFileTypes: true });
    } catch (error) {
      // Handle cases where directory might not be readable
      console.error(`Error reading directory ${validPath}: ${error}`);
      return [];
    }
    
    const result: TreeEntry[] = [];

    for (const entry of entries) {
      const entryFullPath = path.join(currentPath, entry.name);
      const entryRelativePath = path.relative(basePath, entryFullPath);

      // Exclusion check using minimatch
      if (excludePatterns && excludePatterns.length > 0) {
        const shouldExclude = excludePatterns.some(pattern =>
          minimatch(entryRelativePath, pattern, { dot: true, matchBase: true })
        );
        if (shouldExclude) {
          continue; // Skip this entry if it matches any exclude pattern
        }
      }

      const entryData: TreeEntry = {
        name: entry.name,
        type: entry.isDirectory() ? 'directory' : 'file'
      };

      if (entry.isDirectory()) {
        // Recursive call with incremented depth
        entryData.children = await buildTree(
          entryFullPath,
          basePath,
          currentDepth + 1,
          maxDepth,
          excludePatterns
        );
      }

      result.push(entryData);
    }

    return result;
  }

  // Initial call to buildTree with base parameters
  const treeData = await buildTree(
    validatedStartPath, 
    validatedStartPath, 
    0, 
    maxDepth, 
    excludePatterns
  );
  
  return {
    content: [{
      type: "text",
      text: JSON.stringify(treeData, null, 2)
    }],
  };
}

export async function handleDeleteDirectory(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(DeleteDirectoryArgsSchema, args, 'delete_directory');
  
  // Enforce permission checks
  if (!permissions.delete && !permissions.fullAccess) {
    throw new Error('Cannot delete directory: delete permission not granted (requires --allow-delete)');
  }
  
  const validPath = await validatePath(parsed.path, allowedDirectories, symlinksMap, noFollowSymlinks);
  
  try {
    if (parsed.recursive) {
      // Recursively delete the directory and everything inside it
      await fs.rm(validPath, { recursive: true, force: true });
      return {
        content: [{ type: "text", text: `Successfully deleted directory ${parsed.path} and all its contents` }],
      };
    } else {
      // Non-recursive directory delete
      await fs.rmdir(validPath);
      return {
        content: [{ type: "text", text: `Successfully deleted directory ${parsed.path}` }],
      };
    }
  } catch (error) {
    const msg = error instanceof Error ? error.message : String(error);
    if (msg.includes('ENOTEMPTY')) {
      throw new Error(`Cannot delete directory: directory is not empty. Use recursive=true to delete with contents.`);
    }
    throw new Error(`Failed to delete directory: ${msg}`);
  }
} 
```

--------------------------------------------------------------------------------
/src/schemas/utility-operations.ts:
--------------------------------------------------------------------------------

```typescript
import { Type, Static } from "@sinclair/typebox";

export const GetPermissionsArgsSchema = Type.Object({});
export type GetPermissionsArgs = Static<typeof GetPermissionsArgsSchema>;

export const SearchFilesArgsSchema = Type.Object({
  path: Type.String(),
  pattern: Type.String(),
  excludePatterns: Type.Optional(
    Type.Array(Type.String(), { default: [] })
  ),
  maxDepth: Type.Integer({
    minimum: 1,
    description: 'Maximum directory depth to search. Must be a positive integer. Handler default: 2.'
  }),
  maxResults: Type.Integer({
    minimum: 1,
    description: 'Maximum number of results to return. Must be a positive integer. Handler default: 10.'
  })
});
export type SearchFilesArgs = Static<typeof SearchFilesArgsSchema>;

export const FindFilesByExtensionArgsSchema = Type.Object({
  path: Type.String(),
  extension: Type.String({ description: 'File extension to search for (e.g., "xml", "json", "ts")' }),
  excludePatterns: Type.Optional(
    Type.Array(Type.String(), { default: [] })
  ),
  maxDepth: Type.Integer({
    minimum: 1,
    description: 'Maximum directory depth to search. Must be a positive integer. Handler default: 2.'
  }),
  maxResults: Type.Integer({
    minimum: 1,
    description: 'Maximum number of results to return. Must be a positive integer. Handler default: 10.'
  })
});
export type FindFilesByExtensionArgs = Static<typeof FindFilesByExtensionArgsSchema>;

export const XmlToJsonArgsSchema = Type.Object({
  xmlPath: Type.String({ description: 'Path to the XML file to convert' }),
  jsonPath: Type.String({ description: 'Path where the JSON should be saved' }),
  maxBytes: Type.Optional(Type.Integer({
    minimum: 1,
    description: '[Deprecated semantics] Previously limited file bytes read; ignored for parsing; considered only as a response size cap where applicable.'
  })),
  maxResponseBytes: Type.Optional(Type.Integer({
    minimum: 1,
    description: 'Maximum size, in bytes, of the returned content. Parsing reads full file; response may be truncated to respect this limit.'
  })),
  options: Type.Optional(
    Type.Object({
      ignoreAttributes: Type.Boolean({ default: false, description: 'Whether to ignore attributes in XML' }),
      preserveOrder: Type.Boolean({ default: true, description: 'Whether to preserve the order of properties' }),
      format: Type.Boolean({ default: true, description: 'Whether to format the JSON output' }),
      indentSize: Type.Number({ default: 2, description: 'Number of spaces for indentation' })
    }, { default: {} })
  )
});
export type XmlToJsonArgs = Static<typeof XmlToJsonArgsSchema>;

export const XmlToJsonStringArgsSchema = Type.Object({
  xmlPath: Type.String({ description: 'Path to the XML file to convert' }),
  maxBytes: Type.Optional(Type.Integer({
    minimum: 1,
    description: '[Deprecated semantics] Previously limited file bytes read; now treated as a response size cap in bytes.'
  })),
  maxResponseBytes: Type.Optional(Type.Integer({
    minimum: 1,
    description: 'Maximum size, in bytes, of the returned JSON string. Parsing reads full file; response may be truncated to respect this limit.'
  })),
  options: Type.Optional(
    Type.Object({
      ignoreAttributes: Type.Boolean({ default: false, description: 'Whether to ignore attributes in XML' }),
      preserveOrder: Type.Boolean({ default: true, description: 'Whether to preserve the order of properties' })
    }, { default: {} })
  )
});
export type XmlToJsonStringArgs = Static<typeof XmlToJsonStringArgsSchema>;

export const XmlQueryArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the XML file to query' }),
  query: Type.Optional(Type.String({ description: 'XPath query to execute against the XML file' })),
  structureOnly: Type.Optional(Type.Boolean({ default: false, description: 'If true, returns only tag names and structure instead of executing query' })),
  maxBytes: Type.Optional(Type.Integer({
    minimum: 1,
    description: '[Deprecated semantics] Previously limited file bytes read; now treated as a response size cap in bytes.'
  })),
  maxResponseBytes: Type.Optional(Type.Integer({
    minimum: 1,
    description: 'Maximum size, in bytes, of the returned content. Parsing reads full file; response may be truncated to respect this limit.'
  })),
  includeAttributes: Type.Optional(Type.Boolean({ default: true, description: 'Whether to include attribute information in the results' }))
});
export type XmlQueryArgs = Static<typeof XmlQueryArgsSchema>;

export const XmlStructureArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the XML file to analyze' }),
  maxDepth: Type.Integer({
    minimum: 1,
    description: 'How deep to analyze the hierarchy. Must be a positive integer. Handler default: 2.'
  }),
  includeAttributes: Type.Optional(Type.Boolean({ default: true, description: 'Whether to include attribute information' })),
  maxBytes: Type.Optional(Type.Integer({
    minimum: 1,
    description: '[Deprecated semantics] Previously limited file bytes read; now treated as a response size cap in bytes.'
  })),
  maxResponseBytes: Type.Optional(Type.Integer({
    minimum: 1,
    description: 'Maximum size, in bytes, of the returned content. Parsing reads full file; response may be truncated to respect this limit.'
  }))
});
export type XmlStructureArgs = Static<typeof XmlStructureArgsSchema>;

export const RegexSearchContentArgsSchema = Type.Object({
  path: Type.String({ description: 'Directory path to start the search from.' }),
  regex: Type.String({ description: 'The regular expression pattern to search for within file content.' }),
  filePattern: Type.Optional(Type.String({ default: '*', description: 'Glob pattern to filter files to search within (e.g., "*.ts", "data/**.json"). Defaults to searching all files.' })),
  maxDepth: Type.Optional(Type.Integer({ minimum: 1, default: 2, description: 'Maximum directory depth to search recursively. Defaults to 2.' })),
  maxFileSize: Type.Optional(Type.Integer({ minimum: 1, default: 10 * 1024 * 1024, description: 'Maximum file size in bytes to read for searching. Defaults to 10MB.' })),
  maxResults: Type.Optional(Type.Integer({ minimum: 1, default: 50, description: 'Maximum number of files with matches to return. Defaults to 50.' }))
});
export type RegexSearchContentArgs = Static<typeof RegexSearchContentArgsSchema>;

```

--------------------------------------------------------------------------------
/src/schemas/json-operations.ts:
--------------------------------------------------------------------------------

```typescript
import { Type, Static } from "@sinclair/typebox";

// Schema for JSONPath query operations
export const JsonQueryArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the JSON file to query' }),
  query: Type.String({ description: 'JSONPath expression to execute against the JSON data' }),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
  })
});
export type JsonQueryArgs = Static<typeof JsonQueryArgsSchema>;

// Schema for filtering JSON arrays
export const JsonFilterArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the JSON file to filter' }),
  arrayPath: Type.Optional(
    Type.String({ description: 'Optional JSONPath expression to locate the target array (e.g., "$.items" or "$.data.records")' })
  ),
  conditions: Type.Array(
    Type.Object({
      field: Type.String({ description: 'Path to the field to check (e.g., "address.city" or "tags[0]")' }),
      operator: Type.Union([
        Type.Literal('eq'), Type.Literal('neq'),
        Type.Literal('gt'), Type.Literal('gte'),
        Type.Literal('lt'), Type.Literal('lte'),
        Type.Literal('contains'),
        Type.Literal('startsWith'),
        Type.Literal('endsWith'),
        Type.Literal('exists'),
        Type.Literal('type')
      ], { description: 'Comparison operator' }),
      value: Type.Any({ description: 'Value to compare against' })
    }),
    { minItems: 1, description: 'Array of filter conditions' }
  ),
  match: Type.Optional(
    Type.Union([Type.Literal('all'), Type.Literal('any')], {
      default: 'all',
      description: 'How to combine multiple conditions - "all" for AND, "any" for OR'
    })
  ),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
  })
});
export type JsonFilterArgs = Static<typeof JsonFilterArgsSchema>;

// Schema for getting a specific value from a JSON file
export const JsonGetValueArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the JSON file' }),
  field: Type.String({ description: 'Path to the field to retrieve (e.g., "user.address.city" or "items[0].name")' }),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
  })
});
export type JsonGetValueArgs = Static<typeof JsonGetValueArgsSchema>;

// Schema for transforming JSON data
export const JsonTransformArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the JSON file to transform' }),
  operations: Type.Array(
    Type.Object({
      type: Type.Union([
        Type.Literal('map'),
        Type.Literal('groupBy'),
        Type.Literal('sort'),
        Type.Literal('flatten'),
        Type.Literal('pick'),
        Type.Literal('omit')
      ], { description: 'Type of transformation operation' }),
      field: Type.Optional(Type.String({ description: 'Field to operate on (if applicable)' })),
      order: Type.Optional(Type.Union([Type.Literal('asc'), Type.Literal('desc')], { description: 'Sort order (if applicable)' })),
      fields: Type.Optional(Type.Array(Type.String(), { description: 'Fields to pick/omit (if applicable)' }))
    }),
    { minItems: 1, description: 'Array of transformation operations to apply in sequence' }
  ),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
  })
});
export type JsonTransformArgs = Static<typeof JsonTransformArgsSchema>;

// Schema for getting JSON structure
export const JsonStructureArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the JSON file to analyze' }),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
  }),
  maxDepth: Type.Integer({
    minimum: 1,
    description: 'How deep to analyze the structure. Must be a positive integer. Handler default: 2.'
  }),
  detailedArrayTypes: Type.Optional(Type.Boolean({
    default: false,
    description: 'Whether to analyze all array elements for mixed types (default: false)'
  }))
});
export type JsonStructureArgs = Static<typeof JsonStructureArgsSchema>;

// Schema for sampling JSON array elements
export const JsonSampleArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the JSON file containing the array' }),
  arrayPath: Type.String({ description: 'JSONPath expression to locate the target array (e.g., "$.items" or "$.data.records")' }),
  count: Type.Integer({ minimum: 1, description: 'Number of elements to sample' }),
  method: Type.Optional(
    Type.Union([Type.Literal('first'), Type.Literal('random')], {
      default: 'first',
      description: 'Sampling method - "first" for first N elements, "random" for random sampling'
    })
  ),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
  })
});
export type JsonSampleArgs = Static<typeof JsonSampleArgsSchema>;

// Schema for JSON Schema validation
export const JsonValidateArgsSchema = Type.Object({
  path: Type.String({ description: 'Path to the JSON file to validate' }),
  schemaPath: Type.String({ description: 'Path to the JSON Schema file' }),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from the file. Must be a positive integer. Handler default: 10KB.'
  }),
  strict: Type.Optional(Type.Boolean({
    default: false,
    description: 'Whether to enable strict mode validation (additionalProperties: false)'
  })),
  allErrors: Type.Optional(Type.Boolean({
    default: true,
    description: 'Whether to collect all validation errors or stop at first error'
  }))
});
export type JsonValidateArgs = Static<typeof JsonValidateArgsSchema>;

// Schema for searching JSON files by key/value pairs
export const JsonSearchKvArgsSchema = Type.Object({
  directoryPath: Type.String({ description: 'Directory to search in' }),
  key: Type.String({ description: 'Key to search for' }),
  value: Type.Optional(Type.Any({ description: 'Optional value to match against the key' })),
  recursive: Type.Optional(Type.Boolean({ default: true, description: 'Whether to search recursively in subdirectories' })),
  matchType: Type.Optional(
    Type.Union([
      Type.Literal('exact'),
      Type.Literal('contains'),
      Type.Literal('startsWith'),
      Type.Literal('endsWith')
    ], { default: 'exact', description: 'How to match values - only applies if value is provided' })
  ),
  maxBytes: Type.Integer({
    minimum: 1,
    description: 'Maximum bytes to read from each file. Must be a positive integer. Handler default: 10KB.'
  }),
  maxResults: Type.Integer({
    minimum: 1,
    description: 'Maximum number of results to return. Must be a positive integer. Handler default: 10.'
  }),
  maxDepth: Type.Integer({
    minimum: 1,
    description: 'Maximum directory depth to search. Must be a positive integer. Handler default: 2.'
  })
});
export type JsonSearchKvArgs = Static<typeof JsonSearchKvArgsSchema>;

```

--------------------------------------------------------------------------------
/test/suites/regex_search_content/spec.md:
--------------------------------------------------------------------------------

```markdown
# Test Suite Specification: `test-filesystem::regex_search_content`

## 1. Objective

This document defines the specification for creating a robust, automated test suite for the `regex_search_content` tool provided by the `test-filesystem` MCP server. The goal is to ensure the tool functions correctly across various scenarios, including those identified during prior interactive testing.

## 2. Testing Framework

**Recommendation:** **Bun's built-in test runner**

*   **Rationale:** The runner ships with Bun (no extra dependency), exposes a Jest-compatible API, and executes TypeScript/ESM tests natively with fast startup.

## 3. Directory Structure

The test suite files and related artifacts will be organized within the main project repository (`mcp-filesystem/`) as follows:

```
mcp-filesystem/
├── src/
├── test/
│   ├── suites/
│   │   ├── regex_search_content/
│   │   │   ├── spec.md             # This specification document
│   │   │   ├── basic_search.test.ts # Example test file
│   │   │   ├── depth_limiting.test.ts
│   │   │   ├── error_handling.test.ts
│   │   │   └── ... (other test files)
│   │   └── ... (other tool suites)
│   └── ... (other test helpers, fixtures)
└── ... (package.json, tsconfig.json, etc.)
```

**Test Data Directory (Managed by Tests via MCP):**

All test files and directories manipulated *during test execution* will reside within the `test-filesystem` MCP server's designated root under a dedicated subdirectory:

*   `test/fs_root/regex_search_content/`

This ensures test isolation and utilizes the MCP server's file operations for setup and teardown.

## 4. Test File Conventions

*   **Naming:** Test files should follow the pattern `[feature_or_scenario].test.ts`. Examples:
    *   `basic_search.test.ts`
    *   `file_patterns.test.ts`
    *   `depth_limiting.test.ts`
    *   `max_results.test.ts`
    *   `error_handling.test.ts`
    *   `edge_cases.test.ts`
*   **Structure:** Use the standard structure of Bun's Jest-compatible test API:
    *   Use `describe()` blocks to group tests related to a specific feature or aspect (e.g., `describe('maxDepth parameter', () => { ... })`).
    *   Use `it()` or `test()` for individual test cases with descriptive names (e.g., `it('should return matches only from files matching the filePattern')`).
    *   Employ `beforeAll`, `afterAll`, `beforeEach`, `afterEach` for setup and teardown logic as described in Section 7.

## 5. MCP Tool Call Handling

*   **Method:** Tests will directly invoke the `test-filesystem::regex_search_content` tool using the project's established MCP client/communication mechanism.
*   **Invocation:** Calls should mirror how the tool would be used in a real scenario, passing parameters like `path`, `regex`, `filePattern`, `maxDepth`, `maxFileSize`, and `maxResults`.
*   **Example (Conceptual):**
    ```typescript
    import { mcpClient } from '../path/to/mcp/client'; // Assuming an MCP client instance

    it('should find a simple pattern in the root test directory', async () => {
      const result = await mcpClient.useTool('test-filesystem', 'regex_search_content', {
        path: 'regex_search_content/', // Relative to MCP server root
        regex: 'unique_pattern_123',
        // maxDepth, maxResults etc. with defaults or specific values
      });
      // Assertions on the result structure and content
      expect(result.success).toBe(true);
      expect(result.data).toEqual(expect.arrayContaining([
        expect.objectContaining({
          file: 'regex_search_content/file1.txt',
          matches: expect.arrayContaining([
            expect.objectContaining({ line: 5, text: expect.stringContaining('unique_pattern_123') })
          ])
        })
      ]));
    });
    ```
*   **Abstraction (Optional Future Improvement):** A helper function or class could be created later to wrap these MCP calls, simplifying test code and potentially enabling easier mocking if needed for unit testing components that *use* the MCP client. For this initial suite, direct calls are sufficient.
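*   **Sketch (Hypothetical):** If that abstraction is pursued, it could look roughly like the following. The `useTool` helper is illustrative only, not existing project code; it mirrors the `callTool`/`CallToolResultSchema` usage already present in the test suites:
    ```typescript
    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';

    // Hypothetical helper: centralizes tool invocation and text extraction
    // so individual tests stay focused on assertions.
    export async function useTool(
      client: Client,
      name: string,
      args: Record<string, unknown>
    ): Promise<string> {
      const result = await client.callTool({ name, arguments: args }, CallToolResultSchema);
      const parsed = CallToolResultSchema.parse(result);
      const first = parsed.content[0];
      if (!first || first.type !== 'text') throw new Error(`Tool ${name} returned no text content`);
      return (first as { text: string }).text;
    }
    ```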

## 6. Test Case Implementation

*   **Coverage:** Implement test cases derived from the original list (RCS-001 to RCS-083, skipping noted exclusions) and specifically address the findings from interactive testing.
*   **Findings Integration:**
    *   **Path:** Always use `regex_search_content/` as the base `path` parameter in tests targeting the prepared test data area.
    *   **Recursion/Depth:**
        *   Test default depth behavior.
        *   Test `maxDepth: 1`, `maxDepth: 2`, `maxDepth: N` (where N > 2).
        *   Test interactions between `maxDepth` and `filePattern` (e.g., pattern matches file beyond `maxDepth`).
        *   Explicitly test `maxDepth: 0` and assert that it throws an appropriate validation error (addressing RCS-040 failure); see the sketch at the end of this section.
    *   **Regex Flags:**
        *   Confirm `(?i)` and `(?m)` are *not* supported and likely cause an invalid regex error.
        *   Test case-insensitive matching using character sets (e.g., `[Ss]earch`, `[Tt]his`).
    *   **`maxResults`:**
        *   Create scenarios with more files containing matches than `maxResults`.
        *   Assert that the number of *files* in the result array equals `maxResults`.
        *   Assert that the matches *within* the returned files are not truncated by `maxResults`.
    *   **Glob Negation:** Test `filePattern: '!(*.log)'` or similar and assert the observed behavior (an error, or failure to exclude the matching files), confirming that glob negation is not supported.
    *   **Error Handling:**
        *   Test with syntactically invalid regex patterns (confirming RCS-070 pass).
        *   Test with a `path` that does not exist within the MCP server's scope (assert specific error, not "No matches" - addressing RCS-071 failure).
        *   Test using a file path as the `path` parameter (assert specific error, not "No matches" - addressing RCS-072 failure).
*   **Assertions:** Use `expect` assertions to validate:
    *   Success/failure status of the MCP call.
    *   Presence or absence of expected files in the results.
    *   Correct file paths (relative to the MCP server root).
    *   Correct line numbers for matches.
    *   Correct matching text snippets.
    *   Correct error messages/types for invalid inputs or scenarios.
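
The error-handling findings above might translate into assertions like the following sketch. It assumes a `client` connected as in the existing `xml_tools` suite, and that handler errors surface as results with `isError: true` (matching the assertion style used elsewhere in this repository); confirm the exact behavior and messages against the server.

```typescript
import { CallToolResultSchema } from '@modelcontextprotocol/sdk/types.js';

describe('error handling findings', () => {
  it('flags maxDepth: 0 as a validation error (RCS-040)', async () => {
    const res = await client.callTool({
      name: 'regex_search_content',
      arguments: { path: 'regex_search_content/', regex: 'pattern', maxDepth: 0 }
    }, CallToolResultSchema);
    expect(res.isError).toBe(true);
  });

  it('reports a specific error for a non-existent path, not "No matches" (RCS-071)', async () => {
    const res = await client.callTool({
      name: 'regex_search_content',
      arguments: { path: 'regex_search_content/missing_dir/', regex: 'pattern' }
    }, CallToolResultSchema);
    expect(res.isError).toBe(true); // must not resolve to a "No matches found" text result
  });
});
```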

## 7. Setup & Teardown

Test setup and teardown are crucial for creating a controlled environment within the `test-filesystem` MCP server's accessible directory (`test/fs_root/`).

*   **Mechanism:** Use `beforeAll`, `afterAll` (and potentially `beforeEach`/`afterEach` if needed) within the test files.
*   **Actions:** These hooks will use the `test-filesystem` MCP server's *own tools* (`create_directory`, `create_file`, `delete_directory`) to manage the test environment under `regex_search_content/`. The directory `test/fs_root/regex_search_content/` is created during setup and removed after the tests complete. A minimal sketch follows this list.

*   **`beforeAll` (Executed once per test file):**
    1.  **Ensure Base Directory:** Call `test-filesystem::create_directory` with `path: 'regex_search_content/'`. This is idempotent.
    2.  **Create Test Files/Dirs:** Call `test-filesystem::create_file` multiple times to populate `regex_search_content/` with a variety of files and subdirectories needed for the tests in that specific file. Examples:
        *   `regex_search_content/file1.txt` (contains pattern A)
        *   `regex_search_content/subdir1/file2.log` (contains pattern A, pattern B)
        *   `regex_search_content/subdir1/nested/file3.txt` (contains pattern C)
        *   `regex_search_content/empty.txt`
        *   Files designed to test `maxResults`, `maxDepth`, `filePattern`, etc.
*   **`afterAll` (Executed once per test file):**
    1.  **Clean Up:** Call `test-filesystem::delete_directory` with `path: 'regex_search_content/'` and `recursive: true`. This removes all test-specific files and directories created by the corresponding `beforeAll`.
*   **`beforeEach`/`afterEach`:** Use sparingly only if a specific test requires a pristine state different from the `beforeAll` setup or needs to clean up uniquely created artifacts.
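
For example, a minimal `beforeAll`/`afterAll` pair, mirroring the existing `xml_tools` suite (fixture contents here are placeholders):

```typescript
beforeAll(async () => {
  // Idempotent: create the suite's base directory, then its fixtures.
  await client.callTool({ name: 'create_directory', arguments: { path: 'regex_search_content/' } });
  await client.callTool({
    name: 'create_file',
    arguments: { path: 'regex_search_content/file1.txt', content: 'line with unique_pattern_123\n' }
  });
  // ...additional files/subdirectories as required by this test file...
});

afterAll(async () => {
  // Remove everything the corresponding beforeAll created.
  await client.callTool({
    name: 'delete_directory',
    arguments: { path: 'regex_search_content/', recursive: true }
  });
});
```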

## 8. Execution
*   Tests will be executed using Bun with the configured test runner:
    *   **Bun test:** `bun test test/suites/regex_search_content/` (or specific file)
*   CI/CD pipelines should invoke the same `bun test` command.

```

--------------------------------------------------------------------------------
/.ai/rules/filesystem-mcp-server-usage.md:
--------------------------------------------------------------------------------

```markdown
---
description: Guide on using the filesystem MCP server, covering capabilities, permissions, security, use-cases, and efficient tool chaining. Consult before performing filesystem operations.
globs: 
alwaysApply: false
---

# Guide: Using the Filesystem MCP Server

This document provides guidance on interacting with the `mcp-filesystem` server, which facilitates secure and permission-controlled access to the local filesystem via the Model Context Protocol (MCP).

## Overview

The `mcp-filesystem` server is a Bun application that exposes filesystem operations as MCP tools. It operates within a sandboxed environment, restricting actions to pre-configured directories and enforcing specific permissions.

## Core Capabilities (Tools)

The server offers a range of tools for interacting with files and directories:

**Reading:**
*   `read_file`: Reads the entire content of a single file.
*   `read_multiple_files`: Reads content from multiple files simultaneously.

**Writing & Creation:**
*   `create_file`: Creates a new file with specified content. Requires `create` permission. Fails if the file exists.
*   `modify_file`: Overwrites an existing file with new content. Requires `edit` permission. Fails if the file doesn't exist.
*   `edit_file`: Makes targeted changes to specific parts of a text file while preserving the rest. Requires `edit` permission.

**Deletion:**
*   `delete_file`: Deletes a specific file. Requires `delete` permission.
*   `delete_directory`: Deletes a directory (potentially recursively). Requires `delete` permission.

**Moving & Renaming:**
*   `move_file`: Moves or renames a file or directory. Requires `move` permission.
*   `rename_file`: Renames a file. Requires `rename` permission.

**Listing & Exploration:**
*   `list_directory`: Lists the contents of a directory.
*   `directory_tree`: Provides a tree-like view of a directory structure.

**Searching:**
*   `search_files`: Finds files based on name patterns.
*   `find_files_by_extension`: Finds all files with a specific extension.

**Metadata:**
*   `get_file_info`: Retrieves information about a file or directory (size, type, timestamps).

**System Information:**
*   `list_allowed_directories`: Returns the list of directories that the server is allowed to access.
*   `get_permissions`: Returns the current permission state of the server.

**XML Operations:**
*   `xml_query`: Queries XML file using XPath expressions.
*   `xml_structure`: Analyzes XML file structure.
*   `xml_to_json`: Converts XML file to JSON format and optionally saves to a file.
*   `xml_to_json_string`: Converts XML file to a JSON string and returns it directly.

**JSON Operations:**
*   `json_query`: Queries JSON data using JSONPath expressions.
*   `json_structure`: Gets the structure of a JSON file.
*   `json_filter`: Filters JSON array data using flexible conditions.
*   `json_get_value`: Gets a specific value from a JSON file.
*   `json_transform`: Transforms JSON data using sequence operations.
*   `json_sample`: Samples JSON data from a JSON file.
*   `json_validate`: Validates JSON data against a JSON schema.
*   `json_search_kv`: Searches for key-value pairs in a JSON file.

## Permissions Model

Understanding the active permissions for the server instance is **critical** before attempting operations, especially write operations.

*   **Default:** If no permission flags are specified, the server operates in **read-only** mode.
*   `--readonly`: Explicitly sets read-only mode. **This flag overrides all other permission flags.**
*   `--full-access`: Grants permission for **all** operations (read, create, edit, move, rename, delete).
*   `--allow-create`: Grants permission to create files/directories.
*   `--allow-edit`: Grants permission to modify files.
*   `--allow-move`: Grants permission to move files/directories.
*   `--allow-rename`: Grants permission to rename files/directories.
*   `--allow-delete`: Grants permission to delete files/directories.

**Action:** Always check the server configuration (usually in `.cursor/mcp.json`) to identify the specific server instance being used (e.g., `mcp-test-readonly`, `filesystem`) and determine its active permissions (`--readonly`, `--full-access`, `--allow-*`) and allowed directories. **Do not assume write permissions are available.**
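
For illustration, a `.cursor/mcp.json` entry granting only create and edit permissions might look like the sketch below. The exact shape is an assumption here; see `examples/mcp_permissions.json` in this repository for an authoritative example.

```json
{
  "mcpServers": {
    "filesystem-project": {
      "command": "bun",
      "args": [
        "/path/to/mcp-filesystem/dist/index.js",
        "/path/to/allowed/project",
        "--allow-create",
        "--allow-edit"
      ]
    }
  }
}
```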

## Security Considerations

*   **Sandboxing:** All operations are strictly confined to the directories specified when the server was launched. Path traversal outside these directories is prevented.
*   **Symlinks:** By default, the server follows symbolic links (their targets must still resolve inside an allowed directory). With the `--no-follow-symlinks` flag, the server refuses to operate on or through symlinks, enhancing security. Check the server configuration.
*   **Path Validation:** Input paths are normalized and validated against the allowed directories.
*   **Large File Handling:** Always check file size with `get_file_info` before reading file contents to prevent memory issues with large files. Consider using alternative approaches for very large files, such as targeted searches or incremental processing. A sketch follows this list.
*   **Large Directory Trees:** Use extreme caution when requesting directory trees, especially for root directories or large project folders. Always use `get_file_info` first to check the directory size and entry count. For large directories (e.g., >1000 entries), prefer targeted `list_directory` operations or use search with specific patterns instead of full tree traversal.
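
To make the size-check guidance concrete, here is a sketch using the MCP TypeScript SDK client (the call style mirrors this repository's test suites; parsing the `size:` line assumes `get_file_info`'s key-value text output):

```typescript
// Sketch: verify size before reading; fall back to targeted tools for large files.
const infoRes = await client.callTool(
  { name: 'get_file_info', arguments: { path: 'project/data.json' } },
  CallToolResultSchema
);
const infoText = (CallToolResultSchema.parse(infoRes).content[0] as any).text as string;
const size = Number(/size: (\d+)/.exec(infoText)?.[1] ?? Infinity); // assumes a "size: <bytes>" line

if (size <= 10 * 1024) {
  await client.callTool(
    { name: 'read_file', arguments: { path: 'project/data.json', maxBytes: 10 * 1024 } },
    CallToolResultSchema
  );
} else {
  // Too large for a full read: prefer targeted tools such as json_query instead.
}
```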

## Common Use Cases

*   Reading configuration or data files.
*   Modifying source code within a designated project directory.
*   Creating new components or modules.
*   Searching for specific functions, variables, or text across project files.
*   Refactoring code by moving or renaming files/directories.
*   Cleaning up temporary files or build artifacts.
*   Analyzing the structure of a project directory.

## Efficient Tool Chaining & Workflows

Combine tools strategically for efficient task execution; a worked sketch follows the list:

1.  **Exploration & Reading:**
    *   Start with `list_directory` to see directory contents.
    *   **Always use `get_file_info` first** to:
        - Check if a path exists and its type (file/directory)
        - Verify file sizes before reading contents
        - Check directory entry counts before requesting trees
    *   For large files (e.g., >5MB), consider if you actually need the entire file content or if targeted operations would be more efficient.
    *   For directories:
        - Start with non-recursive `list_directory` to assess directory size
        - Only use `directory_tree` for manageable directories (<1000 entries)
        - For large directories, use targeted `list_directory` operations
        - Consider using search operations instead of full tree traversal
    *   Use `read_file` for single files or `read_multiple_files` for several files identified via listing/searching.

2.  **Searching:**
    *   Use `search_files` to locate files by name/pattern.
    *   Use `find_files_by_extension` to find files of a specific type.
    *   Follow up with `read_file` or `read_multiple_files` on the search results.

3.  **Modification (Requires Permissions):**
    *   **Verify Permissions:** Check permissions with `get_permissions` first.
    *   Use `get_file_info` to confirm the file exists before attempting modification.
    *   Use `modify_file` for simple overwrites or `edit_file` for targeted changes.
    *   Consider reading the file (`read_file`) first if the modification depends on existing content.

4.  **Creation (Requires Permissions):**
    *   **Verify Permissions:** Check permissions with `get_permissions`.
    *   Use `get_file_info` to ensure the file/directory *does not* already exist.
    *   Use `create_file` or `create_directory`.

5.  **Refactoring (Requires Permissions):**
    *   **Verify Permissions:** Check permissions with `get_permissions`.
    *   Use `list_directory` to identify targets.
    *   Use `move_file` or `rename_file`. Use `get_file_info` first to confirm the source exists and the target doesn't (if renaming/moving to a specific new name).

6.  **Deletion (Requires Permissions):**
    *   **Verify Permissions:** Check permissions with `get_permissions`.
    *   Use `get_file_info` to confirm the target exists.
    *   Use `delete_file` or `delete_directory`. Be cautious with recursive directory deletion.
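
Putting these workflows together, a search-then-read chain might look like the following sketch (paths and client setup are placeholders; the call style mirrors this repository's test suites):

```typescript
// 1. Locate candidate files by extension, with bounded depth and result count.
const found = await client.callTool(
  { name: 'find_files_by_extension', arguments: { path: 'project/src', extension: 'ts', maxDepth: 3, maxResults: 10 } },
  CallToolResultSchema
);
const foundText = (CallToolResultSchema.parse(found).content[0] as any).text as string;

// 2. Read the hits in one call with a per-file size cap.
//    When nothing matches, the tool returns a "No matching files found" message instead of a list.
if (!foundText.startsWith('No matching files')) {
  const files = foundText.split('\n');
  await client.callTool(
    { name: 'read_multiple_files', arguments: { paths: files, maxBytesPerFile: 10 * 1024 } },
    CallToolResultSchema
  );
}
```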

## Summary

Before using the filesystem server:
1.  **Identify the specific server instance** configured (e.g., in `.cursor/mcp.json`).
2.  **Check its configured allowed directories** using `list_allowed_directories`.
3.  **Check its active permissions** using `get_permissions`.
4.  **Check metadata before heavy operations:**
    - File sizes before reading contents
    - Directory entry counts before tree traversal
5.  **Choose the appropriate tool(s)** for the task.
6.  **Respect the sandbox** and permissions. Do not attempt operations known to be disallowed.
```

--------------------------------------------------------------------------------
/src/handlers/utility-handlers.ts:
--------------------------------------------------------------------------------

```typescript
import fs from 'fs/promises';
import path from 'path';
import { XMLParser } from 'fast-xml-parser';
import { Permissions } from '../config/permissions.js';
import { validatePath } from '../utils/path-utils.js';
import { parseArgs } from '../utils/schema-utils.js';
import { searchFiles, findFilesByExtension, regexSearchContent } from '../utils/file-utils.js';
import {
  GetPermissionsArgsSchema,
  SearchFilesArgsSchema,
  FindFilesByExtensionArgsSchema,
  XmlToJsonArgsSchema,
  XmlToJsonStringArgsSchema,
  RegexSearchContentArgsSchema,
  type GetPermissionsArgs,
  type SearchFilesArgs,
  type FindFilesByExtensionArgs,
  type XmlToJsonArgs,
  type XmlToJsonStringArgs,
  type RegexSearchContentArgs
} from '../schemas/utility-operations.js';

export function handleGetPermissions(
  args: unknown,
  permissions: Permissions,
  readonlyFlag: boolean,
  noFollowSymlinks: boolean,
  allowedDirectories: string[]
) {
  parseArgs(GetPermissionsArgsSchema, args, 'get_permissions');

  return {
    content: [{
      type: "text",
      text: `Current permission state:
readOnly: ${readonlyFlag}
followSymlinks: ${!noFollowSymlinks}
fullAccess: ${permissions.fullAccess}

Operations allowed:
- create: ${permissions.create}
- edit: ${permissions.edit}
- move: ${permissions.move}
- rename: ${permissions.rename}
- delete: ${permissions.delete}

Server was started with ${allowedDirectories.length} allowed ${allowedDirectories.length === 1 ? 'directory' : 'directories'}.
Use 'list_allowed_directories' to see them.`
    }],
  };
}

export async function handleSearchFiles(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(SearchFilesArgsSchema, args, 'search_files');
  const { path: startPath, pattern, excludePatterns, maxDepth, maxResults } = parsed;
  const validPath = await validatePath(startPath, allowedDirectories, symlinksMap, noFollowSymlinks);
  const results = await searchFiles(validPath, pattern, excludePatterns, maxDepth, maxResults);
  return {
    content: [{ type: "text", text: results.length > 0 ? results.join("\n") : "No matches found" }],
  };
}

export async function handleFindFilesByExtension(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(FindFilesByExtensionArgsSchema, args, 'find_files_by_extension');
  const { path: startPath, extension, excludePatterns, maxDepth, maxResults } = parsed;
  const validPath = await validatePath(startPath, allowedDirectories, symlinksMap, noFollowSymlinks);
  const results = await findFilesByExtension(
    validPath,
    extension,
    excludePatterns,
    maxDepth,
    maxResults
  );
  return {
    content: [{ type: "text", text: results.length > 0 ? results.join("\n") : "No matching files found" }],
  };
}

export async function handleXmlToJson(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(XmlToJsonArgsSchema, args, 'xml_to_json');

  const { xmlPath, jsonPath, maxBytes, maxResponseBytes, options } = parsed;
  const validXmlPath = await validatePath(xmlPath, allowedDirectories, symlinksMap, noFollowSymlinks); // Source must exist

  const validJsonPath = await validatePath(
    jsonPath,
    allowedDirectories,
    symlinksMap,
    noFollowSymlinks,
    { checkParentExists: false } // Output path's parent may not exist yet; it is created before writing
  );
  try {
    // Read the XML file (no input size gating; limit only output)
    const xmlContent = await fs.readFile(validXmlPath, "utf-8");
    
    // Parse XML to JSON
    const parserOptions = {
      ignoreAttributes: options?.ignoreAttributes ?? false,
      preserveOrder: options?.preserveOrder ?? true,
      // Add other options as needed
    };
    
    const parser = new XMLParser(parserOptions);
    const jsonObj = parser.parse(xmlContent);
    
    // Format JSON if requested
    const format = options?.format ?? true;
    const indentSize = options?.indentSize ?? 2;
    let jsonContent = format 
      ? JSON.stringify(jsonObj, null, indentSize) 
      : JSON.stringify(jsonObj);

    // Enforce response-size cap for file write by truncating content if needed
    const responseLimit = maxResponseBytes ?? maxBytes;
    if (typeof responseLimit === 'number' && responseLimit > 0) {
      const size = Buffer.byteLength(jsonContent, 'utf8');
      if (size > responseLimit) {
        // Produce a summarized payload to fit limit
        const summary = {
          _meta: {
            truncated: true,
            originalSize: size,
            note: `JSON too large to write fully; summarizing to fit ${responseLimit} bytes.`
          },
          sample: Array.isArray(jsonObj) ? jsonObj.slice(0, 3) : (typeof jsonObj === 'object' ? Object.fromEntries(Object.entries(jsonObj).slice(0, 50)) : jsonObj)
        };
        jsonContent = JSON.stringify(summary, null, indentSize);
      }
    }
    
    // Check if JSON file exists to determine if this is a create operation
    let fileExists = false;
    try {
      await fs.access(validJsonPath);
      fileExists = true;
    } catch (error) {
      // File doesn't exist - this is a create operation
    }
    
    // Enforce permission checks for writing
    if (fileExists && !permissions.edit && !permissions.fullAccess) {
      throw new Error('Cannot write to existing JSON file: edit permission not granted (requires --allow-edit)');
    }
    
    if (!fileExists && !permissions.create && !permissions.fullAccess) {
      throw new Error('Cannot create new JSON file: create permission not granted (requires --allow-create)');
    }
    
    // Write JSON to file
    // Ensure parent dir exists before writing the JSON file
    const jsonParentDir = path.dirname(validJsonPath);
    await fs.mkdir(jsonParentDir, { recursive: true }); // Ensure parent exists
    await fs.writeFile(validJsonPath, jsonContent, "utf-8");
    
    return {
      content: [{ 
        type: "text", 
        text: `Successfully converted XML from ${xmlPath} to JSON at ${jsonPath}`
      }],
    };
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : String(error);
    throw new Error(`Failed to convert XML to JSON: ${errorMessage}`);
  }
}

export async function handleXmlToJsonString(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(XmlToJsonStringArgsSchema, args, 'xml_to_json_string');

  const { xmlPath, maxBytes, maxResponseBytes, options } = parsed;
  const validXmlPath = await validatePath(xmlPath, allowedDirectories, symlinksMap, noFollowSymlinks);
  
  try {
    // Read the XML file (no input size gating; limit only output)
    const xmlContent = await fs.readFile(validXmlPath, "utf-8");
    
    // Parse XML to JSON
    const parserOptions = {
      ignoreAttributes: options?.ignoreAttributes ?? false,
      preserveOrder: options?.preserveOrder ?? true,
      // Add other options as needed
    };
    
    const parser = new XMLParser(parserOptions);
    const jsonObj = parser.parse(xmlContent);
    
    // Return the JSON as a string
    let jsonContent = JSON.stringify(jsonObj, null, 2);

    // Apply response-size cap
    const responseLimit = maxResponseBytes ?? maxBytes ?? 200 * 1024; // default 200KB
    if (typeof responseLimit === 'number' && responseLimit > 0) {
      const size = Buffer.byteLength(jsonContent, 'utf8');
      if (size > responseLimit) {
        const summary = {
          _meta: {
            truncated: true,
            originalSize: size,
            note: `JSON too large; summarizing to fit ${responseLimit} bytes.`
          },
          sample: Array.isArray(jsonObj) ? jsonObj.slice(0, 5) : (typeof jsonObj === 'object' ? Object.fromEntries(Object.entries(jsonObj).slice(0, 100)) : jsonObj)
        };
        jsonContent = JSON.stringify(summary, null, 2);
      }
    }
    
    return {
      content: [{ type: "text", text: jsonContent }],
    };
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : String(error);
    throw new Error(`Failed to convert XML to JSON: ${errorMessage}`);
  }
}

export function handleListAllowedDirectories(
  args: unknown,
  allowedDirectories: string[]
): { content: [{ type: string; text: string }] } {
  return {
    content: [{
      type: "text",
      text: `Allowed directories:\n${allowedDirectories.join('\n')}`
    }],
  };
}

export async function handleRegexSearchContent(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(RegexSearchContentArgsSchema, args, 'regex_search_content');
  const {
    path: startPath,
    regex,
    filePattern,
    maxDepth,
    maxFileSize,
    maxResults
  } = parsed;

  const validPath = await validatePath(startPath, allowedDirectories, symlinksMap, noFollowSymlinks);

  try {
    const results = await regexSearchContent(
      validPath,
      regex,
      filePattern,
      maxDepth,
      maxFileSize,
      maxResults
    );

    if (results.length === 0) {
      return { content: [{ type: "text", text: "No matches found for the given regex pattern." }] };
    }

    // Format the output
    const formattedResults = results.map(fileResult => {
      const matchesText = fileResult.matches
        .map(match => `  Line ${match.lineNumber}: ${match.lineContent.trim()}`)
        .join('\n');
      return `File: ${fileResult.path}\n${matchesText}`;
    }).join('\n\n');

    return {
      content: [{ type: "text", text: formattedResults }],
    };
  } catch (error: any) {
    // Catch errors from regexSearchContent (e.g., invalid regex)
    throw new Error(`Error during regex content search: ${error.message}`);
  }
}
```

--------------------------------------------------------------------------------
/src/handlers/file-handlers.ts:
--------------------------------------------------------------------------------

```typescript
import fs from 'fs/promises';
import { Permissions } from '../config/permissions.js';
import { validatePath } from '../utils/path-utils.js';
import { parseArgs } from '../utils/schema-utils.js';
import { getFileStats, applyFileEdits } from '../utils/file-utils.js';
import {
  ReadFileArgsSchema,
  ReadMultipleFilesArgsSchema,
  WriteFileArgsSchema,
  EditFileArgsSchema,
  GetFileInfoArgsSchema,
  MoveFileArgsSchema,
  DeleteFileArgsSchema,
  RenameFileArgsSchema,
  type ReadFileArgs,
  type ReadMultipleFilesArgs,
  type WriteFileArgs,
  type EditFileArgs,
  type GetFileInfoArgs,
  type MoveFileArgs,
  type DeleteFileArgs,
  type RenameFileArgs
} from '../schemas/file-operations.js';
import path from 'path';

export async function handleReadFile(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const { path: filePath, maxBytes } = parseArgs(ReadFileArgsSchema, args, 'read_file');
  const validPath = await validatePath(filePath, allowedDirectories, symlinksMap, noFollowSymlinks);
  
  // Check file size before reading
  const stats = await fs.stat(validPath);
  const effectiveMaxBytes = maxBytes ?? (10 * 1024); // Default 10KB
  if (stats.size > effectiveMaxBytes) {
    throw new Error(`File size (${stats.size} bytes) exceeds the maximum allowed size (${effectiveMaxBytes} bytes).`);
  }
  
  const content = await fs.readFile(validPath, "utf-8");
  return {
    content: [{ type: "text", text: content }],
  };
}

export async function handleReadMultipleFiles(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const { paths, maxBytesPerFile } = parseArgs(ReadMultipleFilesArgsSchema, args, 'read_multiple_files');
  const effectiveMaxBytes = maxBytesPerFile ?? (10 * 1024); // Default 10KB per file
  
  const results = await Promise.all(
    paths.map(async (filePath: string) => {
      try {
        const validPath = await validatePath(filePath, allowedDirectories, symlinksMap, noFollowSymlinks);
        
        // Check file size before reading
        const stats = await fs.stat(validPath);
        if (stats.size > effectiveMaxBytes) {
          return `${filePath}: Error - File size (${stats.size} bytes) exceeds the maximum allowed size (${effectiveMaxBytes} bytes).`;
        }
        
        const content = await fs.readFile(validPath, "utf-8");
        return `${filePath}:\n${content}\n`;
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : String(error);
        return `${filePath}: Error - ${errorMessage}`;
      }
    }),
  );
  return {
    content: [{ type: "text", text: results.join("\n---\n") }],
  };
}

export async function handleCreateFile(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const data = parseArgs(WriteFileArgsSchema, args, 'create_file');
  
  const validPath = await validatePath(
    data.path,
    allowedDirectories,
    symlinksMap,
    noFollowSymlinks,
    { checkParentExists: false } // New file: parent directory is created below if missing
  );
  
  // Check if file already exists before writing
  try {
    await fs.access(validPath);
    // If access succeeds, file exists
    throw new Error(`File already exists: ${data.path}`);
  } catch (error) {
     const msg = error instanceof Error ? error.message : String(error);
     if (!msg.includes('ENOENT')) { // Rethrow if it's not a "file not found" error
       throw error;
     }
     // If ENOENT, proceed with creation
     // Ensure create permission
     if (!permissions.create && !permissions.fullAccess) {
        throw new Error('Cannot create new file: create permission not granted (requires --allow-create)');
     }
     // Ensure parent directory exists
     const parentDir = path.dirname(validPath);
     await fs.mkdir(parentDir, { recursive: true });

     await fs.writeFile(validPath, data.content, "utf-8");
     return {
       content: [{ type: "text", text: `Successfully created ${data.path}` }],
     };
  }
}

export async function handleModifyFile(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const data = parseArgs(WriteFileArgsSchema, args, 'modify_file');

  const validPath = await validatePath(data.path, allowedDirectories, symlinksMap, noFollowSymlinks);
  
  // Check existence first so permission or write errors are not misreported as "file does not exist"
  try {
    await fs.access(validPath);
  } catch {
    throw new Error('Cannot modify file: file does not exist');
  }

  if (!permissions.edit && !permissions.fullAccess) {
    throw new Error('Cannot modify file: edit permission not granted (requires --allow-edit)');
  }

  await fs.writeFile(validPath, data.content, "utf-8");
  return {
    content: [{ type: "text", text: `Successfully modified ${data.path}` }],
  };
}

export async function handleEditFile(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(EditFileArgsSchema, args, 'edit_file');
  
  // Enforce permission checks
  if (!permissions.edit && !permissions.fullAccess) {
    throw new Error('Cannot edit file: edit permission not granted (requires --allow-edit)');
  }
  
  const { path: filePath, edits, dryRun, maxBytes } = parsed;
  const validPath = await validatePath(filePath, allowedDirectories, symlinksMap, noFollowSymlinks);
  
  // Check file size before attempting to read/edit
  const stats = await fs.stat(validPath);
  const effectiveMaxBytes = maxBytes ?? (10 * 1024); // Default 10KB
  if (stats.size > effectiveMaxBytes) {
    throw new Error(`File size (${stats.size} bytes) exceeds the maximum allowed size (${effectiveMaxBytes} bytes) for editing.`);
  }
  
  const result = await applyFileEdits(validPath, edits, dryRun);
  return {
    content: [{ type: "text", text: result }],
  };
}

export async function handleGetFileInfo(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(GetFileInfoArgsSchema, args, 'get_file_info');
  const validPath = await validatePath(parsed.path, allowedDirectories, symlinksMap, noFollowSymlinks);
  const info = await getFileStats(validPath);
  return {
    content: [{ type: "text", text: Object.entries(info)
      .map(([key, value]) => `${key}: ${value}`)
      .join("\n") }],
  };
}

export async function handleMoveFile(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(MoveFileArgsSchema, args, 'move_file');
  
  // Enforce permission checks
  if (!permissions.move && !permissions.fullAccess) {
    throw new Error('Cannot move file: move permission not granted (requires --allow-move)');
  }
  
  const validSourcePath = await validatePath(parsed.source, allowedDirectories, symlinksMap, noFollowSymlinks); // No option here, source must exist

  const validDestPath = await validatePath(
    parsed.destination,
    allowedDirectories,
    symlinksMap,
    noFollowSymlinks,
    { checkParentExists: false } // Destination itself may not exist yet; its parent is verified below
  );
  // Ensure destination parent exists before moving (fs.rename requires parent)
  const destParentDir = path.dirname(validDestPath);
  try {
      await fs.access(destParentDir);
  } catch {
      throw new Error(`Destination parent directory does not exist: ${path.dirname(parsed.destination)}`);
  }

  await fs.rename(validSourcePath, validDestPath);
  return {
    content: [{ type: "text", text: `Successfully moved ${parsed.source} to ${parsed.destination}` }],
  };
}

export async function handleDeleteFile(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(DeleteFileArgsSchema, args, 'delete_file');
  
  // Enforce permission checks
  if (!permissions.delete && !permissions.fullAccess) {
    throw new Error('Cannot delete file: delete permission not granted (requires --allow-delete)');
  }
  
  const validPath = await validatePath(parsed.path, allowedDirectories, symlinksMap, noFollowSymlinks);
  
  try {
    // Check if file exists
    await fs.access(validPath);
    await fs.unlink(validPath);
    return {
      content: [{ type: "text", text: `Successfully deleted ${parsed.path}` }],
    };
  } catch (error) {
    throw new Error(`Failed to delete file: ${error instanceof Error ? error.message : String(error)}`);
  }
}

export async function handleRenameFile(
  args: unknown,
  permissions: Permissions,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
) {
  const parsed = parseArgs(RenameFileArgsSchema, args, 'rename_file');
  
  // Enforce permission checks - rename requires the rename permission
  if (!permissions.rename && !permissions.fullAccess) {
    throw new Error('Cannot rename file: rename permission not granted (requires --allow-rename)');
  }
  
  const validSourcePath = await validatePath(parsed.path, allowedDirectories, symlinksMap, noFollowSymlinks);
  
  // Get the directory from the source path
  const directory = path.dirname(validSourcePath);
  
  // Create the destination path using the same directory and the new name
  const destinationPath = path.join(directory, parsed.newName);
  
  // Validate the destination path
  const validDestPath = await validatePath(destinationPath, allowedDirectories, symlinksMap, noFollowSymlinks);
  
  // Check if destination already exists
  try {
    await fs.access(validDestPath);
    throw new Error(`Cannot rename file: a file with name "${parsed.newName}" already exists in the directory`);
  } catch (error) {
    // We want this error - it means the destination doesn't exist yet
    if ((error as NodeJS.ErrnoException).code !== 'ENOENT') {
      throw error;
    }
  }
  
  // Perform the rename operation
  await fs.rename(validSourcePath, validDestPath);

  return {
    content: [{ type: "text", text: `Successfully renamed ${parsed.path} to ${parsed.newName}` }],
  };
}

```
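
For orientation, here is a minimal sketch of driving these handlers directly, outside the MCP transport. It assumes the exports above and the permissions shape used in `index.ts`; the `/tmp/sandbox` paths are illustrative, not part of the repo.

```typescript
// Sketch only: exercises handleCreateFile and handleEditFile directly.
// The sandbox path and permissions object are illustrative assumptions.
import {
  handleCreateFile,
  handleEditFile,
} from "./src/handlers/file-handlers.js";

const allowedDirectories = ["/tmp/sandbox"];
const symlinksMap = new Map<string, string>();
const permissions = {
  create: true, edit: true, move: false,
  delete: false, rename: false, fullAccess: false,
};

// create_file only writes when the target does not already exist.
await handleCreateFile(
  { path: "/tmp/sandbox/note.txt", content: "hello\n" },
  permissions, allowedDirectories, symlinksMap, false,
);

// edit_file with dryRun returns a unified diff without touching the file.
const preview = await handleEditFile(
  {
    path: "/tmp/sandbox/note.txt",
    edits: [{ oldText: "hello", newText: "hi" }],
    dryRun: true,
  },
  permissions, allowedDirectories, symlinksMap, false,
);
console.log(preview.content[0].text);
```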

--------------------------------------------------------------------------------
/src/handlers/xml-handlers.ts:
--------------------------------------------------------------------------------

```typescript
import fs from 'fs/promises';
import * as xpath from 'xpath';
import { DOMParser as XmldomDOMParser } from '@xmldom/xmldom';
import { validatePath } from '../utils/path-utils.js';
import { parseArgs } from '../utils/schema-utils.js';
import {
  XmlQueryArgsSchema,
  XmlStructureArgsSchema,
  type XmlQueryArgs,
  type XmlStructureArgs
} from '../schemas/utility-operations.js';

// Define interfaces for type safety
interface XmlNode {
  type: 'element' | 'text' | 'attribute' | 'unknown';
  name?: string;
  value?: string;
  attributes?: Array<{ name: string; value: string }>;
  children?: XmlNode[];
  nodeType?: number;
}

interface HierarchyNode {
  name: string;
  hasChildren?: boolean;
  children?: HierarchyNode[];
}

interface XmlStructureInfo {
  rootElement: string | undefined;
  elements: Record<string, number>;
  attributes?: Record<string, number>;
  namespaces: Record<string, string>;
  hierarchy?: HierarchyNode;
}

/**
 * Handler for executing XPath queries on XML files
 */
export async function handleXmlQuery(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
): Promise<{ content: Array<{ type: string; text: string }> }> {
  const parsed = parseArgs(XmlQueryArgsSchema, args, 'xml_query');

  const validPath = await validatePath(
    parsed.path,
    allowedDirectories,
    symlinksMap,
    noFollowSymlinks
  );

  try {
    const xmlContent = await fs.readFile(validPath, 'utf8');

    try {
      const responseLimit =
        (parsed as any).maxResponseBytes ?? parsed.maxBytes ?? 200 * 1024; // 200KB default
      const result = processXmlContent(
        xmlContent,
        parsed.query,
        parsed.structureOnly,
        parsed.includeAttributes,
        responseLimit
      );
      return result;
    } catch (err) {
      const errorMessage = err instanceof Error ? err.message : String(err);
      throw new Error(`Failed to process XML: ${errorMessage}`);
    }
  } catch (err) {
    const errorMessage = err instanceof Error ? err.message : String(err);
    throw new Error(`Failed to query XML file: ${errorMessage}`);
  }
}

/**
 * Handler for extracting XML structure information
 */
export async function handleXmlStructure(
  args: unknown,
  allowedDirectories: string[],
  symlinksMap: Map<string, string>,
  noFollowSymlinks: boolean
): Promise<{ content: Array<{ type: string; text: string }> }> {
  const parsed = parseArgs(XmlStructureArgsSchema, args, 'xml_structure');

  const validPath = await validatePath(
    parsed.path,
    allowedDirectories,
    symlinksMap,
    noFollowSymlinks
  );

  try {
    const xmlContent = await fs.readFile(validPath, 'utf8');

    try {
      const parser = new XmldomDOMParser();
      const doc: any = parser.parseFromString(xmlContent, 'text/xml');
      const structure = extractXmlStructure(
        doc,
        parsed.maxDepth,
        parsed.includeAttributes
      );

      const responseLimit = (parsed as any).maxResponseBytes ?? parsed.maxBytes ?? 200 * 1024; // 200KB default
      let json = JSON.stringify(structure, null, 2);

      if (typeof responseLimit === 'number' && responseLimit > 0) {
        const size = Buffer.byteLength(json, 'utf8');
        if (size > responseLimit) {
          // Fallback to a summarized structure to respect response limit
          const summary = {
            rootElement: structure.rootElement,
            namespaces: structure.namespaces,
            elementTypeCount: Object.keys(structure.elements).length,
            attributeKeyCount: structure.attributes ? Object.keys(structure.attributes).length : 0,
            hierarchy: structure.hierarchy ? { name: structure.hierarchy.name, hasChildren: structure.hierarchy.hasChildren, childrenCount: structure.hierarchy.children?.length ?? 0 } : undefined,
            _meta: {
              truncated: true,
              note: `Full structure omitted to fit response limit of ${responseLimit} bytes`
            }
          };
          json = JSON.stringify(summary, null, 2);
        }
      }

      return {
        content: [{
          type: 'text',
          text: json
        }]
      };
    } catch (err) {
      const errorMessage = err instanceof Error ? err.message : String(err);
      throw new Error(`Failed to extract XML structure: ${errorMessage}`);
    }
  } catch (err) {
    const errorMessage = err instanceof Error ? err.message : String(err);
    throw new Error(`Failed to analyze XML structure: ${errorMessage}`);
  }
}

/**
 * Process XML content with XPath or structure analysis
 */
function processXmlContent(
  xmlContent: string,
  query?: string,
  structureOnly = false,
  includeAttributes = true,
  maxResponseBytes?: number
): { content: Array<{ type: string; text: string }> } {
  const parser = new XmldomDOMParser();
  const doc: any = parser.parseFromString(xmlContent, 'text/xml');

  if (structureOnly) {
    // Extract only structure information
    const tags = new Set<string>();
    const structureQuery = "//*";
    const nodes = xpath.select(structureQuery, doc as any);
    
    if (!Array.isArray(nodes)) {
      throw new Error('Unexpected XPath result type');
    }

    nodes.forEach((node: Node) => {
      if (node.nodeName) {
        tags.add(node.nodeName);
      }
    });

    const base = {
      tags: Array.from(tags),
      count: nodes.length,
      rootElement: doc.documentElement?.nodeName
    };

    let json = JSON.stringify(base, null, 2);
    if (typeof maxResponseBytes === 'number' && maxResponseBytes > 0) {
      if (Buffer.byteLength(json, 'utf8') > maxResponseBytes) {
        // Trim tags list progressively until it fits
        const all = base.tags;
        let lo = 0;
        let hi = all.length;
        let best = 0;
        while (lo <= hi) {
          const mid = Math.floor((lo + hi) / 2);
          const candidate = { ...base, tags: all.slice(0, mid) };
          const s = JSON.stringify(candidate, null, 2);
          if (Buffer.byteLength(s, 'utf8') <= maxResponseBytes) {
            best = mid;
            lo = mid + 1;
          } else {
            hi = mid - 1;
          }
        }
        const truncated = {
          ...base,
          tags: all.slice(0, best),
          _meta: {
            truncated: true,
            omittedTagCount: all.length - best
          }
        } as const;
        json = JSON.stringify(truncated, null, 2);
      }
    }

    return {
      content: [{ type: 'text', text: json }]
    };
  } else if (query) {
    // Execute specific XPath query
    const nodes = xpath.select(query, doc as any);

    const asArray: any[] = Array.isArray(nodes) ? nodes as any[] : [nodes as any];
    const results: XmlNode[] = [];
    let omittedCount = 0;
    let currentJson = JSON.stringify(results, null, 2);
    const limit = typeof maxResponseBytes === 'number' && maxResponseBytes > 0 ? maxResponseBytes : undefined;

    for (let i = 0; i < asArray.length; i++) {
      const formatted = formatNode(asArray[i] as any, includeAttributes);
      const tentative = [...results, formatted];
      const serialized = JSON.stringify(tentative, null, 2);
      if (limit && Buffer.byteLength(serialized, 'utf8') > limit) {
        omittedCount = asArray.length - i;
        break;
      }
      results.push(formatted);
      currentJson = serialized;
    }

    if (omittedCount > 0) {
      const meta = { type: 'meta', value: `truncated: omitted ${omittedCount} result(s)` } as const;
      const tentative = [...results, meta as any];
      const serialized = JSON.stringify(tentative, null, 2);
      if (!limit || Buffer.byteLength(serialized, 'utf8') <= limit) {
        currentJson = serialized;
      }
    }

    return {
      content: [{ type: 'text', text: currentJson }]
    };
  } else {
    throw new Error('Either structureOnly or query must be specified');
  }
}

/**
 * Format a DOM node for output
 */
function formatNode(node: Node | string | number | boolean | null | undefined, includeAttributes = true): XmlNode {
  if (typeof node === 'string' || typeof node === 'number' || typeof node === 'boolean') {
    return { type: 'text', value: String(node) };
  }

  if (!node || typeof node !== 'object' || !('nodeType' in node)) {
    return { type: 'unknown', value: String(node) };
  }

  // Text node
  if (node.nodeType === 3) {
    return {
      type: 'text',
      value: node.nodeValue?.trim()
    };
  }

  // Element node
  if (node.nodeType === 1) {
    const element = node as Element;
    const result: XmlNode = {
      type: 'element',
      name: element.nodeName,
      value: element.textContent?.trim()
    };

    if (includeAttributes && element.attributes && element.attributes.length > 0) {
      result.attributes = Array.from(element.attributes).map((attr) => ({
        name: attr.nodeName,
        value: attr.nodeValue ?? ''
      }));
    }

    return result;
  }

  // Attribute node
  if (node.nodeType === 2) {
    return {
      type: 'attribute',
      name: (node as Attr).nodeName,
      value: (node as Attr).nodeValue ?? ''
    };
  }

  return {
    type: 'unknown',
    nodeType: node.nodeType,
    value: node.toString()
  };
}

/**
 * Extract structured information about XML document
 */
function extractXmlStructure(doc: any, maxDepth = 2, includeAttributes = true): XmlStructureInfo {
  const structure: XmlStructureInfo = {
    rootElement: doc.documentElement?.nodeName,
    elements: {},
    attributes: includeAttributes ? {} : undefined,
    namespaces: extractNamespaces(doc),
  };

  // Get all element names and counts
  const elementQuery = "//*";
  const elements = xpath.select(elementQuery, doc) as any[];

  elements.forEach((element) => {
    const el = element as any;
    const name = el.nodeName;
    structure.elements[name] = (structure.elements[name] || 0) + 1;

    if (includeAttributes && el.attributes && el.attributes.length > 0) {
      for (let i = 0; i < el.attributes.length; i++) {
        const attr = el.attributes[i];
        const attrKey = `${name}@${attr.nodeName}`;
        if (structure.attributes) {
          structure.attributes[attrKey] = (structure.attributes[attrKey] || 0) + 1;
        }
      }
    }
  });

  // Get child relationship structure up to maxDepth
  if (maxDepth > 0 && doc.documentElement) {
    structure.hierarchy = buildHierarchy(doc.documentElement, maxDepth);
  }

  return structure;
}

/**
 * Extract namespaces used in the document
 */
function extractNamespaces(doc: any) {
  const namespaces: Record<string, string> = {};
  const nsQuery = "//*[namespace-uri()]";

  try {
    const nsNodes = xpath.select(nsQuery, doc) as any[];
    nsNodes.forEach((node) => {
      const el = node as any;
      if (el.namespaceURI) {
        const prefix = el.prefix || '';
        namespaces[prefix] = el.namespaceURI;
      }
    });
  } catch (err) {
    // Some documents might not support namespace queries
    console.error('Error extracting namespaces:', err instanceof Error ? err.message : String(err));
  }

  return namespaces;
}

/**
 * Build element hierarchy up to maxDepth
 */
function buildHierarchy(element: any, maxDepth: number, currentDepth = 0): HierarchyNode {
  if (currentDepth >= maxDepth) {
    return { name: element.nodeName, hasChildren: element.childNodes.length > 0 };
  }

  const result: HierarchyNode = {
    name: element.nodeName,
    children: []
  };

  // Only process element nodes (type 1)
  const childElements = Array.from(element.childNodes as any[])
    .filter((node: any) => node && node.nodeType === 1) as any[];

  if (childElements.length > 0) {
    const processedChildren = new Set<string>();

    childElements.forEach((child: any) => {
      // Only add unique child element types
      if (!processedChildren.has(child.nodeName)) {
        processedChildren.add(child.nodeName);
        result.children!.push(
          buildHierarchy(child, maxDepth, currentDepth + 1)
        );
      }
    });
  }

  return result;
}

```
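
As a usage sketch (paths hypothetical), both handlers can be called directly; each caps its JSON payload at roughly 200KB unless a `maxBytes`/`maxResponseBytes` argument is supplied.

```typescript
// Sketch only: XPath query and structure analysis against a local XML file.
// '/tmp/sandbox/feed.xml' is an illustrative path.
import { handleXmlQuery, handleXmlStructure } from "./src/handlers/xml-handlers.js";

const allowedDirectories = ["/tmp/sandbox"];
const symlinksMap = new Map<string, string>();

// XPath query: results are appended until the serialized JSON would
// exceed the response limit, then a truncation marker is added.
const items = await handleXmlQuery(
  { path: "/tmp/sandbox/feed.xml", query: "//item", includeAttributes: true },
  allowedDirectories, symlinksMap, false,
);

// Structure overview: element/attribute counts, namespaces, and a
// hierarchy bounded by maxDepth.
const structure = await handleXmlStructure(
  { path: "/tmp/sandbox/feed.xml", maxDepth: 2, includeAttributes: true },
  allowedDirectories, symlinksMap, false,
);
console.log(items.content[0].text, structure.content[0].text);
```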

--------------------------------------------------------------------------------
/src/utils/file-utils.ts:
--------------------------------------------------------------------------------

```typescript
import fsPromises from 'fs/promises';
import { createReadStream, Stats } from 'fs';
import * as readline from 'readline';
import type { ReadonlyDeep } from 'type-fest';
import { createTwoFilesPatch } from 'diff';
import { minimatch } from 'minimatch';
import path from 'path';

export interface FileInfo {
  size: number;
  created: Date;
  modified: Date;
  accessed: Date;
  isDirectory: boolean;
  isFile: boolean;
  permissions: string;
}

export type ImmutableFileInfo = ReadonlyDeep<FileInfo>;

export async function getFileStats(filePath: string): Promise<ImmutableFileInfo> {
  const stats = await fsPromises.stat(filePath);
  return {
    size: stats.size,
    created: stats.birthtime,
    modified: stats.mtime,
    accessed: stats.atime,
    isDirectory: stats.isDirectory(),
    isFile: stats.isFile(),
    permissions: stats.mode.toString(8).slice(-3),
  };
}

export async function searchFiles(
  rootPath: string,
  pattern: string,
  excludePatterns: string[] = [],
  maxDepth: number = 2, // Default depth
  maxResults: number = 10 // Default results
): Promise<ReadonlyArray<string>> {
  const results: string[] = [];

  async function search(currentPath: string, currentDepth: number) {
    // Stop if max depth is reached
    if (currentDepth >= maxDepth) {
      return;
    }
    
    // Stop if max results are reached
    if (results.length >= maxResults) {
      return;
    }
    const entries = await fsPromises.readdir(currentPath, { withFileTypes: true });

    for (const entry of entries) {
      const fullPath = path.join(currentPath, entry.name);

      // Check if path matches any exclude pattern
      const relativePath = path.relative(rootPath, fullPath);
      const shouldExclude = excludePatterns.some(excludePattern => {
        const globPattern = excludePattern.includes('*') ? excludePattern : `**/${excludePattern}/**`;
        return minimatch(relativePath, globPattern, { dot: true });
      });

      if (shouldExclude) {
        continue;
      }

      if (entry.name.toLowerCase().includes(pattern.toLowerCase())) {
        if (results.length < maxResults) {
          results.push(fullPath);
        }
        // Check again if max results reached after adding
        if (results.length >= maxResults) {
          return; // Stop searching this branch
        }
      }

      if (entry.isDirectory()) {
        // Check results length before recursing
        if (results.length < maxResults) {
          await search(fullPath, currentDepth + 1);
        }
      }
    }
  }

  await search(rootPath, 0); // Start search at depth 0
  return results;
}

export async function findFilesByExtension(
  rootPath: string,
  extension: string,
  excludePatterns: string[] = [],
  maxDepth: number = 2, // Default depth
  maxResults: number = 10 // Default results
): Promise<ReadonlyArray<string>> {
  const results: string[] = [];
  
  // Normalize the extension (remove leading dot if present)
  let normalizedExtension = extension.toLowerCase();
  if (normalizedExtension.startsWith('.')) {
    normalizedExtension = normalizedExtension.substring(1);
  }
  
  async function searchDirectory(currentPath: string, currentDepth: number) {
    // Stop if max depth is reached
    if (currentDepth >= maxDepth) {
      return;
    }
    
    // Stop if max results are reached
    if (results.length >= maxResults) {
      return;
    }
    const entries = await fsPromises.readdir(currentPath, { withFileTypes: true });

    for (const entry of entries) {
      const fullPath = path.join(currentPath, entry.name);

      // Check if path matches any exclude pattern
      const relativePath = path.relative(rootPath, fullPath);
      const shouldExclude = excludePatterns.some(pattern => {
        const globPattern = pattern.includes('*') ? pattern : `**/${pattern}/**`;
        return minimatch(relativePath, globPattern, { dot: true });
      });

      if (shouldExclude) {
        continue;
      }

      if (entry.isFile()) {
        // Check if file has the requested extension
        const fileExtension = path.extname(entry.name).toLowerCase().substring(1);
        if (fileExtension === normalizedExtension) {
          if (results.length < maxResults) {
            results.push(fullPath);
          }
          // Check again if max results reached after adding
          if (results.length >= maxResults) {
            return; // Stop searching this branch
          }
        }
      } else if (entry.isDirectory()) {
        // Recursively search subdirectories
        // Check results length before recursing
        if (results.length < maxResults) {
          await searchDirectory(fullPath, currentDepth + 1);
        }
      }
    }
  }

  await searchDirectory(rootPath, 0); // Start search at depth 0
  return results;
}

export function normalizeLineEndings(text: string): string {
  return text.replace(/\r\n/g, '\n');
}

export function createUnifiedDiff(originalContent: string, newContent: string, filepath: string = 'file'): string {
  // Ensure consistent line endings for diff
  const normalizedOriginal = normalizeLineEndings(originalContent);
  const normalizedNew = normalizeLineEndings(newContent);

  return createTwoFilesPatch(
    filepath,
    filepath,
    normalizedOriginal,
    normalizedNew,
    'original',
    'modified'
  );
}

export async function applyFileEdits(
  filePath: string,
  edits: ReadonlyArray<ReadonlyDeep<{ oldText: string; newText: string }>>,
  dryRun = false
): Promise<string> {
  // Read file content and normalize line endings
  const content = normalizeLineEndings(await fsPromises.readFile(filePath, 'utf-8'));

  // Apply edits sequentially
  let modifiedContent = content;
  for (const edit of edits) {
    const normalizedOld = normalizeLineEndings(edit.oldText);
    const normalizedNew = normalizeLineEndings(edit.newText);

    // If exact match exists, use it
    if (modifiedContent.includes(normalizedOld)) {
      modifiedContent = modifiedContent.replace(normalizedOld, normalizedNew);
      continue;
    }

    // Otherwise, try line-by-line matching with flexibility for whitespace
    const oldLines = normalizedOld.split('\n');
    const contentLines = modifiedContent.split('\n');
    let matchFound = false;

    for (let i = 0; i <= contentLines.length - oldLines.length; i++) {
      const potentialMatch = contentLines.slice(i, i + oldLines.length);

      // Compare lines with normalized whitespace
      const isMatch = oldLines.every((oldLine, j) => {
        const contentLine = potentialMatch[j];
        return oldLine.trim() === contentLine.trim();
      });

      if (isMatch) {
        // Preserve original indentation of first line
        const originalIndent = contentLines[i].match(/^\s*/)?.[0] || '';
        const newLines = normalizedNew.split('\n').map((line, j) => {
          if (j === 0) return originalIndent + line.trimStart();
          // For subsequent lines, try to preserve relative indentation
          const oldIndent = oldLines[j]?.match(/^\s*/)?.[0] || '';
          const newIndent = line.match(/^\s*/)?.[0] || '';
          if (oldIndent && newIndent) {
            const relativeIndent = newIndent.length - oldIndent.length;
            return originalIndent + ' '.repeat(Math.max(0, relativeIndent)) + line.trimStart();
          }
          return line;
        });

        contentLines.splice(i, oldLines.length, ...newLines);
        modifiedContent = contentLines.join('\n');
        matchFound = true;
        break;
      }
    }

    if (!matchFound) {
      throw new Error(`Could not find exact match for edit:\n${edit.oldText}`);
    }
  }

  // Create unified diff
  const diff = createUnifiedDiff(content, modifiedContent, filePath);

  // Format diff with appropriate number of backticks
  let numBackticks = 3;
  while (diff.includes('`'.repeat(numBackticks))) {
    numBackticks++;
  }
  const formattedDiff = `${'`'.repeat(numBackticks)}diff\n${diff}${'`'.repeat(numBackticks)}\n\n`;

  if (!dryRun) {
    await fsPromises.writeFile(filePath, modifiedContent, 'utf-8');
  }

  return formattedDiff;
}

export type RegexSearchResult = ReadonlyDeep<{
  path: string;
  matches: Array<{
    lineNumber: number;
    lineContent: string;
  }>;
}>;

export async function regexSearchContent(
  rootPath: string,
  regexPattern: string,
  filePattern: string = '*',
  maxDepth: number = 2,
  maxFileSize: number = 10 * 1024 * 1024, // 10MB default
  maxResults: number = 50
): Promise<ReadonlyArray<RegexSearchResult>> {
  const results: RegexSearchResult[] = [];
  let regex: RegExp;

  try {
    regex = new RegExp(regexPattern, 'g'); // Global flag to find all matches
  } catch (error: any) {
    throw new Error(`Invalid regex pattern provided: ${error.message}`);
  }

  async function search(currentPath: string, currentDepth: number) {
    if (currentDepth >= maxDepth || results.length >= maxResults) {
      return;
    }

    let entries;
    try {
      entries = await fsPromises.readdir(currentPath, { withFileTypes: true });
    } catch (error: any) {
      console.warn(`Skipping directory ${currentPath}: ${error.message}`);
      return; // Skip directories we can't read
    }

    for (const entry of entries) {
      if (results.length >= maxResults) return; // Check results limit again

      const fullPath = path.join(currentPath, entry.name);
      const relativePath = path.relative(rootPath, fullPath);

      if (entry.isDirectory()) {
        await search(fullPath, currentDepth + 1);
      } else if (entry.isFile()) {
        // Check if the file matches the filePattern glob. The pattern is
        // matched against the path relative to rootPath (matchBase is
        // deliberately not enabled), so nested files need a '**/' prefix.
        if (!minimatch(relativePath, filePattern, { dot: true })) {
          continue;
        }

        try {
          const stats = await fsPromises.stat(fullPath);
          if (stats.size > maxFileSize) {
            console.warn(`Skipping large file ${fullPath}: size ${stats.size} > max ${maxFileSize}`);
            continue;
          }

          // Use streaming approach for large files
          const fileStream = createReadStream(fullPath, { encoding: 'utf-8' });
          const rl = readline.createInterface({
            input: fileStream,
            crlfDelay: Infinity, // Handle different line endings
          });

          const fileMatches: { lineNumber: number; lineContent: string }[] = [];
          let currentLineNumber = 0;

          // Wrap readline processing in a promise
          await new Promise<void>((resolve, reject) => {
            rl.on('line', (line) => {
              currentLineNumber++;
              // Reset regex lastIndex before each test if using global flag
              regex.lastIndex = 0;
              if (regex.test(line)) {
                fileMatches.push({ lineNumber: currentLineNumber, lineContent: line });
              }
            });

            rl.on('close', () => {
              resolve();
            });

            rl.on('error', (err) => {
              // Don't reject, just warn and resolve to continue processing other files
              console.warn(`Error reading file ${fullPath}: ${err.message}`);
              resolve();
            });

            fileStream.on('error', (err) => {
              // Handle stream errors (e.g., file not found during read)
               console.warn(`Error reading file stream ${fullPath}: ${err.message}`);
               resolve(); // Resolve to allow processing to continue
            });
          });

          if (fileMatches.length > 0) {
            if (results.length < maxResults) {
              results.push({ path: fullPath, matches: fileMatches });
            }
            if (results.length >= maxResults) return; // Stop searching this branch
          }
        } catch (error: any) {
          console.warn(`Skipping file ${fullPath}: ${error.message}`);
          // Continue searching other files even if one fails
        }
      }
    }
  }

  await search(rootPath, 0);
  return results;
}
```
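
A small sketch of the two most involved utilities, again with hypothetical paths: `applyFileEdits` in dry-run mode returns a fenced unified diff, and `regexSearchContent` matches its glob against paths relative to the search root.

```typescript
// Sketch only; '/tmp/sandbox' is an illustrative root.
import { applyFileEdits, regexSearchContent } from "./src/utils/file-utils.js";

// Dry run: computes and returns the diff but leaves the file unchanged.
const diff = await applyFileEdits(
  "/tmp/sandbox/config.ts",
  [{ oldText: "const debug = false;", newText: "const debug = true;" }],
  true,
);
console.log(diff);

// The file pattern is matched against the path relative to the root,
// so '**/*.ts' is needed to reach nested files; maxDepth 2 scans the
// root and one level of subdirectories.
const hits = await regexSearchContent("/tmp/sandbox", "TODO|FIXME", "**/*.ts", 2);
for (const hit of hits) {
  for (const m of hit.matches) {
    console.log(`${hit.path}:${m.lineNumber}: ${m.lineContent}`);
  }
}
```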

--------------------------------------------------------------------------------
/index.ts:
--------------------------------------------------------------------------------

```typescript
#!/usr/bin/env bun

import { FastMCP } from "fastmcp";
import fs from "fs/promises";
import path from "path";
import {
  expandHome,
  normalizePath,
  validatePath,
} from "./src/utils/path-utils.js";
import { toolSchemas } from "./src/schemas/index.js";
import { toZodParameters } from "./src/utils/typebox-zod.js";
import {
  handleReadFile,
  handleReadMultipleFiles,
  handleCreateFile,
  handleModifyFile,
  handleEditFile,
  handleGetFileInfo,
  handleMoveFile,
  handleDeleteFile,
  handleRenameFile,
} from "./src/handlers/file-handlers.js";
import {
  handleCreateDirectory,
  handleListDirectory,
  handleDirectoryTree,
  handleDeleteDirectory,
} from "./src/handlers/directory-handlers.js";
import {
  handleSearchFiles,
  handleFindFilesByExtension,
  handleGetPermissions,
  handleXmlToJson,
  handleXmlToJsonString,
  handleListAllowedDirectories,
  handleRegexSearchContent,
} from "./src/handlers/utility-handlers.js";
import {
  handleXmlQuery,
  handleXmlStructure,
} from "./src/handlers/xml-handlers.js";
import {
  handleJsonQuery,
  handleJsonFilter,
  handleJsonGetValue,
  handleJsonTransform,
  handleJsonStructure,
  handleJsonSample,
  handleJsonValidate,
  handleJsonSearchKv,
} from "./src/handlers/json-handlers.js";

// parse command line
const args = process.argv.slice(2);
const readonlyFlag = args.includes("--readonly");
const noFollowSymlinks = args.includes("--no-follow-symlinks");
const fullAccessFlag = args.includes("--full-access");
const allowCreate = args.includes("--allow-create");
const allowEdit = args.includes("--allow-edit");
const allowMove = args.includes("--allow-move");
const allowDelete = args.includes("--allow-delete");
const allowRename = args.includes("--allow-rename");
const httpFlagIndex = args.indexOf("--http");
const useHttp = httpFlagIndex !== -1;
if (useHttp) args.splice(httpFlagIndex, 1);
let port = 8080;
const portIndex = args.indexOf("--port");
if (portIndex !== -1) {
  const parsedPort = parseInt(args[portIndex + 1], 10);
  if (Number.isNaN(parsedPort)) {
    console.error(`Invalid --port value: ${args[portIndex + 1]}`);
    process.exit(1);
  }
  port = parsedPort;
  args.splice(portIndex, 2);
}

if (readonlyFlag) args.splice(args.indexOf("--readonly"), 1);
if (noFollowSymlinks) args.splice(args.indexOf("--no-follow-symlinks"), 1);
if (fullAccessFlag) args.splice(args.indexOf("--full-access"), 1);
if (allowCreate) args.splice(args.indexOf("--allow-create"), 1);
if (allowEdit) args.splice(args.indexOf("--allow-edit"), 1);
if (allowMove) args.splice(args.indexOf("--allow-move"), 1);
if (allowDelete) args.splice(args.indexOf("--allow-delete"), 1);
if (allowRename) args.splice(args.indexOf("--allow-rename"), 1);

const useCwdFlag = args.includes("--cwd");
if (useCwdFlag) {
  args.splice(args.indexOf("--cwd"), 1);
}

// If no explicit allowed directories, use cwd if --cwd was passed or no directory was passed at all
let allowedDirectories: string[];
if (args.length === 0 || useCwdFlag) {
  allowedDirectories = [normalizePath(process.cwd())];
} else {
  allowedDirectories = args.map((dir) =>
    normalizePath(path.resolve(expandHome(dir))),
  );
}

if (!useCwdFlag && args.length === 0) {
  console.warn(
    "No allowed directory specified. Using current working directory as root.",
  );
  console.warn(
    "Usage: mcp-server-filesystem [flags] <allowed-directory> [additional-directories...]\n       mcp-server-filesystem --cwd [flags]",
  );
}

const symlinksMap = new Map<string, string>();

await Promise.all(
  allowedDirectories.map(async (dir) => {
    try {
      const stats = await fs.stat(dir);
      if (!stats.isDirectory()) {
        console.error(`Error: ${dir} is not a directory`);
        process.exit(1);
      }
      try {
        const realPath = await fs.realpath(dir);
        if (realPath !== dir) {
          const normalizedDir = normalizePath(path.resolve(expandHome(dir)));
          const normalizedRealPath = normalizePath(realPath);
          symlinksMap.set(normalizedRealPath, normalizedDir);
          if (!allowedDirectories.includes(normalizedRealPath))
            allowedDirectories.push(normalizedRealPath);
          await validatePath(
            normalizedRealPath,
            allowedDirectories,
            symlinksMap,
            noFollowSymlinks,
          );
        }
        await validatePath(
          dir,
          allowedDirectories,
          symlinksMap,
          noFollowSymlinks,
        );
      } catch (error) {
        console.error(
          `Warning: Could not resolve real path for ${dir}:`,
          error,
        );
      }
    } catch (error) {
      console.error(`Error accessing directory ${dir}:`, error);
      process.exit(1);
    }
  }),
);

const permissions = {
  create: !readonlyFlag && (fullAccessFlag || allowCreate),
  edit: !readonlyFlag && (fullAccessFlag || allowEdit),
  move: !readonlyFlag && (fullAccessFlag || allowMove),
  delete: !readonlyFlag && (fullAccessFlag || allowDelete),
  rename: !readonlyFlag && (fullAccessFlag || allowRename),
  fullAccess: !readonlyFlag && fullAccessFlag,
};

const server = new FastMCP({
  name: "secure-filesystem-server",
  version: "0.2.0",
});

const toolHandlers = {
  read_file: (a: unknown) =>
    handleReadFile(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  read_multiple_files: (a: unknown) =>
    handleReadMultipleFiles(
      a,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  create_file: (a: unknown) =>
    handleCreateFile(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  modify_file: (a: unknown) =>
    handleModifyFile(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  edit_file: (a: unknown) =>
    handleEditFile(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  create_directory: (a: unknown) =>
    handleCreateDirectory(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  list_directory: (a: unknown) =>
    handleListDirectory(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  directory_tree: (a: unknown) =>
    handleDirectoryTree(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  move_file: (a: unknown) =>
    handleMoveFile(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  rename_file: (a: unknown) =>
    handleRenameFile(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  delete_directory: (a: unknown) =>
    handleDeleteDirectory(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  search_files: (a: unknown) =>
    handleSearchFiles(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  find_files_by_extension: (a: unknown) =>
    handleFindFilesByExtension(
      a,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  get_file_info: (a: unknown) =>
    handleGetFileInfo(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  list_allowed_directories: (a: unknown) =>
    handleListAllowedDirectories(a, allowedDirectories),
  get_permissions: (a: unknown) =>
    handleGetPermissions(
      a,
      permissions,
      readonlyFlag,
      noFollowSymlinks,
      allowedDirectories,
    ),
  xml_query: (a: unknown) =>
    handleXmlQuery(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  xml_structure: (a: unknown) =>
    handleXmlStructure(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  xml_to_json: (a: unknown) =>
    handleXmlToJson(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  xml_to_json_string: (a: unknown) =>
    handleXmlToJsonString(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  delete_file: (a: unknown) =>
    handleDeleteFile(
      a,
      permissions,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
  json_query: (a: unknown) =>
    handleJsonQuery(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  json_structure: (a: unknown) =>
    handleJsonStructure(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  json_filter: (a: unknown) =>
    handleJsonFilter(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  json_get_value: (a: unknown) =>
    handleJsonGetValue(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  json_transform: (a: unknown) =>
    handleJsonTransform(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  json_sample: (a: unknown) =>
    handleJsonSample(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  json_validate: (a: unknown) =>
    handleJsonValidate(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  json_search_kv: (a: unknown) =>
    handleJsonSearchKv(a, allowedDirectories, symlinksMap, noFollowSymlinks),
  regex_search_content: (a: unknown) =>
    handleRegexSearchContent(
      a,
      allowedDirectories,
      symlinksMap,
      noFollowSymlinks,
    ),
} as const;

const allTools = [
  { name: "read_file", description: "Read file contents" },
  { name: "read_multiple_files", description: "Read multiple files" },
  { name: "list_directory", description: "List directory contents" },
  { name: "directory_tree", description: "Directory tree view" },
  { name: "search_files", description: "Search files by name" },
  { name: "find_files_by_extension", description: "Find files by extension" },
  { name: "get_file_info", description: "Get file metadata" },
  { name: "list_allowed_directories", description: "List allowed directories" },
  { name: "get_permissions", description: "Get server permissions" },
  { name: "create_file", description: "Create a new file" },
  { name: "modify_file", description: "Replace file contents" },
  { name: "edit_file", description: "Edit part of a file" },
  { name: "create_directory", description: "Create a directory" },
  { name: "move_file", description: "Move a file" },
  { name: "rename_file", description: "Rename a file" },
  { name: "delete_directory", description: "Delete a directory" },
  { name: "xml_query", description: "Query XML" },
  { name: "xml_structure", description: "Analyze XML structure" },
  { name: "xml_to_json", description: "Convert XML to JSON" },
  { name: "xml_to_json_string", description: "XML to JSON string" },
  { name: "delete_file", description: "Delete a file" },
  { name: "json_query", description: "Query JSON" },
  { name: "json_structure", description: "JSON structure" },
  { name: "json_filter", description: "Filter JSON" },
  { name: "json_get_value", description: "Get value from JSON" },
  { name: "json_transform", description: "Transform JSON" },
  { name: "json_sample", description: "Sample JSON data" },
  { name: "json_validate", description: "Validate JSON" },
  { name: "json_search_kv", description: "Search key/value in JSON" },
  {
    name: "regex_search_content",
    description: "Search file content with regex",
  },
];

const tools = !permissions.fullAccess
  ? allTools.filter((t) => {
      if (
        [
          "read_file",
          "read_multiple_files",
          "list_directory",
          "directory_tree",
          "search_files",
          "find_files_by_extension",
          "get_file_info",
          "list_allowed_directories",
          "xml_to_json_string",
          "get_permissions",
          "xml_query",
          "xml_structure",
          "json_query",
          "json_filter",
          "json_get_value",
          "json_transform",
          "json_structure",
          "json_sample",
          "json_validate",
          "json_search_kv",
          "regex_search_content",
        ].includes(t.name)
      ) {
        return true;
      }
      if (
        permissions.create &&
        ["create_file", "create_directory", "xml_to_json"].includes(t.name)
      )
        return true;
      if (permissions.edit && ["modify_file", "edit_file"].includes(t.name))
        return true;
      if (permissions.move && t.name === "move_file") return true;
      if (permissions.rename && t.name === "rename_file") return true;
      if (
        permissions.delete &&
        ["delete_file", "delete_directory"].includes(t.name)
      )
        return true;
      return false;
    })
  : allTools;

for (const tool of tools) {
  const execute = toolHandlers[tool.name as keyof typeof toolHandlers];
  const schema = (toolSchemas as Record<string, any>)[tool.name];
  server.addTool({
    name: tool.name,
    description: tool.description,
    parameters: toZodParameters(schema as any) as any,
    execute: async (a) => execute(a) as any,
  });
}

async function runServer() {
  if (useHttp) {
    await server.start({ transportType: "httpStream", httpStream: { port } });
    console.error(
      `Secure MCP Filesystem Server running on HTTP stream port ${port}`,
    );
  } else {
    await server.start({ transportType: "stdio" });
    console.error("Secure MCP Filesystem Server running on stdio");
  }
  console.error("Allowed directories:", allowedDirectories);
  const permState = [] as string[];
  if (readonlyFlag) {
    console.error(
      "Server running in read-only mode (--readonly flag overrides all other permissions)",
    );
  } else if (permissions.fullAccess) {
    console.error(
      "Server running with full access (all operations enabled via --full-access)",
    );
  } else {
    if (permissions.create) permState.push("create");
    if (permissions.edit) permState.push("edit");
    if (permissions.move) permState.push("move");
    if (permissions.rename) permState.push("rename");
    if (permissions.delete) permState.push("delete");
    if (permState.length === 0) {
      console.error(
        "Server running in default read-only mode (use --full-access or specific --allow-* flags to enable write operations)",
      );
    } else {
      console.error(
        `Server running with specific permissions enabled: ${permState.join(", ")}`,
      );
    }
  }
  if (noFollowSymlinks) {
    console.error("Server running with symlink following disabled");
  }
}

runServer().catch((error) => {
  console.error("Fatal error running server:", error);
  process.exit(1);
});

```
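
For completeness, a sketch of launching the server as a child process using the flags parsed above; the directory, port, and permission flags are illustrative.

```typescript
// Sketch only: spawns the server with create/edit enabled over HTTP.
// Equivalent to: bun index.ts --allow-create --allow-edit --http --port 9090 /tmp/sandbox
import { spawn } from "child_process";

const child = spawn(
  "bun",
  ["index.ts", "--allow-create", "--allow-edit", "--http", "--port", "9090", "/tmp/sandbox"],
  { stdio: "inherit" },
);
child.on("exit", (code) => console.log(`server exited with code ${code}`));
```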