This is page 1 of 2. Use http://codebase.md/ch1nhpd/pentest-tools-mcp-server?lines=true&page={x} to view the full context. # Directory Structure ``` ├── config.json ├── docker-compose.yml ├── Dockerfile ├── pentest-tools-mcp-server.py ├── README.md ├── requirements.txt └── wordlists └── xss-payloads.txt ``` # Files -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- ```markdown 1 | # Pentest Tools MCP Server 2 | 3 | An MCP (Model Context Protocol) server for penetration testing tools, designed to work with various LLM clients like Claude Desktop, Roo Code, and other compatible MCP clients. 4 | 5 | ## Features 6 | 7 | - Comprehensive pentesting tools: 8 | - Directory scanning (FFuf, Dirsearch) 9 | - Vulnerability scanning (Nuclei, XSStrike) 10 | - API testing 11 | - Reconnaissance 12 | - And more... 13 | - Pre-configured wordlists from SecLists 14 | - Automated report generation 15 | - Claude Desktop integration 16 | 17 | ## Prerequisites 18 | 19 | - Docker and Docker Compose (for containerized setup) 20 | - Claude Desktop application or other MCP-compatible client 21 | - Python 3.10+ and uv (for local setup) 22 | 23 | ## Directory Setup 24 | 25 | 1. Create the required directories: 26 | ```bash 27 | # Create directories 28 | mkdir -p reports templates wordlists 29 | ``` 30 | 31 | 2. Directory structure should look like this: 32 | ``` 33 | pentest-tools/ 34 | ├── reports/ # For storing scan reports 35 | ├── templates/ # For report templates 36 | ├── wordlists/ # For custom wordlists 37 | ├── pentest-tools-mcp-server.py 38 | ├── config.json 39 | ├── requirements.txt 40 | ├── docker-compose.yml 41 | └── Dockerfile 42 | ``` 43 | 44 | ## Setup 45 | 46 | ### Docker Setup (Recommended) 47 | 48 | 1. Build and start the container: 49 | ```bash 50 | docker-compose up -d --build 51 | ``` 52 | 53 | 2. 
Verify the container is running: 54 | ```bash 55 | docker-compose ps 56 | ``` 57 | 58 | 3. Check logs if needed: 59 | ```bash 60 | docker-compose logs -f 61 | ``` 62 | 63 | ### Local Setup 64 | 65 | 1. Install dependencies: 66 | ```bash 67 | uv venv 68 | source .venv/bin/activate # On Windows: .venv\Scripts\activate 69 | uv pip install -r requirements.txt 70 | ``` 71 | 72 | 2. Install required system tools (example for Debian-based systems such as Kali): 73 | ```bash 74 | sudo apt-get update && sudo apt-get install -y nmap whatweb dnsrecon theharvester ffuf dirsearch sqlmap 75 | ``` 76 | 77 | ## Claude Desktop Integration 78 | 79 | 1. Configure Claude Desktop: 80 | 81 | Windows: 82 | ``` 83 | %APPDATA%\Claude\claude_desktop_config.json 84 | ``` 85 | 86 | macOS: 87 | ``` 88 | ~/Library/Application Support/Claude/claude_desktop_config.json 89 | ``` 90 | Linux: `~/.config/Claude/claude_desktop_config.json` 91 | 2. Add server configuration: 92 | 93 | For Docker setup: 94 | ```json 95 | { 96 | "mcpServers": { 97 | "pentest-tools": { 98 | "command": "docker-compose", 99 | "args": [ 100 | "run", 101 | "--rm", 102 | "pentest-tools", 103 | "python3", 104 | "pentest-tools-mcp-server.py" 105 | ], 106 | "cwd": "C:\\path\\to\\pentest-tools" 107 | } 108 | } 109 | } 110 | ``` 111 | 112 | If the above configuration doesn't work on Windows, try this alternative approach: 113 | 114 | ```json 115 | { 116 | "mcpServers": { 117 | "pentest-tools": { 118 | "command": "cmd", 119 | "args": [ 120 | "/c", 121 | "cd /d C:\\path\\to\\pentest-tools && docker-compose run --rm pentest-tools python3 pentest-tools-mcp-server.py" 122 | ] 123 | } 124 | } 125 | } 126 | ``` 127 | 128 | Note about `cwd` (Current Working Directory): 129 | - `cwd` tells Claude Desktop which directory to run the command from 130 | - It must be the absolute path to the directory containing `docker-compose.yml` 131 | - On Windows, use double backslashes (`\\`) in paths, including a drive letter 132 | - On Linux/macOS, use forward slashes (`/`) 133 | 134 | 3. 
Restart Claude Desktop 135 | 136 | ## Usage 137 | 138 | Available commands in Claude Desktop: 139 | 140 | 1. Reconnaissance: 141 | ``` 142 | /recon example.com 143 | ``` 144 | 145 | 2. Directory scanning: 146 | ``` 147 | /scan example.com --type directory 148 | ``` 149 | 150 | 3. Vulnerability scanning: 151 | ``` 152 | /scan example.com --type full 153 | /scan example.com --type xss 154 | /scan example.com --type sqli 155 | /scan example.com --type ssrf 156 | ``` 157 | 158 | 4. API testing: 159 | ``` 160 | /scan api.example.com --type api 161 | ``` 162 | 163 | Natural language commands: 164 | - "Run a full security scan on example.com" 165 | - "Check for XSS vulnerabilities on example.com" 166 | - "Perform reconnaissance on example.com" 167 | 168 | ## Directory Structure Details 169 | 170 | ``` 171 | pentest-tools/ 172 | ├── reports/ # Scan reports directory 173 | │ ├── recon/ # Reconnaissance reports 174 | │ ├── vulns/ # Vulnerability scan reports 175 | │ └── api/ # API testing reports 176 | ├── templates/ # Report templates 177 | │ ├── recon.html # Template for recon reports 178 | │ ├── vuln.html # Template for vulnerability reports 179 | │ └── api.html # Template for API test reports 180 | ├── wordlists/ # Custom wordlists 181 | │ ├── SecLists/ # Cloned from SecLists repo 182 | │ ├── custom/ # Your custom wordlists 183 | │ └── generated/ # Tool-generated wordlists 184 | ├── pentest-tools-mcp-server.py # Main MCP server 185 | ├── config.json # Tool configuration 186 | ├── requirements.txt # Python dependencies 187 | ├── docker-compose.yml # Docker configuration 188 | └── Dockerfile # Container definition 189 | ``` 190 | 191 | ## Security Notes 192 | 193 | - Always ensure you have permission to scan targets 194 | - Keep tools and dependencies updated 195 | - Review scan results carefully 196 | - Follow responsible disclosure practices 197 | 198 | 199 | 200 | ``` -------------------------------------------------------------------------------- /requirements.txt: 
-------------------------------------------------------------------------------- ``` 1 | mcp[cli]>=1.2.0 2 | httpx 3 | aiohttp 4 | aiofiles 5 | beautifulsoup4 6 | pyjwt 7 | pyyaml 8 | requests 9 | sslyze 10 | python-nmap 11 | dnspython 12 | pdfkit 13 | python-dotenv ``` -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- ```yaml 1 | version: '3.8' 2 | 3 | services: 4 | pentest-tools: 5 | build: . 6 | container_name: pentest-tools 7 | volumes: 8 | - ./reports:/app/reports 9 | - ./templates:/app/templates 10 | - ./wordlists:/usr/share/wordlists/pentest-tools 11 | environment: 12 | - GITHUB_TOKEN=${GITHUB_TOKEN} 13 | cap_add: 14 | - NET_ADMIN # Required for some network scanning tools 15 | - NET_RAW # Required for raw socket operations 16 | security_opt: 17 | - seccomp:unconfined # Required for some security tools 18 | stdin_open: true # Keep STDIN open for MCP 19 | tty: true # Allocate a pseudo-TTY 20 | 21 | networks: 22 | pentest-network: 23 | driver: bridge ``` -------------------------------------------------------------------------------- /config.json: -------------------------------------------------------------------------------- ```json 1 | { 2 | "wordlists": { 3 | "dirsearch": "/usr/share/wordlists/pentest-tools/dirsearch.txt", 4 | "common": "/usr/share/wordlists/pentest-tools/SecLists/Discovery/Web-Content/common.txt", 5 | "api_endpoints": "/usr/share/wordlists/pentest-tools/SecLists/Discovery/Web-Content/api-endpoints.txt", 6 | "subdomains": "/usr/share/wordlists/pentest-tools/SecLists/Discovery/DNS/subdomains-top1million-5000.txt", 7 | "passwords": "/usr/share/wordlists/pentest-tools/SecLists/Passwords/Common-Credentials/10-million-password-list-top-1000.txt", 8 | "jwt_secrets": "/usr/share/wordlists/pentest-tools/SecLists/Passwords/Common-Credentials/common-secrets.txt", 9 | "xss": 
"/usr/share/wordlists/pentest-tools/xss-payloads.txt", 10 | "sqli": "/usr/share/wordlists/pentest-tools/sqli-payloads.txt", 11 | "lfi": "/usr/share/wordlists/pentest-tools/lfi-payloads.txt", 12 | "ssrf": "/usr/share/wordlists/pentest-tools/ssrf-payloads.txt" 13 | }, 14 | "tools": { 15 | "xsstrike": "/root/tools/XSStrike/xsstrike.py", 16 | "nuclei_templates": "/root/tools/nuclei-templates", 17 | "crlfuzz": "/root/go/bin/crlfuzz", 18 | "graphql_path": "/root/tools/graphql-tools" 19 | }, 20 | "reporting": { 21 | "output_dir": "/app/reports", 22 | "template_dir": "/app/templates" 23 | } 24 | } ``` -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- ```dockerfile 1 | FROM kalilinux/kali-rolling 2 | 3 | # Prevent interactive prompts during package installation 4 | ENV DEBIAN_FRONTEND=noninteractive 5 | 6 | # Install basic system packages 7 | RUN apt-get update && apt-get install -y \ 8 | python3 \ 9 | python3-pip \ 10 | python3-venv \ 11 | golang \ 12 | git \ 13 | wget \ 14 | nmap \ 15 | whatweb \ 16 | dnsrecon \ 17 | theharvester \ 18 | ffuf \ 19 | dirsearch \ 20 | sqlmap \ 21 | amass \ 22 | nodejs \ 23 | npm \ 24 | curl \ 25 | dnsutils \ 26 | bind9-utils \ 27 | sslyze \ 28 | && rm -rf /var/lib/apt/lists/* 29 | 30 | # Set up Python virtual environment 31 | ENV VIRTUAL_ENV=/opt/venv 32 | RUN python3 -m venv $VIRTUAL_ENV 33 | ENV PATH="$VIRTUAL_ENV/bin:$PATH" 34 | 35 | # Install Python packages in virtual environment 36 | COPY requirements.txt /tmp/ 37 | RUN pip install --no-cache-dir -r /tmp/requirements.txt 38 | 39 | # Check Go version 40 | RUN go version 41 | 42 | # Check network connectivity to GitHub 43 | RUN curl -v https://github.com 44 | 45 | # Install Go tools individually (excluding Amass) 46 | RUN go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest 47 | RUN go install -v github.com/tomnomnom/assetfinder@latest 48 | 
RUN go install -v github.com/tomnomnom/waybackurls@latest 49 | RUN go install -v github.com/lc/gau/v2/cmd/gau@latest 50 | RUN go install -v github.com/tillson/git-hound@latest 51 | RUN go install -v github.com/hakluke/hakrawler@latest 52 | RUN go install -v github.com/projectdiscovery/nuclei/v2/cmd/nuclei@latest 53 | RUN go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest 54 | 55 | # Install npm packages 56 | RUN npm install -g wappalyzer-cli lighthouse snyk 57 | 58 | # Set up wordlists directory 59 | RUN mkdir -p /usr/share/wordlists/pentest-tools 60 | 61 | # Download SecLists 62 | RUN git clone https://github.com/danielmiessler/SecLists.git /usr/share/wordlists/pentest-tools/SecLists 63 | 64 | # Download other wordlists 65 | RUN wget -O /usr/share/wordlists/pentest-tools/dirsearch.txt https://raw.githubusercontent.com/maurosoria/dirsearch/master/db/dicc.txt && \ 66 | wget -O /usr/share/wordlists/pentest-tools/xss-payloads.txt https://raw.githubusercontent.com/payloadbox/xss-payload-list/master/Intruder/xss-payload-list.txt && \ 67 | wget -O /usr/share/wordlists/pentest-tools/sqli-payloads.txt https://raw.githubusercontent.com/payloadbox/sql-injection-payload-list/refs/heads/master/Intruder/detect/Generic_SQLI.txt && \ 68 | wget -O /usr/share/wordlists/pentest-tools/lfi-payloads.txt https://raw.githubusercontent.com/emadshanab/LFI-Payload-List/master/LFI%20payloads.txt && \ 69 | wget -O /usr/share/wordlists/pentest-tools/ssrf-payloads.txt https://gist.githubusercontent.com/rootsploit/66c9ae8fc3ef387fa5ffbb67fcef0766/raw/d5a4088d628ed05f161b9dd9bf3c6755910a164f/SSRF-Payloads.txt 70 | 71 | # Set up tools directory 72 | RUN mkdir -p /root/tools 73 | 74 | # Clone useful repositories 75 | RUN git clone https://github.com/s0md3v/XSStrike.git /root/tools/XSStrike && \ 76 | git clone https://github.com/projectdiscovery/nuclei-templates.git /root/tools/nuclei-templates && \ 77 | git clone https://github.com/graphql/graphql-js.git /root/tools/graphql-tools 78 | 
79 | # Create necessary directories for the application 80 | RUN mkdir -p /app/reports /app/templates 81 | 82 | # Set working directory 83 | WORKDIR /app 84 | 85 | # Copy application files 86 | COPY . /app/ 87 | 88 | # Add Go binaries to PATH 89 | ENV PATH="/root/go/bin:$PATH" 90 | 91 | # Set entrypoint 92 | CMD ["python", "pentest-tools-mcp-server.py"] ``` -------------------------------------------------------------------------------- /pentest-tools-mcp-server.py: -------------------------------------------------------------------------------- ```python 1 | #!/usr/bin/env python3 2 | 3 | from typing import Any, List, Optional, Dict 4 | from mcp.server.fastmcp import FastMCP 5 | import httpx 6 | import subprocess 7 | import json 8 | import asyncio 9 | import re 10 | from pathlib import Path 11 | from datetime import datetime 12 | import yaml 13 | import aiofiles 14 | import aiohttp 15 | from bs4 import BeautifulSoup 16 | # import jwt 17 | import base64 18 | import logging 19 | import threading 20 | from concurrent.futures import ThreadPoolExecutor 21 | import urllib.parse 22 | import os 23 | from dotenv import load_dotenv 24 | 25 | # Set up logging 26 | logging.basicConfig( 27 | level=logging.INFO, 28 | format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', 29 | filename='pentest.log' 30 | ) 31 | logger = logging.getLogger('pentest-tools') 32 | 33 | # Initialize FastMCP server 34 | mcp = FastMCP("pentest-tools") 35 | 36 | # Configuration 37 | CONFIG = { 38 | "wordlists": { 39 | "dirsearch": "/usr/share/wordlists/pentest-tools/dirsearch.txt", 40 | "common": "/usr/share/wordlists/pentest-tools/SecLists/Discovery/Web-Content/common.txt", 41 | "api_endpoints": "/usr/share/wordlists/pentest-tools/SecLists/Discovery/Web-Content/api-endpoints.txt", 42 | "subdomains": "/usr/share/wordlists/pentest-tools/SecLists/Discovery/DNS/subdomains-top1million-5000.txt", 43 | "passwords": 
"/usr/share/wordlists/pentest-tools/SecLists/Passwords/Common-Credentials/10-million-password-list-top-1000.txt", 44 | "jwt_secrets": "/usr/share/wordlists/pentest-tools/SecLists/Passwords/Common-Credentials/common-secrets.txt", 45 | "xss": "/usr/share/wordlists/pentest-tools/xss-payloads.txt", 46 | "sqli": "/usr/share/wordlists/pentest-tools/sqli-payloads.txt", 47 | "lfi": "/usr/share/wordlists/pentest-tools/lfi-payloads.txt", 48 | "ssrf": "/usr/share/wordlists/pentest-tools/ssrf-payloads.txt" 49 | }, 50 | "tools": { 51 | "xsstrike": "/root/tools/XSStrike/xsstrike.py", 52 | "nuclei_templates": "/root/tools/nuclei-templates", 53 | "crlfuzz": "/root/go/bin/crlfuzz", 54 | "graphql_path": "/root/tools/graphql-tools" 55 | }, 56 | "reporting": { 57 | "output_dir": "reports", 58 | "template_dir": "templates" 59 | } 60 | } 61 | 62 | # Load environment variables 63 | load_dotenv() 64 | 65 | def format_ffuf_results(results): 66 | """Format the FFuf results in a readable format. 67 | 68 | Args: 69 | results: FFuf JSON results 70 | 71 | Returns: 72 | Formatted string with FFuf results 73 | """ 74 | formatted = [] 75 | 76 | # Add summary 77 | if "stats" in results: 78 | stats = results["stats"] 79 | formatted.append(f"Total requests: {stats.get('total', 0)}") 80 | formatted.append(f"Duration: {stats.get('elapsed', 0)} seconds") 81 | formatted.append(f"Requests per second: {stats.get('req_sec', 0)}") 82 | 83 | # Add results 84 | if "results" in results: 85 | formatted.append("\nDiscovered URLs:") 86 | for item in results["results"]: 87 | status = item.get("status", "") 88 | url = item.get("url", "") 89 | size = item.get("length", 0) 90 | formatted.append(f"[Status: {status}] [{size} bytes] {url}") 91 | 92 | return "\n".join(formatted) 93 | 94 | @mcp.tool() 95 | async def advanced_directory_scan(url: str, extensions: List[str] = None) -> str: 96 | """Advanced directory and file scanning with multiple tools and techniques. 
97 | 98 | Args: 99 | url: Target URL 100 | extensions: List of file extensions to scan for 101 | """ 102 | if extensions is None: 103 | extensions = ["php", "asp", "aspx", "jsp", "js", "txt", "conf", "bak", "backup", "swp", "old", "db", "sql"] 104 | 105 | results = [] 106 | 107 | # 1. FFuf with advanced options 108 | try: 109 | ffuf_cmd = [ 110 | "ffuf", 111 | "-u", f"{url}/FUZZ", 112 | "-w", CONFIG["wordlists"]["dirsearch"], 113 | "-e", ",".join(extensions), 114 | "-recursion", 115 | "-recursion-depth", "3", 116 | "-mc", "all", 117 | "-ac", 118 | "-o", "ffuf.json", 119 | "-of", "json" 120 | ] 121 | subprocess.run(ffuf_cmd, check=True) 122 | with open("ffuf.json") as f: 123 | ffuf_results = json.load(f) 124 | results.append("=== FFuf Results ===\n" + format_ffuf_results(ffuf_results)) 125 | except Exception as e: 126 | results.append(f"FFuf error: {str(e)}") 127 | 128 | # 2. Dirsearch with advanced options 129 | try: 130 | dirsearch_cmd = [ 131 | "dirsearch", 132 | "-u", url, 133 | "-e", ",".join(extensions), 134 | "--deep-recursive", 135 | "--force-recursive", 136 | "--exclude-status", "404", 137 | "-o", "dirsearch.json", 138 | "--format", "json" 139 | ] 140 | subprocess.run(dirsearch_cmd, check=True) 141 | with open("dirsearch.json") as f: 142 | dirsearch_results = json.load(f) 143 | results.append("=== Dirsearch Results ===\n" + json.dumps(dirsearch_results, indent=2)) 144 | except Exception as e: 145 | results.append(f"Dirsearch error: {str(e)}") 146 | 147 | return "\n\n".join(results) 148 | 149 | @mcp.tool() 150 | async def advanced_api_scan(url: str) -> str: 151 | """Advanced API security testing with multiple techniques. 152 | 153 | Args: 154 | url: Target API URL 155 | """ 156 | results = [] 157 | 158 | # 1. 
GraphQL Security Testing 159 | if "/graphql" in url or "/graphiql" in url: 160 | try: 161 | # Introspection query 162 | graphql_query = """ 163 | query IntrospectionQuery { 164 | __schema { 165 | types { name, fields { name, type { name } } } 166 | queryType { name } 167 | mutationType { name } 168 | subscriptionType { name } 169 | } 170 | } 171 | """ 172 | async with httpx.AsyncClient(verify=False) as client: 173 | response = await client.post(f"{url}", json={"query": graphql_query}) 174 | if response.status_code == 200: 175 | results.append("=== GraphQL Schema ===\n" + json.dumps(response.json(), indent=2)) 176 | 177 | # Test for common GraphQL vulnerabilities 178 | vulns = await test_graphql_vulnerabilities(url, response.json()) 179 | results.append("=== GraphQL Vulnerabilities ===\n" + vulns) 180 | except Exception as e: 181 | results.append(f"GraphQL testing error: {str(e)}") 182 | 183 | # 2. REST API Testing 184 | try: 185 | # Test common REST endpoints 186 | common_paths = ["/v1", "/v2", "/api", "/api/v1", "/api/v2", "/swagger", "/docs", "/openapi.json"] 187 | async with httpx.AsyncClient(verify=False) as client: 188 | for path in common_paths: 189 | response = await client.get(f"{url}{path}") 190 | if response.status_code != 404: 191 | results.append(f"\nFound API endpoint: {path}") 192 | results.append(f"Status: {response.status_code}") 193 | results.append(f"Response: {response.text[:500]}...") 194 | 195 | # If Swagger/OpenAPI found, parse and test endpoints 196 | if "swagger" in path or "openapi" in path: 197 | api_spec = response.json() 198 | results.append("\n=== Testing API Endpoints ===") 199 | for endpoint, methods in api_spec.get("paths", {}).items(): 200 | for method, details in methods.items(): 201 | test_result = await test_api_endpoint(url, endpoint, method, details) 202 | results.append(test_result) 203 | except Exception as e: 204 | results.append(f"REST API testing error: {str(e)}") 205 | 206 | return "\n\n".join(results) 207 | 208 | 
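# --- Illustrative helper (a sketch, not called by any tool in this file; the
# name `inject_param` is hypothetical): the SQLi/SSRF scanners further below
# swap payloads into the query string with url.replace(f"{param}={value}", ...),
# which corrupts the URL whenever the value also appears elsewhere in it.
# Rebuilding the query with urllib.parse is the safer substitution technique:
import urllib.parse

def inject_param(url: str, param: str, payload: str) -> str:
    """Return `url` with query parameter `param` set to `payload`."""
    parts = urllib.parse.urlsplit(url)
    query = urllib.parse.parse_qs(parts.query, keep_blank_values=True)
    query[param] = [payload]  # touch only the targeted parameter
    new_query = urllib.parse.urlencode(query, doseq=True)  # re-encodes values
    return urllib.parse.urlunsplit(
        (parts.scheme, parts.netloc, parts.path, new_query, parts.fragment)
    )

# Example: inject_param("https://example.com/page?id=1&name=a", "id", "2")
# -> "https://example.com/page?id=2&name=a"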
@mcp.tool() 209 | async def advanced_xss_scan(url: str) -> str: 210 | """Advanced XSS vulnerability scanning. 211 | 212 | Args: 213 | url: Target URL 214 | """ 215 | results = [] 216 | 217 | # 1. XSStrike with advanced options 218 | try: 219 | xsstrike_cmd = [ 220 | "python3", CONFIG["tools"]["xsstrike"], 221 | "-u", url, 222 | "--crawl", 223 | "--params", 224 | "--fuzzer", 225 | "--blind", 226 | "--vectors", CONFIG["wordlists"]["xss"] 227 | ] 228 | xsstrike = subprocess.check_output(xsstrike_cmd, text=True) 229 | results.append("=== XSStrike Results ===\n" + xsstrike) 230 | except Exception as e: 231 | results.append(f"XSStrike error: {str(e)}") 232 | 233 | # 2. Custom XSS testing 234 | try: 235 | async with httpx.AsyncClient(verify=False) as client: 236 | # Get all input parameters 237 | response = await client.get(url) 238 | soup = BeautifulSoup(response.text, 'html.parser') 239 | 240 | # Test input fields 241 | for input_field in soup.find_all(['input', 'textarea']): 242 | field_name = input_field.get('name', '') 243 | if field_name: 244 | # Test various XSS payloads 245 | with open(CONFIG["wordlists"]["xss"]) as f: 246 | for payload in f: 247 | payload = payload.strip() 248 | data = {field_name: payload} 249 | response = await client.post(url, data=data) 250 | if payload in response.text: 251 | results.append(f"Potential XSS found in {field_name} with payload: {payload}") 252 | except Exception as e: 253 | results.append(f"Custom XSS testing error: {str(e)}") 254 | 255 | return "\n\n".join(results) 256 | 257 | @mcp.tool() 258 | async def advanced_sqli_scan(url: str) -> str: 259 | """Advanced SQL injection testing. 260 | 261 | Args: 262 | url: Target URL 263 | """ 264 | results = [] 265 | 266 | # 1. 
SQLMap with advanced options 267 | try: 268 | sqlmap_cmd = [ 269 | "sqlmap", 270 | "-u", url, 271 | "--batch", 272 | "--random-agent", 273 | "--level", "5", 274 | "--risk", "3", 275 | "--threads", "10", 276 | "--tamper=space2comment,between,randomcase", 277 | "--time-sec", "1", 278 | "--dump" 279 | ] 280 | sqlmap = subprocess.check_output(sqlmap_cmd, text=True) 281 | results.append("=== SQLMap Results ===\n" + sqlmap) 282 | except Exception as e: 283 | results.append(f"SQLMap error: {str(e)}") 284 | 285 | # 2. Custom SQL injection testing 286 | try: 287 | async with httpx.AsyncClient(verify=False) as client: 288 | # Test parameters 289 | params = urllib.parse.parse_qs(urllib.parse.urlparse(url).query) 290 | for param in params: 291 | with open(CONFIG["wordlists"]["sqli"]) as f: 292 | for payload in f: 293 | payload = payload.strip() 294 | test_url = url.replace(f"{param}={params[param][0]}", f"{param}={payload}") 295 | response = await client.get(test_url) 296 | 297 | # Check for SQL errors 298 | sql_errors = [ 299 | "SQL syntax", 300 | "mysql_fetch_array", 301 | "ORA-", 302 | "PostgreSQL", 303 | "SQLite3::" 304 | ] 305 | for error in sql_errors: 306 | if error in response.text: 307 | results.append(f"Potential SQL injection in parameter {param} with payload: {payload}") 308 | except Exception as e: 309 | results.append(f"Custom SQLi testing error: {str(e)}") 310 | 311 | return "\n\n".join(results) 312 | 313 | @mcp.tool() 314 | async def advanced_ssrf_scan(url: str) -> str: 315 | """Advanced Server-Side Request Forgery testing. 
316 | 317 | Args: 318 | url: Target URL 319 | """ 320 | results = [] 321 | 322 | try: 323 | async with httpx.AsyncClient(verify=False) as client: 324 | # Test various SSRF payloads 325 | with open(CONFIG["wordlists"]["ssrf"]) as f: 326 | for payload in f: 327 | payload = payload.strip() 328 | 329 | # Test in different parameter positions 330 | params = urllib.parse.parse_qs(urllib.parse.urlparse(url).query) 331 | for param in params: 332 | test_url = url.replace(f"{param}={params[param][0]}", f"{param}={payload}") 333 | response = await client.get(test_url) 334 | 335 | # Check for successful SSRF 336 | if response.status_code == 200 and len(response.text) > 0: 337 | results.append(f"Potential SSRF in parameter {param} with payload: {payload}") 338 | results.append(f"Response length: {len(response.text)}") 339 | results.append(f"Response preview: {response.text[:200]}...") 340 | except Exception as e: 341 | results.append(f"SSRF testing error: {str(e)}") 342 | 343 | return "\n\n".join(results) 344 | 345 | @mcp.tool() 346 | async def test_graphql_vulnerabilities(url: str, schema: dict) -> str: 347 | """Test GraphQL specific vulnerabilities. 348 | 349 | Args: 350 | url: GraphQL endpoint URL 351 | schema: GraphQL schema from introspection 352 | """ 353 | results = [] 354 | 355 | try: 356 | async with httpx.AsyncClient(verify=False) as client: 357 | # 1. Test for DoS via nested queries 358 | nested_query = "query {\n " + "user { " * 10 + "id " + "}" * 10 + "\n}" 359 | response = await client.post(url, json={"query": nested_query}) 360 | if response.status_code != 200: 361 | results.append("Potential DoS vulnerability - nested queries not properly limited") 362 | 363 | # 2. 
Test for sensitive data exposure 364 | sensitive_types = ["User", "Admin", "Password", "Token", "Secret"] 365 | for type_obj in schema.get("data", schema)["__schema"]["types"]: 366 | if any(sensitive in type_obj["name"] for sensitive in sensitive_types): 367 | results.append(f"Potential sensitive data exposure in type: {type_obj['name']}") 368 | 369 | # 3. Test for batch query attacks 370 | batch_query = [{"query": "{ __schema { types { name } } }"} for _ in range(100)] 371 | response = await client.post(url, json=batch_query) 372 | if response.status_code == 200: 373 | results.append("Batch queries allowed - potential DoS vector") 374 | 375 | except Exception as e: 376 | results.append(f"GraphQL vulnerability testing error: {str(e)}") 377 | 378 | return "\n\n".join(results) 379 | 380 | @mcp.tool() 381 | async def test_api_endpoint(base_url: str, endpoint: str, method: str, details: dict) -> str: 382 | """Test individual API endpoint for vulnerabilities. 383 | 384 | Args: 385 | base_url: Base API URL 386 | endpoint: API endpoint path 387 | method: HTTP method 388 | details: Endpoint details from OpenAPI spec 389 | """ 390 | results = [] 391 | 392 | try: 393 | async with httpx.AsyncClient(verify=False) as client: 394 | # 1. Test without authentication 395 | response = await client.request(method, f"{base_url}{endpoint}") 396 | if response.status_code != 401: 397 | results.append(f"Endpoint {endpoint} accessible without auth") 398 | 399 | # 2. 
Test parameter fuzzing 400 | if "parameters" in details: 401 | for param in details["parameters"]: 402 | if param["in"] == "query": 403 | # Test SQL injection 404 | test_url = f"{base_url}{endpoint}?{param['name']}=1' OR '1'='1" 405 | response = await client.request(method, test_url) 406 | if "error" in response.text.lower(): 407 | results.append(f"Potential SQL injection in parameter {param['name']}") 408 | 409 | # Test XSS 410 | test_url = f"{base_url}{endpoint}?{param['name']}=<script>alert(1)</script>" 411 | response = await client.request(method, test_url) 412 | if "<script>alert(1)</script>" in response.text: 413 | results.append(f"Potential XSS in parameter {param['name']}") 414 | 415 | # 3. Test for mass assignment 416 | if method.lower() in ["post", "put"] and "requestBody" in details: 417 | schema = details["requestBody"]["content"]["application/json"]["schema"] 418 | if "properties" in schema: 419 | # Try to inject admin/privileged fields 420 | payload = { 421 | "isAdmin": True, 422 | "role": "admin", 423 | "privileges": ["admin"], 424 | **{k: "test" for k in schema["properties"].keys()} 425 | } 426 | response = await client.request(method, f"{base_url}{endpoint}", json=payload) 427 | if response.status_code == 200: 428 | results.append(f"Potential mass assignment vulnerability in {endpoint}") 429 | 430 | except Exception as e: 431 | results.append(f"API endpoint testing error: {str(e)}") 432 | 433 | return "\n\n".join(results) 434 | 435 | @mcp.tool() 436 | async def advanced_recon(domain: str) -> str: 437 | """Perform advanced reconnaissance on a target domain. 438 | 439 | Args: 440 | domain: Target domain name 441 | """ 442 | results = [] 443 | 444 | # 1. 
Subdomain Enumeration 445 | try: 446 | # Subfinder 447 | subdomains = subprocess.check_output(["subfinder", "-d", domain, "-silent"], text=True) 448 | results.append("=== Subfinder Results ===\n" + subdomains) 449 | 450 | # Amass 451 | amass = subprocess.check_output([ 452 | "amass", "enum", 453 | "-d", domain, 454 | "-passive", 455 | "-silent" 456 | ], text=True) 457 | results.append("=== Amass Results ===\n" + amass) 458 | 459 | # Assetfinder 460 | assetfinder = subprocess.check_output(["assetfinder", "--subs-only", domain], text=True) 461 | results.append("=== Assetfinder Results ===\n" + assetfinder) 462 | 463 | # GitHub Subdomains 464 | github_subdomains = subprocess.check_output([ 465 | "github-subdomains", 466 | "-d", domain, 467 | "-t", os.environ.get("GITHUB_TOKEN", "") 468 | ], text=True) 469 | results.append("=== GitHub Subdomains ===\n" + github_subdomains) 470 | 471 | except Exception as e: 472 | results.append(f"Subdomain enumeration error: {str(e)}") 473 | 474 | # 2. Port Scanning & Service Detection 475 | try: 476 | # Quick Nmap scan 477 | nmap_quick = subprocess.check_output([ 478 | "nmap", 479 | "-sV", "-sC", 480 | "--min-rate", "1000", 481 | "-T4", 482 | domain 483 | ], text=True) 484 | results.append("=== Quick Nmap Scan ===\n" + nmap_quick) 485 | 486 | # Detailed scan of web ports 487 | nmap_web = subprocess.check_output([ 488 | "nmap", 489 | "-p", "80,443,8080,8443", 490 | "-sV", "--script=http-enum,http-headers,http-methods,http-title", 491 | domain 492 | ], text=True) 493 | results.append("=== Web Services Scan ===\n" + nmap_web) 494 | 495 | except Exception as e: 496 | results.append(f"Port scanning error: {str(e)}") 497 | 498 | # 3. 
Technology Detection 499 | try: 500 | # Wappalyzer 501 | wappalyzer = subprocess.check_output(["wappalyzer", f"https://{domain}"], text=True) 502 | results.append("=== Technologies (Wappalyzer) ===\n" + wappalyzer) 503 | 504 | # Whatweb 505 | whatweb = subprocess.check_output(["whatweb", "-a", "3", domain], text=True) 506 | results.append("=== Technologies (Whatweb) ===\n" + whatweb) 507 | 508 | except Exception as e: 509 | results.append(f"Technology detection error: {str(e)}") 510 | 511 | # 4. DNS Information 512 | try: 513 | # DNSRecon 514 | dnsrecon = subprocess.check_output(["dnsrecon", "-d", domain, "-t", "std,axfr,srv"], text=True) 515 | results.append("=== DNS Information ===\n" + dnsrecon) 516 | 517 | # DNS Zone Transfer 518 | dig_axfr = subprocess.check_output(["dig", "axfr", domain], text=True) 519 | results.append("=== DNS Zone Transfer ===\n" + dig_axfr) 520 | 521 | except Exception as e: 522 | results.append(f"DNS enumeration error: {str(e)}") 523 | 524 | # 5. Web Archive 525 | try: 526 | # Waybackurls 527 | wayback = subprocess.check_output(["waybackurls", domain], text=True) 528 | results.append("=== Historical URLs ===\n" + wayback) 529 | 530 | # Gau 531 | gau = subprocess.check_output(["gau", domain], text=True) 532 | results.append("=== GAU URLs ===\n" + gau) 533 | 534 | except Exception as e: 535 | results.append(f"Web archive error: {str(e)}") 536 | 537 | # 6. SSL/TLS Analysis 538 | try: 539 | # SSLyze 540 | sslyze = subprocess.check_output([ 541 | "sslyze", 542 | "--regular", 543 | domain 544 | ], text=True) 545 | results.append("=== SSL/TLS Analysis ===\n" + sslyze) 546 | 547 | except Exception as e: 548 | results.append(f"SSL analysis error: {str(e)}") 549 | 550 | # 7. 
Email Discovery 551 | try: 552 | # TheHarvester 553 | harvester = subprocess.check_output([ 554 | "theHarvester", 555 | "-d", domain, 556 | "-b", "all" 557 | ], text=True) 558 | results.append("=== Email Addresses ===\n" + harvester) 559 | 560 | except Exception as e: 561 | results.append(f"Email discovery error: {str(e)}") 562 | 563 | # 8. GitHub Reconnaissance 564 | try: 565 | # GitHound 566 | githound = subprocess.check_output([ 567 | "git-hound", 568 | "--subdomain-file", "subdomains.txt", 569 | "--threads", "10", 570 | domain 571 | ], text=True) 572 | results.append("=== GitHub Secrets ===\n" + githound) 573 | 574 | except Exception as e: 575 | results.append(f"GitHub recon error: {str(e)}") 576 | 577 | # 9. Content Discovery 578 | try: 579 | # Hakrawler 580 | hakrawler = subprocess.check_output([ 581 | "hakrawler", 582 | "-url", f"https://{domain}", 583 | "-depth", "3", 584 | "-plain" 585 | ], text=True) 586 | results.append("=== Content Discovery ===\n" + hakrawler) 587 | 588 | except Exception as e: 589 | results.append(f"Content discovery error: {str(e)}") 590 | 591 | return "\n\n".join(results) 592 | 593 | @mcp.tool() 594 | async def analyze_recon_data(domain: str, recon_results: str) -> str: 595 | """Analyze reconnaissance data and identify potential security issues. 596 | 597 | Args: 598 | domain: Target domain 599 | recon_results: Results from reconnaissance 600 | """ 601 | findings = [] 602 | 603 | try: 604 | # 1. Analyze subdomains 605 | if "Subfinder Results" in recon_results: 606 | subdomains = re.findall(r'[\w\-\.]+\.' + re.escape(domain), recon_results) 607 | findings.append(f"Found {len(subdomains)} subdomains") 608 | 609 | # Check for interesting subdomains 610 | interesting = [s for s in subdomains if any(x in s for x in ['dev', 'stage', 'test', 'admin', 'internal'])] 611 | if interesting: 612 | findings.append(f"Potentially sensitive subdomains: {', '.join(interesting)}") 613 | 614 | # 2. 
Analyze ports 615 | if "Nmap Scan" in recon_results: 616 | open_ports = re.findall(r'(\d+)/tcp\s+open', recon_results) 617 | findings.append(f"Found {len(open_ports)} open ports") 618 | 619 | # Check for dangerous ports 620 | dangerous = [p for p in open_ports if p in ['21', '23', '3389', '445', '135']] 621 | if dangerous: 622 | findings.append(f"Potentially dangerous ports open: {', '.join(dangerous)}") 623 | 624 | # 3. Analyze technologies 625 | if "Technologies" in recon_results: 626 | # Check for outdated versions 627 | outdated = re.findall(r'([\w\-]+) ([\d\.]+)', recon_results) 628 | for tech, version in outdated: 629 | findings.append(f"Detected {tech} version {version} - check for vulnerabilities") 630 | 631 | # 4. Analyze SSL/TLS 632 | if "SSL/TLS Analysis" in recon_results: 633 | if "SSLv2" in recon_results or "SSLv3" in recon_results: 634 | findings.append("WARNING: Outdated SSL protocols detected") 635 | if "TLSv1.0" in recon_results: 636 | findings.append("WARNING: TLS 1.0 is enabled") 637 | 638 | # 5. Analyze DNS 639 | if "DNS Information" in recon_results: 640 | if "AXFR" in recon_results: 641 | findings.append("WARNING: DNS Zone Transfer possible") 642 | 643 | # 6. Analyze web content 644 | if "Content Discovery" in recon_results: 645 | sensitive_files = re.findall(r'([\w\-\/]+\.(php|asp|aspx|jsp|config|env|git))', recon_results) 646 | if sensitive_files: 647 | findings.append(f"Found {len(sensitive_files)} potentially sensitive files") 648 | 649 | # 7. 
Analyze GitHub findings 650 | if "GitHub Secrets" in recon_results: 651 | if any(x in recon_results.lower() for x in ['password', 'secret', 'key', 'token']): 652 | findings.append("WARNING: Potential secrets found in GitHub repositories") 653 | 654 | except Exception as e: 655 | findings.append(f"Analysis error: {str(e)}") 656 | 657 | return "\n\n".join(findings) 658 | 659 | @mcp.tool() 660 | async def advanced_full_scan(target: str, options: dict = None) -> str: 661 | """Perform comprehensive security assessment with advanced options. 662 | 663 | Args: 664 | target: Target domain/IP/application 665 | options: Scan configuration options 666 | """ 667 | if options is None: 668 | options = { 669 | "recon": True, 670 | "directory": True, 671 | "api": True, 672 | "xss": True, 673 | "sqli": True, 674 | "ssrf": True, 675 | "report": True 676 | } 677 | 678 | results = {} 679 | scan_tasks = [] 680 | 681 | # 1. Reconnaissance 682 | if options.get("recon", True): 683 | recon_results = await advanced_recon(target) 684 | analysis = await analyze_recon_data(target, recon_results) 685 | results["recon"] = { 686 | "raw_results": recon_results, 687 | "analysis": analysis 688 | } 689 | 690 | # 2. Directory Scanning 691 | if options.get("directory", True): 692 | scan_tasks.append(advanced_directory_scan(f"https://{target}")) 693 | 694 | # Continue with other scans... 
695 |     if options.get("api", True):
696 |         scan_tasks.append(advanced_api_scan(f"https://{target}"))
697 | 
698 |     if options.get("xss", True):
699 |         scan_tasks.append(advanced_xss_scan(f"https://{target}"))
700 | 
701 |     if options.get("sqli", True):
702 |         scan_tasks.append(advanced_ssrf_scan(f"https://{target}")) if False else scan_tasks.append(advanced_sqli_scan(f"https://{target}"))
703 | 
704 |     if options.get("ssrf", True):
705 |         scan_tasks.append(advanced_ssrf_scan(f"https://{target}"))
706 | 
707 |     # Run remaining tasks concurrently
708 |     scan_results = await asyncio.gather(*scan_tasks, return_exceptions=True)
709 | 
710 |     # Map results back to the scans that were actually queued
711 |     queued = [n for n in ["directory", "api", "xss", "sqli", "ssrf"] if options.get(n, True)]
712 |     for task_name, task_result in zip(queued, scan_results):
713 |         results[task_name] = str(task_result) if isinstance(task_result, Exception) else task_result
714 | 
715 |     # Generate report
716 |     if options.get("report", True):
717 |         report_type = options.get("report_type", "html")
718 |         await generate_report({"target": target, "results": results}, report_type)
719 | 
720 |     return json.dumps(results, indent=2)
721 | 
722 | @mcp.tool()
723 | async def generate_report(scan_results: dict, report_type: str = "html") -> str:
724 |     """Generate a comprehensive security report.
725 | 
726 |     Args:
727 |         scan_results: Dictionary containing all scan results
728 |         report_type: Output format (html, pdf, json)
729 |     """
730 |     timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
731 |     report_file = f"{CONFIG['reporting']['output_dir']}/report_{timestamp}.{report_type}"
732 | 
733 |     try:
734 |         if report_type == "html":
735 |             # Use template to generate HTML report
736 |             with open(f"{CONFIG['reporting']['template_dir']}/report.html") as f:
737 |                 template = f.read()
738 | 
739 |             # Replace placeholders with results
740 |             report_content = template.replace("{{RESULTS}}", json.dumps(scan_results, indent=2))
741 | 
742 |             with open(report_file, "w") as f:
743 |                 f.write(report_content)
744 | 
745 |         elif report_type == "pdf":
746 |             import pdfkit  # optional dependency; build the HTML content, then convert it to PDF
747 |             report_content = open(f"{CONFIG['reporting']['template_dir']}/report.html").read().replace("{{RESULTS}}", json.dumps(scan_results, indent=2))
748 |             pdfkit.from_string(report_content, report_file)
749 | 
750 |         elif report_type == "json":
751 |             with open(report_file, "w") as f:
752 |                 json.dump(scan_results, f, indent=2)
753 | 
754 |         return f"Report generated: {report_file}"
755 | 
756 |     except Exception as e:
757 |         return f"Report generation error: {str(e)}"
758 | 
759 | @mcp.tool()
760 | async def send_http_request(url: str, method: str = "GET", headers: dict = None, data: str = None,
761 |                             verify_ssl: bool = False, timeout: int = 30) -> str:
762 |     """Send an HTTP request to a URL and read the response with headers.
763 | 
764 |     Args:
765 |         url: Target URL to send request to
766 |         method: HTTP method (GET, POST, PUT, DELETE, etc.)
767 |         headers: Dictionary of HTTP headers to include
768 |         data: Request body data
769 |         verify_ssl: Whether to verify SSL certificates
770 |         timeout: Request timeout in seconds
771 | 
772 |     Returns:
773 |         String containing response headers and body
774 |     """
775 |     try:
776 |         result = []
777 |         result.append(f"=== Request to {url} ===")
778 | 
779 |         async with httpx.AsyncClient(verify=verify_ssl, timeout=float(timeout)) as client:
780 |             if method.upper() == "GET":
781 |                 response = await client.get(url, headers=headers)
782 |             elif method.upper() == "POST":
783 |                 response = await client.post(url, headers=headers, content=data)
784 |             elif method.upper() == "PUT":
785 |                 response = await client.put(url, headers=headers, content=data)
786 |             elif method.upper() == "DELETE":
787 |                 response = await client.delete(url, headers=headers)
788 |             elif method.upper() == "HEAD":
789 |                 response = await client.head(url, headers=headers)
790 |             elif method.upper() == "OPTIONS":
791 |                 response = await client.options(url, headers=headers)
792 |             elif method.upper() == "PATCH":
793 |                 response = await client.patch(url, headers=headers, content=data)
794 |             else:
795 |                 return f"Unsupported HTTP method: {method}"
796 | 
797 |         # Add response information
798 |         result.append(f"Status Code: {response.status_code}")
799 | 
800 |         # Add headers
801 |         result.append("\n=== Response Headers ===")
802 |         for header, value in response.headers.items():
803 |             result.append(f"{header}: {value}")
804 | 
805 |         # Add response body
806 |         result.append("\n=== Response Body ===")
807 |         if 'application/json' in response.headers.get('content-type', ''):
808 |             try:
809 |                 formatted_json = json.dumps(response.json(), indent=2)
810 |                 result.append(formatted_json)
811 |             except ValueError:  # body was not valid JSON
812 |                 result.append(response.text)
813 |         else:
814 |             result.append(response.text)
815 | 
816 |         return "\n".join(result)
817 |     except Exception as e:
818 |         return f"Error sending HTTP request: {str(e)}"
819 | 
820 | if __name__ == "__main__":
821 |     # Create necessary directories
822 |     Path(CONFIG["reporting"]["output_dir"]).mkdir(parents=True, exist_ok=True)
823 | 
824 |     # Initialize and run the server with stdio transport
825 |     mcp.run(transport='stdio')
826 | 
827 | 
828 | 
```
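
The port-triage logic in `analyze_recon_data` can be exercised on its own. This is a minimal sketch; the nmap output fragment is a hypothetical sample, not real scan data:

```python
import re

# Hypothetical nmap output fragment (for illustration only)
sample = """\
21/tcp   open  ftp
80/tcp   open  http
443/tcp  open  https
3389/tcp open  ms-wbt-server
"""

# Same extraction and triage pattern used by analyze_recon_data
open_ports = re.findall(r'(\d+)/tcp\s+open', sample)
dangerous = [p for p in open_ports if p in ['21', '23', '3389', '445', '135']]

print(open_ports)  # ['21', '80', '443', '3389']
print(dangerous)   # ['21', '3389']
```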
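The concurrency bookkeeping in `advanced_full_scan` amounts to queueing only the enabled coroutines, gathering them with `return_exceptions=True`, and zipping names with results so a disabled scan cannot shift the mapping. A self-contained sketch, where `fake_scan` is a hypothetical stand-in for the `advanced_*_scan` tools:

```python
import asyncio

async def fake_scan(name: str) -> str:
    # Hypothetical stand-in for an advanced_*_scan coroutine
    if name == "xss":
        raise RuntimeError("scanner crashed")
    return f"{name} ok"

async def run_scans(options: dict) -> dict:
    # Queue only the scans that are enabled, in a fixed order
    queued = [n for n in ["directory", "api", "xss", "sqli", "ssrf"] if options.get(n, True)]
    scan_results = await asyncio.gather(*(fake_scan(n) for n in queued), return_exceptions=True)
    # Zip names with results, and stringify exceptions so the dict stays JSON-serializable
    return {n: str(r) if isinstance(r, Exception) else r
            for n, r in zip(queued, scan_results)}

results = asyncio.run(run_scans({"api": False, "ssrf": False}))
print(results)  # {'directory': 'directory ok', 'xss': 'scanner crashed', 'sqli': 'sqli ok'}
```

Note that without the zip, a single disabled scan would silently attribute every later result to the wrong tool.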