This is page 1 of 2. Use http://codebase.md/ujjalcal/mcp?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .gitignore
├── fast_mcp_server.py
├── llms-full.txt
├── mcp_client.py
├── ne04j_mcp_server.py
├── README.md
└── requirements.txt
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | node_modules
2 | package-lock.json
3 | package.json
4 | ujjal.json
5 | .env
6 | 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Python SDK
  2 | 
  3 |     which python
  4 |     python3 -m venv myenv
  5 |     source myenv/Scripts/activate   # Windows/Git Bash; on macOS/Linux: source myenv/bin/activate
  6 |     pip install -r requirements.txt
  7 |     python fast_mcp_server.py
  8 | 
  9 | 
 10 | # Usage
 11 | ## To run the client
 12 | 
 13 | Create a `.env` file in the project root containing your OpenAI API key:
 14 | 
 15 |     OPENAI_API_KEY=
 16 | 
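Once the server is up (next section), the client can be run with the argparse subcommands defined in `mcp_client.py`; `MCP_SERVER_URL` defaults to `http://localhost:8000`:

    python mcp_client.py schema
    python mcp_client.py prompts --list
    python mcp_client.py interactive
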
 17 | ## To run the server
 18 |     
 19 |     uvicorn ne04j_mcp_server:app --host 0.0.0.0 --port 8000
 20 | 
 21 | # Data Prep
 22 | 
 23 | ## Insert Node:
 24 |     CREATE (p1:Person {name: "Tom Hanks", birthYear: 1956})
 25 |     CREATE (p2:Person {name: "Kevin Bacon", birthYear: 1958})
 26 |     CREATE (m1:Movie {title: "Forrest Gump", releaseYear: 1994})
 27 |     CREATE (m2:Movie {title: "Apollo 13", releaseYear: 1995})
 28 | 
 29 | ## Insert Relationship
 30 |     MATCH (p:Person {name: "Tom Hanks"}), (m:Movie {title: "Forrest Gump"})
 31 |     CREATE (p)-[:ACTED_IN]->(m)
 32 | 
 33 |     MATCH (p:Person {name: "Tom Hanks"}), (m:Movie {title: "Apollo 13"})
 34 |     CREATE (p)-[:ACTED_IN]->(m)
 35 | 
 36 |     MATCH (p1:Person {name: "Tom Hanks"}), (p2:Person {name: "Kevin Bacon"})
 37 |     CREATE (p1)-[:FRIENDS_WITH]->(p2)
 38 | 
 39 | ## Insert properties
 40 |     MATCH (p:Person {name: "Tom Hanks"})
 41 |     SET p.oscarsWon = 2
 42 | 
 43 |     MATCH (m:Movie {title: "Forrest Gump"})
 44 |     SET m.genre = "Drama"
 45 | 
 46 |     MATCH (p:Person {name: "Tom Hanks"})-[r:ACTED_IN]->(m:Movie {title: "Forrest Gump"})
 47 |     SET r.role = "Forrest Gump"
 48 | 
 49 | ## Insert Complex Structure
 50 |     // Create a community and then match persons to add them to the community
 51 |     CREATE (c1:Community {name: "Hollywood Stars"})
 52 |     WITH c1
 53 |     MATCH (p:Person)
 54 |     WHERE p.name IN ["Tom Hanks", "Kevin Bacon"]
 55 |     CREATE (p)-[:MEMBER_OF]->(c1)
 56 |    
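## Verify the data
A quick sanity check against the nodes and relationships created above:

    MATCH (p:Person)-[r:ACTED_IN]->(m:Movie)
    RETURN p.name, r.role, m.title

    MATCH (p:Person)-[:MEMBER_OF]->(c:Community)
    RETURN c.name, collect(p.name) AS members
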
 57 | # Other things - boilerplate from the upstream MCP Python SDK README - ignore for now.
 58 | 
 59 | <div align="center">
 60 | 
 61 | <strong>Python implementation of the Model Context Protocol (MCP)</strong>
 62 | 
 63 | [![PyPI][pypi-badge]][pypi-url]
 64 | [![MIT licensed][mit-badge]][mit-url]
 65 | [![Python Version][python-badge]][python-url]
 66 | [![Documentation][docs-badge]][docs-url]
 67 | [![Specification][spec-badge]][spec-url]
 68 | [![GitHub Discussions][discussions-badge]][discussions-url]
 69 | 
 70 | </div>
 71 | 
 72 | <!-- omit in toc -->
 73 | ## Table of Contents
 74 | 
 75 | - [Overview](#overview)
 76 | - [Installation](#installation)
 77 | - [Quickstart](#quickstart)
 78 | - [What is MCP?](#what-is-mcp)
 79 | - [Core Concepts](#core-concepts)
 80 |   - [Server](#server)
 81 |   - [Resources](#resources)
 82 |   - [Tools](#tools)
 83 |   - [Prompts](#prompts)
 84 |   - [Images](#images)
 85 |   - [Context](#context)
 86 | - [Running Your Server](#running-your-server)
 87 |   - [Development Mode](#development-mode)
 88 |   - [Claude Desktop Integration](#claude-desktop-integration)
 89 |   - [Direct Execution](#direct-execution)
 90 | - [Examples](#examples)
 91 |   - [Echo Server](#echo-server)
 92 |   - [SQLite Explorer](#sqlite-explorer)
 93 | - [Advanced Usage](#advanced-usage)
 94 |   - [Low-Level Server](#low-level-server)
 95 |   - [Writing MCP Clients](#writing-mcp-clients)
 96 |   - [MCP Primitives](#mcp-primitives)
 97 |   - [Server Capabilities](#server-capabilities)
 98 | - [Documentation](#documentation)
 99 | - [Contributing](#contributing)
100 | - [License](#license)
101 | 
102 | [pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
103 | [pypi-url]: https://pypi.org/project/mcp/
104 | [mit-badge]: https://img.shields.io/pypi/l/mcp.svg
105 | [mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
106 | [python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
107 | [python-url]: https://www.python.org/downloads/
108 | [docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
109 | [docs-url]: https://modelcontextprotocol.io
110 | [spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
111 | [spec-url]: https://spec.modelcontextprotocol.io
112 | [discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
113 | [discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions
114 | 
115 | ## Overview
116 | 
117 | The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
118 | 
119 | - Build MCP clients that can connect to any MCP server
120 | - Create MCP servers that expose resources, prompts and tools
121 | - Use standard transports like stdio and SSE
122 | - Handle all MCP protocol messages and lifecycle events
123 | 
124 | ## Installation
125 | 
126 | We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects:
127 | 
128 | ```bash
129 | uv add "mcp[cli]"
130 | ```
131 | 
132 | Alternatively:
133 | ```bash
134 | pip install mcp
135 | ```
136 | 
137 | ## Quickstart
138 | 
139 | Let's create a simple MCP server that exposes a calculator tool and some data:
140 | 
141 | ```python
142 | # server.py
143 | from mcp.server.fastmcp import FastMCP
144 | 
145 | # Create an MCP server
146 | mcp = FastMCP("Demo")
147 | 
148 | # Add an addition tool
149 | @mcp.tool()
150 | def add(a: int, b: int) -> int:
151 |     """Add two numbers"""
152 |     return a + b
153 | 
154 | # Add a dynamic greeting resource
155 | @mcp.resource("greeting://{name}")
156 | def get_greeting(name: str) -> str:
157 |     """Get a personalized greeting"""
158 |     return f"Hello, {name}!"
159 | ```
160 | 
161 | You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
162 | ```bash
163 | mcp install server.py
164 | ```
165 | 
166 | Alternatively, you can test it with the MCP Inspector:
167 | ```bash
168 | mcp dev server.py
169 | ```
170 | 
171 | ## What is MCP?
172 | 
173 | The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
174 | 
175 | - Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
176 | - Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
177 | - Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
178 | - And more!
179 | 
180 | ## Core Concepts
181 | 
182 | ### Server
183 | 
184 | The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
185 | 
186 | ```python
187 | # Add lifespan support for startup/shutdown with strong typing
188 | from contextlib import asynccontextmanager
189 | from dataclasses import dataclass
190 | from typing import AsyncIterator
191 | from mcp.server.fastmcp import Context, FastMCP
192 | # Create a named server
193 | mcp = FastMCP("My App")
194 | 
195 | # Specify dependencies for deployment and development
196 | mcp = FastMCP("My App", dependencies=["pandas", "numpy"])
197 | 
198 | @dataclass
199 | class AppContext:
200 |     db: Database  # Replace with your actual DB type
201 | 
202 | @asynccontextmanager
203 | async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
204 |     """Manage application lifecycle with type-safe context"""
205 |     try:
206 |         # Initialize on startup
207 |         db = await Database.connect()
208 |         yield AppContext(db=db)
209 |     finally:
210 |         # Cleanup on shutdown
211 |         await db.disconnect()
212 | 
213 | # Pass lifespan to server
214 | mcp = FastMCP("My App", lifespan=app_lifespan)
215 | 
216 | # Access type-safe lifespan context in tools
217 | @mcp.tool()
218 | def query_db(ctx: Context) -> str:
219 |     """Tool that uses initialized resources"""
220 |     db = ctx.request_context.lifespan_context.db
221 |     return db.query()
222 | ```
223 | 
224 | ### Resources
225 | 
226 | Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:
227 | 
228 | ```python
229 | @mcp.resource("config://app")
230 | def get_config() -> str:
231 |     """Static configuration data"""
232 |     return "App configuration here"
233 | 
234 | @mcp.resource("users://{user_id}/profile")
235 | def get_user_profile(user_id: str) -> str:
236 |     """Dynamic user data"""
237 |     return f"Profile data for user {user_id}"
238 | ```
239 | 
240 | ### Tools
241 | 
242 | Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
243 | 
244 | ```python
245 | @mcp.tool()
246 | def calculate_bmi(weight_kg: float, height_m: float) -> float:
247 |     """Calculate BMI given weight in kg and height in meters"""
248 |     return weight_kg / (height_m ** 2)
249 | 
250 | @mcp.tool()
251 | async def fetch_weather(city: str) -> str:
252 |     """Fetch current weather for a city"""
253 |     async with httpx.AsyncClient() as client:
254 |         response = await client.get(f"https://api.weather.com/{city}")
255 |         return response.text
256 | ```
257 | 
258 | ### Prompts
259 | 
260 | Prompts are reusable templates that help LLMs interact with your server effectively:
261 | 
262 | ```python
263 | @mcp.prompt()
264 | def review_code(code: str) -> str:
265 |     return f"Please review this code:\n\n{code}"
266 | 
267 | @mcp.prompt()
268 | def debug_error(error: str) -> list[Message]:
269 |     return [
270 |         UserMessage("I'm seeing this error:"),
271 |         UserMessage(error),
272 |         AssistantMessage("I'll help debug that. What have you tried so far?")
273 |     ]
274 | ```
275 | 
276 | ### Images
277 | 
278 | FastMCP provides an `Image` class that automatically handles image data:
279 | 
280 | ```python
281 | from mcp.server.fastmcp import FastMCP, Image
282 | from PIL import Image as PILImage
283 | 
284 | @mcp.tool()
285 | def create_thumbnail(image_path: str) -> Image:
286 |     """Create a thumbnail from an image"""
287 |     img = PILImage.open(image_path)
288 |     img.thumbnail((100, 100))
289 |     return Image(data=img.tobytes(), format="png")
290 | ```
291 | 
292 | ### Context
293 | 
294 | The Context object gives your tools and resources access to MCP capabilities:
295 | 
296 | ```python
297 | from mcp.server.fastmcp import FastMCP, Context
298 | 
299 | @mcp.tool()
300 | async def long_task(files: list[str], ctx: Context) -> str:
301 |     """Process multiple files with progress tracking"""
302 |     for i, file in enumerate(files):
303 |         ctx.info(f"Processing {file}")
304 |         await ctx.report_progress(i, len(files))
305 |         data, mime_type = await ctx.read_resource(f"file://{file}")
306 |     return "Processing complete"
307 | ```
308 | 
309 | ## Running Your Server
310 | 
311 | ### Development Mode
312 | 
313 | The fastest way to test and debug your server is with the MCP Inspector:
314 | 
315 | ```bash
316 | mcp dev server.py
317 | 
318 | # Add dependencies
319 | mcp dev server.py --with pandas --with numpy
320 | 
321 | # Mount local code
322 | mcp dev server.py --with-editable .
323 | ```
324 | 
325 | ### Claude Desktop Integration
326 | 
327 | Once your server is ready, install it in Claude Desktop:
328 | 
329 | ```bash
330 | mcp install server.py
331 | 
332 | # Custom name
333 | mcp install server.py --name "My Analytics Server"
334 | 
335 | # Environment variables
336 | mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
337 | mcp install server.py -f .env
338 | ```
339 | 
340 | ### Direct Execution
341 | 
342 | For advanced scenarios like custom deployments:
343 | 
344 | ```python
345 | from mcp.server.fastmcp import FastMCP
346 | 
347 | mcp = FastMCP("My App")
348 | 
349 | if __name__ == "__main__":
350 |     mcp.run()
351 | ```
352 | 
353 | Run it with:
354 | ```bash
355 | python server.py
356 | # or
357 | mcp run server.py
358 | ```
359 | 
360 | ## Examples
361 | 
362 | ### Echo Server
363 | 
364 | A simple server demonstrating resources, tools, and prompts:
365 | 
366 | ```python
367 | from mcp.server.fastmcp import FastMCP
368 | 
369 | mcp = FastMCP("Echo")
370 | 
371 | @mcp.resource("echo://{message}")
372 | def echo_resource(message: str) -> str:
373 |     """Echo a message as a resource"""
374 |     return f"Resource echo: {message}"
375 | 
376 | @mcp.tool()
377 | def echo_tool(message: str) -> str:
378 |     """Echo a message as a tool"""
379 |     return f"Tool echo: {message}"
380 | 
381 | @mcp.prompt()
382 | def echo_prompt(message: str) -> str:
383 |     """Create an echo prompt"""
384 |     return f"Please process this message: {message}"
385 | ```
386 | 
387 | ### SQLite Explorer
388 | 
389 | A more complex example showing database integration:
390 | 
391 | ```python
392 | from mcp.server.fastmcp import FastMCP
393 | import sqlite3
394 | 
395 | mcp = FastMCP("SQLite Explorer")
396 | 
397 | @mcp.resource("schema://main")
398 | def get_schema() -> str:
399 |     """Provide the database schema as a resource"""
400 |     conn = sqlite3.connect("database.db")
401 |     schema = conn.execute(
402 |         "SELECT sql FROM sqlite_master WHERE type='table'"
403 |     ).fetchall()
404 |     return "\n".join(sql[0] for sql in schema if sql[0])
405 | 
406 | @mcp.tool()
407 | def query_data(sql: str) -> str:
408 |     """Execute SQL queries safely"""
409 |     conn = sqlite3.connect("database.db")
410 |     try:
411 |         result = conn.execute(sql).fetchall()
412 |         return "\n".join(str(row) for row in result)
413 |     except Exception as e:
414 |         return f"Error: {str(e)}"
415 | ```
416 | 
417 | ## Advanced Usage
418 | 
419 | ### Low-Level Server
420 | 
421 | For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:
422 | 
423 | ```python
424 | from contextlib import asynccontextmanager
425 | from typing import AsyncIterator
426 | 
427 | @asynccontextmanager
428 | async def server_lifespan(server: Server) -> AsyncIterator[dict]:
429 |     """Manage server startup and shutdown lifecycle."""
430 |     try:
431 |         # Initialize resources on startup
432 |         await db.connect()
433 |         yield {"db": db}
434 |     finally:
435 |         # Clean up on shutdown
436 |         await db.disconnect()
437 | 
438 | # Pass lifespan to server
439 | server = Server("example-server", lifespan=server_lifespan)
440 | 
441 | # Access lifespan context in handlers
442 | @server.call_tool()
443 | async def query_db(name: str, arguments: dict) -> list:
444 |     ctx = server.request_context
445 |     db = ctx.lifespan_context["db"]
446 |     return await db.query(arguments["query"])
447 | ```
448 | 
449 | The lifespan API provides:
450 | - A way to initialize resources when the server starts and clean them up when it stops
451 | - Access to initialized resources through the request context in handlers
452 | - Type-safe context passing between lifespan and request handlers
453 | 
454 | ```python
455 | from mcp.server.lowlevel import Server, NotificationOptions
456 | from mcp.server.models import InitializationOptions
457 | import mcp.server.stdio
458 | import mcp.types as types
459 | 
460 | # Create a server instance
461 | server = Server("example-server")
462 | 
463 | @server.list_prompts()
464 | async def handle_list_prompts() -> list[types.Prompt]:
465 |     return [
466 |         types.Prompt(
467 |             name="example-prompt",
468 |             description="An example prompt template",
469 |             arguments=[
470 |                 types.PromptArgument(
471 |                     name="arg1",
472 |                     description="Example argument",
473 |                     required=True
474 |                 )
475 |             ]
476 |         )
477 |     ]
478 | 
479 | @server.get_prompt()
480 | async def handle_get_prompt(
481 |     name: str,
482 |     arguments: dict[str, str] | None
483 | ) -> types.GetPromptResult:
484 |     if name != "example-prompt":
485 |         raise ValueError(f"Unknown prompt: {name}")
486 | 
487 |     return types.GetPromptResult(
488 |         description="Example prompt",
489 |         messages=[
490 |             types.PromptMessage(
491 |                 role="user",
492 |                 content=types.TextContent(
493 |                     type="text",
494 |                     text="Example prompt text"
495 |                 )
496 |             )
497 |         ]
498 |     )
499 | 
500 | async def run():
501 |     async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
502 |         await server.run(
503 |             read_stream,
504 |             write_stream,
505 |             InitializationOptions(
506 |                 server_name="example",
507 |                 server_version="0.1.0",
508 |                 capabilities=server.get_capabilities(
509 |                     notification_options=NotificationOptions(),
510 |                     experimental_capabilities={},
511 |                 )
512 |             )
513 |         )
514 | 
515 | if __name__ == "__main__":
516 |     import asyncio
517 |     asyncio.run(run())
518 | ```
519 | 
520 | ### Writing MCP Clients
521 | 
522 | The SDK provides a high-level client interface for connecting to MCP servers:
523 | 
524 | ```python
525 | from mcp import ClientSession, StdioServerParameters, types
526 | from mcp.client.stdio import stdio_client
527 | 
528 | # Create server parameters for stdio connection
529 | server_params = StdioServerParameters(
530 |     command="python", # Executable
531 |     args=["example_server.py"], # Optional command line arguments
532 |     env=None # Optional environment variables
533 | )
534 | 
535 | # Optional: create a sampling callback
536 | async def handle_sampling_message(message: types.CreateMessageRequestParams) -> types.CreateMessageResult:
537 |     return types.CreateMessageResult(
538 |         role="assistant",
539 |         content=types.TextContent(
540 |             type="text",
541 |             text="Hello, world! from model",
542 |         ),
543 |         model="gpt-3.5-turbo",
544 |         stopReason="endTurn",
545 |     )
546 | 
547 | async def run():
548 |     async with stdio_client(server_params) as (read, write):
549 |         async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:
550 |             # Initialize the connection
551 |             await session.initialize()
552 | 
553 |             # List available prompts
554 |             prompts = await session.list_prompts()
555 | 
556 |             # Get a prompt
557 |             prompt = await session.get_prompt("example-prompt", arguments={"arg1": "value"})
558 | 
559 |             # List available resources
560 |             resources = await session.list_resources()
561 | 
562 |             # List available tools
563 |             tools = await session.list_tools()
564 | 
565 |             # Read a resource
566 |             content, mime_type = await session.read_resource("file://some/path")
567 | 
568 |             # Call a tool
569 |             result = await session.call_tool("tool-name", arguments={"arg1": "value"})
570 | 
571 | if __name__ == "__main__":
572 |     import asyncio
573 |     asyncio.run(run())
574 | ```
575 | 
576 | ### MCP Primitives
577 | 
578 | The MCP protocol defines three core primitives that servers can implement:
579 | 
580 | | Primitive | Control               | Description                                         | Example Use                  |
581 | |-----------|-----------------------|-----------------------------------------------------|------------------------------|
582 | | Prompts   | User-controlled       | Interactive templates invoked by user choice        | Slash commands, menu options |
583 | | Resources | Application-controlled| Contextual data managed by the client application   | File contents, API responses |
584 | | Tools     | Model-controlled      | Functions exposed to the LLM to take actions        | API calls, data updates      |
585 | 
586 | ### Server Capabilities
587 | 
588 | MCP servers declare capabilities during initialization:
589 | 
590 | | Capability  | Feature Flag                 | Description                        |
591 | |-------------|------------------------------|------------------------------------|
592 | | `prompts`   | `listChanged`                | Prompt template management         |
593 | | `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates      |
594 | | `tools`     | `listChanged`                | Tool discovery and execution       |
595 | | `logging`   | -                            | Server logging configuration       |
596 | | `completion`| -                            | Argument completion suggestions    |
597 | 
598 | ## Documentation
599 | 
600 | - [Model Context Protocol documentation](https://modelcontextprotocol.io)
601 | - [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
602 | - [Officially supported servers](https://github.com/modelcontextprotocol/servers)
603 | 
604 | ## Contributing
605 | 
606 | We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.
607 | 
608 | ## License
609 | 
610 | This project is licensed under the MIT License - see the LICENSE file for details.
611 | 
```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
 1 | neo4j
 2 | pydantic
 3 | mcp
 4 | fastapi
 5 | uvicorn
 6 | requests
 7 | openai
 8 | rich
 9 | python-dotenv
10 | 
```

--------------------------------------------------------------------------------
/fast_mcp_server.py:
--------------------------------------------------------------------------------

```python
  1 | from mcp.server.fastmcp import FastMCP, Context
  2 | from neo4j import GraphDatabase
  3 | from pydantic import BaseModel
  4 | from typing import List, Dict, Any
  5 | import logging
  6 | 
  7 | logging.basicConfig(level=logging.DEBUG)
  8 | 
  9 | # Initialize FastMCP server
 10 | mcp = FastMCP("Neo4j MCP Server")
 11 | 
 12 | # Neo4j connection details
 13 | NEO4J_URI = "neo4j+s://1e30f4c4.databases.neo4j.io"
 14 | NEO4J_USER = "neo4j"
 15 | NEO4J_PASSWORD = "pDMkrbwg1L__-3BHh46r-MD9-z6Frm8wnR__ZzFiVmM"
 16 | 
 17 | # Neo4j driver connection
 18 | def get_db():
 19 |     logging.debug("Establishing Neo4j database connection")
 20 |     return GraphDatabase.driver(NEO4J_URI, auth=(NEO4J_USER, NEO4J_PASSWORD))
 21 | 
 22 | # Models
 23 | class NodeLabel(BaseModel):
 24 |     label: str
 25 |     count: int
 26 |     properties: List[str]
 27 | 
 28 | class RelationshipType(BaseModel):
 29 |     type: str
 30 |     count: int
 31 |     properties: List[str]
 32 |     source_labels: List[str]
 33 |     target_labels: List[str]
 34 | 
 35 | class QueryRequest(BaseModel):
 36 |     cypher: str
 37 |     parameters: Dict[str, Any] = {}
 38 | 
 39 | # Function to fetch node labels
 40 | def fetch_node_labels(session) -> List[NodeLabel]:
 41 |     logging.debug("Fetching node labels")
 42 |     result = session.run("""
 43 |     CALL apoc.meta.nodeTypeProperties()
 44 |     YIELD nodeType, nodeLabels, propertyName
 45 |     WITH nodeLabels, collect(propertyName) AS properties
 46 |     MATCH (n) WHERE ALL(label IN nodeLabels WHERE label IN labels(n))
 47 |     WITH nodeLabels, properties, count(n) AS nodeCount
 48 |     RETURN nodeLabels, properties, nodeCount
 49 |     ORDER BY nodeCount DESC
 50 |     """)
 51 |     
 52 |     return [NodeLabel(label=record["nodeLabels"][0] if record["nodeLabels"] else "Unknown",
 53 |                       count=record["nodeCount"],
 54 |                       properties=record["properties"]) for record in result]
 55 | 
 56 | # Function to fetch relationship types
 57 | def fetch_relationship_types(session) -> List[RelationshipType]:
 58 |     logging.debug("Fetching relationship types")
 59 |     result = session.run("""
 60 |     CALL apoc.meta.relTypeProperties()
 61 |     YIELD relType, sourceNodeLabels, targetNodeLabels, propertyName
 62 |     WITH relType, sourceNodeLabels, targetNodeLabels, collect(propertyName) AS properties
 63 |     MATCH ()-[r]->() WHERE type(r) = relType
 64 |     WITH relType, sourceNodeLabels, targetNodeLabels, properties, count(r) AS relCount
 65 |     RETURN relType, sourceNodeLabels, targetNodeLabels, properties, relCount
 66 |     ORDER BY relCount DESC
 67 |     """)
 68 |     
 69 |     return [RelationshipType(type=record["relType"],
 70 |                              count=record["relCount"],
 71 |                              properties=record["properties"],
 72 |                              source_labels=record["sourceNodeLabels"],
 73 |                              target_labels=record["targetNodeLabels"]) for record in result]
 74 | 
 75 | # Define a resource to get the database schema
 76 | @mcp.resource("schema://database")
 77 | def get_schema() -> Dict[str, Any]:
 78 |     logging.debug("get schemas...")
 79 |     driver = get_db()
 80 |     with driver.session() as session:
 81 |         nodes = fetch_node_labels(session)
 82 |         relationships = fetch_relationship_types(session)
 83 |         return {"nodes": nodes, "relationships": relationships}
 84 | 
 85 | # Define a tool to execute a query
 86 | @mcp.tool()
 87 | def execute_query(query: QueryRequest) -> Dict[str, Any]:
 88 |     logging.debug("execute query...")
 89 |     driver = get_db()
 90 |     with driver.session() as session:
 91 |         result = session.run(query.cypher, query.parameters)
 92 |         records = [record.data() for record in result]
 93 |         summary = result.consume()
 94 |         metadata = {
 95 |             "nodes_created": summary.counters.nodes_created,
 96 |             "nodes_deleted": summary.counters.nodes_deleted,
 97 |             "relationships_created": summary.counters.relationships_created,
 98 |             "relationships_deleted": summary.counters.relationships_deleted,
 99 |             "properties_set": summary.counters.properties_set,
100 |             "execution_time_ms": summary.result_available_after
101 |         }
102 |         return {"results": records, "metadata": metadata}
103 | 
104 | # Define prompts for analysis
105 | @mcp.prompt()
106 | def relationship_analysis_prompt(node_type_1: str, node_type_2: str) -> str:
107 |     logging.debug("relationship analysis prompt...")
108 |     return f"""
109 |     Given the Neo4j database with {node_type_1} and {node_type_2} nodes, 
110 |     I want to understand the relationships between them.
111 | 
112 |     Please help me:
113 |     1. Find the most common relationship types between these nodes
114 |     2. Identify the distribution of relationship properties
115 |     3. Discover any interesting patterns or outliers
116 | 
117 |     Sample Cypher query to start with:
118 |     MATCH (a:{node_type_1})-[r]->(b:{node_type_2})
119 |     RETURN type(r) AS relationship_type, count(r) AS count
120 |     ORDER BY count DESC
121 |     LIMIT 10
122 |     """
123 | 
124 | @mcp.prompt()
125 | def path_discovery_prompt(start_node_label: str, start_node_property: str, start_node_value: str, end_node_label: str, end_node_property: str, end_node_value: str, max_depth: int) -> str:
126 |     logging.debug("path discovery prompt...")
127 |     return f"""
128 |     I'm looking to understand how {start_node_label} nodes with property {start_node_property}="{start_node_value}" 
129 |     connect to {end_node_label} nodes with property {end_node_property}="{end_node_value}".
130 | 
131 |     Please help me:
132 |     1. Find all possible paths between these nodes
133 |     2. Identify the shortest path
134 |     3. Analyze what nodes and relationships appear most frequently in these paths
135 | 
136 |     Sample Cypher query to start with:
137 |     MATCH path = (a:{start_node_label} {{
138 |         {start_node_property}: "{start_node_value}"
139 |     }})-[*1..{max_depth}]->(b:{end_node_label} {{
140 |         {end_node_property}: "{end_node_value}"
141 |     }})
142 |     RETURN path LIMIT 10
143 |     """
144 | 
145 | # Run the MCP server
146 | if __name__ == "__main__":
147 |     logging.debug("Starting MCP server")
148 |     mcp.run()
149 |     # mcp.run(host="0.0.0.0")
```
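
A minimal sketch of exercising `fast_mcp_server.py` over stdio, using the `ClientSession`/`stdio_client` API shown in the README above. The resource URI (`schema://database`) and tool name (`execute_query`) come from the decorators in the file; the exact argument shape passed to `call_tool` is an assumption about how FastMCP exposes the `QueryRequest` parameter:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch fast_mcp_server.py as a stdio subprocess
server_params = StdioServerParameters(command="python", args=["fast_mcp_server.py"])

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Read the resource registered as @mcp.resource("schema://database")
            schema = await session.read_resource("schema://database")
            print(schema)

            # Call the execute_query tool with a read-only Cypher statement
            result = await session.call_tool(
                "execute_query",
                arguments={"query": {"cypher": "MATCH (n) RETURN count(n) AS n", "parameters": {}}},
            )
            print(result)

if __name__ == "__main__":
    asyncio.run(main())
```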

--------------------------------------------------------------------------------
/ne04j_mcp_server.py:
--------------------------------------------------------------------------------

```python
  1 | import os
  2 | from typing import Dict, List, Any, Optional
  3 | from fastapi import FastAPI, HTTPException, Depends
  4 | from fastapi.middleware.cors import CORSMiddleware
  5 | from pydantic import BaseModel, Field
  6 | from neo4j import GraphDatabase, Driver
  7 | import json
  8 | from dotenv import load_dotenv
  9 | 
 10 | 
 11 | load_dotenv()  # Ensure this is called before accessing the variables
 12 | 
 13 | NEO4J_URI = "neo4j+s://1e30f4c4.databases.neo4j.io" # os.getenv("NEO4J_URI")
 14 | NEO4J_USER = "neo4j" # os.getenv("NEO4J_USER")
 15 | NEO4J_PASSWORD = "pDMkrbwg1L__-3BHh46r-MD9-z6Frm8wnR__ZzFiVmM" # os.getenv("NEO4J_PASSWORD")
 16 | 
 17 | print(f"NEO4J_URI: {NEO4J_URI}")
 18 | print(f"NEO4J_USER: {NEO4J_USER}")
 19 | print(f"NEO4J_PASSWORD: {NEO4J_PASSWORD}")
 20 | 
 21 | 
 22 | # print(f"NEO4J_URI: {NEO4J_URI}, NEO4J_USER: {NEO4J_USER}, NEO4J_PASSWORD: {NEO4J_PASSWORD}")
 23 | 
 24 | # Initialize FastAPI
 25 | app = FastAPI(title="Neo4j MCP Server", 
 26 |               description="Model Context Protocol (MCP) server for Neo4j databases")
 27 | 
 28 | # Add CORS middleware
 29 | app.add_middleware(
 30 |     CORSMiddleware,
 31 |     allow_origins=["*"],
 32 |     allow_credentials=True,
 33 |     allow_methods=["*"],
 34 |     allow_headers=["*"],
 35 | )
 36 | 
 37 | # Neo4j driver connection
 38 | def get_db() -> Driver:
 39 |     driver = GraphDatabase.driver(NEO4J_URI, auth=(NEO4J_USER, NEO4J_PASSWORD))
 40 |     try:
 41 |         # Test connection
 42 |         driver.verify_connectivity()
 43 |         return driver
 44 |     except Exception as e:
 45 |         raise HTTPException(status_code=500, detail=f"Database connection failed: {str(e)}")
 46 | 
 47 | # Models
 48 | class NodeLabel(BaseModel):
 49 |     label: str
 50 |     count: int
 51 |     properties: List[str]
 52 | 
 53 | class RelationshipType(BaseModel):
 54 |     type: str
 55 |     count: int
 56 |     properties: List[str]
 57 |     source_labels: List[str]
 58 |     target_labels: List[str]
 59 | 
 60 | class DatabaseSchema(BaseModel):
 61 |     nodes: List[NodeLabel]
 62 |     relationships: List[RelationshipType]
 63 | 
 64 | class QueryRequest(BaseModel):
 65 |     cypher: str
 66 |     parameters: Dict[str, Any] = Field(default_factory=dict)
 67 | 
 68 | class QueryResult(BaseModel):
 69 |     results: List[Dict[str, Any]]
 70 |     metadata: Dict[str, Any]
 71 | 
 72 | class PromptTemplate(BaseModel):
 73 |     name: str
 74 |     description: str
 75 |     prompt: str
 76 |     example_parameters: Dict[str, Any] = Field(default_factory=dict)
 77 | 
 78 | # Schema extraction functions
 79 | def get_node_labels(driver):
 80 |     with driver.session() as session:
 81 |         result = session.run("""
 82 |         CALL apoc.meta.nodeTypeProperties()
 83 |         YIELD nodeType, nodeLabels, propertyName
 84 |         WITH nodeLabels, collect(propertyName) AS properties
 85 |         MATCH (n) WHERE ALL(label IN nodeLabels WHERE label IN labels(n))
 86 |         WITH nodeLabels, properties, count(n) AS nodeCount
 87 |         RETURN nodeLabels, properties, nodeCount
 88 |         ORDER BY nodeCount DESC
 89 |         """)
 90 |         
 91 |         node_labels = []
 92 |         for record in result:
 93 |             label = record["nodeLabels"][0] if record["nodeLabels"] else "Unknown"
 94 |             node_labels.append(NodeLabel(
 95 |                 label=label,
 96 |                 count=record["nodeCount"],
 97 |                 properties=record["properties"]
 98 |             ))
 99 |         return node_labels
100 | 
101 | def get_relationship_types(driver):
102 |     with driver.session() as session:
103 |         result = session.run("""
104 |         CALL apoc.meta.relTypeProperties()
105 |         YIELD relType, sourceNodeLabels, targetNodeLabels, propertyName
106 |         WITH relType, sourceNodeLabels, targetNodeLabels, collect(propertyName) AS properties
107 |         MATCH ()-[r]->() WHERE type(r) = relType
108 |         WITH relType, sourceNodeLabels, targetNodeLabels, properties, count(r) AS relCount
109 |         RETURN relType, sourceNodeLabels, targetNodeLabels, properties, relCount
110 |         ORDER BY relCount DESC
111 |         """)
112 |         
113 |         rel_types = []
114 |         for record in result:
115 |             rel_types.append(RelationshipType(
116 |                 type=record["relType"],
117 |                 count=record["relCount"],
118 |                 properties=record["properties"],
119 |                 source_labels=record["sourceNodeLabels"],
120 |                 target_labels=record["targetNodeLabels"]
121 |             ))
122 |         return rel_types
123 | 
124 | # Endpoints
125 | @app.get("/schema", response_model=DatabaseSchema)
126 | def get_schema(driver: Driver = Depends(get_db)):
127 |     """
128 |     Retrieve the complete database schema including node labels and relationship types
129 |     """
130 |     try:
131 |         nodes = get_node_labels(driver)
132 |         relationships = get_relationship_types(driver)
133 |         return DatabaseSchema(nodes=nodes, relationships=relationships)
134 |     except Exception as e:
135 |         raise HTTPException(status_code=500, detail=f"Schema retrieval failed: {str(e)}")
136 | 
137 | @app.post("/query", response_model=QueryResult)
138 | def execute_query(query: QueryRequest, driver: Driver = Depends(get_db)):
139 |     """
140 |     Execute a read-only Cypher query against the database
141 |     """
142 |     # Ensure query is read-only
143 |     lower_query = query.cypher.lower()
144 |     if any(keyword in lower_query for keyword in ["create", "delete", "remove", "set", "merge"]):
145 |         raise HTTPException(status_code=403, detail="Only read-only queries are allowed")
146 |     
147 |     try:
148 |         with driver.session() as session:
149 |             result = session.run(query.cypher, query.parameters)
150 |             records = [record.data() for record in result]
151 |             
152 |             # Get query stats
153 |             summary = result.consume()
154 |             metadata = {
155 |                 "nodes_created": summary.counters.nodes_created,
156 |                 "nodes_deleted": summary.counters.nodes_deleted,
157 |                 "relationships_created": summary.counters.relationships_created,
158 |                 "relationships_deleted": summary.counters.relationships_deleted,
159 |                 "properties_set": summary.counters.properties_set,
160 |                 "execution_time_ms": summary.result_available_after
161 |             }
162 |             
163 |             return QueryResult(results=records, metadata=metadata)
164 |     except Exception as e:
165 |         raise HTTPException(status_code=500, detail=f"Query execution failed: {str(e)}")
166 | 
167 | # Analysis prompts
168 | @app.get("/prompts", response_model=List[PromptTemplate])
169 | def get_analysis_prompts():
170 |     """
171 |     Get a list of predefined prompt templates for common Neo4j data analysis tasks
172 |     """
173 |     prompts = [
174 |         PromptTemplate(
175 |             name="Relationship Analysis",
176 |             description="Analyze relationships between two node types",
177 |             prompt="""
178 |             Given the Neo4j database with {node_type_1} and {node_type_2} nodes, 
179 |             I want to understand the relationships between them.
180 |             
181 |             Please help me:
182 |             1. Find the most common relationship types between these nodes
183 |             2. Identify the distribution of relationship properties
184 |             3. Discover any interesting patterns or outliers
185 |             
186 |             Sample Cypher query to start with:
187 |             ```
188 |             MATCH (a:{node_type_1})-[r]->(b:{node_type_2})
189 |             RETURN type(r) AS relationship_type, count(r) AS count
190 |             ORDER BY count DESC
191 |             LIMIT 10
192 |             ```
193 |             """,
194 |             example_parameters={"node_type_1": "Person", "node_type_2": "Movie"}
195 |         ),
196 |         PromptTemplate(
197 |             name="Path Discovery",
198 |             description="Find paths between nodes of interest",
199 |             prompt="""
200 |             I'm looking to understand how {start_node_label} nodes with property {start_node_property}="{start_node_value}" 
201 |             connect to {end_node_label} nodes with property {end_node_property}="{end_node_value}".
202 |             
203 |             Please help me:
204 |             1. Find all possible paths between these nodes
205 |             2. Identify the shortest path
206 |             3. Analyze what nodes and relationships appear most frequently in these paths
207 |             
208 |             Sample Cypher query to start with:
209 |             ```
210 |             MATCH path = (a:{start_node_label} {{
211 |                 {start_node_property}: "{start_node_value}"
212 |             }})-[*1..{max_depth}]->(b:{end_node_label} {{
213 |                 {end_node_property}: "{end_node_value}"
214 |             }})
215 |             RETURN path LIMIT 10
216 |             ```
217 |             """,
218 |             example_parameters={
219 |                 "start_node_label": "Person", 
220 |                 "start_node_property": "name",
221 |                 "start_node_value": "Tom Hanks",
222 |                 "end_node_label": "Person",
223 |                 "end_node_property": "name",
224 |                 "end_node_value": "Kevin Bacon",
225 |                 "max_depth": 4
226 |             }
227 |         ),
228 |         PromptTemplate(
229 |             name="Property Distribution",
230 |             description="Analyze the distribution of property values",
231 |             prompt="""
232 |             I want to understand the distribution of {property_name} across {node_label} nodes.
233 |             
234 |             Please help me:
235 |             1. Calculate basic statistics (min, max, avg, std)
236 |             2. Identify the most common values and their frequencies
237 |             3. Detect any outliers or unusual patterns
238 |             
239 |             Sample Cypher query to start with:
240 |             ```
241 |             MATCH (n:{node_label})
242 |             WHERE n.{property_name} IS NOT NULL
243 |             RETURN 
244 |                 min(n.{property_name}) AS min_value,
245 |                 max(n.{property_name}) AS max_value,
246 |                 avg(n.{property_name}) AS avg_value,
247 |                 stDev(n.{property_name}) AS std_value
248 |             ```
249 |             
250 |             And for frequency distribution:
251 |             ```
252 |             MATCH (n:{node_label})
253 |             WHERE n.{property_name} IS NOT NULL
254 |             RETURN n.{property_name} AS value, count(n) AS frequency
255 |             ORDER BY frequency DESC
256 |             LIMIT 20
257 |             ```
258 |             """,
259 |             example_parameters={"node_label": "Movie", "property_name": "runtime"}
260 |         ),
261 |         PromptTemplate(
262 |             name="Community Detection",
263 |             description="Detect communities or clusters in the graph",
264 |             prompt="""
265 |             I want to identify communities or clusters within the graph based on {relationship_type} relationships.
266 |             
267 |             Please help me:
268 |             1. Apply graph algorithms to detect communities
269 |             2. Analyze the size and composition of each community
270 |             3. Identify central nodes within each community
271 |             
272 |             Sample Cypher query to start with (requires GDS library):
273 |             ```
274 |             CALL gds.graph.project(
275 |                 'community-graph',
276 |                 '*',
277 |                 '{relationship_type}'
278 |             )
279 |             YIELD graphName;
280 |             
281 |             CALL gds.louvain.stream('community-graph')
282 |             YIELD nodeId, communityId
283 |             WITH gds.util.asNode(nodeId) AS node, communityId
284 |             RETURN communityId, collect(node.{label_property}) AS members, count(*) AS size
285 |             ORDER BY size DESC
286 |             LIMIT 10
287 |             ```
288 |             """,
289 |             example_parameters={"relationship_type": "FRIENDS_WITH", "label_property": "name"}
290 |         )
291 |     ]
292 |     return prompts
293 | 
294 | # Main entry point
295 | if __name__ == "__main__":
296 |     import uvicorn  # needed to run this file directly; alternatively use `uvicorn ne04j_mcp_server:app`
297 |     uvicorn.run(app, host="0.0.0.0", port=8000)
```
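
Since this file exposes plain FastAPI routes, the endpoints can also be exercised directly with `requests` (already in `requirements.txt`); the URLs and payload shape below mirror the routes defined above, and the Cypher text is only an example:

```python
import requests

BASE_URL = "http://localhost:8000"  # where `uvicorn ne04j_mcp_server:app` is listening

# GET /schema -> node labels and relationship types
schema = requests.get(f"{BASE_URL}/schema").json()
print([n["label"] for n in schema["nodes"]])

# GET /prompts -> predefined analysis prompt templates
prompts = requests.get(f"{BASE_URL}/prompts").json()
print([p["name"] for p in prompts])

# POST /query -> execute a read-only Cypher query
payload = {"cypher": "MATCH (p:Person) RETURN p.name AS name LIMIT 5", "parameters": {}}
result = requests.post(f"{BASE_URL}/query", json=payload).json()
print(result["results"], result["metadata"])
```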

--------------------------------------------------------------------------------
/mcp_client.py:
--------------------------------------------------------------------------------

```python
  1 | import requests
  2 | import json
  3 | import os
  4 | from typing import Dict, List, Any, Optional
  5 | from dotenv import load_dotenv
  6 | import argparse
  7 | import sys
  8 | from rich.console import Console
  9 | from rich.table import Table
 10 | from rich.panel import Panel
 11 | from rich.syntax import Syntax
 12 | from rich import print as rprint
 13 | from rich.prompt import Prompt, Confirm
 14 | import textwrap
 15 | from openai import OpenAI
 16 | 
 17 | # Load environment variables
 18 | load_dotenv()
 19 | 
 20 | 
 21 | # Configuration
 22 | MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:8000")
 23 | OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
 24 | 
 25 | console = Console()
 26 | 
 38 | class MCPClient:
 39 |     """Client for interacting with the Neo4j MCP Server"""
 40 |     
 41 |     def __init__(self, server_url: str = MCP_SERVER_URL):
 42 |         self.server_url = server_url
 43 |         self.schema = None
 44 |         self.prompts = None
 45 |     
 46 |     def get_schema(self) -> Dict:
 47 |         """Fetch the database schema from the MCP server"""
 48 |         try:
 49 |             response = requests.get(f"{self.server_url}/schema")
 50 |             response.raise_for_status()
 51 |             self.schema = response.json()
 52 |             return self.schema
 53 |         except requests.exceptions.RequestException as e:
 54 |             console.print(f"[bold red]Error fetching schema: {str(e)}[/bold red]")
 55 |             return None
 56 |     
 57 |     def get_prompts(self) -> List[Dict]:
 58 |         """Fetch the available analysis prompts from the MCP server"""
 59 |         try:
 60 |             response = requests.get(f"{self.server_url}/prompts")
 61 |             response.raise_for_status()
 62 |             self.prompts = response.json()
 63 |             return self.prompts
 64 |         except requests.exceptions.RequestException as e:
 65 |             console.print(f"[bold red]Error fetching prompts: {str(e)}[/bold red]")
 66 |             return None
 67 |     
 68 |     def execute_query(self, cypher: str, parameters: Dict = None) -> Dict:
 69 |         """Execute a Cypher query against the Neo4j database"""
 70 |         if parameters is None:
 71 |             parameters = {}
 72 |             
 73 |         try:
 74 |             response = requests.post(
 75 |                 f"{self.server_url}/query",
 76 |                 json={"cypher": cypher, "parameters": parameters}
 77 |             )
 78 |             response.raise_for_status()
 79 |             return response.json()
 80 |         except requests.exceptions.RequestException as e:
 81 |             console.print(f"[bold red]Error executing query: {str(e)}[/bold red]")
 82 |             if hasattr(e, 'response') and e.response is not None:
 83 |                 console.print(f"[bold red]Server response: {e.response.text}[/bold red]")
 84 |             return None
 85 | 
 86 |     def display_schema(self):
 87 |         """Display the database schema in a readable format"""
 88 |         if not self.schema:
 89 |             self.get_schema()
 90 |             
 91 |         if not self.schema:
 92 |             return
 93 |             
 94 |         # Display node labels
 95 |         node_table = Table(title="Node Labels")
 96 |         node_table.add_column("Label", style="cyan")
 97 |         node_table.add_column("Count", style="magenta")
 98 |         node_table.add_column("Properties", style="green")
 99 |         
100 |         for node in self.schema.get("nodes", []):
101 |             node_table.add_row(
102 |                 node["label"],
103 |                 str(node["count"]),
104 |                 ", ".join(node["properties"])
105 |             )
106 |             
107 |         console.print(node_table)
108 |         
109 |         # Display relationship types
110 |         rel_table = Table(title="Relationship Types")
111 |         rel_table.add_column("Type", style="cyan")
112 |         rel_table.add_column("Count", style="magenta")
113 |         rel_table.add_column("Source → Target", style="yellow")
114 |         rel_table.add_column("Properties", style="green")
115 |         
116 |         for rel in self.schema.get("relationships", []):
117 |             rel_table.add_row(
118 |                 rel["type"],
119 |                 str(rel["count"]),
120 |                 f"{' | '.join(rel['source_labels'])} → {' | '.join(rel['target_labels'])}",
121 |                 ", ".join(rel["properties"])
122 |             )
123 |             
124 |         console.print(rel_table)
125 |     
126 |     def display_prompts(self):
127 |         """Display available analysis prompts"""
128 |         if not self.prompts:
129 |             self.get_prompts()
130 |             
131 |         if not self.prompts:
132 |             return
133 |             
134 |         for i, prompt in enumerate(self.prompts, 1):
135 |             console.print(f"[bold cyan]{i}. {prompt['name']}[/bold cyan]")
136 |             console.print(f"[italic]{prompt['description']}[/italic]")
137 |             console.print()
138 |     
139 |     def select_prompt(self) -> Dict:
140 |         """Let the user select a prompt and fill in parameters"""
141 |         if not self.prompts:
142 |             self.get_prompts()
143 |             
144 |         if not self.prompts:
145 |             return None
146 |             
147 |         self.display_prompts()
148 |         
149 |         # Select prompt
150 |         prompt_index = Prompt.ask(
151 |             "Select a prompt number", 
152 |             choices=[str(i) for i in range(1, len(self.prompts) + 1)]
153 |         )
154 |         
155 |         selected_prompt = self.prompts[int(prompt_index) - 1]
156 |         console.print(f"\n[bold]Selected: {selected_prompt['name']}[/bold]\n")
157 |         
158 |         # Display prompt details
159 |         prompt_text = selected_prompt["prompt"]
160 |         console.print(Panel(prompt_text, title="Prompt Template"))
161 |         
162 |         # Fill in parameters
163 |         parameters = {}
164 |         example_parameters = selected_prompt.get("example_parameters", {})
165 |         
166 |         if example_parameters:
167 |             console.print("\n[bold]Example parameters:[/bold]")
168 |             for key, value in example_parameters.items():
169 |                 console.print(f"  {key}: {value}")
170 |         
171 |         # Extract parameter placeholders from the prompt
172 |         import re
173 |         placeholders = re.findall(r'\{([^{}]+)\}', prompt_text)
174 |         unique_placeholders = set(placeholders)
175 |         
176 |         if unique_placeholders:
177 |             console.print("\n[bold]Enter values for parameters:[/bold]")
178 |             for param in unique_placeholders:
179 |                 default = example_parameters.get(param, "")
180 |                 value = Prompt.ask(f"  {param}", default=str(default))
181 |                 parameters[param] = value
182 |         
183 |         # Extract and modify sample Cypher query
184 |         sample_query_match = re.search(r'```\s*([\s\S]+?)\s*```', prompt_text)
185 |         if sample_query_match:
186 |             sample_query = sample_query_match.group(1).strip()
187 |             
188 |             # Replace placeholders with user values
189 |             for param, value in parameters.items():
190 |                 sample_query = sample_query.replace(f"{{{param}}}", value)
191 |             
192 |             console.print("\n[bold]Generated Cypher query:[/bold]")
193 |             syntax = Syntax(sample_query, "cypher", theme="monokai", line_numbers=True)
194 |             console.print(syntax)
195 |             
196 |             if Confirm.ask("Execute this query?", default=True):
197 |                 return self.execute_prompt_query(sample_query)
198 |         else:
199 |             console.print("[yellow]No sample query found in the prompt.[/yellow]")
200 |         
201 |         return None
202 |     
203 |     def execute_prompt_query(self, query: str) -> Dict:
204 |         """Execute the query generated from a prompt template"""
205 |         result = self.execute_query(query)
206 |         if result:
207 |             self.display_query_results(result)
208 |         return result
209 |     
210 |     def display_query_results(self, result: Dict):
211 |         """Display query results in a readable format"""
212 |         records = result.get("results", [])
213 |         metadata = result.get("metadata", {})
214 |         
215 |         if not records:
216 |             console.print("[yellow]No results returned.[/yellow]")
217 |             return
218 |             
219 |         # Get all unique keys from all records
220 |         all_keys = set()
221 |         for record in records:
222 |             all_keys.update(record.keys())
223 |         
224 |         # Create a table with all columns
225 |         table = Table(title=f"Query Results ({len(records)} records)")
226 |         for key in all_keys:
227 |             table.add_column(key)
228 |         
229 |         # Add rows to the table
230 |         for record in records:
231 |             row_values = []
232 |             for key in all_keys:
233 |                 value = record.get(key, "")
234 |                 
235 |                 # Handle different data types for display
236 |                 if isinstance(value, (dict, list)):
237 |                     value = json.dumps(value, indent=2)
238 |                     # Truncate long values
239 |                     if len(value) > 50:
240 |                         value = value[:47] + "..."
241 |                 elif value is None:
242 |                     value = ""
243 |                 
244 |                 row_values.append(str(value))
245 |             
246 |             table.add_row(*row_values)
247 |         
248 |         console.print(table)
249 |         
250 |         # Display metadata
251 |         if metadata:
252 |             console.print("\n[bold]Query Metadata:[/bold]")
253 |             for key, value in metadata.items():
254 |                 console.print(f"  {key}: {value}")
255 |     
256 |     def interactive_query(self):
257 |         """Allow the user to enter a custom Cypher query"""
258 |         console.print("\n[bold]Enter a Cypher query:[/bold]")
259 |         console.print("[italic](Press Enter twice when finished)[/italic]")
260 |         
261 |         lines = []
262 |         while True:
263 |             line = input()
264 |             if not line and lines and not lines[-1]:
265 |                 # Empty line after content, break
266 |                 break
267 |             lines.append(line)
268 |         
269 |         query = "\n".join(lines).strip()
270 |         
271 |         if not query:
272 |             console.print("[yellow]No query entered.[/yellow]")
273 |             return
274 |         
275 |         syntax = Syntax(query, "cypher", theme="monokai", line_numbers=True)
276 |         console.print("\n[bold]Executing query:[/bold]")
277 |         console.print(syntax)
278 |         
279 |         result = self.execute_query(query)
280 |         if result:
281 |             self.display_query_results(result)
282 | 
283 | 
284 | class MCPClientWithLLM(MCPClient):
285 |     """Extended MCP Client with OpenAI LLM integration"""
286 |     
287 |     def __init__(self, server_url=MCP_SERVER_URL, model="gpt-4"):
288 |         super().__init__(server_url)
289 |         self.openai_client = OpenAI(api_key=OPENAI_API_KEY)
290 |         self.model = model
291 |     
292 |     def generate_query_with_llm(self, user_input, schema=None):
293 |         """Use OpenAI to generate a Cypher query based on user input and schema"""
294 |         if not schema:
295 |             schema = self.get_schema()
296 |             
297 |         # Create a system message with the database schema
298 |         system_message = f"""
299 |         You are a Neo4j database expert. Given the following database schema:
300 |         
301 |         Nodes: {', '.join([node['label'] for node in schema['nodes']])}
302 |         Relationships: {', '.join([rel['type'] for rel in schema['relationships']])}
303 |         
304 |         Generate a Cypher query that answers the user's question. Return ONLY the Cypher query without any explanations.
305 |         """
306 |         
307 |         # Call the OpenAI API
308 |         response = self.openai_client.chat.completions.create(
309 |             model=self.model,
310 |             messages=[
311 |                 {"role": "system", "content": system_message},
312 |                 {"role": "user", "content": user_input}
313 |             ],
314 |             temperature=0.1  # Low temperature for more deterministic outputs
315 |         )
316 |         
317 |         # Extract the generated Cypher query
318 |         cypher_query = response.choices[0].message.content.strip()
319 |         
320 |         # Remove markdown code blocks if present
321 |         if cypher_query.startswith("```"):
322 |             cypher_query = cypher_query.strip("```")
323 |             if cypher_query.startswith("cypher"):
324 |                 cypher_query = cypher_query[6:].strip()
325 |         
326 |         return cypher_query
327 |     
328 |     def analyze_results_with_llm(self, user_query, results):
329 |         """Use OpenAI to analyze and explain query results"""
330 |         if not results:
331 |             return "No results found."
332 |             
333 |         # Create a prompt for analyzing the results
334 |         prompt = f"""
335 |         The user asked: "{user_query}"
336 |         
337 |         The database returned these results:
338 |         {results}
339 |         
340 |         Please analyze these results and provide a clear, concise explanation.
341 |         """
342 |         
343 |         # Call the OpenAI API
344 |         response = self.openai_client.chat.completions.create(
345 |             model=self.model,
346 |             messages=[{"role": "user", "content": prompt}],
347 |             temperature=0.7
348 |         )
349 |         
350 |         # Return the analysis text from the model's response
351 |         return response.choices[0].message.content
352 | 
353 | def main():
354 |     """Main entry point for the CLI"""
355 |     parser = argparse.ArgumentParser(description="Neo4j MCP Client")
356 |     parser.add_argument("--server", help="MCP server URL", default=MCP_SERVER_URL)
357 |     
358 |     subparsers = parser.add_subparsers(dest="command", help="Command to execute")
359 |     
360 |     # Schema command
361 |     subparsers.add_parser("schema", help="Display database schema")
362 |     
363 |     # Query command
364 |     query_parser = subparsers.add_parser("query", help="Execute a Cypher query")
365 |     query_parser.add_argument("--file", help="File containing the Cypher query")
366 |     query_parser.add_argument("--query", help="Cypher query string")
367 |     
368 |     # Prompts command
369 |     prompt_parser = subparsers.add_parser("prompts", help="Work with analysis prompts")
370 |     prompt_parser.add_argument("--list", action="store_true", help="List available prompts")
371 |     prompt_parser.add_argument("--select", action="store_true", help="Select and use a prompt")
372 |     
373 |     # Interactive mode
374 |     subparsers.add_parser("interactive", help="Start interactive mode")
375 |     
376 |     args = parser.parse_args()
377 |     
378 |     client = MCPClientWithLLM(server_url=args.server)
379 |     
380 |     if args.command == "schema":
381 |         client.display_schema()
382 |     
383 |     elif args.command == "query":
384 |         if args.file:
385 |             try:
386 |                 with open(args.file, 'r') as f:
387 |                     query = f.read().strip()
388 |             except Exception as e:
389 |                 console.print(f"[bold red]Error reading file: {str(e)}[/bold red]")
390 |                 return
391 |         elif args.query:
392 |             query = args.query
393 |         else:
394 |             client.interactive_query()
395 |             return
396 |             
397 |         result = client.execute_query(query)
398 |         if result:
399 |             client.display_query_results(result)
400 |     
401 |     elif args.command == "prompts":
402 |         if args.list:
403 |             client.display_prompts()
404 |         elif args.select:
405 |             client.select_prompt()
406 |         else:
407 |             client.display_prompts()
408 |             client.select_prompt()
409 |     
410 |     elif args.command == "interactive" or not args.command:
411 |         llm_interactive_mode(client)
412 |     
413 |     else:
414 |         parser.print_help()
415 | 
416 | def interactive_mode(client: MCPClient):
417 |     """Run the client in interactive mode"""
418 |     console.print("[bold]Neo4j MCP Client[/bold] - Interactive Mode")
419 |     console.print("Type 'help' for available commands, 'exit' to quit\n")
420 |     
421 |     while True:
422 |         command = Prompt.ask("mcp").lower()
423 |         
424 |         if command == "exit" or command == "quit":
425 |             break
426 |             
427 |         elif command == "help":
428 |             console.print("\n[bold]Available commands:[/bold]")
429 |             console.print("  schema    - Display database schema")
430 |             console.print("  query     - Enter and execute a Cypher query")
431 |             console.print("  prompts   - List and select analysis prompts")
432 |             console.print("  examples  - Show example queries")
433 |             console.print("  clear     - Clear the screen")
434 |             console.print("  exit      - Exit the client\n")
435 |             
436 |         elif command == "schema":
437 |             client.display_schema()
438 |             
439 |         elif command == "query":
440 |             client.interactive_query()
441 |             
442 |         elif command == "prompts":
443 |             client.select_prompt()
444 |             
445 |         elif command == "examples":
446 |             console.print("\n[bold]Example queries:[/bold]")
447 |             examples = [
448 |                 ("Get all node labels", "MATCH (n) RETURN DISTINCT labels(n) AS labels, COUNT(*) AS count"),
449 |                 ("Get all relationship types", "MATCH ()-[r]->() RETURN DISTINCT type(r) AS type, COUNT(*) AS count"),
450 |                 ("Find a specific node", "MATCH (n:Loan {loanId: 105}) RETURN n"),
451 |                 ("Find connected nodes", "MATCH (n:Borrower)-[r]-(m) RETURN n.name, type(r), m LIMIT 10"),
452 |                 ("Find paths between nodes", "MATCH path = (a:Borrower)-[*1..3]-(b:Borrower) WHERE a.borrowerId <> b.borrowerId RETURN path LIMIT 5"),
453 |             ]
454 |             
455 |             for i, (desc, query) in enumerate(examples, 1):
456 |                 console.print(f"\n[bold cyan]{i}. {desc}[/bold cyan]")
457 |                 syntax = Syntax(query, "cypher", theme="monokai")
458 |                 console.print(syntax)
459 |                 
460 |             example_index = Prompt.ask(
461 |                 "\nSelect an example to run (or 0 to skip)", 
462 |                 choices=["0"] + [str(i) for i in range(1, len(examples) + 1)],
463 |                 default="0"
464 |             )
465 |             
466 |             if example_index != "0":
467 |                 query = examples[int(example_index) - 1][1]
468 |                 result = client.execute_query(query)
469 |                 if result:
470 |                     client.display_query_results(result)
471 |             
472 |         elif command == "clear":
473 |             os.system('cls' if os.name == 'nt' else 'clear')
474 |             
475 |         else:
476 |             console.print("[yellow]Unknown command. Type 'help' for available commands.[/yellow]")
477 | 
478 | def llm_interactive_mode(client: MCPClientWithLLM):
479 |     """Run the client in LLM-assisted interactive mode"""
480 |     console.print("[bold]Neo4j MCP Client with OpenAI[/bold] - Interactive Mode")
481 |     console.print("Type 'help' for available commands, 'exit' to quit\n")
482 |     
483 |     while True:
484 |         command = Prompt.ask("mcp").lower()
485 |         
486 |         if command == "exit" or command == "quit":
487 |             break
488 |             
489 |         elif command == "help":
490 |             console.print("\n[bold]Available commands:[/bold]")
491 |             console.print("  schema    - Display database schema")
492 |             console.print("  query     - Enter and execute a Cypher query")
493 |             console.print("  ask       - Ask a natural language question")
494 |             console.print("  prompts   - List and select analysis prompts")
495 |             console.print("  clear     - Clear the screen")
496 |             console.print("  exit      - Exit the client\n")
497 |             
498 |         elif command == "schema":
499 |             client.display_schema()
500 |             
501 |         elif command == "query":
502 |             client.interactive_query()
503 |             
504 |         elif command == "ask":
505 |             question = Prompt.ask("\n[bold]Enter your question about the database[/bold]")
506 |             console.print("[italic]Generating Cypher query...[/italic]")
507 |             
508 |             # Generate Cypher query using LLM
509 |             cypher_query = client.generate_query_with_llm(question)
510 |             
511 |             # Display and execute the query
512 |             console.print("\n[bold]Generated Cypher query:[/bold]")
513 |             syntax = Syntax(cypher_query, "cypher", theme="monokai", line_numbers=True)
514 |             console.print(syntax)
515 |             
516 |             if Confirm.ask("Execute this query?", default=True):
517 |                 result = client.execute_query(cypher_query)
518 |                 if result:
519 |                     client.display_query_results(result)
520 |                     
521 |                     # Analyze results with LLM
522 |                     console.print("\n[bold]Analysis:[/bold]")
523 |                     analysis = client.analyze_results_with_llm(question, result)
524 |                     console.print(Panel(analysis, title="AI Analysis"))
525 |             
526 |         elif command == "prompts":
527 |             client.select_prompt()
528 |             
529 |         elif command == "clear":
530 |             os.system('cls' if os.name == 'nt' else 'clear')
531 |             
532 |         else:
533 |             console.print("[yellow]Unknown command. Type 'help' for available commands.[/yellow]")
534 | 
535 | if __name__ == "__main__":
536 |     try:
537 |         main()
538 |     except KeyboardInterrupt:
539 |         console.print("\n[bold]Exiting...[/bold]")
540 |         sys.exit(0)
```
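
A minimal sketch of using the client programmatically rather than through the CLI, assuming the file above is saved as `mcp_client.py`, the MCP server is reachable at the default URL (port 8000), and `OPENAI_API_KEY` is set in `.env`; the question text is only an illustration:

```python
from mcp_client import MCPClientWithLLM  # assumes the listing above is saved as mcp_client.py

# Connect to the MCP server and let the LLM translate a question into Cypher
client = MCPClientWithLLM(server_url="http://localhost:8000")
question = "Which movies did Tom Hanks act in?"
cypher = client.generate_query_with_llm(question)

# Execute the generated query, render the rows, and have the LLM summarize them
result = client.execute_query(cypher)
if result:
    client.display_query_results(result)
    print(client.analyze_results_with_llm(question, result))
```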