This is page 1 of 2. Use http://codebase.md/ujjalcal/mcp?lines=false&page={x} to view the full context.

# Directory Structure

```
├── .gitignore
├── fast_mcp_server.py
├── llms-full.txt
├── mcp_client.py
├── ne04j_mcp_server.py
├── README.md
└── requirements.txt
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
node_modules
package-lock.json
package.json
ujjal.json
.env

```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# MCP Python SDK

## Setup

    which python
    python3 -m venv myenv
    source myenv/Scripts/activate   # Windows (Git Bash); use myenv/bin/activate on macOS/Linux
    pip install -r requirements.txt
    python fast_mcp_server.py


# Usage
## To run the client

    Add a `.env` file in the project root containing:

        OPENAI_API_KEY=<your OpenAI API key>
        MCP_SERVER_URL=http://localhost:8000   # optional; this is the default
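
A minimal sketch of invoking the client, based on the subcommands defined in `mcp_client.py` (the example Cypher string is illustrative):

    python mcp_client.py schema
    python mcp_client.py query --query "MATCH (n) RETURN n LIMIT 5"
    python mcp_client.py prompts --list
    python mcp_client.py interactive
    python mcp_client.py --server http://localhost:8000 schema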

## To run the server
    
    uvicorn ne04j_mcp_server:app --host 0.0.0.0 --port 8000
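
With the server running, its endpoints can also be exercised directly; a hedged sketch using `curl` against the `/schema`, `/prompts`, and `/query` routes defined in `ne04j_mcp_server.py` (the localhost URL and sample Cypher are assumptions):

    curl http://localhost:8000/schema
    curl http://localhost:8000/prompts
    curl -X POST http://localhost:8000/query \
         -H "Content-Type: application/json" \
         -d '{"cypher": "MATCH (p:Person) RETURN p.name LIMIT 5", "parameters": {}}'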

# Data Prep

## Insert Node:
    CREATE (p1:Person {name: "Tom Hanks", birthYear: 1956})
    CREATE (p2:Person {name: "Kevin Bacon", birthYear: 1958})
    CREATE (m1:Movie {title: "Forrest Gump", releaseYear: 1994})
    CREATE (m2:Movie {title: "Apollo 13", releaseYear: 1995})

## Insert Relationship
    MATCH (p:Person {name: "Tom Hanks"}), (m:Movie {title: "Forrest Gump"})
    CREATE (p)-[:ACTED_IN]->(m)

    MATCH (p:Person {name: "Tom Hanks"}), (m:Movie {title: "Apollo 13"})
    CREATE (p)-[:ACTED_IN]->(m)

    MATCH (p1:Person {name: "Tom Hanks"}), (p2:Person {name: "Kevin Bacon"})
    CREATE (p1)-[:FRIENDS_WITH]->(p2)

## Insert properties
    MATCH (p:Person {name: "Tom Hanks"})
    SET p.oscarsWon = 2

    MATCH (m:Movie {title: "Forrest Gump"})
    SET m.genre = "Drama"

    MATCH (p:Person {name: "Tom Hanks"})-[r:ACTED_IN]->(m:Movie {title: "Forrest Gump"})
    SET r.role = "Forrest Gump"

## Insert Complex Structure
    // Create a community and then match persons to add them to the community
    CREATE (c1:Community {name: "Hollywood Stars"})
    WITH c1
    MATCH (p:Person)
    WHERE p.name IN ["Tom Hanks", "Kevin Bacon"]
    CREATE (p)-[:MEMBER_OF]->(c1)
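
## Verify the data
A quick, illustrative read query (not part of the original prep script) to confirm the sample graph was created:

    MATCH (p:Person)-[r]->(x)
    RETURN p.name, type(r), labels(x), coalesce(x.title, x.name) AS target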
   
# Other things - boilerplate copied from the upstream MCP Python SDK README; ignore for now.

<div align="center">

<strong>Python implementation of the Model Context Protocol (MCP)</strong>

[![PyPI][pypi-badge]][pypi-url]
[![MIT licensed][mit-badge]][mit-url]
[![Python Version][python-badge]][python-url]
[![Documentation][docs-badge]][docs-url]
[![Specification][spec-badge]][spec-url]
[![GitHub Discussions][discussions-badge]][discussions-url]

</div>

<!-- omit in toc -->
## Table of Contents

- [Overview](#overview)
- [Installation](#installation)
- [Quickstart](#quickstart)
- [What is MCP?](#what-is-mcp)
- [Core Concepts](#core-concepts)
  - [Server](#server)
  - [Resources](#resources)
  - [Tools](#tools)
  - [Prompts](#prompts)
  - [Images](#images)
  - [Context](#context)
- [Running Your Server](#running-your-server)
  - [Development Mode](#development-mode)
  - [Claude Desktop Integration](#claude-desktop-integration)
  - [Direct Execution](#direct-execution)
- [Examples](#examples)
  - [Echo Server](#echo-server)
  - [SQLite Explorer](#sqlite-explorer)
- [Advanced Usage](#advanced-usage)
  - [Low-Level Server](#low-level-server)
  - [Writing MCP Clients](#writing-mcp-clients)
  - [MCP Primitives](#mcp-primitives)
  - [Server Capabilities](#server-capabilities)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)

[pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
[pypi-url]: https://pypi.org/project/mcp/
[mit-badge]: https://img.shields.io/pypi/l/mcp.svg
[mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
[python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
[python-url]: https://www.python.org/downloads/
[docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
[docs-url]: https://modelcontextprotocol.io
[spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
[spec-url]: https://spec.modelcontextprotocol.io
[discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
[discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions

## Overview

The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:

- Build MCP clients that can connect to any MCP server
- Create MCP servers that expose resources, prompts and tools
- Use standard transports like stdio and SSE
- Handle all MCP protocol messages and lifecycle events

## Installation

We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects:

```bash
uv add "mcp[cli]"
```

Alternatively:
```bash
pip install mcp
```

## Quickstart

Let's create a simple MCP server that exposes a calculator tool and some data:

```python
# server.py
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")

# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"
```

You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
```bash
mcp install server.py
```

Alternatively, you can test it with the MCP Inspector:
```bash
mcp dev server.py
```

## What is MCP?

The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:

- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
- And more!

## Core Concepts

### Server

The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:

```python
# Add lifespan support for startup/shutdown with strong typing
from contextlib import asynccontextmanager
from dataclasses import dataclass
from typing import AsyncIterator

from mcp.server.fastmcp import Context, FastMCP

# Create a named server
mcp = FastMCP("My App")

# Specify dependencies for deployment and development
mcp = FastMCP("My App", dependencies=["pandas", "numpy"])

@dataclass
class AppContext:
    db: Database  # Replace with your actual DB type

@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manage application lifecycle with type-safe context"""
    db = Database()  # Replace with your actual DB client
    try:
        # Initialize on startup
        await db.connect()
        yield AppContext(db=db)
    finally:
        # Cleanup on shutdown
        await db.disconnect()

# Pass lifespan to server
mcp = FastMCP("My App", lifespan=app_lifespan)

# Access type-safe lifespan context in tools
@mcp.tool()
def query_db(ctx: Context) -> str:
    """Tool that uses initialized resources"""
    app_ctx: AppContext = ctx.request_context.lifespan_context
    return app_ctx.db.query()
```

### Resources

Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:

```python
@mcp.resource("config://app")
def get_config() -> str:
    """Static configuration data"""
    return "App configuration here"

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Dynamic user data"""
    return f"Profile data for user {user_id}"
```

### Tools

Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:

```python
import httpx

@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Calculate BMI given weight in kg and height in meters"""
    return weight_kg / (height_m ** 2)

@mcp.tool()
async def fetch_weather(city: str) -> str:
    """Fetch current weather for a city"""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.weather.com/{city}")
        return response.text
```

### Prompts

Prompts are reusable templates that help LLMs interact with your server effectively:

```python
from mcp.server.fastmcp.prompts.base import AssistantMessage, Message, UserMessage

@mcp.prompt()
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"

@mcp.prompt()
def debug_error(error: str) -> list[Message]:
    return [
        UserMessage("I'm seeing this error:"),
        UserMessage(error),
        AssistantMessage("I'll help debug that. What have you tried so far?")
    ]
```

### Images

FastMCP provides an `Image` class that automatically handles image data:

```python
from mcp.server.fastmcp import FastMCP, Image
from PIL import Image as PILImage

@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
    """Create a thumbnail from an image"""
    img = PILImage.open(image_path)
    img.thumbnail((100, 100))
    return Image(data=img.tobytes(), format="png")
```

### Context

The Context object gives your tools and resources access to MCP capabilities:

```python
from mcp.server.fastmcp import FastMCP, Context

@mcp.tool()
async def long_task(files: list[str], ctx: Context) -> str:
    """Process multiple files with progress tracking"""
    for i, file in enumerate(files):
        ctx.info(f"Processing {file}")
        await ctx.report_progress(i, len(files))
        data, mime_type = await ctx.read_resource(f"file://{file}")
    return "Processing complete"
```

## Running Your Server

### Development Mode

The fastest way to test and debug your server is with the MCP Inspector:

```bash
mcp dev server.py

# Add dependencies
mcp dev server.py --with pandas --with numpy

# Mount local code
mcp dev server.py --with-editable .
```

### Claude Desktop Integration

Once your server is ready, install it in Claude Desktop:

```bash
mcp install server.py

# Custom name
mcp install server.py --name "My Analytics Server"

# Environment variables
mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
mcp install server.py -f .env
```

### Direct Execution

For advanced scenarios like custom deployments:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

if __name__ == "__main__":
    mcp.run()
```

Run it with:
```bash
python server.py
# or
mcp run server.py
```

## Examples

### Echo Server

A simple server demonstrating resources, tools, and prompts:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Echo")

@mcp.resource("echo://{message}")
def echo_resource(message: str) -> str:
    """Echo a message as a resource"""
    return f"Resource echo: {message}"

@mcp.tool()
def echo_tool(message: str) -> str:
    """Echo a message as a tool"""
    return f"Tool echo: {message}"

@mcp.prompt()
def echo_prompt(message: str) -> str:
    """Create an echo prompt"""
    return f"Please process this message: {message}"
```

### SQLite Explorer

A more complex example showing database integration:

```python
from mcp.server.fastmcp import FastMCP
import sqlite3

mcp = FastMCP("SQLite Explorer")

@mcp.resource("schema://main")
def get_schema() -> str:
    """Provide the database schema as a resource"""
    conn = sqlite3.connect("database.db")
    schema = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type='table'"
    ).fetchall()
    return "\n".join(sql[0] for sql in schema if sql[0])

@mcp.tool()
def query_data(sql: str) -> str:
    """Execute SQL queries safely"""
    conn = sqlite3.connect("database.db")
    try:
        result = conn.execute(sql).fetchall()
        return "\n".join(str(row) for row in result)
    except Exception as e:
        return f"Error: {str(e)}"
```

## Advanced Usage

### Low-Level Server

For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:

```python
from contextlib import asynccontextmanager
from typing import AsyncIterator

from mcp.server.lowlevel import Server

@asynccontextmanager
async def server_lifespan(server: Server) -> AsyncIterator[dict]:
    """Manage server startup and shutdown lifecycle."""
    try:
        # Initialize resources on startup
        await db.connect()
        yield {"db": db}
    finally:
        # Clean up on shutdown
        await db.disconnect()

# Pass lifespan to server
server = Server("example-server", lifespan=server_lifespan)

# Access lifespan context in handlers
@server.call_tool()
async def query_db(name: str, arguments: dict) -> list:
    ctx = server.request_context
    db = ctx.lifespan_context["db"]
    return await db.query(arguments["query"])
```

The lifespan API provides:
- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers

```python
from mcp.server.lowlevel import Server, NotificationOptions
from mcp.server.models import InitializationOptions
import mcp.server.stdio
import mcp.types as types

# Create a server instance
server = Server("example-server")

@server.list_prompts()
async def handle_list_prompts() -> list[types.Prompt]:
    return [
        types.Prompt(
            name="example-prompt",
            description="An example prompt template",
            arguments=[
                types.PromptArgument(
                    name="arg1",
                    description="Example argument",
                    required=True
                )
            ]
        )
    ]

@server.get_prompt()
async def handle_get_prompt(
    name: str,
    arguments: dict[str, str] | None
) -> types.GetPromptResult:
    if name != "example-prompt":
        raise ValueError(f"Unknown prompt: {name}")

    return types.GetPromptResult(
        description="Example prompt",
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(
                    type="text",
                    text="Example prompt text"
                )
            )
        ]
    )

async def run():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="example",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                )
            )
        )

if __name__ == "__main__":
    import asyncio
    asyncio.run(run())
```

### Writing MCP Clients

The SDK provides a high-level client interface for connecting to MCP servers:

```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import mcp.types as types

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python", # Executable
    args=["example_server.py"], # Optional command line arguments
    env=None # Optional environment variables
)

# Optional: create a sampling callback
async def handle_sampling_message(message: types.CreateMessageRequestParams) -> types.CreateMessageResult:
    return types.CreateMessageResult(
        role="assistant",
        content=types.TextContent(
            type="text",
            text="Hello, world! from model",
        ),
        model="gpt-3.5-turbo",
        stopReason="endTurn",
    )

async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:
            # Initialize the connection
            await session.initialize()

            # List available prompts
            prompts = await session.list_prompts()

            # Get a prompt
            prompt = await session.get_prompt("example-prompt", arguments={"arg1": "value"})

            # List available resources
            resources = await session.list_resources()

            # List available tools
            tools = await session.list_tools()

            # Read a resource
            content, mime_type = await session.read_resource("file://some/path")

            # Call a tool
            result = await session.call_tool("tool-name", arguments={"arg1": "value"})

if __name__ == "__main__":
    import asyncio
    asyncio.run(run())
```

### MCP Primitives

The MCP protocol defines three core primitives that servers can implement:

| Primitive | Control               | Description                                         | Example Use                  |
|-----------|-----------------------|-----------------------------------------------------|------------------------------|
| Prompts   | User-controlled       | Interactive templates invoked by user choice        | Slash commands, menu options |
| Resources | Application-controlled| Contextual data managed by the client application   | File contents, API responses |
| Tools     | Model-controlled      | Functions exposed to the LLM to take actions        | API calls, data updates      |

### Server Capabilities

MCP servers declare capabilities during initialization:

| Capability  | Feature Flag                 | Description                        |
|-------------|------------------------------|------------------------------------|
| `prompts`   | `listChanged`                | Prompt template management         |
| `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates      |
| `tools`     | `listChanged`                | Tool discovery and execution       |
| `logging`   | -                            | Server logging configuration       |
| `completion`| -                            | Argument completion suggestions    |
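
For illustration, a hedged client-side sketch (not part of the upstream README) that prints whatever capabilities a server declares in its `initialize` response; the `server.py` path is an assumption:

```python
# Sketch: inspect the capabilities a server declared during initialization.
# Assumes a stdio server script named "server.py"; adjust to your setup.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def show_capabilities() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            init_result = await session.initialize()
            caps = init_result.capabilities
            # Each field is None when the server does not declare that capability
            print("prompts:", caps.prompts)
            print("resources:", caps.resources)
            print("tools:", caps.tools)
            print("logging:", caps.logging)


if __name__ == "__main__":
    asyncio.run(show_capabilities())
```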

## Documentation

- [Model Context Protocol documentation](https://modelcontextprotocol.io)
- [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
- [Officially supported servers](https://github.com/modelcontextprotocol/servers)

## Contributing

We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.

## License

This project is licensed under the MIT License - see the LICENSE file for details.

```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
neo4j
pydantic
mcp
requests
openai
rich
python-dotenv

```

--------------------------------------------------------------------------------
/fast_mcp_server.py:
--------------------------------------------------------------------------------

```python
from mcp.server.fastmcp import FastMCP, Context
from neo4j import GraphDatabase
from pydantic import BaseModel
from typing import List, Dict, Any
import logging

logging.basicConfig(level=logging.DEBUG)

# Initialize FastMCP server
mcp = FastMCP("Neo4j MCP Server")

# Neo4j connection details
NEO4J_URI = "neo4j+s://1e30f4c4.databases.neo4j.io"
NEO4J_USER = "neo4j"
NEO4J_PASSWORD = "pDMkrbwg1L__-3BHh46r-MD9-z6Frm8wnR__ZzFiVmM"

# Neo4j driver connection
def get_db():
    logging.debug("Establishing Neo4j database connection")
    return GraphDatabase.driver(NEO4J_URI, auth=(NEO4J_USER, NEO4J_PASSWORD))

# Models
class NodeLabel(BaseModel):
    label: str
    count: int
    properties: List[str]

class RelationshipType(BaseModel):
    type: str
    count: int
    properties: List[str]
    source_labels: List[str]
    target_labels: List[str]

class QueryRequest(BaseModel):
    cypher: str
    parameters: Dict[str, Any] = {}

# Function to fetch node labels
def fetch_node_labels(session) -> List[NodeLabel]:
    logging.debug("Fetching node labels")
    result = session.run("""
    CALL apoc.meta.nodeTypeProperties()
    YIELD nodeType, nodeLabels, propertyName
    WITH nodeLabels, collect(propertyName) AS properties
    MATCH (n) WHERE ALL(label IN nodeLabels WHERE label IN labels(n))
    WITH nodeLabels, properties, count(n) AS nodeCount
    RETURN nodeLabels, properties, nodeCount
    ORDER BY nodeCount DESC
    """)
    
    return [NodeLabel(label=record["nodeLabels"][0] if record["nodeLabels"] else "Unknown",
                      count=record["nodeCount"],
                      properties=record["properties"]) for record in result]

# Function to fetch relationship types
def fetch_relationship_types(session) -> List[RelationshipType]:
    logging.debug("Fetching relationship types")
    result = session.run("""
    CALL apoc.meta.relTypeProperties()
    YIELD relType, sourceNodeLabels, targetNodeLabels, propertyName
    WITH relType, sourceNodeLabels, targetNodeLabels, collect(propertyName) AS properties
    MATCH ()-[r]->() WHERE type(r) = relType
    WITH relType, sourceNodeLabels, targetNodeLabels, properties, count(r) AS relCount
    RETURN relType, sourceNodeLabels, targetNodeLabels, properties, relCount
    ORDER BY relCount DESC
    """)
    
    return [RelationshipType(type=record["relType"],
                             count=record["relCount"],
                             properties=record["properties"],
                             source_labels=record["sourceNodeLabels"],
                             target_labels=record["targetNodeLabels"]) for record in result]

# Define a resource to get the database schema
@mcp.resource("schema://database")
def get_schema() -> Dict[str, Any]:
    logging.debug("get schemas...")
    driver = get_db()
    with driver.session() as session:
        nodes = fetch_node_labels(session)
        relationships = fetch_relationship_types(session)
        return {"nodes": nodes, "relationships": relationships}

# Define a tool to execute a query
@mcp.tool()
def execute_query(query: QueryRequest) -> Dict[str, Any]:
    logging.debug("execute query...")
    driver = get_db()
    with driver.session() as session:
        result = session.run(query.cypher, query.parameters)
        records = [record.data() for record in result]
        summary = result.consume()
        metadata = {
            "nodes_created": summary.counters.nodes_created,
            "nodes_deleted": summary.counters.nodes_deleted,
            "relationships_created": summary.counters.relationships_created,
            "relationships_deleted": summary.counters.relationships_deleted,
            "properties_set": summary.counters.properties_set,
            "execution_time_ms": summary.result_available_after
        }
        return {"results": records, "metadata": metadata}

# Define prompts for analysis
@mcp.prompt()
def relationship_analysis_prompt(node_type_1: str, node_type_2: str) -> str:
    logging.debug("relationship analysis prompt...")
    return f"""
    Given the Neo4j database with {node_type_1} and {node_type_2} nodes, 
    I want to understand the relationships between them.

    Please help me:
    1. Find the most common relationship types between these nodes
    2. Identify the distribution of relationship properties
    3. Discover any interesting patterns or outliers

    Sample Cypher query to start with:
    MATCH (a:{node_type_1})-[r]->(b:{node_type_2})
    RETURN type(r) AS relationship_type, count(r) AS count
    ORDER BY count DESC
    LIMIT 10
    """

@mcp.prompt()
def path_discovery_prompt(start_node_label: str, start_node_property: str, start_node_value: str, end_node_label: str, end_node_property: str, end_node_value: str, max_depth: int) -> str:
    logging.debug("path discovery prompt...")
    return f"""
    I'm looking to understand how {start_node_label} nodes with property {start_node_property}="{start_node_value}" 
    connect to {end_node_label} nodes with property {end_node_property}="{end_node_value}".

    Please help me:
    1. Find all possible paths between these nodes
    2. Identify the shortest path
    3. Analyze what nodes and relationships appear most frequently in these paths

    Sample Cypher query to start with:
    MATCH path = (a:{start_node_label} {{
        {start_node_property}: "{start_node_value}"
    }})-[*1..{max_depth}]->(b:{end_node_label} {{
        {end_node_property}: "{end_node_value}"
    }})
    RETURN path LIMIT 10
    """

# Run the MCP server
if __name__ == "__main__":
    logging.debug("Starting MCP server")
    mcp.run()
    # mcp.run(host="0.0.0.0")
```

--------------------------------------------------------------------------------
/ne04j_mcp_server.py:
--------------------------------------------------------------------------------

```python
import os
from typing import Dict, List, Any, Optional
from fastapi import FastAPI, HTTPException, Depends
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel, Field
from neo4j import GraphDatabase, Driver
import json
import uvicorn
from dotenv import load_dotenv


load_dotenv()  # Ensure this is called before accessing the variables

NEO4J_URI = "neo4j+s://1e30f4c4.databases.neo4j.io" # os.getenv("NEO4J_URI")
NEO4J_USER = "neo4j" # os.getenv("NEO4J_USER")
NEO4J_PASSWORD = "pDMkrbwg1L__-3BHh46r-MD9-z6Frm8wnR__ZzFiVmM" # os.getenv("NEO4J_PASSWORD")

print(f"NEO4J_URI: {NEO4J_URI}")
print(f"NEO4J_USER: {NEO4J_USER}")
print(f"NEO4J_PASSWORD: {NEO4J_PASSWORD}")


# print(f"NEO4J_URI: {NEO4J_URI}, NEO4J_USER: {NEO4J_USER}, NEO4J_PASSWORD: {NEO4J_PASSWORD}")

# Initialize FastAPI
app = FastAPI(title="Neo4j MCP Server",
              description="Model Context Protocol (MCP) server for Neo4j databases")

# Add CORS middleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Neo4j driver connection
def get_db() -> Driver:
    driver = GraphDatabase.driver(NEO4J_URI, auth=(NEO4J_USER, NEO4J_PASSWORD))
    try:
        # Test connection
        driver.verify_connectivity()
        return driver
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Database connection failed: {str(e)}")

# Models
class NodeLabel(BaseModel):
    label: str
    count: int
    properties: List[str]

class RelationshipType(BaseModel):
    type: str
    count: int
    properties: List[str]
    source_labels: List[str]
    target_labels: List[str]

class DatabaseSchema(BaseModel):
    nodes: List[NodeLabel]
    relationships: List[RelationshipType]

class QueryRequest(BaseModel):
    cypher: str
    parameters: Dict[str, Any] = Field(default_factory=dict)

class QueryResult(BaseModel):
    results: List[Dict[str, Any]]
    metadata: Dict[str, Any]

class PromptTemplate(BaseModel):
    name: str
    description: str
    prompt: str
    example_parameters: Dict[str, Any] = Field(default_factory=dict)

# Schema extraction functions
def get_node_labels(driver):
    with driver.session() as session:
        result = session.run("""
        CALL apoc.meta.nodeTypeProperties()
        YIELD nodeType, nodeLabels, propertyName
        WITH nodeLabels, collect(propertyName) AS properties
        MATCH (n) WHERE ALL(label IN nodeLabels WHERE label IN labels(n))
        WITH nodeLabels, properties, count(n) AS nodeCount
        RETURN nodeLabels, properties, nodeCount
        ORDER BY nodeCount DESC
        """)
        
        node_labels = []
        for record in result:
            label = record["nodeLabels"][0] if record["nodeLabels"] else "Unknown"
            node_labels.append(NodeLabel(
                label=label,
                count=record["nodeCount"],
                properties=record["properties"]
            ))
        return node_labels

def get_relationship_types(driver):
    with driver.session() as session:
        result = session.run("""
        CALL apoc.meta.relTypeProperties()
        YIELD relType, sourceNodeLabels, targetNodeLabels, propertyName
        WITH relType, sourceNodeLabels, targetNodeLabels, collect(propertyName) AS properties
        MATCH ()-[r]->() WHERE type(r) = relType
        WITH relType, sourceNodeLabels, targetNodeLabels, properties, count(r) AS relCount
        RETURN relType, sourceNodeLabels, targetNodeLabels, properties, relCount
        ORDER BY relCount DESC
        """)
        
        rel_types = []
        for record in result:
            rel_types.append(RelationshipType(
                type=record["relType"],
                count=record["relCount"],
                properties=record["properties"],
                source_labels=record["sourceNodeLabels"],
                target_labels=record["targetNodeLabels"]
            ))
        return rel_types

# Endpoints
@app.get("/schema", response_model=DatabaseSchema)
def get_schema(driver: Driver = Depends(get_db)):
    """
    Retrieve the complete database schema including node labels and relationship types
    """
    try:
        nodes = get_node_labels(driver)
        relationships = get_relationship_types(driver)
        return DatabaseSchema(nodes=nodes, relationships=relationships)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Schema retrieval failed: {str(e)}")

@app.post("/query", response_model=QueryResult)
def execute_query(query: QueryRequest, driver: Driver = Depends(get_db)):
    """
    Execute a read-only Cypher query against the database
    """
    # Ensure query is read-only
    lower_query = query.cypher.lower()
    if any(keyword in lower_query for keyword in ["create", "delete", "remove", "set", "merge"]):
        raise HTTPException(status_code=403, detail="Only read-only queries are allowed")
    
    try:
        with driver.session() as session:
            result = session.run(query.cypher, query.parameters)
            records = [record.data() for record in result]
            
            # Get query stats
            summary = result.consume()
            metadata = {
                "nodes_created": summary.counters.nodes_created,
                "nodes_deleted": summary.counters.nodes_deleted,
                "relationships_created": summary.counters.relationships_created,
                "relationships_deleted": summary.counters.relationships_deleted,
                "properties_set": summary.counters.properties_set,
                "execution_time_ms": summary.result_available_after
            }
            
            return QueryResult(results=records, metadata=metadata)
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Query execution failed: {str(e)}")

# Analysis prompts
@app.get("/prompts", response_model=List[PromptTemplate])
def get_analysis_prompts():
    """
    Get a list of predefined prompt templates for common Neo4j data analysis tasks
    """
    prompts = [
        PromptTemplate(
            name="Relationship Analysis",
            description="Analyze relationships between two node types",
            prompt="""
            Given the Neo4j database with {node_type_1} and {node_type_2} nodes, 
            I want to understand the relationships between them.
            
            Please help me:
            1. Find the most common relationship types between these nodes
            2. Identify the distribution of relationship properties
            3. Discover any interesting patterns or outliers
            
            Sample Cypher query to start with:
            ```
            MATCH (a:{node_type_1})-[r]->(b:{node_type_2})
            RETURN type(r) AS relationship_type, count(r) AS count
            ORDER BY count DESC
            LIMIT 10
            ```
            """,
            example_parameters={"node_type_1": "Person", "node_type_2": "Movie"}
        ),
        PromptTemplate(
            name="Path Discovery",
            description="Find paths between nodes of interest",
            prompt="""
            I'm looking to understand how {start_node_label} nodes with property {start_node_property}="{start_node_value}" 
            connect to {end_node_label} nodes with property {end_node_property}="{end_node_value}".
            
            Please help me:
            1. Find all possible paths between these nodes
            2. Identify the shortest path
            3. Analyze what nodes and relationships appear most frequently in these paths
            
            Sample Cypher query to start with:
            ```
            MATCH path = (a:{start_node_label} {{
                {start_node_property}: "{start_node_value}"
            }})-[*1..{max_depth}]->(b:{end_node_label} {{
                {end_node_property}: "{end_node_value}"
            }})
            RETURN path LIMIT 10
            ```
            """,
            example_parameters={
                "start_node_label": "Person", 
                "start_node_property": "name",
                "start_node_value": "Tom Hanks",
                "end_node_label": "Person",
                "end_node_property": "name",
                "end_node_value": "Kevin Bacon",
                "max_depth": 4
            }
        ),
        PromptTemplate(
            name="Property Distribution",
            description="Analyze the distribution of property values",
            prompt="""
            I want to understand the distribution of {property_name} across {node_label} nodes.
            
            Please help me:
            1. Calculate basic statistics (min, max, avg, std)
            2. Identify the most common values and their frequencies
            3. Detect any outliers or unusual patterns
            
            Sample Cypher query to start with:
            ```
            MATCH (n:{node_label})
            WHERE n.{property_name} IS NOT NULL
            RETURN 
                min(n.{property_name}) AS min_value,
                max(n.{property_name}) AS max_value,
                avg(n.{property_name}) AS avg_value,
                stDev(n.{property_name}) AS std_value
            ```
            
            And for frequency distribution:
            ```
            MATCH (n:{node_label})
            WHERE n.{property_name} IS NOT NULL
            RETURN n.{property_name} AS value, count(n) AS frequency
            ORDER BY frequency DESC
            LIMIT 20
            ```
            """,
            example_parameters={"node_label": "Movie", "property_name": "runtime"}
        ),
        PromptTemplate(
            name="Community Detection",
            description="Detect communities or clusters in the graph",
            prompt="""
            I want to identify communities or clusters within the graph based on {relationship_type} relationships.
            
            Please help me:
            1. Apply graph algorithms to detect communities
            2. Analyze the size and composition of each community
            3. Identify central nodes within each community
            
            Sample Cypher query to start with (requires GDS library):
            ```
            CALL gds.graph.project(
                'community-graph',
                '*',
                '{relationship_type}'
            )
            YIELD graphName;
            
            CALL gds.louvain.stream('community-graph')
            YIELD nodeId, communityId
            WITH gds.util.asNode(nodeId) AS node, communityId
            RETURN communityId, collect(node.{label_property}) AS members, count(*) AS size
            ORDER BY size DESC
            LIMIT 10
            ```
            """,
            example_parameters={"relationship_type": "FRIENDS_WITH", "label_property": "name"}
        )
    ]
    return prompts

# Main entry point
if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

--------------------------------------------------------------------------------
/mcp_client.py:
--------------------------------------------------------------------------------

```python
import requests
import json
import os
from typing import Dict, List, Any, Optional
from dotenv import load_dotenv
import argparse
import sys
from rich.console import Console
from rich.table import Table
from rich.panel import Panel
from rich.syntax import Syntax
from rich import print as rprint
from rich.prompt import Prompt, Confirm
import textwrap
from openai import OpenAI

# Load environment variables
load_dotenv()


# Configuration
MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:8000")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

console = Console()

class MCPClient:
    """Client for interacting with the Neo4j MCP Server"""
    
    def __init__(self, server_url: str = MCP_SERVER_URL):
        self.server_url = server_url
        self.schema = None
        self.prompts = None
    
    def get_schema(self) -> Dict:
        """Fetch the database schema from the MCP server"""
        try:
            response = requests.get(f"{self.server_url}/schema")
            response.raise_for_status()
            self.schema = response.json()
            return self.schema
        except requests.exceptions.RequestException as e:
            console.print(f"[bold red]Error fetching schema: {str(e)}[/bold red]")
            return None
    
    def get_prompts(self) -> List[Dict]:
        """Fetch the available analysis prompts from the MCP server"""
        try:
            response = requests.get(f"{self.server_url}/prompts")
            response.raise_for_status()
            self.prompts = response.json()
            return self.prompts
        except requests.exceptions.RequestException as e:
            console.print(f"[bold red]Error fetching prompts: {str(e)}[/bold red]")
            return None
    
    def execute_query(self, cypher: str, parameters: Dict = None) -> Dict:
        """Execute a Cypher query against the Neo4j database"""
        if parameters is None:
            parameters = {}
            
        try:
            response = requests.post(
                f"{self.server_url}/query",
                json={"cypher": cypher, "parameters": parameters}
            )
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            console.print(f"[bold red]Error executing query: {str(e)}[/bold red]")
            if hasattr(e, 'response') and e.response is not None:
                console.print(f"[bold red]Server response: {e.response.text}[/bold red]")
            return None

    def display_schema(self):
        """Display the database schema in a readable format"""
        if not self.schema:
            self.get_schema()
            
        if not self.schema:
            return
            
        # Display node labels
        node_table = Table(title="Node Labels")
        node_table.add_column("Label", style="cyan")
        node_table.add_column("Count", style="magenta")
        node_table.add_column("Properties", style="green")
        
        for node in self.schema.get("nodes", []):
            node_table.add_row(
                node["label"],
                str(node["count"]),
                ", ".join(node["properties"])
            )
            
        console.print(node_table)
        
        # Display relationship types
        rel_table = Table(title="Relationship Types")
        rel_table.add_column("Type", style="cyan")
        rel_table.add_column("Count", style="magenta")
        rel_table.add_column("Source → Target", style="yellow")
        rel_table.add_column("Properties", style="green")
        
        for rel in self.schema.get("relationships", []):
            rel_table.add_row(
                rel["type"],
                str(rel["count"]),
                f"{' | '.join(rel['source_labels'])} → {' | '.join(rel['target_labels'])}",
                ", ".join(rel["properties"])
            )
            
        console.print(rel_table)
    
    def display_prompts(self):
        """Display available analysis prompts"""
        if not self.prompts:
            self.get_prompts()
            
        if not self.prompts:
            return
            
        for i, prompt in enumerate(self.prompts, 1):
            console.print(f"[bold cyan]{i}. {prompt['name']}[/bold cyan]")
            console.print(f"[italic]{prompt['description']}[/italic]")
            console.print()
    
    def select_prompt(self) -> Dict:
        """Let the user select a prompt and fill in parameters"""
        if not self.prompts:
            self.get_prompts()
            
        if not self.prompts:
            return None
            
        self.display_prompts()
        
        # Select prompt
        prompt_index = Prompt.ask(
            "Select a prompt number", 
            choices=[str(i) for i in range(1, len(self.prompts) + 1)]
        )
        
        selected_prompt = self.prompts[int(prompt_index) - 1]
        console.print(f"\n[bold]Selected: {selected_prompt['name']}[/bold]\n")
        
        # Display prompt details
        prompt_text = selected_prompt["prompt"]
        console.print(Panel(prompt_text, title="Prompt Template"))
        
        # Fill in parameters
        parameters = {}
        example_parameters = selected_prompt.get("example_parameters", {})
        
        if example_parameters:
            console.print("\n[bold]Example parameters:[/bold]")
            for key, value in example_parameters.items():
                console.print(f"  {key}: {value}")
        
        # Extract parameter placeholders from the prompt
        import re
        placeholders = re.findall(r'\{([^{}]+)\}', prompt_text)
        unique_placeholders = set(placeholders)
        
        if unique_placeholders:
            console.print("\n[bold]Enter values for parameters:[/bold]")
            for param in unique_placeholders:
                default = example_parameters.get(param, "")
                value = Prompt.ask(f"  {param}", default=str(default))
                parameters[param] = value
        
        # Extract and modify sample Cypher query
        sample_query_match = re.search(r'```\s*([\s\S]+?)\s*```', prompt_text)
        if sample_query_match:
            sample_query = sample_query_match.group(1).strip()
            
            # Replace placeholders with user values
            for param, value in parameters.items():
                sample_query = sample_query.replace(f"{{{param}}}", value)
            
            console.print("\n[bold]Generated Cypher query:[/bold]")
            syntax = Syntax(sample_query, "cypher", theme="monokai", line_numbers=True)
            console.print(syntax)
            
            if Confirm.ask("Execute this query?", default=True):
                return self.execute_prompt_query(sample_query)
        else:
            console.print("[yellow]No sample query found in the prompt.[/yellow]")
        
        return None
    
    def execute_prompt_query(self, query: str) -> Dict:
        """Execute the query generated from a prompt template"""
        result = self.execute_query(query)
        if result:
            self.display_query_results(result)
        return result
    
    def display_query_results(self, result: Dict):
        """Display query results in a readable format"""
        records = result.get("results", [])
        metadata = result.get("metadata", {})
        
        if not records:
            console.print("[yellow]No results returned.[/yellow]")
            return
            
        # Get all unique keys from all records
        all_keys = set()
        for record in records:
            all_keys.update(record.keys())
        
        # Create a table with all columns
        table = Table(title=f"Query Results ({len(records)} records)")
        for key in all_keys:
            table.add_column(key)
        
        # Add rows to the table
        for record in records:
            row_values = []
            for key in all_keys:
                value = record.get(key, "")
                
                # Handle different data types for display
                if isinstance(value, (dict, list)):
                    value = json.dumps(value, indent=2)
                    # Truncate long values
                    if len(value) > 50:
                        value = value[:47] + "..."
                elif value is None:
                    value = ""
                
                row_values.append(str(value))
            
            table.add_row(*row_values)
        
        console.print(table)
        
        # Display metadata
        if metadata:
            console.print("\n[bold]Query Metadata:[/bold]")
            for key, value in metadata.items():
                console.print(f"  {key}: {value}")
    
    def interactive_query(self):
        """Allow the user to enter a custom Cypher query"""
        console.print("\n[bold]Enter a Cypher query:[/bold]")
        console.print("[italic](Press Enter twice when finished)[/italic]")
        
        lines = []
        while True:
            line = input()
            if not line and lines and not lines[-1]:
                # Empty line after content, break
                break
            lines.append(line)
        
        query = "\n".join(lines).strip()
        
        if not query:
            console.print("[yellow]No query entered.[/yellow]")
            return
        
        syntax = Syntax(query, "cypher", theme="monokai", line_numbers=True)
        console.print("\n[bold]Executing query:[/bold]")
        console.print(syntax)
        
        result = self.execute_query(query)
        if result:
            self.display_query_results(result)


class MCPClientWithLLM(MCPClient):
    """Extended MCP Client with OpenAI LLM integration"""
    
    def __init__(self, server_url=MCP_SERVER_URL, model="gpt-4"):
        super().__init__(server_url)
        self.openai_client = OpenAI(api_key=OPENAI_API_KEY)
        self.model = model
    
    def generate_query_with_llm(self, user_input, schema=None):
        """Use OpenAI to generate a Cypher query based on user input and schema"""
        if not schema:
            schema = self.get_schema()
            
        # Create a system message with the database schema
        system_message = f"""
        You are a Neo4j database expert. Given the following database schema:
        
        Nodes: {', '.join([node['label'] for node in schema['nodes']])}
        Relationships: {', '.join([rel['type'] for rel in schema['relationships']])}
        
        Generate a Cypher query that answers the user's question. Return ONLY the Cypher query without any explanations.
        """
        
        # Call the OpenAI API
        response = self.openai_client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": user_input}
            ],
            temperature=0.1  # Low temperature for more deterministic outputs
        )
        
        # Extract the generated Cypher query
        cypher_query = response.choices[0].message.content.strip()
        
        # Remove markdown code fences if present
        if cypher_query.startswith("```"):
            cypher_query = cypher_query.strip("`").strip()
            if cypher_query.startswith("cypher"):
                cypher_query = cypher_query[len("cypher"):].strip()
        
        return cypher_query
    
    def analyze_results_with_llm(self, user_query, results):
        """Use OpenAI to analyze and explain query results"""
        if not results:
            return "No results found."
            
        # Create a prompt for analyzing the results
        prompt = f"""
        The user asked: "{user_query}"
        
        The database returned these results:
        {results}
        
        Please analyze these results and provide a clear, concise explanation.
        """
        
        # Call the OpenAI API
        response = self.openai_client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7
        )
        
        # Correctly access the content of the response
        return response.choices[0].message.content

def main():
    """Main entry point for the CLI"""
    parser = argparse.ArgumentParser(description="Neo4j MCP Client")
    parser.add_argument("--server", help="MCP server URL", default=MCP_SERVER_URL)
    
    subparsers = parser.add_subparsers(dest="command", help="Command to execute")
    
    # Schema command
    subparsers.add_parser("schema", help="Display database schema")
    
    # Query command
    query_parser = subparsers.add_parser("query", help="Execute a Cypher query")
    query_parser.add_argument("--file", help="File containing the Cypher query")
    query_parser.add_argument("--query", help="Cypher query string")
    
    # Prompts command
    prompt_parser = subparsers.add_parser("prompts", help="Work with analysis prompts")
    prompt_parser.add_argument("--list", action="store_true", help="List available prompts")
    prompt_parser.add_argument("--select", action="store_true", help="Select and use a prompt")
    
    # Interactive mode
    subparsers.add_parser("interactive", help="Start interactive mode")
    
    args = parser.parse_args()
    
    client = MCPClientWithLLM(server_url=args.server)
    
    if args.command == "schema":
        client.display_schema()
    
    elif args.command == "query":
        if args.file:
            try:
                with open(args.file, 'r') as f:
                    query = f.read().strip()
            except Exception as e:
                console.print(f"[bold red]Error reading file: {str(e)}[/bold red]")
                return
        elif args.query:
            query = args.query
        else:
            client.interactive_query()
            return
            
        result = client.execute_query(query)
        if result:
            client.display_query_results(result)
    
    elif args.command == "prompts":
        if args.list:
            client.display_prompts()
        elif args.select:
            client.select_prompt()
        else:
            client.display_prompts()
            client.select_prompt()
    
    elif args.command == "interactive" or not args.command:
        llm_interactive_mode(client)
    
    else:
        parser.print_help()

def interactive_mode(client: MCPClient):
    """Run the client in interactive mode"""
    console.print("[bold]Neo4j MCP Client[/bold] - Interactive Mode")
    console.print("Type 'help' for available commands, 'exit' to quit\n")
    
    while True:
        command = Prompt.ask("mcp").lower()
        
        if command == "exit" or command == "quit":
            break
            
        elif command == "help":
            console.print("\n[bold]Available commands:[/bold]")
            console.print("  schema    - Display database schema")
            console.print("  query     - Enter and execute a Cypher query")
            console.print("  prompts   - List and select analysis prompts")
            console.print("  examples  - Show example queries")
            console.print("  clear     - Clear the screen")
            console.print("  exit      - Exit the client\n")
            
        elif command == "schema":
            client.display_schema()
            
        elif command == "query":
            client.interactive_query()
            
        elif command == "prompts":
            client.select_prompt()
            
        elif command == "examples":
            console.print("\n[bold]Example queries:[/bold]")
            examples = [
                ("Get all node labels", "MATCH (n) RETURN DISTINCT labels(n) AS labels, COUNT(*) AS count"),
                ("Get all relationship types", "MATCH ()-[r]->() RETURN DISTINCT type(r) AS type, COUNT(*) AS count"),
                ("Find a specific node", "MATCH (n:Loan {loanId: 105}) RETURN n"),
                ("Find connected nodes", "MATCH (n:Borrower)-[r]-(m) RETURN n.name, type(r), m LIMIT 10"),
                ("Find paths between nodes", "MATCH path = (a:Borrower)-[*1..3]-(b:Borrower) WHERE a.borrowerId <> b.borrowerId RETURN path LIMIT 5"),
            ]
            
            for i, (desc, query) in enumerate(examples, 1):
                console.print(f"\n[bold cyan]{i}. {desc}[/bold cyan]")
                syntax = Syntax(query, "cypher", theme="monokai")
                console.print(syntax)
                
            example_index = Prompt.ask(
                "\nSelect an example to run (or 0 to skip)", 
                choices=["0"] + [str(i) for i in range(1, len(examples) + 1)],
                default="0"
            )
            
            if example_index != "0":
                query = examples[int(example_index) - 1][1]
                result = client.execute_query(query)
                if result:
                    client.display_query_results(result)
            
        elif command == "clear":
            os.system('cls' if os.name == 'nt' else 'clear')
            
        else:
            console.print("[yellow]Unknown command. Type 'help' for available commands.[/yellow]")

def llm_interactive_mode(client: MCPClientWithLLM):
    """Run the client in LLM-assisted interactive mode"""
    console.print("[bold]Neo4j MCP Client with OpenAI[/bold] - Interactive Mode")
    console.print("Type 'help' for available commands, 'exit' to quit\n")
    
    while True:
        command = Prompt.ask("mcp").lower()
        
        if command == "exit" or command == "quit":
            break
            
        elif command == "help":
            console.print("\n[bold]Available commands:[/bold]")
            console.print("  schema    - Display database schema")
            console.print("  query     - Enter and execute a Cypher query")
            console.print("  ask       - Ask a natural language question")
            console.print("  prompts   - List and select analysis prompts")
            console.print("  clear     - Clear the screen")
            console.print("  exit      - Exit the client\n")
            
        elif command == "schema":
            client.display_schema()
            
        elif command == "query":
            client.interactive_query()
            
        elif command == "ask":
            question = Prompt.ask("\n[bold]Enter your question about the database[/bold]")
            console.print("[italic]Generating Cypher query...[/italic]")
            
            # Generate Cypher query using LLM
            cypher_query = client.generate_query_with_llm(question)
            
            # Display and execute the query
            console.print("\n[bold]Generated Cypher query:[/bold]")
            syntax = Syntax(cypher_query, "cypher", theme="monokai", line_numbers=True)
            console.print(syntax)
            
            if Confirm.ask("Execute this query?", default=True):
                result = client.execute_query(cypher_query)
                if result:
                    client.display_query_results(result)
                    
                    # Analyze results with LLM
                    console.print("\n[bold]Analysis:[/bold]")
                    analysis = client.analyze_results_with_llm(question, result)
                    console.print(Panel(analysis, title="AI Analysis"))
            
        elif command == "prompts":
            client.select_prompt()
            
        elif command == "clear":
            os.system('cls' if os.name == 'nt' else 'clear')
            
        else:
            console.print("[yellow]Unknown command. Type 'help' for available commands.[/yellow]")

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        console.print("\n[bold]Exiting...[/bold]")
        sys.exit(0)
```