# Directory Structure

```
├── .gitignore
├── assets
│   └── MCPArchitecture.png
├── LICENSE
├── mcp-ssms-client.py
├── mcp-ssms-server.py
├── README.md
└── requirement.txt
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# UV
#   Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#uv.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# Ruff stuff:
.ruff_cache/

# PyPI configuration file
.pypirc

```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# SQL Server Agent - Model Context Protocol
A SQL Server Agent that lets you interact with a SQL Server database in natural language, leveraging the Model Context Protocol (MCP) as a layer between the LLM and the data source.

## Key Features:

* **Talk to Your Database**: Chat with SQL Server using plain English.
* **No-Code Database Operations**: Manage your database tasks entirely through natural conversations.
* **One-Click Procedure Execution**: Run stored procedures effortlessly with natural commands.
* **MCP-Enhanced Accuracy**: Achieve precise database interactions through the Model Context Protocol (MCP), which intelligently connects your commands to data.
* **Context-Aware Conversations**: Enjoy seamless interactions powered by the Model Context Protocol.

## What is MCP?
MCP (Model Context Protocol) is a protocol that standardizes how we bind context to LLMs.
It provides a uniform way to connect AI models to different data sources and tools.

## Why MCP?
MCP simplifies building complex workflows and agents on top of LLMs, where the language model needs frequent integration with data sources and tools.

## MCP Architecture:
The MCP architecture follows a client-server model, allowing a single client to interact seamlessly with multiple servers.

![MCP Architecture](assets/MCPArchitecture.png)


**MCP-Client**: Your AI client (LLM) accessing data.

**MCP-Protocol**: Connects your client directly to the server.

**MCP-Server**: Helps your client access data sources via MCP.

**Local Database, Cloud Database, External APIs**: Sources providing data through local storage, cloud, or online APIs.

## Now, Let's Dive Into the Implementation
With an understanding of MCP and its architecture, it's time to bring it all together with the **SQL Server Agent**.

### What is SQL Server Agent?
The **SQL Server Agent** is a conversational AI query CLI that enables you to **interact with your SQL Server database using natural language**. Powered by the **Model Context Protocol**, it acts as a smart layer between your language model and the database, making it possible to:

- Query your database without writing SQL
- Execute stored procedures with conversational commands
- Perform complex operations while maintaining context across multiple steps

Whether you're a developer, analyst, or non-technical user, this agent makes your data accessible through intuitive, human-like interactions.

Now, let’s walk through how to get it up and running 👇

## Prerequisites
Before you get started, make sure you have the following:

- **Python 3.12+** installed on your machine  
- A valid **OpenAI API Key**

## Getting Started
Follow these steps to get the project up and running:
### 1. Clone the Repository

```bash
git clone https://github.com/Amanp17/mcp-sql-server-natural-lang.git
cd mcp-sql-server-natural-lang
```

### 2. Install Dependencies
```bash
pip install -r requirement.txt
```
### 3. Setup Environment Variables

Create a `.env` file in the root of the project and add the following:

```dotenv
OPENAI_API_KEY=your_openai_api_key
MSSQL_SERVER=localhost
MSSQL_DATABASE=your_database_name
MSSQL_USERNAME=your_username
MSSQL_PASSWORD=your_password
MSSQL_DRIVER={ODBC Driver 17 for SQL Server}
```
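
The server reads these variables with `python-dotenv`/`os.getenv` and joins them into an ODBC connection string (as `mcp-ssms-server.py` does). A small standalone sketch of that assembly, falling back to placeholder defaults when the variables are unset:

```python
import os

# In the real server, load_dotenv() populates os.environ from the .env file;
# here we fall back to placeholder defaults so the sketch runs standalone.
cfg = {
    "DRIVER": os.getenv("MSSQL_DRIVER", "{ODBC Driver 17 for SQL Server}"),
    "SERVER": os.getenv("MSSQL_SERVER", "localhost"),
    "DATABASE": os.getenv("MSSQL_DATABASE", "my_database"),
    "UID": os.getenv("MSSQL_USERNAME", "sa"),
    "PWD": os.getenv("MSSQL_PASSWORD", "your_password"),
}
# ODBC connection strings are KEY=value pairs separated by semicolons.
connection_string = ";".join(f"{key}={value}" for key, value in cfg.items())
print(connection_string)
```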

## Running the SQL Server Agent
Once you've set up your environment and dependencies, you're ready to interact with the SQL Server Agent.

### Run the Client Script
Execute the following command to start the agent:

```bash
python mcp-ssms-client.py
```

Once the script starts, it will prompt you like this:

```bash
Enter your Query (Press ESC to Quit):
```

Now, you can type your request in plain English. For example:

```text
Create an Employee table with 10 dummy rows in it, including their departments and salaries.
```

The agent will process your input using the Model Context Protocol and return the relevant data from your SQL Server database.

🧠 Tip: You can ask follow-up questions or make requests like "show me the employees and their departments?" or "how many employees have a salary under $40K?" — the context is preserved!
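
Context is preserved because the client keeps a running `messages` list and replays the whole history, with the system prompt, on every model call. A stripped-down sketch of that bookkeeping (no API calls, just the pattern used in `mcp-ssms-client.py`):

```python
from dataclasses import dataclass, field

@dataclass
class Chat:
    """Every turn is appended to messages, and the full history is
    sent to the model each time, so follow-ups can be resolved."""
    system_prompt: str = "You are a master MS SQL Server assistant."
    messages: list[dict] = field(default_factory=list)

    def build_request(self, query: str) -> list[dict]:
        # System prompt first, then the accumulated history, then the new query.
        request = [{"role": "system", "content": self.system_prompt}]
        request.extend(self.messages)
        request.append({"role": "user", "content": query})
        return request

    def record_turn(self, query: str, reply: str) -> None:
        self.messages.append({"role": "user", "content": query})
        self.messages.append({"role": "assistant", "content": reply})

chat = Chat()
chat.record_turn("Create an Employee table", "Done - table created.")
request = chat.build_request("show me the employees and their departments?")
print(len(request))  # system + 2 recorded turns + new user query = 4
```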

## Conclusion

The **SQL Server Agent**, powered by the **Model Context Protocol (MCP)**, brings the power of conversational AI to your database operations. By bridging the gap between natural language and SQL, it allows users to interact with their data effortlessly, making database access more intuitive, efficient, and accessible to everyone, even those without technical expertise.

Whether you're querying data, executing procedures, or building complex workflows, this agent serves as your intelligent interface to SQL Server.

Feel free to contribute, open issues, or suggest enhancements — we're building the future of AI-driven data interaction together! 🚀

```

--------------------------------------------------------------------------------
/requirement.txt:
--------------------------------------------------------------------------------

```
python-dotenv
mcp
pyodbc
loguru
openai
```

--------------------------------------------------------------------------------
/mcp-ssms-server.py:
--------------------------------------------------------------------------------

```python
import os
import pyodbc
from loguru import logger
from mcp.server.fastmcp import FastMCP
from dotenv import load_dotenv

load_dotenv()

# Database configurations
MSSQL_SERVER = os.getenv("MSSQL_SERVER", "localhost")
MSSQL_DATABASE = os.getenv("MSSQL_DATABASE", "my_database")
MSSQL_USERNAME = os.getenv("MSSQL_USERNAME", "sa")
MSSQL_PASSWORD = os.getenv("MSSQL_PASSWORD", "your_password")
MSSQL_DRIVER = os.getenv("MSSQL_DRIVER", "{ODBC Driver 17 for SQL Server}")

# Building the connection string
connection_string = (
    f"DRIVER={MSSQL_DRIVER};"
    f"SERVER={MSSQL_SERVER};"
    f"DATABASE={MSSQL_DATABASE};"
    f"UID={MSSQL_USERNAME};"
    f"PWD={MSSQL_PASSWORD}"
)

# Creating an MCP server instance
mcp = FastMCP("Demo")

@mcp.tool()
def query_data(sql: str) -> str:
    """Execute SQL queries safely on MSSQL."""
    logger.info("Processing your query...")
    conn = None
    try:
        conn = pyodbc.connect(connection_string)
        cursor = conn.cursor()
        cursor.execute(sql)

        if cursor.description is not None:
            result = cursor.fetchall()
            output = "\n".join(str(row) for row in result)
        else:
            output = "SQL executed successfully, no results returned."

        conn.commit()
        return output
    except Exception as e:
        logger.error(f"Error executing query: {e}")
        return f"Error: {e}"
    finally:
        # conn may never have been assigned if pyodbc.connect() raised
        if conn is not None:
            conn.close()

@mcp.prompt()
def example_prompt(code: str) -> str:
    return f"Please review this code:\n\n{code}"

if __name__ == "__main__":
    print("Starting server...")
    mcp.run(transport="stdio")

```

--------------------------------------------------------------------------------
/mcp-ssms-client.py:
--------------------------------------------------------------------------------

```python
import asyncio
import os
import re
import json
from dataclasses import dataclass, field
from typing import cast
import sys
from dotenv import load_dotenv
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

load_dotenv()

from openai import OpenAI
client = OpenAI()

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python",          # Executable
    args=["./mcp-ssms-server.py"],  # Command line arguments to run the server script
    env=None,                  # Optional environment variables
)

if os.name == 'nt':
    import msvcrt

    def get_input(prompt: str) -> str:
        sys.stdout.write(prompt)
        sys.stdout.flush()
        buf = []
        while True:
            ch = msvcrt.getwch()
            if ch == '\r':
                sys.stdout.write('\n')
                return ''.join(buf)
            elif ch == '\x1b':  # ESC quits
                raise KeyboardInterrupt
            elif ch == '\b':
                if buf:
                    buf.pop()
                    sys.stdout.write('\b \b')
            else:
                buf.append(ch)
                sys.stdout.write(ch)
                sys.stdout.flush()
else:
    # msvcrt is Windows-only; fall back to plain input() elsewhere
    # (quit with Ctrl+C / Ctrl+D instead of ESC).
    def get_input(prompt: str) -> str:
        return input(prompt)
    
@dataclass
class Chat:
    messages: list[dict] = field(default_factory=list)
    system_prompt: str = (
        "You are a master MS SQL Server assistant. "
        "Your job is to use the tools at your disposal to execute SQL queries "
        "and provide the results to the user. "
        "When you need to execute a SQL query, respond with the following format exactly:\n"
        "TOOL: query_data, ARGS: {\"sql\": \"<YOUR_SQL_QUERY>\"}"
    )

    async def process_query(self, session: ClientSession, query: str) -> None:
        # 1) Gather available tools (for reference only)
        response = await session.list_tools()
        available_tools = [
            {
                "name": tool.name,
                "description": tool.description or "",
                "input_schema": tool.inputSchema,
            }
            for tool in response.tools
        ]

        # 2) Build the conversation for OpenAI
        openai_messages = [
            {"role": "system", "content": self.system_prompt},
        ]
        openai_messages.extend(self.messages)
        openai_messages.append({"role": "user", "content": query})

        # 3) Send to OpenAI
        completion = client.chat.completions.create(
            model="gpt-4",
            messages=openai_messages,
            max_tokens=2000,
            temperature=0.0,
        )

        assistant_reply = completion.choices[0].message.content

        self.messages.append({"role": "user", "content": query})
        self.messages.append({"role": "assistant", "content": assistant_reply})

        # 4) Look for a tool call in the assistant reply
        if "TOOL:" in assistant_reply:
            try:
                pattern = r"TOOL:\s*(\w+),\s*ARGS:\s*(\{.*\})"
                match = re.search(pattern, assistant_reply)
                if match:
                    tool_name = match.group(1)
                    tool_args_str = match.group(2)
                    tool_args = json.loads(tool_args_str)

                    # Now call the tool on the server
                    result = await session.call_tool(tool_name, cast(dict, tool_args))
                    tool_text = getattr(result.content[0], "text", "")

                    tool_result_msg = f"Tool '{tool_name}' result:\n{tool_text}"
                    self.messages.append({"role": "system", "content": tool_result_msg})

                    completion_2 = client.chat.completions.create(
                        model="gpt-4",
                        messages=[{"role": "system", "content": self.system_prompt}] + self.messages,
                        max_tokens=1000,
                        temperature=0.0,
                    )
                    final_reply = completion_2.choices[0].message.content
                    print("\nAssistant:", final_reply)
                    self.messages.append({"role": "assistant", "content": final_reply})
                else:
                    print("No valid tool command found in assistant response.")
            except Exception as e:
                print(f"Failed to parse tool usage: {e}")

    async def chat_loop(self, session: ClientSession):
        while True:
            try:
                query = get_input("Enter your Query (Press ESC to Quit): ").strip()
            except (KeyboardInterrupt, EOFError):
                print("\nExiting...")
                break
            if not query:
                break
            await self.process_query(session, query)

    async def run(self):
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                await self.chat_loop(session)

if __name__ == "__main__":
    chat = Chat()
    asyncio.run(chat.run())

```