# Directory Structure

```
├── example_usage.py
├── github_setup_instructions.md
├── README.md
├── requirements.txt
└── workflow_summarizer_mcp.py
```

# Files

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

````markdown
# N8N Workflow Summarizer MCP Tool

An MCP tool that analyzes and summarizes n8n workflows for Claude.

## Overview

This tool distills n8n workflow JSON files into clear, concise summaries. It extracts key information about nodes, connections, and functionality to help Claude understand complex workflows.

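For reference, an n8n export is a JSON object whose key fields are `name`, `nodes`, and `connections`. A minimal, abbreviated example (illustrative only, not a complete export):

```json
{
  "name": "Example Workflow",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [250, 300],
      "parameters": {"path": "example"}
    },
    {
      "name": "Set",
      "type": "n8n-nodes-base.set",
      "position": [450, 300],
      "parameters": {}
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{"node": "Set", "type": "main", "index": 0}]]
    }
  }
}
```
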
## Features

- Analyzes n8n workflow JSON files
- Extracts node counts and types
- Identifies connections between nodes
- Produces markdown summaries
- Compatible with Model Context Protocol (MCP)

## Installation

Follow these steps to install the N8N Workflow Summarizer MCP tool:

```bash
# Clone the repository
git clone https://github.com/gblack686/n8n-workflow-summarizer-mcp.git
cd n8n-workflow-summarizer-mcp

# Set up your OpenAI API key
export OPENAI_API_KEY=your_api_key_here

# Install dependencies
pip install -r requirements.txt

# Install as MCP tool
fastmcp install workflow_summarizer_mcp.py --name "N8N Workflow Summarizer"
```
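
Note that `fastmcp install` typically registers the server with Claude Desktop's MCP configuration, so the tool becomes available to Claude after restarting the app.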

## Usage

Check the `example_usage.py` file for a complete example of how to use this tool.

```python
import asyncio
from workflow_summarizer_mcp import summarize_workflow

async def main():
    # Specify your workflow JSON file
    workflow_file = "example_workflow.json"

    # Summarize the workflow using a specific model
    summary = await summarize_workflow(workflow_file, model="gpt-4o")

    print(summary)

if __name__ == "__main__":
    asyncio.run(main())
```
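
Because `summarize_workflow` is also registered as an MCP tool, MCP clients (such as Claude) can invoke it directly once the server is installed; importing it as a module, as above, is mainly useful for local testing.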

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
````

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
1 | fastmcp>=0.4.1
2 | openai>=1.0.0
3 | pydantic>=2.0.0
4 | fastapi>=0.100.0 
```

--------------------------------------------------------------------------------
/github_setup_instructions.md:
--------------------------------------------------------------------------------

````markdown
# GitHub Repository Setup Instructions

Follow these steps to create a GitHub repository and push your local code:

## Step 1: Create a New Repository on GitHub

1. Go to [GitHub](https://github.com/) and sign in to your account
2. Click the "+" icon in the top right corner and select "New repository"
3. Enter the repository name: `n8n-workflow-summarizer-mcp`
4. Add a description: "An MCP tool that analyzes and summarizes n8n workflows for Claude"
5. Choose "Public" visibility (or "Private" if you prefer)
6. **Important**: Do NOT initialize the repository with a README, .gitignore, or license, since we already have these files locally
7. Click "Create repository"

## Step 2: Connect Your Local Repository to GitHub

After creating the repository, GitHub will show you a page with instructions. Use the "push an existing repository" option.

Open a terminal in your local repository folder (`github-repo`) and run:

```bash
# Add the GitHub repository as a remote named "origin"
git remote add origin https://github.com/YOUR_USERNAME/n8n-workflow-summarizer-mcp.git

# Push your code to GitHub
git push -u origin master
```

Replace `YOUR_USERNAME` with your actual GitHub username.
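
Note: if your repository's default branch is `main` (GitHub's current default) rather than `master`, push with `git push -u origin main` instead.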

## Step 3: Verify the Repository

1. Go to `https://github.com/YOUR_USERNAME/n8n-workflow-summarizer-mcp`
2. You should see all your files uploaded to GitHub

## Step 4: Install the MCP Tool (For Users)

Once the repository is set up, users can install the tool with:

```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/n8n-workflow-summarizer-mcp.git
cd n8n-workflow-summarizer-mcp

# Install the MCP tool
fastmcp install workflow_summarizer_mcp.py --name "N8N Workflow Summarizer" -e OPENAI_API_KEY=your_api_key_here
```
````

--------------------------------------------------------------------------------
/example_usage.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Example usage of the N8N Workflow Summarizer as a Python module
"""
import asyncio
import logging
import os
import sys
from pathlib import Path

# Configure logging
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")
logger = logging.getLogger("example_usage")

# Try to import from the package
try:
    from workflow_summarizer_mcp import summarize_workflow
except ImportError:
    logger.error("Could not import from workflow_summarizer_mcp. Make sure it's in your PYTHONPATH.")
    sys.exit(1)

async def main():
    """
    Example of using the workflow summarizer
    """
    # Check if the OpenAI API key is set
    if not os.environ.get("OPENAI_API_KEY"):
        logger.error("OPENAI_API_KEY environment variable is not set!")
        print("Please set your OpenAI API key as an environment variable:")
        print("export OPENAI_API_KEY='your-key-here'  # Linux/macOS")
        print("set OPENAI_API_KEY=your-key-here  # Windows CMD")
        print("$env:OPENAI_API_KEY='your-key-here'  # Windows PowerShell")
        sys.exit(1)

    # Path to an example workflow file
    workflow_path = Path("example_workflow.json")

    if not workflow_path.exists():
        logger.error(f"Workflow file not found: {workflow_path}")
        print(f"Please place an n8n workflow JSON file at {workflow_path}")
        sys.exit(1)

    print(f"Summarizing workflow: {workflow_path}")

    # Call the summarize_workflow function
    try:
        summary = await summarize_workflow(
            workflow_path=str(workflow_path),
            model="gpt-4o"  # You can change this to "o1" or other models
        )

        print("\nSummary generated successfully!")
        print(f"Output saved to: {workflow_path.stem}_summary.md")

        # Print the first few lines of the summary
        print("\nSummary preview:")
        print("-" * 40)
        print("\n".join(summary.split("\n")[:10]))
        print("...")

    except Exception as e:
        logger.error(f"Error generating summary: {e}")
        sys.exit(1)

if __name__ == "__main__":
    asyncio.run(main())
```

--------------------------------------------------------------------------------
/workflow_summarizer_mcp.py:
--------------------------------------------------------------------------------

```python
from fastmcp import FastMCP, Context
from fastmcp.prompts.base import UserMessage, AssistantMessage
import json
import os
from pathlib import Path
import logging
import datetime
from openai import OpenAI

# Initialize FastMCP
mcp = FastMCP(
    "N8N Workflow Summarizer",
    description="Analyzes and summarizes n8n workflows",
    dependencies=["openai>=1.0.0"]
)

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger("workflow_summarizer_mcp")
@mcp.tool()
async def summarize_workflow(workflow_path: str, model: str = "gpt-4o", ctx: Context = None) -> str:
    """
    Summarize an n8n workflow JSON file

    Parameters:
    - workflow_path: Path to the n8n workflow JSON file
    - model: OpenAI model to use (default: gpt-4o)

    Returns:
    - A markdown summary of the workflow
    """
    if ctx:
        ctx.info(f"Starting to summarize workflow from: {workflow_path}")

    # Load the workflow JSON, retrying with latin-1 if UTF-8 decoding fails
    try:
        with open(workflow_path, "r", encoding="utf-8") as f:
            workflow_json = json.load(f)
    except UnicodeDecodeError:
        if ctx:
            ctx.warning("UTF-8 encoding failed, trying latin-1")
        with open(workflow_path, "r", encoding="latin-1") as f:
            workflow_json = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError) as e:
        error_message = f"Could not load workflow file {workflow_path}: {e}"
        if ctx:
            ctx.error(error_message)
        return error_message

    if ctx:
        ctx.info(f"Successfully loaded workflow JSON from {workflow_path}")

    # Extract workflow information
    workflow_title = workflow_json.get("name", "Untitled Workflow")
    if ctx:
        ctx.info(f"Workflow title: {workflow_title}")

    # Count nodes and identify types
    nodes = workflow_json.get("nodes", [])
    node_count = len(nodes)
    if ctx:
        ctx.info(f"Number of nodes: {node_count}")

    # Tally how many nodes of each type appear in the workflow
    node_types = {}
    for node in nodes:
        node_type = node.get("type", "unknown")
        node_types[node_type] = node_types.get(node_type, 0) + 1

    if ctx:
        ctx.info(f"Node types identified: {node_types}")

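    # Note: the type checks below are plain substring heuristics, so they can
    # overcount (e.g. the "ai" test also matches node types containing "gmail" or "email").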
    # Identify AI/agent nodes and database nodes
    ai_nodes = sum(count for type_, count in node_types.items()
                 if "openai" in type_.lower() or "ai" in type_.lower()
                 or "gpt" in type_.lower() or "llm" in type_.lower()
                 or "agent" in type_.lower() or "anthropic" in type_.lower()
                 or "langchain" in type_.lower())

    db_nodes = sum(count for type_, count in node_types.items()
                 if "postgres" in type_.lower() or "mysql" in type_.lower()
                 or "database" in type_.lower() or "sql" in type_.lower()
                 or "mongo" in type_.lower() or "supabase" in type_.lower())

    if ctx:
        ctx.info(f"Agent nodes: {ai_nodes}, Database nodes: {db_nodes}")

    # Extract prompts used in agent nodes
    user_prompts = []
    system_prompts = []

    for node in nodes:
        if "agent" in node.get("type", "").lower() or "openai" in node.get("type", "").lower():
            parameters = node.get("parameters", {})
            system_prompt = parameters.get("systemPrompt", "")
            user_prompt = parameters.get("text", "")

            if system_prompt and len(system_prompt) > 10:
                system_prompts.append(system_prompt)
            if user_prompt and len(user_prompt) > 10:
                user_prompts.append(user_prompt)

    if ctx:
        ctx.info(f"Extracted {len(user_prompts)} user prompts and {len(system_prompts)} system prompts")

    # Get timestamp
    timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    # Initialize the OpenAI client (requires OPENAI_API_KEY to be set)
    openai_api_key = os.environ.get("OPENAI_API_KEY")
    if not openai_api_key:
        error_message = "OPENAI_API_KEY environment variable not set"
        if ctx:
            ctx.error(error_message)
        return error_message

    client = OpenAI(api_key=openai_api_key)
    if ctx:
        ctx.info("Initialized OpenAI client")

    # Handle large workflows by simplifying
    workflow_str = json.dumps(workflow_json)
    workflow_size = len(workflow_str)
    if ctx:
        ctx.info(f"Workflow JSON size: {workflow_size} characters")

    # Estimate token count (rough approximation)
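    # (a common rule of thumb for English text is roughly 4 characters per token)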
    estimated_tokens = workflow_size // 4
    if ctx:
        ctx.info(f"Estimated token count: {estimated_tokens}")

    # Simplify the workflow if it is too large to send in full
    simplified_workflow = workflow_json.copy()
    if estimated_tokens > 12000:
        if ctx:
            ctx.info("Simplifying workflow for token reduction")
            await ctx.report_progress(0.1, f"Simplifying large workflow ({estimated_tokens} tokens)")

        # Keep only essential node information
        simplified_nodes = []
        for node in nodes:
            simplified_node = {
                "name": node.get("name", ""),
                "type": node.get("type", ""),
                "parameters": {},
                "position": node.get("position", [])
            }

            # Keep only essential parameters
            parameters = node.get("parameters", {})
            important_params = ["text", "systemPrompt", "operation", "table", "query"]
            for param in important_params:
                if param in parameters:
                    simplified_node["parameters"][param] = parameters[param]

            simplified_nodes.append(simplified_node)

        simplified_workflow["nodes"] = simplified_nodes

        # Remove unnecessary workflow properties
        keys_to_remove = ["pinData", "connections", "settings", "staticData"]
        for key in keys_to_remove:
            if key in simplified_workflow:
                del simplified_workflow[key]

        simplified_str = json.dumps(simplified_workflow)
        simplified_tokens = len(simplified_str) // 4
        if ctx:
            ctx.info(f"Simplified workflow size: {simplified_tokens} tokens")
            await ctx.report_progress(0.2, "Workflow simplified successfully")

        # Use the simplified workflow from here on
        workflow_json = simplified_workflow

    # Prepare the prompt for the OpenAI API
    prompt = f"""
    You are a workflow analysis expert. Analyze this n8n workflow JSON and provide a detailed summary.

    Here's the workflow information:
    Title: {workflow_title}
    Number of Nodes: {node_count}
    Node Types: {json.dumps(node_types, indent=2)}
    AI/Agent Nodes: {ai_nodes}
    Database Nodes: {db_nodes}

    User Prompts Found: {json.dumps(user_prompts, indent=2)}
    System Prompts Found: {json.dumps(system_prompts, indent=2)}

    Full Workflow JSON: {json.dumps(workflow_json, indent=2)}

    Please provide a summary with the following sections:
    1) STRUCTURED SUMMARY: Basic workflow metadata in bullet points
    2) CURRENT TIMESTAMP: When this analysis was performed ({timestamp})
    3) DETAILED STEP-BY-STEP EXPLANATION (MARKDOWN): How the workflow functions
    4) AGENT SYSTEM PROMPTS AND USER PROMPTS: Any prompts used in agent/AI nodes
    5) A PYTHON FUNCTION THAT REPLICATES THE WORKFLOW: Conceptual Python code that would perform similar operations

    Format the response in Markdown.
    """

    if ctx:
        ctx.info("Constructing prompt for OpenAI API")
        ctx.info(f"Prompt token estimate: {len(prompt) // 4}")
        await ctx.report_progress(0.3, "Preparing to call OpenAI API")

    # Call the OpenAI API with model-specific parameters
    if ctx:
        ctx.info(f"Calling OpenAI API using model: {model}")
        await ctx.report_progress(0.4, f"Calling OpenAI with model: {model}")

    try:
        if model == "o1":
            # The o1 model requires max_completion_tokens instead of max_tokens
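            # (o1 also rejects non-default temperature values, so temperature is omitted here)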
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                max_completion_tokens=4000,
            )
        else:
            # Standard parameters for other models
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                max_tokens=4000,
                temperature=0.7
            )

        if ctx:
            ctx.info("Successfully received response from OpenAI API")
            await ctx.report_progress(0.8, "Received summary from OpenAI")

        summary = response.choices[0].message.content

        # Save to file and return the summary
        output_path = Path(workflow_path).stem + "_summary.md"
        with open(output_path, "w", encoding="utf-8") as f:
            f.write(summary)

        if ctx:
            ctx.info(f"Workflow summary saved successfully to {output_path}")
            await ctx.report_progress(1.0, f"Summary saved to {output_path}")

        return summary

    except Exception as e:
        error_message = f"Error calling OpenAI API: {str(e)}"
        if ctx:
            ctx.error(error_message)
            await ctx.report_progress(1.0, f"Error: {str(e)}")

        # Try a fallback model if the primary fails and it isn't already the fallback
        if model != "gpt-3.5-turbo-16k":
            if ctx:
                ctx.info("Attempting with fallback model gpt-3.5-turbo-16k")
                await ctx.report_progress(0.5, "Trying fallback model")

            # Further simplify the workflow for the fallback model
            # (slicing clamps automatically, so [:20] keeps at most 20 nodes)
            super_simplified = {
                "name": workflow_title,
                "node_count": node_count,
                "node_types": node_types,
                "nodes": simplified_workflow.get("nodes", [])[:20]
            }

            # Shorten the prompt for the fallback model
            fallback_prompt = f"""
            Analyze this simplified n8n workflow and provide a summary:

            Title: {workflow_title}
            Number of Nodes: {node_count}
            Node Types: {json.dumps(node_types, indent=2)}

            Simplified JSON: {json.dumps(super_simplified, indent=2)}

            Format as markdown with these sections:
            1. Basic summary and stats
            2. Current timestamp ({timestamp})
            3. Step-by-step explanation
            4. Found prompts
            5. Example Python function
            """

            try:
                response = client.chat.completions.create(
                    model="gpt-3.5-turbo-16k",
                    messages=[{"role": "user", "content": fallback_prompt}],
                    max_tokens=4000,
                    temperature=0.7
                )

                if ctx:
                    ctx.info("Successfully received response from fallback model")
                    await ctx.report_progress(0.9, "Received summary from fallback model")

                summary = response.choices[0].message.content

                # Save to file and return the summary
                output_path = Path(workflow_path).stem + "_summary.md"
                with open(output_path, "w", encoding="utf-8") as f:
                    f.write(summary)

                if ctx:
                    ctx.info(f"Workflow summary saved successfully to {output_path}")
                    await ctx.report_progress(1.0, f"Summary saved to {output_path} (fallback model)")

                return summary

            except Exception as fallback_error:
                if ctx:
                    ctx.error(f"Error with fallback model: {str(fallback_error)}")
                    await ctx.report_progress(1.0, f"All attempts failed: {str(fallback_error)}")
                return f"Error with fallback model: {str(fallback_error)}"

        return error_message

@mcp.prompt()
def summarize_workflow_prompt() -> list:
    """Prompt to guide users on using the workflow summarizer"""
    return [
        AssistantMessage("I can help you analyze and summarize n8n workflows. To get started, I need:"),
        AssistantMessage("1. The path to your n8n workflow JSON file\n2. (Optional) Which OpenAI model to use (default: gpt-4o)"),
        UserMessage("I'd like to summarize my workflow at path/to/workflow.json"),
        AssistantMessage("I'll analyze that workflow for you. Would you like to use the default model (gpt-4o) or specify a different one?")
    ]

@mcp.resource("docs://workflows")
def workflow_docs() -> str:
    """Documentation about N8N workflows and how they're structured"""
    return """
    # N8N Workflow Structure Documentation

    N8N workflows are defined in JSON format and consist of nodes connected together to form an automation flow.

    ## Key Components

    ### Nodes
    The basic building blocks of a workflow. Each node represents an action or operation.

    ### Connections
    Define the flow between nodes, determining the execution path.

    ### Parameters
    Configure how each node operates, providing inputs and settings.

    ## Common Node Types

    - HTTP Request: Make API calls to external services
    - Code: Execute custom JavaScript/TypeScript code
    - Function: Run a function on the input data
    - IF: Conditional branching based on conditions
    - Split: Divide workflow execution into parallel branches
    - Merge: Combine data from multiple branches
    - Filter: Filter items based on conditions
    - Set: Set values in the workflow data

    ## AI/Agent Nodes

    N8N supports various AI nodes for LLM integration:
    - OpenAI: Connect to OpenAI's models
    - Anthropic: Connect to Claude models
    - LangChain: Build complex AI applications

    ## Database Nodes

    Connect to various databases:
    - PostgreSQL
    - MySQL
    - MongoDB
    - Supabase
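
    ## Minimal Example

    An abbreviated, illustrative workflow JSON (not a full export) looks like:

        {
          "name": "Example",
          "nodes": [
            {"name": "Webhook", "type": "n8n-nodes-base.webhook", "parameters": {}, "position": [250, 300]},
            {"name": "Set", "type": "n8n-nodes-base.set", "parameters": {}, "position": [450, 300]}
          ],
          "connections": {"Webhook": {"main": [[{"node": "Set", "type": "main", "index": 0}]]}}
        }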
    """

if __name__ == "__main__":
    mcp.run()
```