This is page 3 of 4. Use http://codebase.md/stevereiner/python-alfresco-mcp-server?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .gitattributes
├── .gitignore
├── .vscode
│   ├── mcp.json
│   └── settings.json
├── alfresco_mcp_server
│   ├── __init__.py
│   ├── config.py
│   ├── fastmcp_server.py
│   ├── prompts
│   │   ├── __init__.py
│   │   └── search_and_analyze.py
│   ├── resources
│   │   ├── __init__.py
│   │   └── repository_resources.py
│   ├── tools
│   │   ├── __init__.py
│   │   ├── core
│   │   │   ├── __init__.py
│   │   │   ├── browse_repository.py
│   │   │   ├── cancel_checkout.py
│   │   │   ├── checkin_document.py
│   │   │   ├── checkout_document.py
│   │   │   ├── create_folder.py
│   │   │   ├── delete_node.py
│   │   │   ├── download_document.py
│   │   │   ├── get_node_properties.py
│   │   │   ├── update_node_properties.py
│   │   │   └── upload_document.py
│   │   └── search
│   │       ├── __init__.py
│   │       ├── advanced_search.py
│   │       ├── cmis_search.py
│   │       ├── search_by_metadata.py
│   │       └── search_content.py
│   └── utils
│       ├── __init__.py
│       ├── connection.py
│       ├── file_type_analysis.py
│       └── json_utils.py
├── CHANGELOG.md
├── claude-desktop-config-pipx-macos.json
├── claude-desktop-config-pipx-windows.json
├── claude-desktop-config-uv-macos.json
├── claude-desktop-config-uv-windows.json
├── claude-desktop-config-uvx-macos.json
├── claude-desktop-config-uvx-windows.json
├── config.yaml
├── docs
│   ├── api_reference.md
│   ├── claude_desktop_setup.md
│   ├── client_configurations.md
│   ├── configuration_guide.md
│   ├── install_with_pip_pipx.md
│   ├── mcp_inspector_setup.md
│   ├── quick_start_guide.md
│   ├── README.md
│   ├── testing_guide.md
│   └── troubleshooting.md
├── examples
│   ├── batch_operations.py
│   ├── document_lifecycle.py
│   ├── error_handling.py
│   ├── examples_summary.md
│   ├── quick_start.py
│   ├── README.md
│   └── transport_examples.py
├── LICENSE
├── MANIFEST.in
├── mcp-inspector-http-pipx-config.json
├── mcp-inspector-http-uv-config.json
├── mcp-inspector-http-uvx-config.json
├── mcp-inspector-stdio-pipx-config.json
├── mcp-inspector-stdio-uv-config.json
├── mcp-inspector-stdio-uvx-config.json
├── prompts-for-claude.md
├── pyproject.toml
├── pytest.ini
├── README.md
├── run_server.py
├── sample-dot-env.txt
├── scripts
│   ├── run_tests.py
│   └── test.bat
├── tests
│   ├── __init__.py
│   ├── conftest.py
│   ├── mcp_specific
│   │   ├── MCP_INSPECTOR_CONNECTION.md
│   │   ├── mcp_testing_guide.md
│   │   ├── START_HTTP_SERVER.md
│   │   ├── START_MCP_INSPECTOR.md
│   │   ├── test_http_server.ps1
│   │   ├── test_with_mcp_inspector.md
│   │   └── TESTING_INSTRUCTIONS.md
│   ├── README.md
│   ├── test_coverage.py
│   ├── test_fastmcp_2_0.py
│   ├── test_integration.py
│   └── test_unit_tools.py
├── tests-debug
│   └── README.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/docs/configuration_guide.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Configuration Guide
  2 | 
  3 | Complete guide for configuring the Alfresco MCP Server. This document covers all configuration options, environment setup, and deployment scenarios.
  4 | 
  5 | ## 📋 Configuration Overview
  6 | 
  7 | The Alfresco MCP Server supports multiple configuration methods:
  8 | 
  9 | 1. **Environment Variables** (Primary - Always takes precedence)
 10 | 2. **Default Values** (Fallback when no environment variable is set)
 11 | 3. **Command Line Arguments** (Transport and server options only)
 12 | 
 13 | ### ⚠️ Configuration Precedence Order
 14 | 
 15 | **Higher priority settings override lower priority settings:**
 16 | 
 17 | 1. 🥇 **Environment Variables** (Highest Priority)
 18 | 2. 🥈 **Default Values** (Fallback)
 19 | 
 20 | **Answer to "Which setting wins?"**
 21 | - ✅ **Environment Variables ALWAYS WIN** over any other setting
 22 | - ✅ If no environment variable is set, default values are used  
 23 | - ✅ YAML configuration files are **not currently implemented** (future enhancement)
 24 | 
 25 | ### 🔄 Practical Example
 26 | 
 27 | ```bash
 28 | # If you set an environment variable:
 29 | export ALFRESCO_URL="https://prod.company.com"
 30 | export ALFRESCO_USERNAME="service-account"
 31 | 
 32 | # And later try to override in code or config:
 33 | # config.yaml (not implemented yet, but for illustration):
 34 | # alfresco:
 35 | #   url: "http://localhost:8080"  
 36 | #   username: "admin"
 37 | 
 38 | # Result: Environment variables WIN!
 39 | # ✅ ALFRESCO_URL = "https://prod.company.com"  (from env var)
 40 | # ✅ ALFRESCO_USERNAME = "service-account"      (from env var)  
 41 | # ✅ ALFRESCO_PASSWORD = "admin"                (default value - no env var set)
 42 | ```
 43 | 
 44 | **Key Takeaway:** Environment variables are the "final word" in v1.1 configuration.
 45 | 
 46 | ## 🌍 Environment Variables
 47 | 
 48 | ### Required Configuration
 49 | 
 50 | ```bash
 51 | # Alfresco Server Connection (Required)
 52 | export ALFRESCO_URL="http://localhost:8080"
 53 | export ALFRESCO_USERNAME="admin"
 54 | export ALFRESCO_PASSWORD="admin"
 55 | ```
 56 | 
 57 | ### Optional Configuration
 58 | 
 59 | ```bash
 60 | # Authentication (Alternative to username/password)
 61 | export ALFRESCO_TOKEN="your-auth-token"
 62 | 
 63 | # Connection Settings
 64 | export ALFRESCO_TIMEOUT="30"              # Request timeout in seconds
 65 | export ALFRESCO_MAX_RETRIES="3"           # Maximum retry attempts
 66 | export ALFRESCO_RETRY_DELAY="1.0"         # Delay between retries
 67 | 
 68 | # SSL/TLS Settings
 69 | export ALFRESCO_VERIFY_SSL="true"         # Verify SSL certificates
 70 | export ALFRESCO_CA_BUNDLE="/path/to/ca"   # Custom CA bundle
 71 | 
 72 | # Debug and Logging
 73 | export ALFRESCO_DEBUG="false"             # Enable debug mode
 74 | export ALFRESCO_LOG_LEVEL="INFO"          # Logging level
 75 | 
 76 | # Performance Settings
 77 | export ALFRESCO_POOL_SIZE="10"            # Connection pool size
 78 | export ALFRESCO_MAX_CONCURRENT="5"        # Max concurrent requests
 79 | ```
 80 | 
 81 | ### Development vs Production
 82 | 
 83 | **Development Environment:**
 84 | ```bash
 85 | # Development settings
 86 | export ALFRESCO_URL="http://localhost:8080"
 87 | export ALFRESCO_USERNAME="admin"
 88 | export ALFRESCO_PASSWORD="admin"
 89 | export ALFRESCO_DEBUG="true"
 90 | export ALFRESCO_LOG_LEVEL="DEBUG"
 91 | export ALFRESCO_VERIFY_SSL="false"
 92 | ```
 93 | 
 94 | **Production Environment:**
 95 | ```bash
 96 | # Production settings
 97 | export ALFRESCO_URL="https://alfresco.company.com"
 98 | export ALFRESCO_USERNAME="service_account"
 99 | export ALFRESCO_PASSWORD="secure_password"
100 | export ALFRESCO_DEBUG="false"
101 | export ALFRESCO_LOG_LEVEL="INFO"
102 | export ALFRESCO_VERIFY_SSL="true"
103 | export ALFRESCO_TIMEOUT="60"
104 | export ALFRESCO_MAX_RETRIES="5"
105 | ```
106 | 
107 | ## 📄 Configuration Files (Future Enhancement)
108 | 
109 | > ⚠️ **Note**: YAML configuration files are not currently implemented in v1.1. All configuration must be done via environment variables. YAML support is planned for a future release.
110 | 
111 | ### Planned YAML Configuration
112 | 
113 | Future versions will support `config.yaml` in your project root:
114 | 
115 | ```yaml
116 | # config.yaml
117 | alfresco:
118 |   # Connection settings
119 |   url: "http://localhost:8080"
120 |   username: "admin"
121 |   password: "admin"
122 |   
123 |   # Optional token authentication
124 |   # token: "your-auth-token"
125 |   
126 |   # Connection options
127 |   timeout: 30
128 |   max_retries: 3
129 |   retry_delay: 1.0
130 |   verify_ssl: true
131 |   
132 |   # Performance settings
133 |   pool_size: 10
134 |   max_concurrent: 5
135 | 
136 | # Server settings
137 | server:
138 |   host: "127.0.0.1"
139 |   port: 8000
140 |   transport: "stdio"  # stdio, http, sse
141 |   
142 | # Logging configuration
143 | logging:
144 |   level: "INFO"       # DEBUG, INFO, WARNING, ERROR
145 |   format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
146 |   file: "alfresco_mcp.log"
147 |   
148 | # Feature flags
149 | features:
150 |   enable_caching: true
151 |   enable_metrics: false
152 |   enable_tracing: false
153 | ```
154 | 
155 | ### Planned Environment-Specific Configs
156 | 
157 | **Future: config.development.yaml:**
158 | ```yaml
159 | alfresco:
160 |   url: "http://localhost:8080"
161 |   username: "admin"
162 |   password: "admin"
163 |   verify_ssl: false
164 |   
165 | logging:
166 |   level: "DEBUG"
167 |   
168 | features:
169 |   enable_caching: false
170 |   enable_metrics: true
171 | ```
172 | 
173 | **Future: config.production.yaml:**
174 | ```yaml
175 | alfresco:
176 |   url: "${ALFRESCO_URL}"
177 |   username: "${ALFRESCO_USERNAME}"
178 |   password: "${ALFRESCO_PASSWORD}"
179 |   timeout: 60
180 |   max_retries: 5
181 |   verify_ssl: true
182 |   
183 | logging:
184 |   level: "INFO"
185 |   file: "/var/log/alfresco-mcp-server.log"
186 |   
187 | features:
188 |   enable_caching: true
189 |   enable_metrics: true
190 |   enable_tracing: true
191 | ```
192 | 
193 | ### Current Configuration Loading
194 | 
195 | ```python
196 | # Current v1.1 implementation - Environment variables only
197 | from alfresco_mcp_server.config import load_config
198 | 
199 | # Load configuration (reads environment variables + defaults)
200 | config = load_config()
201 | 
202 | # Configuration is automatically loaded from environment variables:
203 | # ALFRESCO_URL, ALFRESCO_USERNAME, ALFRESCO_PASSWORD, etc.
204 | ```
205 | 
206 | ### Planned Future Configuration Loading
207 | 
208 | ```python
209 | # Future enhancement - YAML + environment variables
210 | from alfresco_mcp_server.config import load_config
211 | 
212 | # Load environment-specific config (planned)
213 | config = load_config("config.production.yaml")
214 | 
215 | # Load with environment variable override (planned)
216 | config = load_config(env_override=True)
217 | ```
218 | 
219 | ## 🖥️ Command Line Arguments
220 | 
221 | ### FastMCP Server Options
222 | 
223 | ```bash
224 | # Basic usage
225 | python -m alfresco_mcp_server.fastmcp_server
226 | 
227 | # With custom transport
228 | python -m alfresco_mcp_server.fastmcp_server --transport http --port 8001
229 | 
230 | # With logging
231 | python -m alfresco_mcp_server.fastmcp_server --log-level DEBUG
232 | 
233 | # Full options
234 | python -m alfresco_mcp_server.fastmcp_server \
235 |   --transport sse \
236 |   --host 0.0.0.0 \
237 |   --port 8002 \
238 |   --log-level INFO \
239 |   --config config.production.yaml
240 | ```
241 | 
242 | ### Available Arguments
243 | 
244 | | Argument | Type | Default | Description |
245 | |----------|------|---------|-------------|
246 | | `--transport` | choice | `stdio` | Transport protocol (stdio, http, sse) |
247 | | `--host` | string | `127.0.0.1` | Server host address |
248 | | `--port` | integer | `8000` | Server port number |
249 | | `--log-level` | choice | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR) |
250 | | `--config` | path | `config.yaml` | Configuration file path |
251 | | `--help` | flag | - | Show help message |
252 | 
253 | 
254 | 
255 | 
256 | 
257 | ## 🔧 Development Configuration
258 | 
259 | ### Development Setup
260 | 
261 | **Option A: UV (Recommended - Automatic dependency management):**
262 | 
263 | ```bash
264 | # Clone the repository
265 | git clone https://github.com/stevereiner/python-alfresco-mcp-server.git
266 | cd python-alfresco-mcp-server
267 | 
268 | # UV handles everything automatically - no manual venv needed!
269 | uv sync --extra dev          # Install with development dependencies
270 | uv run python-alfresco-mcp-server --help  # Test installation
271 | 
272 | # Set development environment variables
273 | export ALFRESCO_URL="http://localhost:8080"
274 | export ALFRESCO_USERNAME="admin"
275 | export ALFRESCO_PASSWORD="admin"
276 | export ALFRESCO_DEBUG="true"
277 | export ALFRESCO_LOG_LEVEL="DEBUG"
278 | export ALFRESCO_VERIFY_SSL="false"
279 | 
280 | # Run with UV (recommended)
281 | uv run python-alfresco-mcp-server --transport stdio
282 | ```
283 | 
284 | **Option B: Traditional Python (Manual venv management):**
285 | 
286 | ```bash
287 | # Clone the repository
288 | git clone https://github.com/stevereiner/python-alfresco-mcp-server.git
289 | cd python-alfresco-mcp-server
290 | 
291 | # Create development environment
292 | python -m venv venv
293 | source venv/bin/activate       # Linux/macOS
294 | # venv\Scripts\activate        # Windows
295 | 
296 | # Install in development mode
297 | pip install -e .[dev]
298 | 
299 | # Set development environment
300 | export ALFRESCO_URL="http://localhost:8080"
301 | export ALFRESCO_USERNAME="admin"
302 | export ALFRESCO_PASSWORD="admin"
303 | export ALFRESCO_DEBUG="true"
304 | export ALFRESCO_LOG_LEVEL="DEBUG"
305 | export ALFRESCO_VERIFY_SSL="false"
306 | 
307 | # Run traditionally
308 | python-alfresco-mcp-server --transport stdio
309 | ```
310 | 
311 | ### Testing Configuration
312 | 
313 | ```yaml
314 | # config.test.yaml
315 | alfresco:
316 |   url: "http://localhost:8080"
317 |   username: "admin"
318 |   password: "admin"
319 |   timeout: 10
320 |   verify_ssl: false
321 | 
322 | logging:
323 |   level: "DEBUG"
324 |   
325 | features:
326 |   enable_caching: false
327 | ```
328 | 
329 | ### Hot Reload Setup
330 | 
331 | ```bash
332 | # Install development dependencies
333 | pip install watchdog
334 | 
335 | # Run with auto-reload
336 | python -m alfresco_mcp_server.fastmcp_server --reload
337 | ```
338 | 
339 | ## 🚀 Production Configuration
340 | 
341 | ### Production Checklist
342 | 
343 | - ✅ Use strong passwords/tokens
344 | - ✅ Enable SSL certificate verification
345 | - ✅ Configure appropriate timeouts
346 | - ✅ Set up log rotation
347 | - ✅ Configure monitoring
348 | - ✅ Use environment variables for secrets
349 | - ✅ Set appropriate resource limits
350 | 
351 | ### Docker Configuration
352 | 
353 | ```dockerfile
354 | # Dockerfile
355 | FROM python:3.9-slim
356 | 
357 | WORKDIR /app
358 | COPY . /app
359 | 
360 | RUN pip install -e .
361 | 
362 | # Configuration via environment
363 | ENV ALFRESCO_URL=""
364 | ENV ALFRESCO_USERNAME=""
365 | ENV ALFRESCO_PASSWORD=""
366 | 
367 | EXPOSE 8000
368 | 
369 | CMD ["python", "-m", "alfresco_mcp_server.fastmcp_server", "--host", "0.0.0.0"]
370 | ```
371 | 
372 | ```yaml
373 | # docker-compose.yml
374 | version: '3.8'
375 | services:
376 |   alfresco-mcp-server:
377 |     build: .
378 |     ports:
379 |       - "8000:8000"
380 |     environment:
381 |       - ALFRESCO_URL=http://alfresco:8080
382 |       - ALFRESCO_USERNAME=admin
383 |       - ALFRESCO_PASSWORD=admin
384 |     depends_on:
385 |       - alfresco
386 |     restart: unless-stopped
387 |     
388 |   alfresco:
389 |     image: alfresco/alfresco-content-repository-community:latest
390 |     ports:
391 |       - "8080:8080"
392 |     environment:
393 |       - JAVA_OPTS=-Xmx2g
394 | ```
395 | 
396 | ### Systemd Service
397 | 
398 | ```ini
399 | # /etc/systemd/system/alfresco-mcp-server.service
400 | [Unit]
401 | Description=Alfresco MCP Server
402 | After=network.target
403 | 
404 | [Service]
405 | Type=simple
406 | User=alfresco-mcp
407 | WorkingDirectory=/opt/alfresco-mcp-server
408 | Environment=ALFRESCO_URL=https://alfresco.company.com
409 | Environment=ALFRESCO_USERNAME=service_account
410 | Environment=ALFRESCO_PASSWORD=secure_password
411 | ExecStart=/opt/alfresco-mcp-server/venv/bin/python -m alfresco_mcp_server.fastmcp_server
412 | Restart=always
413 | RestartSec=10
414 | 
415 | [Install]
416 | WantedBy=multi-user.target
417 | ```
418 | 
419 | ## 🔍 Configuration Validation
420 | 
421 | ### Validation Script
422 | 
423 | ```python
424 | #!/usr/bin/env python3
425 | """Configuration validation script."""
426 | 
427 | import asyncio
428 | import sys
429 | from alfresco_mcp_server.config import load_config
430 | from fastmcp import Client
431 | from alfresco_mcp_server.fastmcp_server import mcp
432 | 
433 | async def validate_config():
434 |     """Validate configuration and connectivity."""
435 |     
436 |     try:
437 |         # Load configuration
438 |         config = load_config()
439 |         print("✅ Configuration loaded successfully")
440 |         
441 |         # Verify the MCP server responds (in-memory client)
442 |         async with Client(mcp) as client:
443 |             tools = await client.list_tools()
444 |             print(f"✅ MCP server responding, found {len(tools)} tools")
445 |             
446 |         return True
447 |         
448 |     except Exception as e:
449 |         print(f"❌ Configuration validation failed: {e}")
450 |         return False
451 | 
452 | if __name__ == "__main__":
453 |     success = asyncio.run(validate_config())
454 |     sys.exit(0 if success else 1)
455 | ```
456 | 
457 | ### Configuration Testing
458 | 
459 | ```bash
460 | # Test configuration
461 | python validate_config.py
462 | 
463 | # Test with a specific config file (planned; YAML configs not yet implemented)
464 | python validate_config.py --config config.production.yaml
465 | 
466 | # Test connectivity
467 | curl -u admin:admin http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
468 | ```
469 | 
470 | --- 
```
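The precedence rule documented in the configuration guide above (environment variable wins, otherwise the default applies) can be sketched in a few lines. This is an illustrative sketch only, not the actual `alfresco_mcp_server.config` implementation; the variable names and defaults mirror the tables in the guide.

```python
import os
from dataclasses import dataclass


@dataclass
class AlfrescoConfig:
    """Subset of the documented settings, for illustration."""
    url: str
    username: str
    password: str
    timeout: int


def load_config() -> AlfrescoConfig:
    """Read each setting from the environment, falling back to the
    documented default when the variable is unset."""
    return AlfrescoConfig(
        url=os.environ.get("ALFRESCO_URL", "http://localhost:8080"),
        username=os.environ.get("ALFRESCO_USERNAME", "admin"),
        password=os.environ.get("ALFRESCO_PASSWORD", "admin"),
        timeout=int(os.environ.get("ALFRESCO_TIMEOUT", "30")),
    )
```

With `ALFRESCO_URL` exported and the other variables unset, this reproduces the "practical example" from the guide: the exported URL wins, while username, password, and timeout fall back to their defaults.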

--------------------------------------------------------------------------------
/prompts-for-claude.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Test Prompts for Claude Desktop 🧪
  2 | 
  3 | **Quick Reference for Testing Alfresco MCP Server with Claude Desktop**
  4 | 
  5 | Copy and paste these prompts into Claude Desktop to systematically test all MCP server functionality. Make sure your Alfresco MCP Server is running on `http://127.0.0.1:8003/mcp/` before testing.
  6 | 
  7 | ---
  8 | 
  9 | ## 🔧 **TOOL TESTING** (15 Tools)
 10 | 
 11 | ### **1. Search Content Tool** (AFTS - Alfresco Full Text Search)
 12 | ```
 13 | I need to search for documents in Alfresco. Can you search for:
 14 | - Documents containing "test"  
 15 | - Maximum 5 results
 16 | ```
 17 | 
 18 | **Expected:** List of matching documents or "no results found" message
 19 | 
 20 | ---
 21 | 
 22 | ### **2. Browse Repository Tool**
 23 | ```
 24 | Can you show me what's in the Alfresco repository user home (-my-) directory? I want to see which folders and files are available.
 25 | ```
 26 | 
 27 | **Expected:** List of folders/files in the user home directory with names, types, and IDs
 28 | 
 29 | ---
 30 | 
 31 | ### **3. Create Folder Tool**
 32 | ```
 33 | Please create a new folder called "Claude_Test_Folder" in the repository shared folder (-shared-) with the description "Folder created during Claude MCP testing".
 34 | ```
 35 | 
 36 | **Expected:** Success message with new folder ID, or error if folder already exists
 37 | 
 38 | ---
 39 | 
 40 | ### **4. Upload Document Tool**
 41 | ```
 42 | I want to upload a simple text document. Please create a file called "claude_test_doc.txt" in the repository shared folder with this content:
 43 | 
 44 | "This is a test document created by Claude via MCP.
 45 | Created: [current date/time]
 46 | Purpose: Testing Alfresco MCP Server functionality
 47 | Status: Active"
 48 | 
 49 | Use the description "Test document uploaded via Claude MCP"
 50 | ```
 51 | 
 52 | **Expected:** Success message with document ID and upload confirmation
 53 | 
 54 | ---
 55 | 
 56 | ### **5. Get Node Properties Tool**
 57 | ```
 58 | Can you get the properties and metadata for the document we just uploaded? Use the node ID from the previous upload.
 59 | ```
 60 | 
 61 | **Expected:** Full property list including name, type, created date, size, etc.
 62 | 
 63 | ---
 64 | 
 65 | ### **6. Update Node Properties Tool**
 66 | ```
 67 | Please update the properties of that document to add:
 68 | - Title: "Claude MCP Test Document"
 69 | - Description: "Updated via Claude MCP testing session"
 70 | 
 71 | ```
 72 | 
 73 | **Expected:** Success message confirming property updates
 74 | 
 75 | ---
 76 | 
 77 | ### **7. Download Document Tool**
 78 | ```
 79 | Now download the content of that test document we created to verify it was uploaded correctly.
 80 | ```
 81 | 
 82 | **Expected:** Base64 encoded content that matches what we uploaded
 83 | 
 84 | ---
 85 | 
 86 | ### **8. Checkout Document Tool**
 87 | ```
 88 | Please checkout the test document for editing. This should lock it so others can't modify it while we're working on it.
 89 | ```
 90 | 
 91 | **Expected:** Success message indicating document is checked out/locked
 92 | 
 93 | ---
 94 | 
 95 | ### **9. Checkin Document Tool**
 96 | ```
 97 | Check the document back in as a minor version with the comment "Updated via Claude MCP testing - minor revision".
 98 | ```
 99 | 
100 | **Expected:** Success message with new version number
101 | 
102 | ---
103 | 
104 | ### **10. Cancel Checkout Tool** 
105 | ```
106 | If you have any documents currently checked out, please cancel the checkout for one of them to test this functionality. Use the node ID of a checked-out document.
107 | ```
108 | 
109 | **Expected:** Success message confirming checkout cancellation
110 | 
111 | ---
112 | 
113 | ### **11. Advanced Search Tool**
114 | ```
115 | Test the advanced search with multiple filters:
116 | - Search for documents created after "2024-01-01"
117 | - Content type: "pdf" 
118 | - Node type: "cm:content"
119 | - Maximum 10 results
120 | 
121 | Show me how advanced filtering works compared to basic search.
122 | ```
123 | 
124 | **Expected:** Filtered search results based on multiple criteria
125 | 
126 | ---
127 | 
128 | ### **12. Search by Metadata Tool**
129 | ```
130 | Search for documents by specific metadata:
131 | - Property name: "cm:title"
132 | - Property value: "test" 
133 | - Comparison: "contains"
134 | - Node type: "cm:content"
135 | 
136 | This should find documents where the title contains "test".
137 | ```
138 | 
139 | **Expected:** Documents matching the metadata criteria
140 | 
141 | ---
142 | 
143 | ### **13. CMIS Search Tool** (SQL-like Queries)
144 | ```
145 | Test CMIS SQL-like searching with these examples:
146 | 
147 | 1. First, try a preset: use "recent_documents" to see the most recently created documents
148 | 2. Then try a custom CMIS query: "SELECT * FROM cmis:document WHERE cmis:name LIKE 'test%'"
149 | 3. (Doesn't work) Search for PDF files only: "SELECT * FROM cmis:document WHERE cmis:contentStreamMimeType = 'application/pdf'"
150 | 4. (This works) Search for PDF files only "SELECT * FROM cmis:document WHERE cmis:name LIKE '%.pdf'"
151 | 
152 | Compare CMIS structured results with AFTS full-text search operators.
153 | ```
154 | 
155 | **Expected:** SQL-style structured results with precise metadata filtering
156 | 
157 | ---
158 | 
159 | ### **14. Delete Node Tool** (Use with caution!)
160 | ```
161 | Finally, let's clean up by deleting the test document we created. Please delete the test document (but keep the folder for now).
162 | ```
163 | 
164 | **Expected:** Success message confirming deletion
165 | 
166 | ---
167 | 
168 | ## 🔍 **SEARCH COMPARISON TESTING**
169 | 
170 | ### **Compare All 4 Search Tools**
171 | ```
172 | Help me understand the differences between the four search methods:
173 | 
174 | 1. Use basic search_content to find documents containing "test" (AFTS full-text search)
175 | 2. Use advanced_search with multiple filters (created after 2024-01-01, content type pdf) (AFTS with filters)  
176 | 3. Use search_by_metadata to find documents where cm:title contains "test" (AFTS property search)
177 | 4. Use cmis_search with SQL query "SELECT * FROM cmis:document WHERE cmis:name LIKE 'test%'" (CMIS SQL)
178 | 
179 | Compare the results and explain when to use each search method.
180 | ```
181 | 
182 | **Expected:** Clear comparison showing AFTS vs CMIS capabilities and different search approaches
183 | 
184 | ---
185 | 
186 | ## 🗄️ **CMIS ADVANCED TESTING**
187 | 
188 | ### **CMIS Preset Exploration**
189 | ```
190 | Test all the CMIS presets to understand different query types:
191 | 
192 | 1. Use preset "recent_documents" to see newest content
193 | 2. Use preset "large_files" to find documents over 1MB
194 | 3. Use preset "pdf_documents" to find all PDF files
195 | 4. Use preset "word_documents" to find Word documents
196 | 
197 | Show me what each preset reveals about the repository structure.
198 | ```
199 | 
200 | **Expected:** Different categories of content with SQL-precision filtering
201 | 
202 | ---
203 | 
204 | ### **Custom CMIS Queries**
205 | ```
206 | Test custom CMIS SQL queries for advanced scenarios:
207 | 
208 | 1. Find documents created after a given date: "SELECT * FROM cmis:document WHERE cmis:creationDate > '2024-01-01T00:00:00.000Z'"
209 | 2. Find large PDFs: "SELECT * FROM cmis:document WHERE cmis:contentStreamMimeType = 'application/pdf' AND cmis:contentStreamLength > 500000"
210 | 3. Find folders with specific names: "SELECT * FROM cmis:folder WHERE cmis:name LIKE '%test%'"
211 | 
212 | This demonstrates CMIS's SQL-like precision for complex filtering.
213 | ```
214 | 
215 | **Expected:** Precise, database-style results showing CMIS's structured query power
216 | 
217 | ---
218 | 
219 | ## 📝 **PROMPT TESTING** (1 Prompt)
220 | 
221 | ### **Search and Analyze Prompt**
222 | ```
223 | Can you use the search_and_analyze prompt to help me find and analyze documents related to "project management" in the repository? I want to understand what content is available and get insights about it.
224 | ```
225 | 
226 | **Expected:** Structured analysis with search results, content summary, and insights
227 | 
228 | ---
229 | 
230 | ## 📦 **RESOURCE TESTING** (5 Resources)
231 | 
232 | ### **1. Repository Info Resource**
233 | ```
234 | Can you check the repository information resource to tell me about this Alfresco instance? I want to know version, edition, and basic details.
235 | ```
236 | 
237 | **Expected:** Repository version, edition, schema info
238 | 
239 | ---
240 | 
241 | ### **2. Repository Health Resource**
242 | ```
243 | Please check the repository health status. Is everything running normally?
244 | ```
245 | 
246 | **Expected:** Health status indicating if services are up/down
247 | 
248 | ---
249 | 
250 | ### **3. Repository Stats Resource**
251 | ```
252 | Show me the current repository statistics - how many documents, users, storage usage, etc.
253 | ```
254 | 
255 | **Expected:** Usage statistics and metrics
256 | 
257 | ---
258 | 
259 | ### **4. Repository Config Resource**
260 | ```
261 | Can you check the repository configuration details? I want to understand how this Alfresco instance is set up.
262 | ```
263 | 
264 | **Expected:** Configuration settings and parameters
265 | 
266 | ---
267 | 
268 | ### **5. Dynamic Repository Resource**
269 | ```
270 | Can you check the "users" section of repository information to see what user management details are available?
271 | ```
272 | 
273 | **Expected:** User-related repository information
274 | 
275 | ---
276 | 
277 | ## 🚀 **COMPLEX WORKFLOW TESTING**
278 | 
279 | ### **Complete Document Lifecycle**
280 | ```
281 | Let's test a complete document management workflow:
282 | 
283 | 1. Create a folder called "Project_Alpha"
284 | 2. Upload a document called "requirements.md" to that folder with some project requirements content
285 | 3. Get the document properties to verify it was created correctly
286 | 4. Update the document properties to add a title and description
287 | 5. Checkout the document for editing
288 | 6. Checkin the document as a major version with appropriate comments
289 | 7. Search for documents containing "requirements" using basic search (AFTS full-text)
290 | 8. Try advanced search with date filters to find the same document (AFTS with filters)
291 | 9. Use metadata search to find it by title property (AFTS property search)
292 | 10. Use CMIS search with SQL query to find it by name (CMIS SQL)
293 | 11. Download the document to verify content integrity
294 | 
295 | Walk me through each step and confirm success before moving to the next.
296 | ```
297 | 
298 | **Expected:** Step-by-step execution with confirmation at each stage
299 | 
300 | ---
301 | 
302 | ### **Repository Exploration**
303 | ```
304 | Help me explore this Alfresco repository systematically:
305 | 
306 | 1. Check repository health and info first
307 | 2. Browse the root directory to see what's available
308 | 3. Search for any existing content
309 | 4. Show me repository statistics
310 | 5. Summarize what you've learned about this Alfresco instance
311 | 
312 | Provide a comprehensive overview of what we're working with.
313 | ```
314 | 
315 | **Expected:** Comprehensive repository analysis and summary
316 | 
317 | ---
318 | 
319 | ## 🐛 **ERROR TESTING**
320 | 
321 | ### **Invalid Operations**
322 | ```
323 | Let's test error handling:
324 | 
325 | 1. Try to download a document with invalid ID "invalid-node-id"
326 | 2. Try to delete a non-existent node
327 | 3. Try to upload a document with missing required parameters
328 | 4. Search with an empty query
329 | 
330 | Show me how the MCP server handles these error cases.
331 | ```
332 | 
333 | **Expected:** Graceful error messages without crashes
334 | 
335 | ---
336 | 
337 | ### **Authentication Testing**
338 | ```
339 | Can you verify that authentication is working properly by:
340 | 
341 | 1. Checking repository info (requires read access)
342 | 2. Creating a test folder (requires write access)  
343 | 3. Deleting that folder (requires delete access)
344 | 
345 | This will confirm all permission levels are working.
346 | ```
347 | 
348 | **Expected:** All operations succeed, confirming proper authentication
349 | 
350 | ---
351 | 
352 | ## 📊 **PERFORMANCE TESTING**
353 | 
354 | ### **Batch Operations**
355 | ```
356 | Test performance with multiple operations:
357 | 
358 | 1. Create 3 folders with names "Batch_Test_1", "Batch_Test_2", "Batch_Test_3"
359 | 2. Upload a small document to each folder
360 | 3. Search for "Batch_Test" to find all created content
361 | 4. Clean up by deleting all test content
362 | 
363 | Monitor response times and any issues with multiple rapid operations.
364 | ```
365 | 
366 | **Expected:** All operations complete successfully with reasonable response times
367 | 
368 | ---
369 | 
370 | ## ✅ **SUCCESS CRITERIA**
371 | 
372 | For a fully functional MCP server, you should see:
373 | 
374 | - ✅ All 15 tools respond without errors (including new CMIS search)
375 | - ✅ The search_and_analyze prompt works
376 | - ✅ All 5 resources return data
377 | - ✅ Authentication works for read/write/delete operations
378 | - ✅ AFTS and CMIS search both work properly
379 | - ✅ Error handling is graceful
380 | - ✅ Complex workflows complete successfully
381 | - ✅ Performance is acceptable
382 | 
383 | ## 🔍 **TROUBLESHOOTING**
384 | 
385 | If tests fail:
386 | 
387 | 1. **Check server status**: Verify MCP server is running on http://127.0.0.1:8003/mcp/
388 | 2. **Check Alfresco**: Ensure Alfresco is running on http://localhost:8080
389 | 3. **Check authentication**: Verify credentials in config.yaml
390 | 4. **Check logs**: Review server console output for errors
391 | 5. **Check network**: Ensure no firewall/proxy issues
392 | 
393 | ## 📝 **LOGGING CLAUDE'S RESPONSES**
394 | 
395 | When testing, note:
396 | - Which operations succeed/fail
397 | - Any error messages received
398 | - Response times for operations
399 | - Quality of returned data
400 | - Any unexpected behavior
401 | 
402 | This will help identify areas needing improvement in the MCP server implementation. 
```
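The connectivity check in the troubleshooting list above (and the `curl` command in the configuration guide) can be scripted with the standard library alone. This is a sketch assuming the documented dev defaults (`admin`/`admin` on `http://localhost:8080`); the helper name is illustrative.

```python
import base64
import urllib.request


def build_health_request(base_url: str, username: str, password: str) -> urllib.request.Request:
    """Build an authenticated GET for the repository root node,
    mirroring the curl check from the configuration guide."""
    url = f"{base_url}/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-"
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    return req


req = build_health_request("http://localhost:8080", "admin", "admin")
# urllib.request.urlopen(req) should return HTTP 200 when Alfresco is
# reachable and the credentials are valid.
```

If this request fails while the MCP server's tools also fail, the problem is on the Alfresco side (steps 2, 3, and 5 of the troubleshooting list) rather than in the MCP server itself.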

--------------------------------------------------------------------------------
/tests/test_coverage.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | Code coverage tests for Python Alfresco MCP Server.
  3 | Tests edge cases and error paths to ensure comprehensive coverage.
  4 | """
  5 | import pytest
  6 | import asyncio
  7 | import base64
  8 | import time
  9 | from unittest.mock import Mock, patch
 10 | from fastmcp import Client
 11 | from alfresco_mcp_server.fastmcp_server import mcp
 12 | from tests.test_utils import strip_emojis
 13 | 
 14 | 
 15 | class TestCodeCoverage:
 16 |     """Test various code paths for coverage."""
 17 |     
 18 |     @pytest.mark.asyncio
 19 |     async def test_all_tool_combinations(self):
 20 |         """Test all tools with different parameter combinations."""
 21 |         async with Client(mcp) as client:
 22 |             # Test all major tools
 23 |             tools_to_test = [
 24 |                 ("search_content", {"query": "test", "max_results": 10}),
 25 |                 ("upload_document", {"file_path": "", "base64_content": "dGVzdA==", "description": "test file"}),
 26 |                 ("download_document", {"node_id": "test-123"}),
 27 |                 ("checkout_document", {"node_id": "test-123"}),
 28 |                 ("checkin_document", {"node_id": "test-123", "file_content": "dGVzdA==", "comment": "test"}),
 29 |                 ("cancel_checkout", {"node_id": "test-123"}),
 30 |                 ("create_folder", {"folder_name": "test", "parent_id": "-shared-"}),
 31 |                 ("get_node_properties", {"node_id": "-shared-"}),
 32 |                 ("update_node_properties", {"node_id": "-shared-", "title": "Test"}),
 33 |                 ("delete_node", {"node_id": "test-123"}),
 34 |                 ("browse_repository", {"parent_id": "-shared-"}),
 35 |                 ("advanced_search", {"query": "test"}),
 36 |                 ("search_by_metadata", {"metadata_query": "test"}),
 37 |                 ("cmis_search", {"cmis_query": "SELECT * FROM cmis:document"})
 38 |             ]
 39 |             
 40 |             for tool_name, params in tools_to_test:
 41 |                 try:
 42 |                     result = await client.call_tool(tool_name, params)
 43 |                     # All tools should return valid responses (success or graceful error)
 44 |                     assert len(result.content) == 1
 45 |                     assert isinstance(result.content[0].text, str)
 46 |                     assert len(result.content[0].text) > 0
 47 |                 except Exception as e:
 48 |                     # Some tools may raise exceptions with invalid data - that's acceptable
 49 |                     assert "validation" in str(e).lower() or "error" in str(e).lower()
 50 | 
 51 |     @pytest.mark.asyncio
 52 |     async def test_search_models_import_error(self):
 53 |         """Test handling when search models can't be imported."""
 54 |         async with Client(mcp) as client:
 55 |             # Test search with potentially problematic queries
 56 |             problematic_queries = ["", "*", "SELECT * FROM cmis:document LIMIT 1000"]
 57 |             
 58 |             for query in problematic_queries:
 59 |                 try:
 60 |                     result = await client.call_tool("search_content", {
 61 |                         "query": query,
 62 |                         "max_results": 5
 63 |                     })
 64 |                     # Should handle gracefully
 65 |                     assert len(result.content) >= 1
 66 |                 except Exception:
 67 |                     pass  # Some queries expected to fail
 68 |     
 69 |     @pytest.mark.asyncio
 70 |     async def test_all_error_paths(self):
 71 |         """Test various error conditions."""
 72 |         async with Client(mcp) as client:
 73 |             # Test with invalid parameters
 74 |             error_scenarios = [
 75 |                 ("get_node_properties", {"node_id": ""}),
 76 |                 ("download_document", {"node_id": "invalid-id-12345"}),
 77 |                 ("upload_document", {"file_path": "nonexistent.txt"}),
 78 |                 ("delete_node", {"node_id": "invalid-node"}),
 79 |                 ("checkout_document", {"node_id": "invalid-checkout"}),
 80 |             ]
 81 |             
 82 |             for tool_name, params in error_scenarios:
 83 |                 result = await client.call_tool(tool_name, params)
 84 |                 # Should handle errors gracefully
 85 |                 assert len(result.content) >= 1
 86 |                 response_text = result.content[0].text
 87 |                 assert isinstance(response_text, str)
 88 |                 # Should contain some indication of error or completion
 89 |                 assert len(response_text) > 0
 90 | 
 91 |     @pytest.mark.asyncio
 92 |     async def test_base64_edge_cases(self):
 93 |         """Test base64 content edge cases."""
 94 |         async with Client(mcp) as client:
 95 |             # Test various base64 scenarios
 96 |             base64_tests = [
 97 |                 "",  # Empty
 98 |                 "dGVzdA==",  # Valid: "test"
 99 |                 "invalid-base64!!!",  # Invalid characters
100 |                 "dGVzdA",  # Missing padding
101 |             ]
102 |             
103 |             for content in base64_tests:
104 |                 try:
105 |                     result = await client.call_tool("upload_document", {
106 |                         "file_path": "",
107 |                         "base64_content": content,
108 |                         "parent_id": "-shared-",
109 |                         "description": "Base64 test"
110 |                     })
111 |                     # Should handle various base64 inputs
112 |                     assert len(result.content) >= 1
113 |                 except Exception as e:
114 |                     # Some invalid base64 expected to fail
115 |                     assert "validation" in str(e).lower() or "base64" in str(e).lower()
116 | 
117 |     @pytest.mark.asyncio 
118 |     async def test_search_edge_cases(self):
119 |         """Test search with various edge cases."""
120 |         async with Client(mcp) as client:
121 |             edge_queries = [
122 |                 "a",  # Single character
123 |                 "test" * 100,  # Very long
124 |                 "test\nwith\nnewlines",  # Newlines
125 |                 "test\twith\ttabs",  # Tabs
126 |                 "special!@#$%chars",  # Special characters
127 |             ]
128 |             
129 |             for query in edge_queries:
130 |                 result = await client.call_tool("search_content", {
131 |                     "query": query,
132 |                     "max_results": 5
133 |                 })
134 |                 # Should handle all queries
135 |                 assert len(result.content) >= 1
136 |                 assert isinstance(result.content[0].text, str)
137 | 
138 | 
139 | class TestResourcesCoverage:
140 |     """Test resource-related coverage."""
141 |     
142 |     @pytest.mark.asyncio
143 |     async def test_all_resource_sections(self):
144 |         """Test all repository resource sections."""
145 |         async with Client(mcp) as client:
146 |             # Test the main resource
147 |             result = await client.read_resource("alfresco://repository/info")
148 |             response_text = result[0].text
149 |             assert isinstance(response_text, str)
150 |             assert len(response_text) > 0
151 | 
152 |     @pytest.mark.asyncio
153 |     async def test_resource_error_cases(self):
154 |         """Test resource error handling."""
155 |         async with Client(mcp) as client:
156 |             # Test invalid resource
157 |             try:
158 |                 await client.read_resource("alfresco://repository/unknown")
159 |                 assert False, "Should have raised an error"
160 |             except Exception as e:
161 |                 assert "unknown" in str(e).lower() or "error" in str(e).lower()
162 | 
163 | 
164 | class TestExceptionHandling:
165 |     """Test exception handling scenarios."""
166 |     
167 |     @pytest.mark.asyncio
168 |     async def test_network_timeout_simulation(self):
169 |         """Test handling of network timeouts."""
170 |         async with Client(mcp) as client:
171 |             # Test with operations that might timeout
172 |             long_operations = [
173 |                 ("search_content", {"query": "*", "max_results": 50}),
174 |                 ("browse_repository", {"parent_id": "-root-", "max_items": 50}),
175 |             ]
176 |             
177 |             for tool_name, params in long_operations:
178 |                 try:
179 |                     result = await asyncio.wait_for(
180 |                         client.call_tool(tool_name, params),
181 |                         timeout=30  # 30 second timeout
182 |                     )
183 |                     # Should complete within timeout
184 |                     assert len(result.content) >= 1
185 |                 except asyncio.TimeoutError:
186 |                     # Timeout is acceptable for this test
187 |                     pass
188 | 
189 |     @pytest.mark.asyncio
190 |     async def test_authentication_failure_simulation(self):
191 |         """Test handling of authentication failures."""
192 |         async with Client(mcp) as client:
193 |             # Test operations that might fail due to auth
194 |             auth_sensitive_ops = [
195 |                 ("upload_document", {
196 |                     "file_path": "",
197 |                     "base64_content": "dGVzdA==",
198 |                     "parent_id": "-shared-",
199 |                     "description": "Auth test"
200 |                 }),
201 |                 ("delete_node", {"node_id": "test-delete"}),
202 |                 ("checkout_document", {"node_id": "test-checkout"}),
203 |             ]
204 |             
205 |             for tool_name, params in auth_sensitive_ops:
206 |                 result = await client.call_tool(tool_name, params)
207 |                 # Should handle auth issues gracefully
208 |                 assert len(result.content) >= 1
209 |                 response_text = result.content[0].text
210 |                 assert isinstance(response_text, str)
211 | 
212 |     @pytest.mark.asyncio
213 |     async def test_malformed_response_handling(self):
214 |         """Test handling of malformed responses."""
215 |         async with Client(mcp) as client:
216 |             # Test with parameters that might cause unusual responses
217 |             unusual_params = [
218 |                 ("search_content", {"query": "\x00\x01\x02", "max_results": 1}),  # Binary chars
219 |                 ("get_node_properties", {"node_id": "../../../etc/passwd"}),  # Path traversal attempt
220 |                 ("create_folder", {"folder_name": "a" * 1000, "parent_id": "-shared-"}),  # Very long name
221 |             ]
222 |             
223 |             for tool_name, params in unusual_params:
224 |                 try:
225 |                     result = await client.call_tool(tool_name, params)
226 |                     # Should handle unusual inputs
227 |                     assert len(result.content) >= 1
228 |                     response_text = result.content[0].text
229 |                     assert isinstance(response_text, str)
230 |                 except Exception as e:
231 |                     # Some unusual inputs expected to cause validation errors
232 |                     assert "validation" in str(e).lower() or "error" in str(e).lower()
233 | 
234 | 
235 | class TestPerformanceCoverage:
236 |     """Test performance-related code paths."""
237 |     
238 |     @pytest.mark.asyncio
239 |     async def test_memory_usage_patterns(self):
240 |         """Test memory usage with various operations."""
241 |         async with Client(mcp) as client:
242 |             # Perform operations that might use different memory patterns
243 |             operations = [
244 |                 ("search_content", {"query": "test", "max_results": 1}),  # Small result
245 |                 ("search_content", {"query": "*", "max_results": 25}),    # Larger result
246 |                 ("browse_repository", {"parent_id": "-shared-", "max_items": 1}),
247 |                 ("browse_repository", {"parent_id": "-shared-", "max_items": 20}),
248 |             ]
249 |             
250 |             for tool_name, params in operations:
251 |                 result = await client.call_tool(tool_name, params)
252 |                 # All should complete without memory issues
253 |                 assert len(result.content) >= 1
254 |                 assert isinstance(result.content[0].text, str)
255 | 
256 |     @pytest.mark.asyncio
257 |     async def test_concurrent_resource_access(self):
258 |         """Test concurrent access to resources."""
259 |         async with Client(mcp) as client:
260 |             # Access repository info concurrently
261 |             tasks = [
262 |                 client.read_resource("alfresco://repository/info")
263 |                 for _ in range(3)
264 |             ]
265 |             
266 |             results = await asyncio.gather(*tasks, return_exceptions=True)
267 |             
268 |             # All should complete successfully
269 |             assert len(results) == 3
270 |             for result in results:
271 |                 if not isinstance(result, Exception):
272 |                     assert len(result) >= 1
273 |                     assert isinstance(result[0].text, str) 
```
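The `test_base64_edge_cases` list above maps directly onto Python's strict base64 decoding rules; a standalone check (no server needed) shows which of those four inputs decode cleanly:

```python
import base64
import binascii

def decodes_cleanly(s: str) -> bool:
    """True if s is valid base64 under strict checking (alphabet and padding)."""
    try:
        base64.b64decode(s, validate=True)
        return True
    except (binascii.Error, ValueError):
        return False

# The same four cases exercised by test_base64_edge_cases.
cases = ["", "dGVzdA==", "invalid-base64!!!", "dGVzdA"]
results = {c: decodes_cleanly(c) for c in cases}
# "" decodes to b"" and "dGVzdA==" to b"test"; the invalid-character and
# missing-padding inputs both fail strict validation.
```

This is why the test accepts either a graceful result or a validation error: only the first two inputs are guaranteed to decode.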

--------------------------------------------------------------------------------
/alfresco_mcp_server/tools/core/checkin_document.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | Checkin document tool implementation for Alfresco MCP Server.
  3 | Handles document checkin with versioning and cleanup management.
  4 | """
  5 | import logging
  6 | import os
  7 | import pathlib
  8 | import json
  9 | import urllib.parse
 10 | from io import BytesIO
 11 | from datetime import datetime
 12 | from fastmcp import Context
 13 | 
 14 | from ...utils.connection import get_core_client
 15 | from ...config import config
 16 | from ...utils.json_utils import safe_format_output
 17 | from python_alfresco_api.raw_clients.alfresco_core_client.core_client.types import File
 18 | from python_alfresco_api.raw_clients.alfresco_core_client.core_client.api.nodes.update_node_content import sync as update_node_content_sync
 19 | 
 20 | logger = logging.getLogger(__name__)
 21 | 
 22 | 
 23 | async def checkin_document_impl(
 24 |     node_id: str, 
 25 |     comment: str = "", 
 26 |     major_version: bool = False,
 27 |     file_path: str = "",
 28 |     new_name: str = "",
 29 |     ctx: Context = None
 30 | ) -> str:
 31 |     """Check in a document after editing using Alfresco REST API.
 32 |     
 33 |     Args:
 34 |         node_id: Original node ID to check in (not working copy)
 35 |         comment: Check-in comment (default: empty)
 36 |         major_version: Whether to create a major version (default: False = minor version)
 37 |         file_path: Specific file path to upload (if empty, auto-detects from checkout folder)
 38 |         new_name: Optional new name for the file during checkin (default: keep original name)
 39 |         ctx: MCP context for progress reporting
 40 |     
 41 |     Returns:
 42 |         Check-in confirmation with version details and cleanup status
 43 |     """
 44 |     if ctx:
 45 |         await ctx.info(f"Checking in document: {node_id}")
 46 |         await ctx.info("Validating parameters...")
 47 |         await ctx.report_progress(0.1)
 48 |     
 49 |     if not node_id.strip():
 50 |         return safe_format_output("❌ Error: node_id is required")
 51 |     
 52 |     try:
 53 |         logger.info(f"Starting checkin: node {node_id}")
 54 |         core_client = await get_core_client()
 55 |         
 56 |         # Clean the node ID
 57 |         clean_node_id = node_id.strip()
 58 |         if clean_node_id.startswith('alfresco://'):
 59 |             clean_node_id = clean_node_id.split('/')[-1]
 60 |         
 61 |         if ctx:
 62 |             await ctx.info("Finding checkout file...")
 63 |             await ctx.report_progress(0.2)
 64 |         
 65 |         # Find the file to upload
 66 |         checkout_file_path = None
 67 |         checkout_data = {}
 68 |         working_copy_id = None
 69 |         
 70 |         if file_path:
 71 |             # Use specific file path provided - handle quotes and path expansion
 72 |             cleaned_file_path = file_path.strip().strip('"').strip("'")
 73 |             
 74 |             # Handle macOS/Unix path expansion (~/Documents, etc.)
 75 |             if cleaned_file_path.startswith('~'):
 76 |                 cleaned_file_path = os.path.expanduser(cleaned_file_path)
 77 |             
 78 |             checkout_file_path = pathlib.Path(cleaned_file_path)
 79 |             if not checkout_file_path.exists():
 80 |                 return safe_format_output(f"❌ Specified file not found: {cleaned_file_path} (cleaned from: {file_path})")
 81 |             
 82 |             # Linux-specific: Check file permissions
 83 |             if not os.access(checkout_file_path, os.R_OK):
 84 |                 return safe_format_output(f"❌ File not readable (permission denied): {cleaned_file_path}")
 85 |         else:
 86 |             # Auto-detect from checkout folder
 87 |             downloads_dir = pathlib.Path.home() / "Downloads"
 88 |             checkout_dir = downloads_dir / "checkout"
 89 |             checkout_manifest_path = checkout_dir / ".checkout_manifest.json"
 90 |             
 91 |             if checkout_manifest_path.exists():
 92 |                 try:
 93 |                     with open(checkout_manifest_path, 'r') as f:
 94 |                         checkout_data = json.load(f)
 95 |                 except (json.JSONDecodeError, OSError):
 96 |                     checkout_data = {}
 97 |             
 98 |             if 'checkouts' in checkout_data and clean_node_id in checkout_data['checkouts']:
 99 |                 checkout_info = checkout_data['checkouts'][clean_node_id]
100 |                 checkout_filename = checkout_info['local_file']
101 |                 locked_node_id = checkout_info.get('locked_node_id', clean_node_id)  # Updated for lock API
102 |                 checkout_file_path = checkout_dir / checkout_filename
103 |                 
104 |                 if not checkout_file_path.exists():
105 |                     return safe_format_output(f"❌ Checkout file not found: {checkout_file_path}. File may have been moved or deleted.")
106 |             else:
107 |                 return safe_format_output(f"❌ No locked document found for node {clean_node_id}. Use checkout_document first, or specify file_path manually.")
108 |         
109 |         if ctx:
110 |             await ctx.info(f"Uploading file: {checkout_file_path.name}")
111 |             await ctx.report_progress(0.4)
112 |         
113 |         # Read the file content
114 |         with open(checkout_file_path, 'rb') as f:
115 |             file_content = f.read()
116 |         
117 |         logger.info(f"Checkin file: {checkout_file_path.name} ({len(file_content)} bytes)")
118 |         # Get original node info using high-level core client
119 |         node_response = core_client.nodes.get(node_id=clean_node_id)
120 |         if not hasattr(node_response, 'entry'):
121 |             return safe_format_output(f"❌ Failed to get node information for: {clean_node_id}")
122 |         
123 |         node_info = node_response.entry
124 |         original_filename = getattr(node_info, 'name', f"document_{clean_node_id}")
125 |         
126 |         if ctx:
127 |             await ctx.info("Uploading new content with versioning using high-level API...")
128 |             await ctx.report_progress(0.7)
129 |         
130 |         # **USE HIGH-LEVEL API: update_node_content.sync()**
131 |         # Use new name if provided, otherwise keep original filename
132 |         final_filename = new_name.strip() if new_name.strip() else original_filename
133 |         
134 |         # Create File object with content
135 |         file_obj = File(
136 |             payload=BytesIO(file_content),
137 |             file_name=final_filename,
138 |             mime_type="application/octet-stream"
139 |         )
140 |         
141 |         # Use high-level update_node_content API instead of manual httpx
142 |         try:
143 |             version_type = "major" if major_version else "minor"
144 |             logger.info(f"Updating content for {clean_node_id} ({version_type} version)")
145 |             # Ensure raw client is initialized before using it
146 |             if not core_client.is_initialized:
147 |                 return safe_format_output("❌ Error: Alfresco server unavailable")
148 |             # Use high-level update_node_content API
149 |             content_response = update_node_content_sync(
150 |                 node_id=clean_node_id,
151 |                 client=core_client.raw_client,
152 |                 body=file_obj,
153 |                 major_version=major_version,
154 |                 comment=comment if comment else None,
155 |                 name=new_name.strip() if new_name.strip() else None
156 |             )
157 |             
158 |             if not content_response:
159 |                 return safe_format_output("❌ Failed to update document content using high-level API")
160 |             
161 |             logger.info(f"Content updated successfully for {clean_node_id}")
162 |             
163 |             # CRITICAL: Unlock the document after successful content update to complete checkin
164 |             try:
165 |                 logger.info(f"Unlocking document after successful checkin: {clean_node_id}")
166 |                 unlock_response = core_client.versions.cancel_checkout(node_id=clean_node_id)
167 |                 logger.info(f"Document unlocked successfully after checkin: {clean_node_id}")
168 |             except Exception as unlock_error:
169 |                 error_str = str(unlock_error)
170 |                 if "404" in error_str:
171 |                     logger.info(f"Document was not locked (already unlocked): {clean_node_id}")
172 |                 elif "405" in error_str:
173 |                     logger.warning(f"Server doesn't support unlock APIs: {clean_node_id}")
174 |                 else:
175 |                     logger.error(f"Failed to unlock document after checkin: {clean_node_id} - {error_str}")
176 |                     # Don't fail the entire checkin if unlock fails - content was updated successfully
177 |                 
178 |         except Exception as api_error:
179 |             return safe_format_output(f"❌ Failed to update document content: {str(api_error)}")
180 |         
181 |         # Get updated node info to show version details using high-level core client
182 |         updated_node_response = core_client.nodes.get(node_id=clean_node_id)
183 |         updated_node = updated_node_response.entry if hasattr(updated_node_response, 'entry') else {}
184 |         
185 |         # Extract version using multiple access methods (same as get_node_properties)
186 |         new_version = 'Unknown'
187 |         if hasattr(updated_node, 'properties') and updated_node.properties:
188 |             try:
189 |                 # Try to_dict() method first
190 |                 if hasattr(updated_node.properties, 'to_dict'):
191 |                     props_dict = updated_node.properties.to_dict()
192 |                     new_version = props_dict.get('cm:versionLabel', 'Unknown')
193 |                     logger.info(f"Version found via to_dict(): {new_version}")
194 |                 # Try direct attribute access
195 |                 elif hasattr(updated_node.properties, 'cm_version_label') or hasattr(updated_node.properties, 'cm:versionLabel'):
196 |                     new_version = getattr(updated_node.properties, 'cm_version_label', getattr(updated_node.properties, 'cm:versionLabel', 'Unknown'))
197 |                     logger.info(f"Version found via attributes: {new_version}")
198 |                 # Try dict-like access
199 |                 elif hasattr(updated_node.properties, '__getitem__'):
200 |                     new_version = updated_node.properties.get('cm:versionLabel', 'Unknown') if hasattr(updated_node.properties, 'get') else updated_node.properties['cm:versionLabel'] if 'cm:versionLabel' in updated_node.properties else 'Unknown'
201 |                     logger.info(f"Version found via dict access: {new_version}")
202 |                 else:
203 |                     logger.warning(f"Version properties - type: {type(updated_node.properties)}, methods: {dir(updated_node.properties)}")
204 |             except Exception as version_error:
205 |                 logger.error(f"Error extracting version: {version_error}")
206 |                 new_version = 'Unknown'
207 |         else:
208 |             logger.warning("No properties found for version extraction")
209 |         
210 |         if ctx:
211 |             await ctx.info("Cleaning up checkout tracking...")
212 |             await ctx.report_progress(0.9)
213 |         
214 |         # Clean up checkout tracking
215 |         cleanup_status = "ℹ️  No checkout tracking to clean up"
216 |         if checkout_data and 'checkouts' in checkout_data and clean_node_id in checkout_data['checkouts']:
217 |             del checkout_data['checkouts'][clean_node_id]
218 |             
219 |             checkout_manifest_path = pathlib.Path.home() / "Downloads" / "checkout" / ".checkout_manifest.json"
220 |             with open(checkout_manifest_path, 'w') as f:
221 |                 json.dump(checkout_data, f, indent=2)
222 |             
223 |             # Optionally remove the checkout file
224 |             try:
225 |                 checkout_file_path.unlink()
226 |                 cleanup_status = "🗑️  Local checkout file cleaned up"
227 |             except OSError:
228 |                 cleanup_status = "⚠️  Local checkout file cleanup failed"
229 |         
230 |         if ctx:
231 |             await ctx.info("Checkin completed: Content updated + Document unlocked + Version created!")
232 |             await ctx.report_progress(1.0)
233 |         
234 |         # Format file size
235 |         file_size = len(file_content)
236 |         if file_size < 1024:
237 |             size_str = f"{file_size} bytes"
238 |         elif file_size < 1024 * 1024:
239 |             size_str = f"{file_size / 1024:.1f} KB"
240 |         else:
241 |             size_str = f"{file_size / (1024 * 1024):.1f} MB"
242 |         
243 |         # Clean JSON-friendly formatting (no markdown syntax)
244 |         return safe_format_output(f"""✅ Document checked in successfully!
245 | 
246 | 📄 Document: {final_filename}
247 | 🔢 New Version: {new_version} ({version_type})
248 | 📝 Comment: {comment if comment else '(no comment)'}
249 | 📊 File Size: {size_str}
250 | 🔗 Node ID: {clean_node_id}
251 | {f"📝 Renamed: {original_filename} → {final_filename}" if new_name.strip() else ""}
252 | 
253 | {cleanup_status}
254 | 
255 | Next Steps:
256 | 🔓 Document is now UNLOCKED and available for others to edit
257 | ✅ New version has been created with your changes
258 | ✅ You can continue editing by using checkout_document again
259 | 
260 | Status: Content updated → Document unlocked → Checkin complete!""")
261 |         
262 |     except Exception as e:
263 |         logger.error(f"Checkin failed: {str(e)}")
264 |         return safe_format_output(f"❌ Checkin failed: {str(e)}") 
```
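The inline size formatting near the end of `checkin_document_impl` (the bytes / KB / MB branches) could be factored into a reusable helper; a sketch with the same thresholds and one-decimal rounding:

```python
def format_size(n: int) -> str:
    """Human-readable file size, mirroring the thresholds in checkin_document_impl."""
    if n < 1024:
        return f"{n} bytes"
    if n < 1024 * 1024:
        return f"{n / 1024:.1f} KB"
    return f"{n / (1024 * 1024):.1f} MB"

# Example values across the three branches.
print(format_size(512), format_size(2048), format_size(3 * 1024 * 1024))
```

A shared helper like this would keep the size strings consistent if other tools (upload, download) ever need the same formatting.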

--------------------------------------------------------------------------------
/alfresco_mcp_server/tools/core/checkout_document.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | Checkout document tool implementation for Alfresco MCP Server.
  3 | Handles document checkout with lock management and local file download.
  4 | """
  5 | import logging
  6 | import os
  7 | import pathlib
  8 | import json
  9 | import httpx
 10 | from datetime import datetime
 11 | from fastmcp import Context
 12 | from ...utils.connection import get_core_client
 13 | from ...config import config
 14 | from ...utils.json_utils import safe_format_output
 15 | 
 16 | 
 17 | logger = logging.getLogger(__name__)
 18 | 
 19 | 
 20 | async def checkout_document_impl(
 21 |     node_id: str, 
 22 |     download_for_editing: bool = True,
 23 |     ctx: Context = None
 24 | ) -> str:
 25 |     """Check out a document for editing using Alfresco REST API.
 26 |     
 27 |     Args:
 28 |         node_id: Document node ID to check out
 29 |         download_for_editing: If True, downloads file to checkout folder for editing (default, AI-friendly).
 30 |                             If False, just creates working copy in Alfresco (testing mode)
 31 |         ctx: MCP context for progress reporting
 32 |     
 33 |     Returns:
 34 |         Checkout confirmation with file path and editing instructions if download_for_editing=True,
 35 |         or working copy details if False
 36 |     """
 37 |     if ctx:
 38 |         await ctx.info(safe_format_output(f">> Checking out document: {node_id}"))
 39 |         await ctx.info(safe_format_output("Validating node ID..."))
 40 |         await ctx.report_progress(0.1)
 41 |     
 42 |     if not node_id.strip():
 43 |         return safe_format_output("❌ Error: node_id is required")
 44 |     
 45 |     try:
 46 |         logger.info(f"Starting checkout: node {node_id}")
 47 |         core_client = await get_core_client()
 48 |         
 49 |         if ctx:
 50 |             await ctx.info(safe_format_output("Connecting to Alfresco..."))
 51 |             await ctx.report_progress(0.2)
 52 |         
 53 |         # Clean the node ID (remove any URL encoding or extra characters)
 54 |         clean_node_id = node_id.strip()
 55 |         if clean_node_id.startswith('alfresco://'):
 56 |             # Extract node ID from URI format
 57 |             clean_node_id = clean_node_id.split('/')[-1]
 58 |         
 59 |         if ctx:
 60 |             await ctx.info(safe_format_output("Getting node information..."))
 61 |             await ctx.report_progress(0.3)
 62 |         
 63 |         # Get node information first to validate it exists using high-level core client
 64 |         node_response = core_client.nodes.get(node_id=clean_node_id)
 65 |         
 66 |         if not hasattr(node_response, 'entry'):
 67 |             return safe_format_output(f"❌ Failed to get node information for: {clean_node_id}")
 68 |         
 69 |         node_info = node_response.entry
 70 |         filename = getattr(node_info, 'name', f"document_{clean_node_id}")
 71 |         
 72 |         if ctx:
 73 |             await ctx.info(safe_format_output(">> Performing Alfresco lock using core client..."))
 74 |             await ctx.report_progress(0.5)
 75 |         
 76 |         # Use core client directly since it inherits from AuthenticatedClient
 77 |         lock_status = "unlocked"
 78 |         try:
 79 |             logger.info(f"Attempting to lock document: {clean_node_id}")
 80 |             
 81 |             # Use the high-level wrapper method that handles the body internally
 82 |             logger.info("Using AlfrescoCoreClient versions.checkout method...")
 83 |             
 84 |             # Use the hierarchical API: versions.checkout 
 85 |             lock_response = core_client.versions.checkout(
 86 |                 node_id=clean_node_id
 87 |             )
 88 |             logger.info(f"✅ Used lock_node_sync method successfully")
 89 |             
 90 |             # versions.checkout raises on failure, so reaching this point
 91 |             # means the document is locked even when the response has no
 92 |             # entry payload to inspect
 93 |             lock_status = "locked"
 94 |             
 95 |             logger.info(f"Document locked successfully: {clean_node_id}")
 96 |                 
 97 |         except Exception as lock_error:
 98 |             error_str = str(lock_error)
 99 |             if "423" in error_str or "already locked" in error_str.lower():
100 |                 logger.warning(f"Document already locked: {clean_node_id}")
101 |                 return safe_format_output(f"❌ Document is already locked by another user: {error_str}")
102 |             elif "405" in error_str:
103 |                 # Server doesn't support lock API - continue without locking
104 |                 lock_status = "no-lock-api"
105 |                 logger.warning(f"Server doesn't support lock API for {clean_node_id}")
106 |                 if ctx:
107 |                     await ctx.info(safe_format_output("WARNING: Server doesn't support lock API - proceeding without lock"))
108 |             elif "multiple values for keyword argument" in error_str:
109 |                 logger.error(f"Parameter conflict in versions.checkout: {error_str}")
110 |                 return safe_format_output(f"❌ Internal client error - parameter conflict: {error_str}")
111 |             else:
112 |                 logger.error(f"Failed to lock document {clean_node_id}: {error_str}")
113 |                 return safe_format_output(f"❌ Document cannot be locked: {error_str}")
114 |         
115 |         # The document is now locked; download the current content
116 |         working_copy_id = clean_node_id  # With lock API, we work with the same node ID
117 |         
118 |         if ctx:
119 |             if lock_status == "locked":
120 |                 await ctx.info(safe_format_output(f"SUCCESS: Document locked in Alfresco!"))
121 |             else:
122 |                 await ctx.info(safe_format_output(f"Document prepared for editing (no lock support)"))
123 |             await ctx.report_progress(0.7)
124 |         
125 |         if download_for_editing:
126 |             try:
127 |                 if ctx:
128 |                     await ctx.info(safe_format_output("Downloading current content..."))
129 |                     await ctx.report_progress(0.8)
130 |                 
131 |                 # Get content using authenticated HTTP client from core client (ensure initialization)
132 |                 if not core_client.is_initialized:
133 |                     return safe_format_output("❌ Error: Alfresco server unavailable")
134 |                 # Use httpx_client property directly on AlfrescoCoreClient
135 |                 http_client = core_client.httpx_client
136 |                 
137 |                 # Build correct content URL based on config
138 |                 if config.alfresco_url.endswith('/alfresco/api/-default-/public'):
139 |                     # Full API path provided
140 |                     content_url = f"{config.alfresco_url}/alfresco/versions/1/nodes/{clean_node_id}/content"
141 |                 elif config.alfresco_url.endswith('/alfresco/api'):
142 |                     # Base API path provided
143 |                     content_url = f"{config.alfresco_url}/-default-/public/alfresco/versions/1/nodes/{clean_node_id}/content"
144 |                 else:
145 |                     # Base server URL provided
146 |                     content_url = f"{config.alfresco_url}/alfresco/api/-default-/public/alfresco/versions/1/nodes/{clean_node_id}/content"
147 |                 
148 |                 response = http_client.get(content_url)
149 |                 response.raise_for_status()
150 |                 
151 |                 # Save to Downloads/checkout folder
152 |                 downloads_dir = pathlib.Path.home() / "Downloads"
153 |                 checkout_dir = downloads_dir / "checkout"
154 |                 checkout_dir.mkdir(parents=True, exist_ok=True)
155 |                 
156 |                 # Create unique filename with node ID
157 |                 safe_filename = filename.replace(" ", "_").replace("/", "_").replace("\\", "_")
158 |                 local_filename = f"{safe_filename}_{clean_node_id}"
159 |                 local_path = checkout_dir / local_filename
160 |                 
161 |                 with open(local_path, 'wb') as f:
162 |                     f.write(response.content)
163 | 
164 |                 logger.info(f"Downloaded for editing: {filename} -> {local_path}")
165 |                 # Update checkout tracking with actual working copy ID
166 |                 checkout_manifest_path = checkout_dir / ".checkout_manifest.json"
167 |                 checkout_data = {}
168 |                 
169 |                 if checkout_manifest_path.exists():
170 |                     try:
171 |                         with open(checkout_manifest_path, 'r') as f:
172 |                             checkout_data = json.load(f)
173 |                     except (json.JSONDecodeError, OSError):
174 |                         checkout_data = {}
175 |                 
176 |                 if 'checkouts' not in checkout_data:
177 |                     checkout_data['checkouts'] = {}
178 |                 
179 |                 checkout_time = datetime.now().isoformat()
180 |                 
181 |                 checkout_data['checkouts'][clean_node_id] = {
182 |                     'original_node_id': clean_node_id,
183 |                     'locked_node_id': clean_node_id,  # Same as original since we lock, not checkout
184 |                     'local_file': local_filename,
185 |                     'checkout_time': checkout_time,
186 |                     'original_filename': filename
187 |                 }
188 |                 
189 |                 # Save manifest
190 |                 with open(checkout_manifest_path, 'w') as f:
191 |                     json.dump(checkout_data, f, indent=2)
192 |                 
193 |                 if ctx:
194 |                     await ctx.info(safe_format_output("SUCCESS: Checkout completed!"))
195 |                     await ctx.report_progress(1.0)
196 |                 
197 |                 # Format file size
198 |                 file_size = len(response.content)
199 |                 if file_size < 1024:
200 |                     size_str = f"{file_size} bytes"
201 |                 elif file_size < 1024 * 1024:
202 |                     size_str = f"{file_size / 1024:.1f} KB"
203 |                 else:
204 |                     size_str = f"{file_size / (1024 * 1024):.1f} MB"
205 |                 
206 |                 if lock_status == "locked":
207 |                     result = f"🔒 Document Checked Out Successfully!\n\n"
208 |                     result += f"📄 Name: {filename}\n"
209 |                     result += f"🆔 Node ID: {clean_node_id}\n"
210 |                     result += f"📏 Size: {size_str}\n"
211 |                     result += f"💾 Downloaded to: {local_path}\n"
212 |                     result += f"🔒 Lock Status: {lock_status}\n"
213 |                     result += f"🕒 Checkout Time: {checkout_time}\n\n"
214 |                     result += f"Next steps:\n"
215 |                     result += f"   1. Edit the document at: {local_path}\n"
216 |                     result += f"   2. Save your changes\n"
217 |                     result += f"   3. Use checkin_document tool to upload changes\n\n"
218 |                     result += f"The document is now locked in Alfresco to prevent conflicts.\n"
219 |                     result += f"Other users cannot edit it until you check it back in or cancel the checkout."
220 |                     
221 |                     return safe_format_output(result)
222 |                 else:
223 |                     result = f"📥 **Document downloaded for editing!**\n\n"
224 |                     status_msg = "ℹ️ **Status**: Downloaded for editing (server doesn't support locks)"
225 |                     important_msg = "ℹ️ **Note**: Server doesn't support locking - multiple users may edit simultaneously."
226 |                     
227 |                     result += f">> **Downloaded to**: `{local_path}`\n"
228 |                     result += f">> **Original**: {filename}\n"
229 |                     result += f">> **Size**: {size_str}\n"
230 |                     result += f"{status_msg}\n"
231 |                     result += f"🔗 **Node ID**: {clean_node_id}\n"
232 |                     result += f"🕒 **Downloaded at**: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n\n"
233 |                     result += f">> **Instructions**:\n"
234 |                     result += f"1. Open the file in your preferred application (Word, Excel, etc.)\n"
235 |                     result += f"2. Make your edits and save the file\n"
236 |                     result += f"3. When finished, use `checkin_document` to upload your changes\n\n"
237 |                     result += f"{important_msg}"
238 |                     
239 |                     return safe_format_output(result)
240 |             except Exception as e:
241 |                 error_msg = f"❌ Checkout failed: {str(e)}"
242 |                 if ctx:
243 |                     await ctx.error(safe_format_output(error_msg))
244 |                 logger.error(f"Checkout failed: {e}")
245 |                 return safe_format_output(error_msg)
246 |         else:
247 |             # Testing mode - just return lock status
248 |             if lock_status == "locked":
249 |                 return safe_format_output(f"SUCCESS: Document locked successfully for testing. Node ID: {clean_node_id}, Status: LOCKED")
250 |             else:
251 |                 return safe_format_output(f"WARNING: Document prepared for editing (no lock support). Node ID: {clean_node_id}")
252 |         
253 |     except Exception as e:
254 |         error_msg = f"❌ Checkout failed: {str(e)}"
255 |         if ctx:
256 |             await ctx.error(safe_format_output(error_msg))
257 |         logger.error(f"Checkout failed: {e}")
258 |         return safe_format_output(error_msg) 
```
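The three URL branches in the download step above depend only on the shape of the configured base URL, so the logic is easy to verify in isolation. A minimal standalone sketch (the helper name `build_content_url` is hypothetical, not part of the package):

```python
def build_content_url(base_url: str, node_id: str) -> str:
    """Map the three accepted Alfresco base-URL shapes to a node content URL."""
    base_url = base_url.rstrip("/")
    if base_url.endswith("/alfresco/api/-default-/public"):
        # Full public API path already provided
        return f"{base_url}/alfresco/versions/1/nodes/{node_id}/content"
    if base_url.endswith("/alfresco/api"):
        # Base API path provided
        return f"{base_url}/-default-/public/alfresco/versions/1/nodes/{node_id}/content"
    # Bare server URL provided
    return f"{base_url}/alfresco/api/-default-/public/alfresco/versions/1/nodes/{node_id}/content"


# All three input shapes converge on the same endpoint
print(build_content_url("http://localhost:8080", "abc123"))
```

Whichever form is configured, the result resolves to the same `/alfresco/versions/1/nodes/{id}/content` endpoint of the default public Core API.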

--------------------------------------------------------------------------------
/examples/document_lifecycle.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Document Lifecycle Example for Alfresco MCP Server
  4 | 
  5 | This example demonstrates a complete document management workflow:
  6 | - Creating folders and subfolders
  7 | - Uploading documents with metadata
  8 | - Searching and retrieving documents
  9 | - Updating document properties
 10 | - Document versioning (checkout/checkin)
 11 | - Organizing and managing content
 12 | 
 13 | This is a practical example showing real-world usage patterns.
 14 | """
 15 | 
 16 | import asyncio
 17 | import base64
 18 | import uuid
 19 | from datetime import datetime
 20 | from fastmcp import Client
 21 | from alfresco_mcp_server.fastmcp_server import mcp
 22 | 
 23 | 
 24 | class DocumentLifecycleDemo:
 25 |     """Demonstrates complete document lifecycle management."""
 26 |     
 27 |     def __init__(self):
 28 |         self.session_id = uuid.uuid4().hex[:8]
 29 |         self.created_items = []  # Track items for cleanup
 30 |         
 31 |     async def run_demo(self):
 32 |         """Run the complete document lifecycle demonstration."""
 33 |         
 34 |         print("📄 Alfresco MCP Server - Document Lifecycle Demo")
 35 |         print("=" * 60)
 36 |         print(f"Session ID: {self.session_id}")
 37 |         print(f"Timestamp: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
 38 |         
 39 |         async with Client(mcp) as client:
 40 |             try:
 41 |                 # Phase 1: Setup and Organization
 42 |                 await self._phase_1_setup(client)
 43 |                 
 44 |                 # Phase 2: Document Creation and Upload
 45 |                 await self._phase_2_upload(client)
 46 |                 
 47 |                 # Phase 3: Document Discovery and Search
 48 |                 await self._phase_3_search(client)
 49 |                 
 50 |                 # Phase 4: Document Management
 51 |                 await self._phase_4_management(client)
 52 |                 
 53 |                 # Phase 5: Versioning and Collaboration
 54 |                 await self._phase_5_versioning(client)
 55 |                 
 56 |                 # Phase 6: Analysis and Reporting
 57 |                 await self._phase_6_analysis(client)
 58 |                 
 59 |                 print("\n✅ Document Lifecycle Demo Completed Successfully!")
 60 |                 
 61 |             except Exception as e:
 62 |                 print(f"\n❌ Demo failed: {e}")
 63 |                 raise
 64 |     
 65 |     async def _phase_1_setup(self, client):
 66 |         """Phase 1: Create organizational structure."""
 67 |         
 68 |         print("\n" + "="*60)
 69 |         print("📁 PHASE 1: Organizational Setup")
 70 |         print("="*60)
 71 |         
 72 |         # Create main project folder
 73 |         print("\n1️⃣ Creating main project folder...")
 74 |         main_folder = await client.call_tool("create_folder", {
 75 |             "folder_name": f"Project_Alpha_{self.session_id}",
 76 |             "parent_id": "-root-",
 77 |             "description": f"Main project folder created by MCP demo {self.session_id}"
 78 |         })
 79 |         print("📁 Main folder created:")
 80 |         print(main_folder[0].text)
 81 |         
 82 |         # Create subfolders for organization
 83 |         subfolders = [
 84 |             ("Documents", "Project documents and files"),
 85 |             ("Reports", "Analysis and status reports"),
 86 |             ("Archives", "Historical and backup documents"),
 87 |             ("Drafts", "Work-in-progress documents")
 88 |         ]
 89 |         
 90 |         print("\n2️⃣ Creating organizational subfolders...")
 91 |         for folder_name, description in subfolders:
 92 |             result = await client.call_tool("create_folder", {
 93 |                 "folder_name": f"{folder_name}_{self.session_id}",
 94 |                 "parent_id": "-root-",  # In real scenario, use main folder ID
 95 |                 "description": description
 96 |             })
 97 |             print(f"  📂 {folder_name}: Created")
 98 |         
 99 |         # Get repository status
100 |         print("\n3️⃣ Checking repository status...")
101 |         repo_info = await client.read_resource("alfresco://repository/stats")
102 |         print("📊 Repository Statistics:")
103 |         print(repo_info[0].text)
104 |     
105 |     async def _phase_2_upload(self, client):
106 |         """Phase 2: Upload various document types."""
107 |         
108 |         print("\n" + "="*60)
109 |         print("📤 PHASE 2: Document Upload")
110 |         print("="*60)
111 |         
112 |         # Sample documents to upload
113 |         documents = [
114 |             {
115 |                 "name": f"project_charter_{self.session_id}.txt",
116 |                 "content": "Project Charter\n\nProject: Alpha Initiative\nObjective: Implement MCP integration\nTimeline: Q1 2024\nStakeholders: Development, QA, Operations",
117 |                 "description": "Official project charter document"
118 |             },
119 |             {
120 |                 "name": f"meeting_notes_{self.session_id}.md",
121 |                 "content": "# Meeting Notes - Alpha Project\n\n## Date: 2024-01-15\n\n### Attendees\n- John Doe (PM)\n- Jane Smith (Dev)\n\n### Key Decisions\n1. Use FastMCP 2.0\n2. Implement comprehensive testing\n3. Deploy by end of Q1",
122 |                 "description": "Weekly project meeting notes"
123 |             },
124 |             {
125 |                 "name": f"technical_spec_{self.session_id}.json",
126 |                 "content": '{\n  "project": "Alpha",\n  "version": "1.0.0",\n  "technologies": ["Python", "FastMCP", "Alfresco"],\n  "requirements": {\n    "cpu": "2 cores",\n    "memory": "4GB",\n    "storage": "10GB"\n  }\n}',
127 |                 "description": "Technical specifications in JSON format"
128 |             }
129 |         ]
130 |         
131 |         print(f"\n1️⃣ Uploading {len(documents)} documents...")
132 |         
133 |         for i, doc in enumerate(documents, 1):
134 |             print(f"\n  📄 Document {i}: {doc['name']}")
135 |             
136 |             # Encode content to base64
137 |             content_b64 = base64.b64encode(doc['content'].encode('utf-8')).decode('utf-8')
138 |             
139 |             # Upload document
140 |             result = await client.call_tool("upload_document", {
141 |                 "filename": doc['name'],
142 |                 "content_base64": content_b64,
143 |                 "parent_id": "-root-",  # In real scenario, use appropriate folder ID
144 |                 "description": doc['description']
145 |             })
146 |             
147 |             print(f"    ✅ Upload status:")
148 |             print(f"    {result[0].text}")
149 |             
150 |             # Simulate processing delay
151 |             await asyncio.sleep(0.5)
152 |         
153 |         print(f"\n✅ All {len(documents)} documents uploaded successfully!")
154 |     
155 |     async def _phase_3_search(self, client):
156 |         """Phase 3: Search and discovery operations."""
157 |         
158 |         print("\n" + "="*60)
159 |         print("🔍 PHASE 3: Document Discovery")
160 |         print("="*60)
161 |         
162 |         # Different search scenarios
163 |         searches = [
164 |             ("Project documents", f"Project_Alpha_{self.session_id}", "Find project-related content"),
165 |             ("Meeting notes", "meeting", "Locate meeting documentation"),
166 |             ("Technical files", "technical", "Find technical specifications"),
167 |             ("All session content", self.session_id, "Find all demo content")
168 |         ]
169 |         
170 |         print("\n1️⃣ Performing various search operations...")
171 |         
172 |         for i, (search_name, query, description) in enumerate(searches, 1):
173 |             print(f"\n  🔎 Search {i}: {search_name}")
174 |             print(f"      Query: '{query}'")
175 |             print(f"      Purpose: {description}")
176 |             
177 |             result = await client.call_tool("search_content", {
178 |                 "query": query,
179 |                 "max_results": 10
180 |             })
181 |             
182 |             print(f"      Results:")
183 |             print(f"      {result[0].text}")
184 |             
185 |             await asyncio.sleep(0.3)
186 |         
187 |         # Advanced search with analysis
188 |         print("\n2️⃣ Advanced search with analysis...")
189 |         prompt_result = await client.get_prompt("search_and_analyze", {
190 |             "query": f"session {self.session_id}",
191 |             "analysis_type": "detailed"
192 |         })
193 |         
194 |         print("📊 Generated Analysis Prompt:")
195 |         print(prompt_result.messages[0].content.text[:400] + "...")
196 |     
197 |     async def _phase_4_management(self, client):
198 |         """Phase 4: Document properties and metadata management."""
199 |         
200 |         print("\n" + "="*60)
201 |         print("⚙️ PHASE 4: Document Management")
202 |         print("="*60)
203 |         
204 |         print("\n1️⃣ Retrieving document properties...")
205 |         
206 |         # Get properties of root folder (as example)
207 |         props_result = await client.call_tool("get_node_properties", {
208 |             "node_id": "-root-"
209 |         })
210 |         
211 |         print("📋 Root folder properties:")
212 |         print(props_result[0].text)
213 |         
214 |         print("\n2️⃣ Updating document metadata...")
215 |         
216 |         # Update properties (simulated)
217 |         update_result = await client.call_tool("update_node_properties", {
218 |             "node_id": "-root-",  # In real scenario, use actual document ID
219 |             "properties": {
220 |                 "cm:title": f"Alpha Project Root - {self.session_id}",
221 |                 "cm:description": "Updated via MCP demo",
222 |                 "custom:project": "Alpha Initiative"
223 |             }
224 |         })
225 |         
226 |         print("📝 Property update result:")
227 |         print(update_result[0].text)
228 |         
229 |         print("\n3️⃣ Repository health check...")
230 |         health = await client.read_resource("alfresco://repository/health")
231 |         print("🏥 Repository Health:")
232 |         print(health[0].text)
233 |     
234 |     async def _phase_5_versioning(self, client):
235 |         """Phase 5: Document versioning and collaboration."""
236 |         
237 |         print("\n" + "="*60)
238 |         print("🔄 PHASE 5: Versioning & Collaboration")
239 |         print("="*60)
240 |         
241 |         # Simulate document checkout/checkin workflow
242 |         doc_id = f"test-doc-{self.session_id}"
243 |         
244 |         print("\n1️⃣ Document checkout process...")
245 |         checkout_result = await client.call_tool("checkout_document", {
246 |             "node_id": doc_id
247 |         })
248 |         
249 |         print("🔒 Checkout result:")
250 |         print(checkout_result[0].text)
251 |         
252 |         print("\n2️⃣ Document checkin with new version...")
253 |         checkin_result = await client.call_tool("checkin_document", {
254 |             "node_id": doc_id,
255 |             "comment": f"Updated during MCP demo session {self.session_id}",
256 |             "major_version": False  # Minor version increment
257 |         })
258 |         
259 |         print("🔓 Checkin result:")
260 |         print(checkin_result[0].text)
261 |         
262 |         print("\n3️⃣ Major version checkin...")
263 |         major_checkin = await client.call_tool("checkin_document", {
264 |             "node_id": doc_id,
265 |             "comment": f"Major release - Demo session {self.session_id} complete",
266 |             "major_version": True  # Major version increment
267 |         })
268 |         
269 |         print("📈 Major version result:")
270 |         print(major_checkin[0].text)
271 |     
272 |     async def _phase_6_analysis(self, client):
273 |         """Phase 6: Analysis and reporting."""
274 |         
275 |         print("\n" + "="*60)
276 |         print("📊 PHASE 6: Analysis & Reporting")
277 |         print("="*60)
278 |         
279 |         print("\n1️⃣ Repository configuration analysis...")
280 |         config = await client.read_resource("alfresco://repository/config")
281 |         print("⚙️ Current Configuration:")
282 |         print(config[0].text)
283 |         
284 |         print("\n2️⃣ Generating comprehensive analysis prompts...")
285 |         
286 |         analysis_types = ["summary", "detailed", "trends", "compliance"]
287 |         
288 |         for analysis_type in analysis_types:
289 |             print(f"\n  📋 {analysis_type.title()} Analysis:")
290 |             prompt = await client.get_prompt("search_and_analyze", {
291 |                 "query": f"Project Alpha {self.session_id}",
292 |                 "analysis_type": analysis_type
293 |             })
294 |             
295 |             # Show first part of prompt
296 |             content = prompt.messages[0].content.text
297 |             preview = content.split('\n')[0:3]  # First 3 lines
298 |             print(f"      {' '.join(preview)[:100]}...")
299 |         
300 |         print("\n3️⃣ Final repository status...")
301 |         final_stats = await client.read_resource("alfresco://repository/stats")
302 |         print("📈 Final Repository Statistics:")
303 |         print(final_stats[0].text)
304 |         
305 |         print(f"\n4️⃣ Demo session summary...")
306 |         print(f"   Session ID: {self.session_id}")
307 |         print(f"   Duration: Demo complete")
308 |         print(f"   Operations: Folder creation, document upload, search, versioning")
309 |         print(f"   Status: ✅ All operations successful")
310 | 
311 | 
312 | async def main():
313 |     """Main function to run the document lifecycle demo."""
314 |     
315 |     print("Starting Document Lifecycle Demo...")
316 |     
317 |     demo = DocumentLifecycleDemo()
318 |     
319 |     try:
320 |         await demo.run_demo()
321 |         
322 |         print("\n🎉 Document Lifecycle Demo Completed Successfully!")
323 |         print("\n📚 What you learned:")
324 |         print("- Complete document management workflow")
325 |         print("- Folder organization strategies")
326 |         print("- Document upload and metadata handling")
327 |         print("- Search and discovery techniques")
328 |         print("- Version control operations")
329 |         print("- Repository monitoring and analysis")
330 |         
331 |     except Exception as e:
332 |         print(f"\n💥 Demo failed: {e}")
333 |         print("Check your Alfresco connection and try again.")
334 | 
335 | 
336 | if __name__ == "__main__":
337 |     asyncio.run(main()) 
```
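The upload phase above base64-encodes each document body before passing it to `upload_document`. That encode/decode round trip is plain standard-library work and can be exercised without a server; a minimal sketch:

```python
import base64

# Encode a document body the same way _phase_2_upload does before upload
content = "Project Charter\n\nProject: Alpha Initiative"
content_b64 = base64.b64encode(content.encode("utf-8")).decode("utf-8")

# Decoding must restore the original text exactly
restored = base64.b64decode(content_b64).decode("utf-8")
assert restored == content
```

Binary files (PDF, DOCX) follow the same pattern, just without the UTF-8 decode on the restored bytes.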

--------------------------------------------------------------------------------
/examples/error_handling.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Error Handling Example for Alfresco MCP Server
  4 | 
  5 | This example demonstrates robust error handling patterns:
  6 | - Connection error recovery
  7 | - Authentication failure handling
  8 | - Timeout management
  9 | - Graceful degradation
 10 | - Retry mechanisms
 11 | - Logging and monitoring
 12 | 
 13 | Essential patterns for production deployments.
 14 | """
 15 | 
 16 | import asyncio
 17 | import logging
 18 | import time
 19 | from typing import Optional, Dict, Any
 20 | from fastmcp import Client
 21 | from alfresco_mcp_server.fastmcp_server import mcp
 22 | 
 23 | 
 24 | # Configure logging
 25 | logging.basicConfig(
 26 |     level=logging.INFO,
 27 |     format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
 28 |     handlers=[
 29 |         logging.FileHandler('alfresco_mcp_errors.log'),
 30 |         logging.StreamHandler()
 31 |     ]
 32 | )
 33 | logger = logging.getLogger(__name__)
 34 | 
 35 | 
 36 | class RobustAlfrescoClient:
 37 |     """Production-ready Alfresco MCP client with comprehensive error handling."""
 38 |     
 39 |     def __init__(self, max_retries=3, retry_delay=1.0, timeout=30.0):
 40 |         self.max_retries = max_retries
 41 |         self.retry_delay = retry_delay
 42 |         self.timeout = timeout
 43 |         self.last_error = None
 44 |         
 45 |     async def safe_call_tool(self, tool_name: str, parameters: Dict[str, Any], 
 46 |                            retry_count: int = 0) -> Optional[str]:
 47 |         """
 48 |         Safely call a tool with comprehensive error handling.
 49 |         
 50 |         Args:
 51 |             tool_name: Name of the MCP tool to call
 52 |             parameters: Tool parameters
 53 |             retry_count: Current retry attempt
 54 |             
 55 |         Returns:
 56 |             Tool result string or None if failed
 57 |         """
 58 |         
 59 |         try:
 60 |             logger.info(f"Calling tool '{tool_name}' with parameters: {parameters}")
 61 |             
 62 |             async with Client(mcp) as client:
 63 |                 # Set timeout for the operation
 64 |                 start_time = time.time()
 65 |                 
 66 |                 result = await asyncio.wait_for(
 67 |                     client.call_tool(tool_name, parameters),
 68 |                     timeout=self.timeout
 69 |                 )
 70 |                 
 71 |                 duration = time.time() - start_time
 72 |                 logger.info(f"Tool '{tool_name}' completed successfully in {duration:.2f}s")
 73 |                 
 74 |                 if result and len(result) > 0:
 75 |                     return result[0].text
 76 |                 else:
 77 |                     logger.warning(f"Tool '{tool_name}' returned empty result")
 78 |                     return None
 79 |                     
 80 |         except asyncio.TimeoutError:
 81 |             error_msg = f"Tool '{tool_name}' timed out after {self.timeout}s"
 82 |             logger.error(error_msg)
 83 |             self.last_error = error_msg
 84 |             
 85 |             # Retry with exponential backoff for timeouts
 86 |             return await self._handle_retry(tool_name, parameters, retry_count, 
 87 |                                           "timeout", exponential_backoff=True)
 88 |             
 89 |         except ConnectionError as e:
 90 |             error_msg = f"Connection error calling '{tool_name}': {e}"
 91 |             logger.error(error_msg)
 92 |             self.last_error = error_msg
 93 |             
 94 |             # Retry connection errors
 95 |             return await self._handle_retry(tool_name, parameters, retry_count, 
 96 |                                           "connection_error")
 97 |             
 98 |         except Exception as e:
 99 |             error_msg = f"Unexpected error calling '{tool_name}': {type(e).__name__}: {e}"
100 |             logger.error(error_msg, exc_info=True)
101 |             self.last_error = error_msg
102 |             
103 |             # Check if error is retryable
104 |             if self._is_retryable_error(e):
105 |                 return await self._handle_retry(tool_name, parameters, retry_count, 
106 |                                               "retryable_error")
107 |             else:
108 |                 logger.error(f"Non-retryable error for '{tool_name}': {e}")
109 |                 return None
110 |     
111 |     async def _handle_retry(self, tool_name: str, parameters: Dict[str, Any], 
112 |                           retry_count: int, error_type: str, 
113 |                           exponential_backoff: bool = False) -> Optional[str]:
114 |         """Handle retry logic with different backoff strategies."""
115 |         
116 |         if retry_count >= self.max_retries:
117 |             logger.error(f"Maximum retries ({self.max_retries}) reached for '{tool_name}'")
118 |             return None
119 |         
120 |         # Calculate delay (exponential backoff or linear)
121 |         if exponential_backoff:
122 |             delay = self.retry_delay * (2 ** retry_count)
123 |         else:
124 |             delay = self.retry_delay * (retry_count + 1)
125 |         
126 |         logger.info(f"Retrying '{tool_name}' in {delay:.1f}s (attempt {retry_count + 1}/{self.max_retries}, reason: {error_type})")
127 |         await asyncio.sleep(delay)
128 |         
129 |         return await self.safe_call_tool(tool_name, parameters, retry_count + 1)
130 |     
131 |     def _is_retryable_error(self, error: Exception) -> bool:
132 |         """Determine if an error is worth retrying."""
133 |         
134 |         retryable_errors = [
135 |             "Connection reset by peer",
136 |             "Temporary failure",
137 |             "Service temporarily unavailable",
138 |             "Internal server error",
139 |             "Bad gateway",
140 |             "Gateway timeout"
141 |         ]
142 |         
143 |         error_str = str(error).lower()
144 |         return any(retryable_error in error_str for retryable_error in retryable_errors)
145 |     
146 |     async def safe_search(self, query: str, max_results: int = 25) -> Optional[str]:
147 |         """Safe search with input validation and error handling."""
148 |         
149 |         # Input validation
150 |         if not query or not isinstance(query, str):
151 |             logger.error("Invalid query: must be a non-empty string")
152 |             return None
153 |         
154 |         if not isinstance(max_results, int) or max_results <= 0:
155 |             logger.error("Invalid max_results: must be a positive integer")
156 |             return None
157 |         
158 |         # Sanitize query
159 |         query = query.strip()
160 |         if len(query) > 1000:  # Reasonable limit
161 |             logger.warning(f"Query truncated from {len(query)} to 1000 characters")
162 |             query = query[:1000]
163 |         
164 |         return await self.safe_call_tool("search_content", {
165 |             "query": query,
166 |             "max_results": max_results
167 |         })
168 |     
169 |     async def safe_upload(self, filename: str, content_base64: str, 
170 |                          parent_id: str = "-root-", description: str = "") -> Optional[str]:
171 |         """Safe upload with comprehensive validation."""
172 |         
173 |         # Validate filename
174 |         if not filename or not isinstance(filename, str):
175 |             logger.error("Invalid filename: must be a non-empty string")
176 |             return None
177 |         
178 |         # Validate base64 content
179 |         if not content_base64 or not isinstance(content_base64, str):
180 |             logger.error("Invalid content: must be a non-empty base64 string")
181 |             return None
182 |         
183 |         # Basic base64 validation
184 |         try:
185 |             import base64
186 |             import re
187 |             
188 |             # Check base64 format
189 |             if not re.match(r'^[A-Za-z0-9+/]*={0,2}$', content_base64):
190 |                 logger.error("Invalid base64 format")
191 |                 return None
192 |             
193 |             # Test decode
194 |             base64.b64decode(content_base64, validate=True)
195 |             
196 |         except Exception as e:
197 |             logger.error(f"Base64 validation failed: {e}")
198 |             return None
199 |         
200 |         # Check content size (base64 is ~33% larger than the decoded payload)
201 |         content_size = len(content_base64)
202 |         max_size = 100 * 1024 * 1024  # 100MB of base64 text (~75MB decoded)
203 |         
204 |         if content_size > max_size:
205 |             logger.error(f"Content too large: {content_size} characters (max: {max_size})")
206 |             return None
207 |         
208 |         return await self.safe_call_tool("upload_document", {
209 |             "filename": filename,
210 |             "content_base64": content_base64,
211 |             "parent_id": parent_id,
212 |             "description": description
213 |         })
214 |     
215 |     async def health_check(self) -> Dict[str, Any]:
216 |         """Comprehensive health check of the MCP server and Alfresco."""
217 |         
218 |         health_status = {
219 |             "timestamp": time.time(),
220 |             "overall_status": "unknown",
221 |             "checks": {}
222 |         }
223 |         
224 |         # Test 1: Tool availability
225 |         try:
226 |             async with Client(mcp) as client:
227 |                 tools = await asyncio.wait_for(client.list_tools(), timeout=10.0)
228 |                 health_status["checks"]["tools"] = {
229 |                     "status": "healthy",
230 |                     "count": len(tools),
231 |                     "message": f"Found {len(tools)} tools"
232 |                 }
233 |         except Exception as e:
234 |             health_status["checks"]["tools"] = {
235 |                 "status": "unhealthy",
236 |                 "error": str(e),
237 |                 "message": "Failed to list tools"
238 |             }
239 |         
240 |         # Test 2: Search functionality
241 |         try:
242 |             search_result = await self.safe_search("*", max_results=1)
243 |             if search_result:
244 |                 health_status["checks"]["search"] = {
245 |                     "status": "healthy",
246 |                     "message": "Search working"
247 |                 }
248 |             else:
249 |                 health_status["checks"]["search"] = {
250 |                     "status": "degraded",
251 |                     "message": "Search returned no results"
252 |                 }
253 |         except Exception as e:
254 |             health_status["checks"]["search"] = {
255 |                 "status": "unhealthy",
256 |                 "error": str(e),
257 |                 "message": "Search failed"
258 |             }
259 |         
260 |         # Test 3: Repository access
261 |         try:
262 |             async with Client(mcp) as client:
263 |                 repo_info = await asyncio.wait_for(
264 |                     client.read_resource("alfresco://repository/info"), 
265 |                     timeout=10.0
266 |                 )
267 |                 health_status["checks"]["repository"] = {
268 |                     "status": "healthy",
269 |                     "message": "Repository accessible"
270 |                 }
271 |         except Exception as e:
272 |             health_status["checks"]["repository"] = {
273 |                 "status": "unhealthy",
274 |                 "error": str(e),
275 |                 "message": "Repository inaccessible"
276 |             }
277 |         
278 |         # Determine overall status
279 |         statuses = [check["status"] for check in health_status["checks"].values()]
280 |         if all(status == "healthy" for status in statuses):
281 |             health_status["overall_status"] = "healthy"
282 |         elif any(status == "healthy" for status in statuses):
283 |             health_status["overall_status"] = "degraded"
284 |         else:
285 |             health_status["overall_status"] = "unhealthy"
286 |         
287 |         return health_status
288 | 
289 | 
290 | class CircuitBreaker:
291 |     """Circuit breaker pattern for preventing cascading failures."""
292 |     
293 |     def __init__(self, failure_threshold=5, recovery_timeout=60.0):
294 |         self.failure_threshold = failure_threshold
295 |         self.recovery_timeout = recovery_timeout
296 |         self.failure_count = 0
297 |         self.last_failure_time = None
298 |         self.state = "closed"  # closed, open, half-open
299 |     
300 |     async def call(self, func, *args, **kwargs):
301 |         """Call function with circuit breaker protection."""
302 |         
303 |         if self.state == "open":
304 |             if time.time() - self.last_failure_time > self.recovery_timeout:
305 |                 self.state = "half-open"
306 |                 logger.info("Circuit breaker moving to half-open state")
307 |             else:
308 |                 raise Exception("Circuit breaker is open - preventing call")
309 |         
310 |         try:
311 |             result = await func(*args, **kwargs)
312 |             
313 |             if self.state == "half-open":
314 |                 self.state = "closed"
315 |                 self.failure_count = 0
316 |                 logger.info("Circuit breaker closed - service recovered")
317 |             
318 |             return result
319 |             
320 |         except Exception as e:
321 |             self.failure_count += 1
322 |             self.last_failure_time = time.time()
323 |             
324 |             if self.failure_count >= self.failure_threshold:
325 |                 self.state = "open"
326 |                 logger.error(f"Circuit breaker opened after {self.failure_count} failures")
327 |             
328 |             raise  # re-raise with the original traceback intact
329 | 
330 | 
331 | async def demonstrate_error_handling():
332 |     """Demonstrate error handling scenarios."""
333 |     
334 |     print("🛡️  Alfresco MCP Server - Error Handling Demo")
335 |     print("=" * 60)
336 |     
337 |     # Test connection errors
338 |     print("\n1️⃣ Connection Error Handling")
339 |     print("-" * 30)
340 |     
341 |     try:
342 |         async with Client(mcp) as client:
343 |             result = await client.call_tool("search_content", {
344 |                 "query": "test",
345 |                 "max_results": 5
346 |             })
347 |             print("✅ Connection successful")
348 |             
349 |     except Exception as e:
350 |         print(f"❌ Connection failed: {e}")
351 |         print("💡 Check if Alfresco server is running")
352 |     
353 |     # Test invalid parameters
354 |     print("\n2️⃣ Parameter Validation")
355 |     print("-" * 30)
356 |     
357 |     try:
358 |         async with Client(mcp) as client:
359 |             # Invalid max_results
360 |             result = await client.call_tool("search_content", {
361 |                 "query": "test",
362 |                 "max_results": -1
363 |             })
364 |             print("⚠️  Invalid parameter unexpectedly succeeded")
365 |             
366 |     except Exception as e:
367 |         print("✅ Invalid parameter properly rejected")
368 |     
369 |     print("\n✅ Error Handling Demo Complete!")
370 | 
371 | 
372 | async def main():
373 |     """Main function."""
374 |     try:
375 |         await demonstrate_error_handling()
376 |     except Exception as e:
377 |         print(f"💥 Demo failed: {e}")
378 | 
379 | 
380 | if __name__ == "__main__":
381 |     asyncio.run(main()) 
```
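
The demo above never actually drives the `CircuitBreaker` through its state machine. Below is a minimal standalone sketch of the same pattern; the breaker is re-declared in stripped-down form so the snippet runs on its own, and `flaky` with its failure count is purely illustrative:

```python
import asyncio
import time


class CircuitBreaker:
    """Stripped-down re-declaration of the breaker above, for a standalone demo."""

    def __init__(self, failure_threshold=3, recovery_timeout=0.05):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.failure_count = 0
        self.last_failure_time = None
        self.state = "closed"  # closed -> open -> half-open -> closed

    async def call(self, func, *args, **kwargs):
        if self.state == "open":
            if time.time() - self.last_failure_time > self.recovery_timeout:
                self.state = "half-open"  # allow one probe call through
            else:
                raise RuntimeError("circuit open - call suppressed")
        try:
            result = await func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            self.last_failure_time = time.time()
            if self.failure_count >= self.failure_threshold:
                self.state = "open"
            raise
        if self.state == "half-open":
            self.state = "closed"  # probe succeeded, service recovered
            self.failure_count = 0
        return result


calls = {"n": 0}

async def flaky():
    # Fails three times, then recovers -- simulates a backend outage.
    calls["n"] += 1
    if calls["n"] <= 3:
        raise ConnectionError("backend down")
    return "ok"


async def demo():
    breaker = CircuitBreaker(failure_threshold=3, recovery_timeout=0.05)
    for _ in range(3):  # three consecutive failures trip the breaker
        try:
            await breaker.call(flaky)
        except ConnectionError:
            pass
    tripped = breaker.state            # breaker is now open
    await asyncio.sleep(0.1)           # wait past recovery_timeout
    probe = await breaker.call(flaky)  # half-open probe succeeds
    return tripped, probe, breaker.state

result = asyncio.run(demo())
print(result)  # ('open', 'ok', 'closed')
```

Wrapping `safe_call_tool` with a breaker like this stops the client from hammering an Alfresco instance that is clearly down, while still probing periodically for recovery.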

--------------------------------------------------------------------------------
/tests/test_integration.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | Integration tests for FastMCP 2.0 server with live Alfresco instance.
  3 | These tests require a running Alfresco server and use the --integration flag.
  4 | """
  5 | import pytest
  6 | import asyncio
  7 | import time
  8 | import uuid
  9 | from fastmcp import Client
 10 | from alfresco_mcp_server.fastmcp_server import mcp
 11 | 
 12 | 
 13 | @pytest.mark.integration
 14 | class TestAlfrescoConnectivity:
 15 |     """Test basic connectivity to Alfresco server."""
 16 |     
 17 |     @pytest.mark.asyncio
 18 |     async def test_alfresco_server_available(self, check_alfresco_available):
 19 |         """Test that Alfresco server is available."""
 20 |         is_available = check_alfresco_available()
 21 |         assert is_available, "Alfresco server is not available for integration tests"
 22 |     
 23 |     @pytest.mark.asyncio
 24 |     async def test_fastmcp_server_connectivity(self, fastmcp_client):
 25 |         """Test FastMCP server basic connectivity."""
 26 |         # Test ping
 27 |         await fastmcp_client.ping()
 28 |         assert fastmcp_client.is_connected()
 29 |         
 30 |         # Test list tools
 31 |         tools = await fastmcp_client.list_tools()
 32 |         assert len(tools) >= 9  # We should have at least 9 tools
 33 | 
 34 | 
 35 | @pytest.mark.integration
 36 | class TestSearchIntegration:
 37 |     """Integration tests for search functionality."""
 38 |     
 39 |     @pytest.mark.asyncio
 40 |     async def test_search_content_live(self, fastmcp_client):
 41 |         """Test search against live Alfresco instance."""
 42 |         result = await fastmcp_client.call_tool("search_content", {
 43 |             "query": "*",  # Search for everything
 44 |             "max_results": 5
 45 |         })
 46 |         
 47 |         assert result.content[0].text is not None
 48 |         # Should find results (working!)
 49 |         assert "Found" in result.content[0].text or "item(s)" in result.content[0].text or "🔍" in result.content[0].text or "✅" in result.content[0].text
 50 |     
 51 |     @pytest.mark.asyncio
 52 |     async def test_search_shared_folder(self, fastmcp_client):
 53 |         """Test search for Shared folder (should always exist)."""
 54 |         result = await fastmcp_client.call_tool("search_content", {
 55 |             "query": "Shared",
 56 |             "max_results": 10
 57 |         })
 58 |         
 59 |         assert result.content[0].text is not None
 60 |         # Should find results (working!)
 61 |         assert "Found" in result.content[0].text or "item(s)" in result.content[0].text or "🔍" in result.content[0].text or "✅" in result.content[0].text
 62 | 
 63 | 
 64 | @pytest.mark.integration
 65 | class TestFolderOperations:
 66 |     """Integration tests for folder operations."""
 67 |     
 68 |     @pytest.mark.asyncio
 69 |     async def test_create_folder_live(self, fastmcp_client):
 70 |         """Test folder creation with live Alfresco."""
 71 |         folder_name = f"test_mcp_folder_{uuid.uuid4().hex[:8]}"
 72 |         
 73 |         result = await fastmcp_client.call_tool("create_folder", {
 74 |             "folder_name": folder_name,
 75 |             "parent_id": "-shared-",  # Shared folder
 76 |             "description": "Test folder created by MCP integration test"
 77 |         })
 78 |         
 79 |         assert result.content[0].text is not None
 80 |         assert "✅" in result.content[0].text or "success" in result.content[0].text.lower()
 81 |         
 82 |         if "Folder Created" in result.content[0].text:
 83 |             # If successful, folder name should be in response
 84 |             assert folder_name in result.content[0].text
 85 | 
 86 | 
 87 | @pytest.mark.integration
 88 | class TestDocumentOperations:
 89 |     """Integration tests for document operations."""
 90 |     
 91 |     @pytest.mark.asyncio
 92 |     async def test_upload_document_live(self, fastmcp_client, sample_documents):
 93 |         """Test document upload with live Alfresco."""
 94 |         doc = sample_documents["text_doc"]
 95 |         filename = f"test_mcp_doc_{uuid.uuid4().hex[:8]}.txt"
 96 |         
 97 |         # Use base64_content (not file_path) and pass the generated filename
 98 |         result = await fastmcp_client.call_tool("upload_document", {
 99 |             "filename": filename, "base64_content": doc["content_base64"],
100 |             "parent_id": "-shared-",
101 |             "description": "Test document uploaded by MCP integration test"
102 |         })
103 |         
104 |         assert result.content[0].text is not None
105 |         assert "✅" in result.content[0].text or "success" in result.content[0].text.lower() or "uploaded" in result.content[0].text.lower()
106 |         
107 |         if "Upload Successful" in result.content[0].text:
108 |             assert filename in result.content[0].text
109 |     
110 |     @pytest.mark.asyncio 
111 |     async def test_get_shared_properties(self, fastmcp_client):
112 |         """Test getting properties of Shared folder."""
113 |         result = await fastmcp_client.call_tool("get_node_properties", {
114 |             "node_id": "-shared-"
115 |         })
116 |         
117 |         assert result.content[0].text is not None
118 |         assert "Shared" in result.content[0].text or "properties" in result.content[0].text.lower()
119 | 
120 | 
121 | @pytest.mark.integration
122 | class TestResourcesIntegration:
123 |     """Integration tests for MCP resources."""
124 |     
125 |     @pytest.mark.asyncio
126 |     async def test_list_resources_live(self, fastmcp_client):
127 |         """Test listing resources with live server."""
128 |         resources = await fastmcp_client.list_resources()
129 |         
130 |         assert len(resources) > 0
131 |         
132 |         # Check that Alfresco repository resources are available
133 |         resource_uris = [str(resource.uri) for resource in resources]
134 |         assert any("alfresco://repository/" in uri for uri in resource_uris)
135 |     
136 |     @pytest.mark.asyncio
137 |     async def test_read_repository_info(self, fastmcp_client):
138 |         """Test reading repository information."""
139 |         result = await fastmcp_client.read_resource("alfresco://repository/info")
140 |         
141 |         assert len(result) > 0
142 |         
143 |         # Repository info returns formatted text, not JSON - that's correct behavior
144 |         assert "repository" in result[0].text.lower() or "alfresco" in result[0].text.lower()
145 |     
146 |     @pytest.mark.asyncio
147 |     async def test_read_repository_health(self, fastmcp_client):
148 |         """Test reading repository health status."""
149 |         # Use repository info instead of health which doesn't exist
150 |         result = await fastmcp_client.read_resource("alfresco://repository/info")
151 |         
152 |         assert len(result) > 0
153 |         assert "repository" in result[0].text.lower() or "alfresco" in result[0].text.lower()
154 | 
155 | 
156 | @pytest.mark.integration
157 | class TestPromptsIntegration:
158 |     """Integration tests for MCP prompts."""
159 |     
160 |     @pytest.mark.asyncio
161 |     async def test_search_and_analyze_prompt(self, fastmcp_client):
162 |         """Test search and analyze prompt generation."""
163 |         result = await fastmcp_client.get_prompt("search_and_analyze", {
164 |             "query": "financial reports",
165 |             "analysis_type": "summary"
166 |         })
167 |         
168 |         assert len(result.messages) > 0
169 |         prompt_text = result.messages[0].content.text
170 |         
171 |         # Should contain the search query and analysis type
172 |         assert "financial reports" in prompt_text
173 |         assert "summary" in prompt_text.lower()
174 | 
175 | 
176 | @pytest.mark.integration
177 | @pytest.mark.slow
178 | class TestFullWorkflow:
179 |     """End-to-end workflow tests with live Alfresco."""
180 |     
181 |     @pytest.mark.asyncio
182 |     async def test_complete_document_lifecycle(self, fastmcp_client, sample_documents):
183 |         """Test complete document lifecycle: create folder, upload, search, properties, delete."""
184 |         
185 |         # Generate unique names
186 |         test_id = uuid.uuid4().hex[:8]
187 |         folder_name = f"mcp_test_folder_{test_id}"
188 |         doc_name = f"mcp_test_doc_{test_id}.txt"
189 |         
190 |         try:
191 |             # Step 1: Create a test folder
192 |             folder_result = await fastmcp_client.call_tool("create_folder", {
193 |                 "folder_name": folder_name,
194 |                 "parent_id": "-shared-",
195 |                 "description": "Integration test folder"
196 |             })
197 |             
198 |             assert folder_result.content[0].text is not None
199 |             assert "✅" in folder_result.content[0].text or "success" in folder_result.content[0].text.lower() or "created" in folder_result.content[0].text.lower()
200 |             
201 |             # Step 2: Upload a document (use only base64_content)
202 |             doc = sample_documents["text_doc"]
203 |             upload_result = await fastmcp_client.call_tool("upload_document", {
204 |                 "base64_content": doc["content_base64"],
205 |                 "parent_id": "-shared-",
206 |                 "description": "Integration test document"
207 |             })
208 |             
209 |             assert upload_result.content[0].text is not None
210 |             assert "✅" in upload_result.content[0].text or "success" in upload_result.content[0].text.lower() or "uploaded" in upload_result.content[0].text.lower()
211 |             
212 |             # Step 3: Search for the uploaded document
213 |             search_result = await fastmcp_client.call_tool("search_content", {
214 |                 "query": "integration test",  # Search for our test content
215 |                 "max_results": 10
216 |             })
217 |             
218 |             assert search_result.content[0].text is not None
219 |             
220 |             print(f"[SUCCESS] Document lifecycle test completed for {doc_name}")
221 |             
222 |         except Exception as e:
223 |             print(f"[FAIL] Workflow test failed: {e}")
224 |             raise
225 |     
226 |     @pytest.mark.asyncio
227 |     async def test_search_and_analyze_workflow(self, fastmcp_client):
228 |         """Test search and analyze workflow with prompts."""
229 |         # Step 1: Search for content
230 |         search_result = await fastmcp_client.call_tool("search_content", {
231 |             "query": "test",
232 |             "max_results": 5
233 |         })
234 |         
235 |         assert search_result.content[0].text is not None
236 |         # Should find results (working!)
237 |         assert "Found" in search_result.content[0].text or "item(s)" in search_result.content[0].text or "🔍" in search_result.content[0].text or "✅" in search_result.content[0].text
238 |         
239 |         # Step 2: Test prompts are available
240 |         prompts = await fastmcp_client.list_prompts()
241 |         assert len(prompts) > 0
242 |         
243 |         # Should have search and analyze prompt
244 |         prompt_names = [prompt.name for prompt in prompts]
245 |         assert "search_and_analyze" in prompt_names
246 | 
247 | 
248 | @pytest.mark.integration
249 | @pytest.mark.performance
250 | class TestPerformance:
251 |     """Performance tests with live Alfresco."""
252 |     
253 |     @pytest.mark.asyncio
254 |     async def test_search_performance(self, fastmcp_client):
255 |         """Test search performance."""
256 |         start_time = time.time()
257 |         
258 |         result = await fastmcp_client.call_tool("search_content", {
259 |             "query": "*",
260 |             "max_results": 10
261 |         })
262 |         
263 |         end_time = time.time()
264 |         duration = end_time - start_time
265 |         
266 |         assert result.content[0].text is not None
267 |         assert duration < 30.0  # Should complete within 30 seconds
268 |         
269 |         print(f"Search completed in {duration:.2f} seconds")
270 |     
271 |     @pytest.mark.asyncio
272 |     async def test_concurrent_searches(self, fastmcp_client):
273 |         """Test concurrent search operations."""
274 |         async def perform_search(query_suffix):
275 |             return await fastmcp_client.call_tool("search_content", {
276 |                 "query": f"test{query_suffix}",
277 |                 "max_results": 5
278 |             })
279 |         
280 |         start_time = time.time()
281 |         
282 |         # Perform 5 concurrent searches
283 |         tasks = [perform_search(i) for i in range(5)]
284 |         results = await asyncio.gather(*tasks, return_exceptions=True)
285 |         
286 |         end_time = time.time()
287 |         duration = end_time - start_time
288 |         
289 |         # All searches should complete
290 |         assert len(results) == 5
291 |         
292 |         # Check that all results are valid (no exceptions)
293 |         for i, result in enumerate(results):
294 |             if isinstance(result, Exception):
295 |                 pytest.fail(f"Search {i} failed: {result}")
296 |             assert result.content[0].text is not None
297 |         
298 |         print(f"Concurrent searches completed in {duration:.2f} seconds")
299 |     
300 |     @pytest.mark.asyncio
301 |     async def test_resource_access_performance(self, fastmcp_client):
302 |         """Test resource access performance."""
303 |         start_time = time.time()
304 |         
305 |         # Access multiple resources
306 |         tasks = [
307 |             fastmcp_client.read_resource("alfresco://repository/info"),
308 |             fastmcp_client.read_resource("alfresco://repository/health"),
309 |             fastmcp_client.read_resource("alfresco://repository/stats")
310 |         ]
311 |         
312 |         results = await asyncio.gather(*tasks, return_exceptions=True)
313 |         
314 |         end_time = time.time()
315 |         duration = end_time - start_time
316 |         
317 |         # All resource accesses should complete
318 |         assert len(results) == 3
319 |         
320 |         for i, result in enumerate(results):
321 |             if isinstance(result, Exception):
322 |                 print(f"Resource access {i} failed: {result}")
323 |             else:
324 |                 assert len(result) > 0
325 |         
326 |         assert duration < 5.0, f"Resource access took too long: {duration:.2f}s"
327 |         print(f"Resource access completed in {duration:.2f}s")
328 | 
329 | 
330 | @pytest.mark.integration
331 | class TestErrorHandling:
332 |     """Integration tests for error handling."""
333 |     
334 |     @pytest.mark.asyncio
335 |     async def test_invalid_node_id_handling(self, fastmcp_client):
336 |         """Test handling of invalid node IDs."""
337 |         # Test with clearly invalid node ID
338 |         result = await fastmcp_client.call_tool("get_node_properties", {
339 |             "node_id": "definitely-not-a-real-node-id-12345"
340 |         })
341 |         
342 |         assert result.content[0].text is not None
343 |         # Should handle error gracefully
344 |         assert "error" in result.content[0].text.lower() or "not found" in result.content[0].text.lower()
345 |     
346 |     @pytest.mark.asyncio
347 |     async def test_invalid_search_query_handling(self, fastmcp_client):
348 |         """Test handling of problematic search queries."""
349 |         # Test with special characters
350 |         result = await fastmcp_client.call_tool("search_content", {
351 |             "query": "!@#$%^&*()_+{}|:<>?[]\\;',./`~",
352 |             "max_results": 5
353 |         })
354 |         
355 |         assert result.content[0].text is not None
356 |         # Should handle gracefully - either return results or appropriate message
357 |         assert "🔍" in result.content[0].text or "✅" in result.content[0].text or "error" in result.content[0].text.lower() 
```
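
These tests lean on fixtures (`fastmcp_client`, `check_alfresco_available`, `sample_documents`) defined in `tests/conftest.py`, which is not shown on this page. Below is a hedged sketch of how the `sample_documents` payload could be assembled with the standard library, inferred only from how the tests index it (`sample_documents["text_doc"]["content_base64"]`); the exact fixture shape and the filename are assumptions:

```python
import base64

TEXT = "Integration test document content.\n"

def make_sample_documents():
    # Shape inferred from sample_documents["text_doc"]["content_base64"] in the tests.
    return {
        "text_doc": {
            "filename": "sample.txt",  # illustrative name, not taken from the tests
            "content_base64": base64.b64encode(TEXT.encode("utf-8")).decode("ascii"),
        }
    }

docs = make_sample_documents()
# Round-trip the payload to confirm it decodes back to the original text.
round_trip = base64.b64decode(docs["text_doc"]["content_base64"]).decode("utf-8")
print(round_trip == TEXT)  # True
```

In the real suite this would be exposed as a `@pytest.fixture` so each test receives a fresh copy of the payload.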

--------------------------------------------------------------------------------
/docs/troubleshooting.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Troubleshooting Guide
  2 | 
  3 | Troubleshooting guide for the Alfresco MCP Server. This document covers common issues, diagnostic steps, and solutions to help you resolve problems quickly.
  4 | 
  5 | ## 🚨 Quick Diagnosis
  6 | 
  7 | ### Health Check Commands
  8 | 
  9 | Run these commands to quickly assess your system:
 10 | 
 11 | ```bash
 12 | # 1. Check Alfresco connectivity
 13 | curl -u admin:admin http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
 14 | 
 15 | # 2. Test MCP server startup
 16 | python -m alfresco_mcp_server.fastmcp_server --help
 17 | 
 18 | # 3. Verify environment
 19 | python -c "import os; print('URL:', os.getenv('ALFRESCO_URL')); print('User:', os.getenv('ALFRESCO_USERNAME'))"
 20 | 
 21 | # 4. Run quick test
 22 | python examples/quick_start.py
 23 | ```
 24 | 
 25 | ### Common Error Patterns
 26 | 
 27 | | Error Pattern | Likely Cause | Quick Fix |
 28 | |---------------|--------------|-----------|
 29 | | `Connection refused` | Alfresco not running | Start Alfresco server |
 30 | | `Authentication failed` | Wrong credentials | Check username/password |
 31 | | `Module not found` | Installation issue | Run `pip install -e .` |
 32 | | `Timeout` | Network/performance issue | Check connectivity, increase timeout |
 33 | | `Invalid base64` | Malformed content | Validate base64 encoding |
 34 | 
 35 | ## 🔌 Connection Issues
 36 | 
 37 | ### Problem: Cannot Connect to Alfresco
 38 | 
 39 | **Symptoms:**
 40 | ```
 41 | ConnectionError: Failed to connect to Alfresco server
 42 | requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
 43 | ```
 44 | 
 45 | **Diagnosis:**
 46 | ```bash
 47 | # Test Alfresco availability
 48 | curl -u admin:admin http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
 49 | 
 50 | # Check if service is listening
 51 | netstat -tulpn | grep 8080
 52 | 
 53 | # Test from different machine
 54 | telnet alfresco-server 8080
 55 | ```
 56 | 
 57 | **Solutions:**
 58 | 
 59 | 1. **Start Alfresco Server:**
 60 |    ```bash
 61 |    # Docker deployment
 62 |    docker-compose up -d alfresco
 63 | 
 64 |    # Manual startup
 65 |    ./alfresco.sh start
 66 |    ```
 67 | 
 68 | 2. **Check URL Configuration:**
 69 |    ```bash
 70 |    # Verify correct URL
 71 |    export ALFRESCO_URL="http://localhost:8080"
 72 |    
 73 |    # For HTTPS
 74 |    export ALFRESCO_URL="https://alfresco.company.com"
 75 |    
 76 |    # For custom port
 77 |    export ALFRESCO_URL="http://localhost:8180"
 78 |    ```
 79 | 
 80 | 3. **Network Connectivity:**
 81 |    ```bash
 82 |    # Check firewall
 83 |    sudo ufw status
 84 |    
 85 |    # Test port accessibility
 86 |    nc -zv localhost 8080
 87 |    ```
 88 | 
 89 | ### Problem: SSL/TLS Certificate Issues
 90 | 
 91 | **Symptoms:**
 92 | ```
 93 | SSLError: HTTPSConnectionPool(host='alfresco.company.com', port=443): Max retries exceeded
 94 | ssl.SSLCertVerificationError: certificate verify failed
 95 | ```
 96 | 
 97 | **Solutions:**
 98 | 
 99 | 1. **Disable SSL Verification (Development Only):**
100 |    ```python
101 |    import urllib3
102 |    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
103 |    
104 |    # In config
105 |    alfresco:
106 |      url: "https://alfresco.company.com"
107 |      verify_ssl: false
108 |    ```
109 | 
110 | 2. **Add Custom Certificate:**
111 |    ```bash
112 |    # Add certificate to system trust store
113 |    sudo cp company-ca.crt /usr/local/share/ca-certificates/
114 |    sudo update-ca-certificates
115 |    ```
116 | 
117 | ## 🔐 Authentication Issues
118 | 
119 | ### Problem: Authentication Failed
120 | 
121 | **Symptoms:**
122 | ```
123 | AuthenticationError: Authentication failed
124 | 401 Unauthorized: Invalid username or password
125 | ```
126 | 
127 | **Diagnosis:**
128 | ```bash
129 | # Test credentials manually
130 | curl -u admin:admin http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
131 | 
132 | # Check environment variables
133 | echo "Username: $ALFRESCO_USERNAME"
134 | echo "Password: $ALFRESCO_PASSWORD"  # Be careful with this in scripts
135 | ```
136 | 
137 | **Solutions:**
138 | 
139 | 1. **Verify Credentials:**
140 |    ```bash
141 |    # Check correct username/password
142 |    export ALFRESCO_USERNAME="admin"
143 |    export ALFRESCO_PASSWORD="admin"
144 |    
145 |    # For domain users
146 |    export ALFRESCO_USERNAME="DOMAIN\\username"
147 |    ```
148 | 
149 | 2. **Use Token Authentication:**
150 |    ```bash
151 |    # Get token first
152 |    TOKEN=$(curl -d "username=admin&password=admin" -X POST http://localhost:8080/alfresco/api/-default-/public/authentication/versions/1/tickets | jq -r .entry.id)
153 |    
154 |    export ALFRESCO_TOKEN="$TOKEN"
155 |    ```
156 | 
157 | 3. **Check User Permissions:**
158 |    ```bash
159 |    # Test with different user
160 |    curl -u testuser:testpass http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
161 |    ```
162 | 
163 | ## 📦 Installation Issues
164 | 
165 | ### Problem: Module Import Errors
166 | 
167 | **Symptoms:**
168 | ```
169 | ModuleNotFoundError: No module named 'alfresco_mcp_server'
170 | ImportError: cannot import name 'mcp' from 'alfresco_mcp_server.fastmcp_server'
171 | ```
172 | 
173 | **Solutions:**
174 | 
175 | 1. **Reinstall Package:**
176 |    ```bash
177 |    # Uninstall and reinstall
178 |    pip uninstall alfresco-mcp-server
179 |    pip install -e .
180 |    
181 |    # Force reinstall
182 |    pip install -e . --force-reinstall
183 |    ```
184 | 
185 | 2. **Check Python Environment:**
186 |    ```bash
187 |    # Verify Python version
188 |    python --version  # Should be 3.8+
189 |    
190 |    # Check virtual environment
191 |    which python
192 |    echo $VIRTUAL_ENV
193 |    
194 |    # Verify installation
195 |    pip list | grep alfresco
196 |    ```
197 | 
198 | 3. **Path Issues:**
199 |    ```bash
200 |    # Check Python path
201 |    python -c "import sys; print(sys.path)"
202 |    
203 |    # Verify package location
204 |    python -c "import alfresco_mcp_server; print(alfresco_mcp_server.__file__)"
205 |    ```
206 | 
207 | ### Problem: Dependency Conflicts
208 | 
209 | **Symptoms:**
210 | ```
211 | pip._internal.exceptions.ResolutionImpossible: ResolutionImpossible
212 | ERROR: Could not find a version that satisfies the requirement
213 | ```
214 | 
215 | **Solutions:**
216 | 
217 | 1. **Clean Environment:**
218 |    ```bash
219 |    # Create fresh virtual environment
220 |    python -m venv venv_clean
221 |    source venv_clean/bin/activate
222 |    pip install -e .
223 |    ```
224 | 
225 | 2. **Update Dependencies:**
226 |    ```bash
227 |    # Update pip
228 |    pip install --upgrade pip
229 |    
230 |    # Update dependencies
231 |    pip install --upgrade -e .
232 |    ```
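
After a reinstall, you can confirm which versions actually resolved by querying installed distributions with the standard library's `importlib.metadata`; a minimal sketch (the package names listed are illustrative):

```python
from importlib import metadata

def installed_versions(packages):
    """Map each distribution name to its installed version, or 'not installed'."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = "not installed"
    return versions

print(installed_versions(["fastmcp", "httpx", "alfresco-mcp-server"]))
```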
233 | 
234 | ## ⚡ Performance Issues
235 | 
236 | ### Problem: Slow Operations
237 | 
238 | **Symptoms:**
239 | - Search operations taking >30 seconds
240 | - Upload timeouts
241 | - General sluggishness
242 | 
243 | **Diagnosis:**
244 | ```bash
245 | # Test search performance
246 | time python -c "
247 | import asyncio
248 | from fastmcp import Client
249 | from alfresco_mcp_server.fastmcp_server import mcp
250 | 
251 | async def test():
252 |     async with Client(mcp) as client:
253 |         await client.call_tool('search_content', {'query': '*', 'max_results': 5})
254 | 
255 | asyncio.run(test())
256 | "
257 | 
258 | # Check Alfresco performance
259 | curl -w "%{time_total}\n" -o /dev/null -s -u admin:admin http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
260 | ```
261 | 
262 | **Solutions:**
263 | 
264 | 1. **Optimize Search Queries:**
265 |    ```python
266 |    # Avoid wildcard searches
267 |    await client.call_tool("search_content", {
268 |        "query": "specific terms",  # Better than "*"
269 |        "max_results": 25  # Reasonable limit
270 |    })
271 |    ```
272 | 
273 | 2. **Increase Timeouts:**
274 |    ```python
275 |    # In your client code
276 |    import httpx
277 |    
278 |    async with httpx.AsyncClient(timeout=60.0) as client:
279 |        ...  # your operations here
280 |    ```
281 | 
282 | 3. **Check Alfresco Performance:**
283 |    ```bash
284 |    # Monitor Alfresco logs
285 |    tail -f alfresco.log | grep WARN
286 |    
287 |    # Check system resources
288 |    top -p "$(pgrep -d, java)"
289 |    ```
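
When a longer timeout alone is not enough, transient network hiccups can be absorbed with a retry wrapper around the call. A minimal sketch, under the assumption that only connection and timeout errors are worth retrying:

```python
import asyncio

async def with_retries(make_call, attempts=3, base_delay=0.5):
    """Await make_call(), retrying on connection/timeout errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return await make_call()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            await asyncio.sleep(base_delay * (2 ** attempt))
```

Usage would look like `await with_retries(lambda: client.call_tool("search_content", {"query": "report"}))`.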
290 | 
291 | ### Problem: Memory Issues
292 | 
293 | **Symptoms:**
294 | ```
295 | MemoryError: Unable to allocate memory
296 | OutOfMemoryError: Java heap space (in Alfresco logs)
297 | ```
298 | 
299 | **Solutions:**
300 | 
301 | 1. **Limit Batch Sizes:**
302 |    ```python
303 |    # Process in smaller batches
304 |    async def process_batch(items, batch_size=10):
305 |        for i in range(0, len(items), batch_size):
306 |            batch = items[i:i + batch_size]
307 |            ...  # process this batch of items
308 |    ```
309 | 
310 | 2. **Increase Java Heap (Alfresco):**
311 |    ```bash
312 |    # In setenv.sh or docker-compose.yml
313 |    export JAVA_OPTS="$JAVA_OPTS -Xmx4g -Xms2g"
314 |    ```
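
The batching pattern above can be factored into a small reusable generator:

```python
def chunked(items, size=10):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

Processing each slice and letting it go out of scope keeps peak memory proportional to the batch size rather than the full list.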
315 | 
316 | ## 🔧 Tool-Specific Issues
317 | 
318 | ### Search Tool Problems
319 | 
320 | **Problem: No Search Results**
321 | 
322 | **Diagnosis:**
323 | ```python
324 | # Test with simple query
325 | result = await client.call_tool("search_content", {
326 |     "query": "*",
327 |     "max_results": 5
328 | })
329 | print(result[0].text)
330 | 
331 | # Check if repository has content
332 | result = await client.call_tool("get_node_properties", {
333 |     "node_id": "-root-"
334 | })
335 | ```
336 | 
337 | **Solutions:**
338 | 
339 | 1. **Verify Index Status:**
340 |    ```bash
341 |    # Check Solr status
342 |    curl "http://localhost:8983/solr/admin/cores?action=STATUS"
343 |    ```
344 | 
345 | 2. **Reindex Content:**
346 |    Reindexing is driven from the Solr side. Alfresco Search Services exposes
347 |    maintenance actions on the Solr core admin handler (for example `action=FIX`
348 |    to repair index gaps); for a full rebuild, consult the Search Services
349 |    documentation for your version.
350 | 
351 | ### Upload Tool Problems
352 | 
353 | **Problem: Upload Fails**
354 | 
355 | **Symptoms:**
356 | ```
357 | ❌ Error: Failed to upload document
358 | 413 Request Entity Too Large
359 | ```
360 | 
361 | **Solutions:**
362 | 
363 | 1. **Check File Size Limits:**
364 |    ```python
365 |    # Verify base64 size
366 |    import base64
367 |    content = "your content"
368 |    encoded = base64.b64encode(content.encode()).decode()
369 |    print(f"Encoded size: {len(encoded)} bytes")
370 |    ```
371 | 
372 | 2. **Increase Upload Limits:**
373 |    ```bash
374 |    # In nginx (if used)
375 |    client_max_body_size 100M;
376 |    
377 |    # In Tomcat server.xml
378 |    <Connector maxPostSize="104857600" />
379 |    ```
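
Keep in mind that base64 inflates content by roughly a third, so a file that fits under the raw limit may still exceed it once encoded. The encoded size can be computed exactly before uploading:

```python
def base64_encoded_size(raw_bytes: int) -> int:
    """Length of the base64 encoding: every 3 raw bytes become 4 output characters, padded."""
    return ((raw_bytes + 2) // 3) * 4
```

Compare this figure, not the raw file size, against the server's request-size limit.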
380 | 
381 | ### Version Control Issues
382 | 
383 | **Problem: Checkout/Checkin Fails**
384 | 
385 | **Symptoms:**
386 | ```
387 | ❌ Error: Document is already checked out
388 | ❌ Error: Node does not support versioning
389 | ```
390 | 
391 | **Solutions:**
392 | 
393 | 1. **Check Document Status:**
394 |    ```python
395 |    # Check if document is versionable
396 |    props = await client.call_tool("get_node_properties", {
397 |        "node_id": "your-doc-id"
398 |    })
399 |    ```
400 | 
401 | 2. **Enable Versioning:**
402 |    ```bash
403 |    # Through Alfresco Share or API
404 |    curl -u admin:admin -X POST \
405 |      "http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/your-doc-id/aspects" \
406 |      -d '{"aspectName": "cm:versionable"}'
407 |    ```
408 | 
409 | ## 🧪 Testing Issues
410 | 
411 | ### Problem: Tests Failing
412 | 
413 | **Common Test Failures:**
414 | 
415 | 1. **Integration Test Failures:**
416 |    ```bash
417 |    # Check Alfresco is running for tests
418 |    curl -u admin:admin http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
419 |    
420 |    # Run with verbose output
421 |    pytest tests/test_integration.py -v
422 |    ```
423 | 
424 | 2. **Coverage Test Failures:**
425 |    ```bash
426 |    # Run coverage tests specifically
427 |    pytest tests/test_coverage.py --tb=short
428 |    
429 |    # Check what's missing
430 |    pytest --cov-report=term-missing
431 |    ```
432 | 
433 | 3. **Import Errors in Tests:**
434 |    ```bash
435 |    # Reinstall in development mode
436 |    pip install -e .
437 |    
438 |    # Check test environment
439 |    python -m pytest --collect-only
440 |    ```
441 | 
442 | ## 🌐 Transport Issues
443 | 
444 | ### Problem: HTTP Transport Not Working
445 | 
446 | **Symptoms:**
447 | ```
448 | ConnectionError: Failed to connect to HTTP transport
449 | Server not responding on port 8001
450 | ```
451 | 
452 | **Solutions:**
453 | 
454 | 1. **Check Server Status:**
455 |    ```bash
456 |    # Verify server is running
457 |    python -m alfresco_mcp_server.fastmcp_server --transport http --port 8001 &
458 |    
459 |    # Test endpoint
460 |    curl http://localhost:8001/health
461 |    ```
462 | 
463 | 2. **Port Conflicts:**
464 |    ```bash
465 |    # Check if port is in use
466 |    netstat -tulpn | grep 8001
467 |    
468 |    # Use different port
469 |    python -m alfresco_mcp_server.fastmcp_server --transport http --port 8002
470 |    ```
471 | 
472 | ### Problem: SSE Transport Issues
473 | 
474 | **Symptoms:**
475 | ```
476 | EventSource connection failed
477 | SSE stream disconnected
478 | ```
479 | 
480 | **Solutions:**
481 | 
482 | 1. **Check Browser Support:**
483 |    ```javascript
484 |    // Test in browser console
485 |    const eventSource = new EventSource('http://localhost:8003/events');
486 |    eventSource.onopen = () => console.log('Connected');
487 |    eventSource.onerror = (e) => console.error('Error:', e);
488 |    ```
489 | 
490 | 2. **Firewall/Proxy Issues:**
491 |    ```bash
492 |    # Test direct connection
493 |    curl -N -H "Accept: text/event-stream" http://localhost:8003/events
494 |    ```
495 | 
496 | ## 🔍 Debugging Techniques
497 | 
498 | ### Enable Debug Logging
499 | 
500 | ```bash
501 | # Set debug environment
502 | export ALFRESCO_DEBUG="true"
503 | export ALFRESCO_LOG_LEVEL="DEBUG"
504 | 
505 | # Run with verbose logging
506 | python -m alfresco_mcp_server.fastmcp_server --log-level DEBUG
507 | ```
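
Inside your own client scripts, the same environment variable can drive Python's logging setup; a minimal sketch using the `ALFRESCO_LOG_LEVEL` variable shown above:

```python
import logging
import os

def configure_logging():
    """Resolve ALFRESCO_LOG_LEVEL (e.g. 'DEBUG') to a logging level, defaulting to INFO."""
    name = os.environ.get("ALFRESCO_LOG_LEVEL", "INFO").upper()
    level = getattr(logging, name, logging.INFO)
    logging.basicConfig(level=level)
    return level
```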
508 | 
509 | ### Network Debugging
510 | 
511 | ```bash
512 | # Monitor network traffic
513 | sudo tcpdump -i any -A 'host localhost and port 8080'
514 | 
515 | # Test with different tools
516 | wget --spider http://localhost:8080/alfresco/
517 | httpie GET localhost:8080/alfresco/ username==admin password==admin
518 | ```
519 | 
520 | ### Python Debugging
521 | 
522 | ```python
523 | # Add debug prints
524 | import logging
525 | logging.basicConfig(level=logging.DEBUG)
526 | 
527 | # Use pdb for interactive debugging
528 | import pdb; pdb.set_trace()
529 | 
530 | # Add timing information
531 | import time
532 | start = time.time()
533 | # Your operation
534 | print(f"Operation took {time.time() - start:.2f}s")
535 | ```
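
The ad-hoc timing snippet above generalizes into a small context manager you can wrap around any suspect operation:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label="operation"):
    """Print how long the enclosed block of code took."""
    start = time.perf_counter()
    try:
        yield
    finally:
        # finally: report the duration even if the block raised
        print(f"{label} took {time.perf_counter() - start:.2f}s")
```

Usage: `with timed("search"): ...` prints the elapsed time whether the block succeeds or fails.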
536 | 
537 | ## 📊 Monitoring and Diagnostics
538 | 
539 | ### Health Monitoring Script
540 | 
541 | ```python
542 | #!/usr/bin/env python3
543 | """Health check script for Alfresco MCP Server."""
544 | 
545 | import asyncio
546 | import sys
547 | from fastmcp import Client
548 | from alfresco_mcp_server.fastmcp_server import mcp
549 | 
550 | async def health_check():
551 |     """Perform health check."""
552 |     
553 |     checks = []
554 |     
555 |     try:
556 |         async with Client(mcp) as client:
557 |             # Test 1: List tools
558 |             tools = await client.list_tools()
559 |             checks.append(f"✅ Tools available: {len(tools)}")
560 |             
561 |             # Test 2: Search operation
562 |             result = await client.call_tool("search_content", {
563 |                 "query": "*", "max_results": 1
564 |             })
565 |             checks.append("✅ Search working")
566 |             
567 |             # Test 3: Repository info
568 |             info = await client.read_resource("alfresco://repository/info")
569 |             checks.append("✅ Repository accessible")
570 |             
571 |     except Exception as e:
572 |         checks.append(f"❌ Health check failed: {e}")
573 |         return False
574 |     
575 |     for check in checks:
576 |         print(check)
577 |     
578 |     return all("✅" in check for check in checks)
579 | 
580 | if __name__ == "__main__":
581 |     success = asyncio.run(health_check())
582 |     sys.exit(0 if success else 1)
583 | ```
584 | 
585 | ### Log Analysis
586 | 
587 | ```bash
588 | # Monitor MCP server logs
589 | tail -f /var/log/alfresco-mcp-server.log
590 | 
591 | # Search for errors
592 | grep -i error /var/log/alfresco-mcp-server.log | tail -10
593 | 
594 | # Monitor Alfresco logs
595 | tail -f /opt/alfresco/tomcat/logs/catalina.out | grep -i mcp
596 | ```
597 | 
598 | ## 🆘 Getting Help
599 | 
600 | ### Before Asking for Help
601 | 
602 | 1. ✅ Check this troubleshooting guide
603 | 2. ✅ Check GitHub Issues for similar problems
604 | 3. ✅ Run the health check script above
605 | 4. ✅ Collect relevant log files
606 | 5. ✅ Document your environment details
607 | 
608 | ### Information to Include
609 | 
610 | When reporting issues, include:
611 | 
612 | ```bash
613 | # System information
614 | python --version
615 | pip list | grep -E "(alfresco|fastmcp|mcp)"
616 | uname -a
617 | 
618 | # Environment variables (redact passwords)
619 | env | grep ALFRESCO | sed 's/PASSWORD=.*/PASSWORD=***/'
620 | 
621 | # Error messages (full stack trace)
622 | python -m alfresco_mcp_server.fastmcp_server 2>&1 | head -50
623 | 
624 | # Test results
625 | python scripts/run_tests.py unit
626 | ```
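
If you gather environment details programmatically, the same redaction the `sed` command performs can be done in Python before the output ever leaves your machine; a minimal sketch:

```python
import re

def redact_secrets(text: str) -> str:
    """Mask values of variables whose names contain PASSWORD or TOKEN."""
    return re.sub(r"((?:PASSWORD|TOKEN)=)\S+", r"\1***", text)
```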
627 | 
628 | ### Where to Get Help
629 | 
630 | - 📖 **Documentation**: Check [docs/](.)
631 | - 💬 **GitHub Issues**: Report bugs and feature requests
632 | - 🔍 **Stack Overflow**: Tag with `alfresco` and `mcp`
633 | - 💡 **Community**: Alfresco and MCP community forums
634 | 
635 | ---
636 | 
637 | **🎯 Remember**: Most issues have simple solutions. Work through this guide systematically, and you'll likely find the answer quickly! 
```

--------------------------------------------------------------------------------
/docs/testing_guide.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Testing Guide
  2 | 
  3 | Guide for running, maintaining, and extending the Alfresco MCP Server test suite. This document covers unit tests, integration tests, coverage analysis, and best practices.
  4 | 
  5 | ## 📋 Test Suite Overview
  6 | 
  7 | The test suite includes:
  8 | - ✅ **143 Total Tests** (122 unit + 21 integration) - **100% passed**
  9 | - ✅ **51% Code Coverage** on main implementation
 10 | - ✅ **Mocked Unit Tests** for fast feedback
 11 | - ✅ **Live Integration Tests** with real Alfresco
 12 | - ✅ **Edge Case Coverage** for production readiness
 13 | 
 14 | ## 🚀 Quick Start
 15 | 
 16 | ### Run All Tests
 17 | ```bash
 18 | # Run complete test suite
 19 | python scripts/run_tests.py all
 20 | 
 21 | # Run with coverage report
 22 | python scripts/run_tests.py coverage
 23 | ```
 24 | 
 25 | ### Run Specific Test Types
 26 | ```bash
 27 | # Unit tests only (fast)
 28 | python scripts/run_tests.py unit
 29 | 
 30 | # Integration tests (requires Alfresco)
 31 | python scripts/run_tests.py integration
 32 | 
 33 | # Performance benchmarks
 34 | python scripts/run_tests.py performance
 35 | 
 36 | # Code quality checks
 37 | python scripts/run_tests.py lint
 38 | ```
 39 | 
 40 | ## 🏗️ Test Structure
 41 | 
 42 | ### Test Categories
 43 | 
 44 | | Test Type | Purpose | Count | Duration | Prerequisites |
 45 | |-----------|---------|-------|----------|---------------|
 46 | | **Unit** | Fast feedback, mocked dependencies | 122 | ~5s | None |
 47 | | **Integration** | Real Alfresco server testing | 21 | ~30s | Live Alfresco |
 48 | | **Coverage** | Edge cases and error paths | 17 | ~10s | None |
 49 | 
 50 | ### Test Files
 51 | 
 52 | ```
 53 | tests/
 54 | ├── conftest.py              # Shared fixtures and configuration
 55 | ├── test_fastmcp.py          # Unit tests for the FastMCP tools
 56 | ├── test_integration.py      # Live Alfresco integration tests
 57 | └── test_coverage.py         # Edge cases and coverage tests
 58 | ```
 59 | 
 60 | ## 🔧 Environment Setup
 61 | 
 62 | ### Prerequisites
 63 | ```bash
 64 | # Install test dependencies
 65 | pip install -e .[test]
 66 | 
 67 | # Or install all dev dependencies
 68 | pip install -e .[all]
 69 | ```
 70 | 
 71 | ### Alfresco Configuration
 72 | 
 73 | For integration tests, configure your Alfresco connection:
 74 | 
 75 | ```bash
 76 | # Environment variables (recommended)
 77 | export ALFRESCO_URL="http://localhost:8080"
 78 | export ALFRESCO_USERNAME="admin"
 79 | export ALFRESCO_PASSWORD="admin"
 80 | 
 81 | # Or set in config.yaml
 82 | alfresco:
 83 |   url: "http://localhost:8080"
 84 |   username: "admin"
 85 |   password: "admin"
 86 | ```
 87 | 
 88 | ### Test Configuration
 89 | 
 90 | Pytest configuration is in `pytest.ini`:
 91 | 
 92 | ```ini
 93 | [tool:pytest]
 94 | testpaths = tests
 95 | python_files = test_*.py
 96 | python_classes = Test*
 97 | python_functions = test_*
 98 | addopts = 
 99 |     --cov=alfresco_mcp_server
100 |     --cov-report=html
101 |     --cov-report=xml
102 |     --cov-report=term
103 |     --cov-branch
104 |     --cov-fail-under=85
105 | markers =
106 |     unit: Unit tests with mocked dependencies
107 |     integration: Integration tests requiring live Alfresco
108 |     slow: Tests that take longer than usual
109 |     performance: Performance and benchmark tests
110 | ```
111 | 
112 | ## 🧪 Running Tests
113 | 
114 | ### Basic Test Commands
115 | 
116 | ```bash
117 | # Run all tests
118 | pytest
119 | 
120 | # Run integration tests with live server
121 | pytest tests/test_integration.py
122 | 
123 | # Run specific test function  
124 | pytest tests/test_fastmcp.py::test_search_content_tool
125 | 
126 | # Run tests with specific markers
127 | pytest -m unit
128 | pytest -m integration
129 | pytest -m "not slow"
130 | ```
131 | 
132 | ### Advanced Options
133 | 
134 | ```bash
135 | # Verbose output
136 | pytest -v
137 | 
138 | # Stop on first failure
139 | pytest -x
140 | 
141 | # Run in parallel (faster)
142 | pytest -n auto
143 | 
144 | # Show coverage in terminal
145 | pytest --cov-report=term-missing
146 | 
147 | # Generate HTML coverage report
148 | pytest --cov-report=html
149 | ```
150 | 
151 | ### Using the Test Runner
152 | 
153 | The `scripts/run_tests.py` provides convenient test execution:
154 | 
155 | ```bash
156 | # Show help
157 | python scripts/run_tests.py --help
158 | 
159 | # Run unit tests only
160 | python scripts/run_tests.py unit
161 | 
162 | # Run with custom pytest args
163 | python scripts/run_tests.py unit --verbose --stop-on-failure
164 | 
165 | # Run integration tests with timeout
166 | python scripts/run_tests.py integration --timeout 60
167 | 
168 | # Skip Alfresco availability check
169 | python scripts/run_tests.py integration --skip-alfresco-check
170 | ```
171 | 
172 | ## 🔍 Test Details
173 | 
174 | ### Unit Tests (122 tests) - **100% passed**
175 | 
176 | Fast tests with mocked Alfresco dependencies:
177 | 
178 | ```python
179 | # Example unit test structure
180 | async def test_search_content_tool():
181 |     """Test search tool with mocked Alfresco client."""
182 |     
183 |     # Arrange: Set up mock
184 |     mock_alfresco = Mock()
185 |     mock_search_results = create_mock_search_results(3)
186 |     mock_alfresco.search_content.return_value = mock_search_results
187 |     
188 |     # Act: Execute tool
189 |     result = await search_tool.execute(mock_alfresco, {
190 |         "query": "test query",
191 |         "max_results": 10
192 |     })
193 |     
194 |     # Assert: Verify behavior
195 |     assert "Found 3 results" in result
196 |     mock_alfresco.search_content.assert_called_once()
197 | ```
198 | 
199 | **Covers:**
200 | - ✅ All 17 MCP tools with success scenarios
201 | - ✅ Error handling and edge cases
202 | - ✅ Parameter validation
203 | - ✅ Response formatting
204 | - ✅ Tool availability and schemas
205 | 
206 | ### Integration Tests (21 tests) - **100% passed**
207 | 
208 | Real Alfresco server integration:
209 | 
210 | ```python
211 | # Example integration test
212 | async def test_live_search_integration(alfresco_client):
213 |     """Test search against live Alfresco server."""
214 |     
215 |     # Execute search on live server
216 |     async with Client(mcp) as client:
217 |         result = await client.call_tool("search_content", {
218 |             "query": "*",
219 |             "max_results": 5
220 |         })
221 |     
222 |     # Verify real response structure
223 |     assert result is not None
224 |     assert len(result) > 0
225 | ```
226 | 
227 | **Covers:**
228 | - ✅ Live server connectivity
229 | - ✅ Tool functionality with real data
230 | - ✅ End-to-end workflows
231 | - ✅ Resource access
232 | - ✅ Prompt generation
233 | - ✅ Performance benchmarks
234 | 
235 | ### Coverage Tests (17 tests)
236 | 
237 | Edge cases and error paths:
238 | 
239 | ```python
240 | # Example coverage test
241 | async def test_invalid_base64_handling():
242 |     """Test handling of malformed base64 content."""
243 |     
244 |     # Test with clearly invalid base64
245 |     invalid_content = "not-valid-base64!!!"
246 |     
247 |     result = await upload_tool.execute(mock_client, {
248 |         "filename": "test.txt",
249 |         "content_base64": invalid_content,
250 |         "parent_id": "-root-"
251 |     })
252 |     
253 |     assert "❌ Error: Invalid base64 content" in result
254 | ```
255 | 
256 | **Covers:**
257 | - ✅ Invalid inputs and malformed data
258 | - ✅ Connection failures and timeouts
259 | - ✅ Authentication errors
260 | - ✅ Edge case parameter values
261 | - ✅ Error message formatting
262 | 
263 | ## 📊 Test Reports & Coverage
264 | 
265 | The test suite generates **reports** in multiple formats:
266 | 
267 | ### **📈 Coverage Reports**
268 | 
269 | The test framework automatically generates detailed coverage reports:
270 | 
271 | ```bash
272 | # Generate full coverage report
273 | python scripts/run_tests.py --mode coverage
274 | 
275 | # Generate with specific output formats
276 | python -m pytest --cov=alfresco_mcp_server --cov-report=html --cov-report=xml --cov-report=term
277 | ```
278 | 
279 | **Report Formats Generated:**
280 | - **📊 HTML Report**: `htmlcov/index.html` - Interactive visual coverage report
281 | - **📋 XML Report**: `coverage.xml` - Machine-readable coverage data (166KB)
282 | - **🖥️ Terminal Report**: Immediate coverage summary in console
283 | 
284 | ### **🎯 Current Coverage Metrics**
285 | From latest test run:
286 | - **Files Covered**: 25+ source files
287 | - **Coverage Percentage**: 20% (improving with modular architecture)
288 | - **Main Server**: `fastmcp_server.py` - 91% coverage  
289 | - **Configuration**: `config.py` - 93% coverage
290 | - **Prompts**: `search_and_analyze.py` - 100% coverage
291 | 
292 | ### **📁 Report Locations**
293 | 
294 | After running tests, reports are available at:
295 | ```
296 | 📊 htmlcov/index.html          # Interactive HTML coverage report
297 | 📋 coverage.xml               # XML coverage data (166KB)
298 | 🗂️ htmlcov/                   # Detailed per-file coverage analysis
299 |    ├── index.html             # Main coverage dashboard
300 |    ├── function_index.html    # Function-level coverage
301 |    ├── class_index.html       # Class-level coverage
302 |    └── [file]_py.html         # Individual file coverage
303 | ```
304 | 
305 | ### **🔍 Viewing Reports**
306 | 
307 | ```bash
308 | # Open HTML coverage report in browser
309 | python -c "import webbrowser; webbrowser.open('htmlcov/index.html')"
310 | 
311 | # View coverage summary in terminal
312 | python -m pytest --cov=alfresco_mcp_server --cov-report=term-missing
313 | 
314 | # Generate report with all formats
315 | python scripts/run_tests.py --mode coverage
316 | ```
317 | 
318 | ### **📝 Test Execution Reports**
319 | 
320 | Each test run provides:
321 | - **✅ Pass/Fail Status**: Detailed results for all 4 test categories
322 | - **⏱️ Performance Metrics**: Execution times and performance benchmarks  
323 | - **🔍 Error Details**: Full stack traces and failure analysis
324 | - **📊 Coverage Analysis**: Line-by-line code coverage with missing lines highlighted
325 | 
326 | ### **🚀 Integration Test Reports**
327 | 
328 | The integration tests generate detailed execution logs:
329 | - **Live Alfresco Validation**: Real server connectivity and response analysis
330 | - **Tool Parameter Verification**: Automatic schema validation and error detection
331 | - **Search Method Comparison**: AFTS vs CMIS performance and result analysis
332 | - **End-to-End Workflows**: Complete document lifecycle validation
333 | 
334 | ### **💡 Using Reports for Development**
335 | 
336 | 1. **📊 HTML Coverage Report**: Visual identification of untested code paths
337 | 2. **📋 Function Coverage**: Find specific functions needing test coverage
338 | 3. **🎯 Missing Lines**: Direct links to uncovered code lines
339 | 4. **📈 Trend Analysis**: Track coverage improvements over time
340 | 
341 | The reports help identify areas needing additional testing and validate the test suite effectiveness.
342 | 
343 | ## 📊 Coverage Analysis
344 | 
345 | ### Viewing Coverage Reports
346 | 
347 | ```bash
348 | # Generate HTML report
349 | pytest --cov-report=html
350 | open htmlcov/index.html
351 | 
352 | # Terminal report with missing lines
353 | pytest --cov-report=term-missing
354 | 
355 | # XML report for CI/CD
356 | pytest --cov-report=xml
357 | ```
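
In CI, the XML report can be parsed to extract an overall figure; a sketch assuming the Cobertura-style `coverage.xml` that coverage.py emits (whose root element carries a `line-rate` attribute):

```python
import xml.etree.ElementTree as ET

def overall_coverage(path="coverage.xml"):
    """Return total line coverage as a percentage from a coverage.py XML report."""
    root = ET.parse(path).getroot()
    return float(root.attrib["line-rate"]) * 100.0
```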
358 | 
359 | ### Coverage Targets
360 | 
361 | | Module | Target | Current |
362 | |--------|---------|---------|
363 | | `fastmcp_server.py` | 74% | 91% |
364 | | `config.py` | 90% | 96% |
365 | | **Overall** | 80% | 82% |
366 | 
367 | ### Improving Coverage
368 | 
369 | To improve test coverage:
370 | 
371 | 1. **Identify uncovered lines:**
372 |    ```bash
373 |    pytest --cov-report=term-missing
374 |    ```
375 | 
376 | 2. **Add tests for missing paths:**
377 |    - Error conditions
378 |    - Edge cases
379 |    - Exception handling
380 | 
381 | 3. **Run coverage-specific tests:**
382 |    ```bash
383 |    pytest tests/test_coverage.py -v
384 |    ```
385 | 
386 | ## ⚡ Performance Testing
387 | 
388 | ### Benchmark Tests
389 | 
390 | Performance tests validate response times:
391 | 
392 | ```python
393 | # Example performance test
394 | async def test_search_performance():
395 |     """Verify search performance under 10 seconds."""
396 |     
397 |     start_time = time.time()
398 |     
399 |     async with Client(mcp) as client:
400 |         await client.call_tool("search_content", {
401 |             "query": "*",
402 |             "max_results": 10
403 |         })
404 |     
405 |     duration = time.time() - start_time
406 |     assert duration < 10.0, f"Search took {duration:.2f}s, expected <10s"
407 | ```
408 | 
409 | ### Performance Targets
410 | 
411 | | Operation | Target | Typical |
412 | |-----------|---------|---------|
413 | | Search | <10s | 2-5s |
414 | | Upload | <30s | 5-15s |
415 | | Download | <15s | 3-8s |
416 | | Properties | <5s | 1-3s |
417 | | Concurrent (5x) | <15s | 8-12s |
418 | 
419 | ### Running Performance Tests
420 | 
421 | ```bash
422 | # Run performance suite
423 | python scripts/run_tests.py performance
424 | 
425 | # Run with timing details
426 | pytest -m performance --durations=10
427 | ```
428 | 
429 | ## 🔨 Test Development
430 | 
431 | ### Adding New Tests
432 | 
433 | 1. **Choose the right test type:**
434 |    - Unit: Fast feedback, mocked dependencies
435 |    - Integration: Real server interaction
436 |    - Coverage: Edge cases and errors
437 | 
438 | 2. **Follow naming conventions:**
439 |    ```python
440 |    # Unit tests
441 |    async def test_tool_name_success():
442 |    async def test_tool_name_error_case():
443 |    
444 |    # Integration tests  
445 |    async def test_live_tool_integration():
446 |    
447 |    # Coverage tests
448 |    async def test_edge_case_handling():
449 |    ```
450 | 
451 | 3. **Use appropriate fixtures:**
452 |    ```python
453 |    # Mock fixtures for unit tests
454 |    def test_with_mock_client(mock_alfresco_client):
455 |        pass
456 |    
457 |    # Real client for integration
458 |    def test_with_real_client(alfresco_client):
459 |        pass
460 |    ```
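
A plain factory behind the fixture keeps mock construction explicit and reusable across tests; a sketch (the names mirror the examples above but are illustrative, not the project's actual fixtures):

```python
from unittest.mock import Mock

def make_mock_alfresco_client(search_results=None):
    """Build a Mock standing in for the Alfresco client, preloaded with search results."""
    client = Mock()
    client.search_content.return_value = search_results if search_results is not None else []
    return client
```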
461 | 
462 | ### Test Patterns
463 | 
464 | **Arrange-Act-Assert Pattern:**
465 | ```python
466 | async def test_example():
467 |     # Arrange: Set up test data
468 |     mock_client = create_mock_client()
469 |     test_params = {"query": "test"}
470 |     
471 |     # Act: Execute the function
472 |     result = await tool.execute(mock_client, test_params)
473 |     
474 |     # Assert: Verify the outcome
475 |     assert "expected result" in result
476 |     mock_client.method.assert_called_once()
477 | ```
478 | 
479 | **Error Testing Pattern:**
480 | ```python
481 | async def test_error_handling():
482 |     # Arrange: Set up error condition
483 |     mock_client = Mock()
484 |     mock_client.method.side_effect = ConnectionError("Network error")
485 |     
486 |     # Act & Assert: Verify error handling
487 |     result = await tool.execute(mock_client, {})
488 |     assert "❌ Error:" in result
489 |     assert "Network error" in result
490 | ```
491 | 
492 | ### Mocking Best Practices
493 | 
494 | ```python
495 | # Good: Mock at the right level
496 | @patch('alfresco_mcp_server.fastmcp_server.ClientFactory')
497 | async def test_with_proper_mock(mock_client_class):
498 |     mock_instance = mock_client_class.return_value
499 |     mock_instance.search.return_value = test_data
500 |     
501 |     # Test uses mocked instance
502 |     result = await search_tool.execute(mock_instance, params)
503 | 
504 | # Good: Use realistic test data
505 | def create_mock_search_results(count=3):
506 |     return [
507 |         {
508 |             "entry": {
509 |                 "id": f"test-id-{i}",
510 |                 "name": f"test-doc-{i}.txt",
511 |                 "nodeType": "cm:content",
512 |                 "properties": {
513 |                     "cm:title": f"Test Document {i}",
514 |                     "cm:created": "2024-01-15T10:30:00.000Z"
515 |                 }
516 |             }
517 |         }
518 |         for i in range(count)
519 |     ]
520 | ```
521 | 
522 | ## 🚨 Troubleshooting Tests
523 | 
524 | ### Common Issues
525 | 
526 | **Test Failures:**
527 | 
528 | 1. **Connection Errors in Integration Tests:**
529 |    ```bash
530 |    # Check Alfresco is running
531 |    curl -u admin:admin http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
532 |    
533 |    # Verify environment variables
534 |    echo $ALFRESCO_URL
535 |    echo $ALFRESCO_USERNAME
536 |    ```
537 | 
538 | 2. **Import Errors:**
539 |    ```bash
540 |    # Reinstall in development mode
541 |    pip install -e .
542 |    
543 |    # Check Python path
544 |    python -c "import alfresco_mcp_server; print(alfresco_mcp_server.__file__)"
545 |    ```
546 | 
547 | 3. **Coverage Too Low:**
548 |    ```bash
549 |    # Run coverage tests specifically
550 |    pytest tests/test_coverage.py
551 |    
552 |    # Check what's missing
553 |    pytest --cov-report=term-missing
554 |    ```
555 | 
556 | **Performance Issues:**
557 | 
558 | 1. **Slow Tests:**
559 |    ```bash
560 |    # Profile test execution time
561 |    pytest --durations=10
562 |    
563 |    # Run only fast tests
564 |    pytest -m "not slow"
565 |    ```
566 | 
567 | 2. **Timeout Errors:**
568 |    ```bash
569 |    # Increase timeout for integration tests
570 |    pytest --timeout=60 tests/test_integration.py
571 |    ```
572 | 
573 | ### Debugging Tests
574 | 
575 | ```bash
576 | # Run with pdb debugger
577 | pytest --pdb tests/test_file.py::test_function
578 | 
579 | # Show full output (don't capture)
580 | pytest -s tests/test_file.py
581 | 
582 | # Show local variables on failure
583 | pytest --showlocals --tb=long
584 | 
585 | # Run single test with maximum verbosity
586 | pytest -vvv tests/test_file.py::test_function
587 | ```
588 | 
589 | ## 🔄 Continuous Integration
590 | 
591 | ### GitHub Actions Integration
592 | 
593 | Example CI configuration:
594 | 
595 | ```yaml
596 | name: Tests
597 | on: [push, pull_request]
598 | 
599 | jobs:
600 |   test:
601 |     runs-on: ubuntu-latest
602 |     steps:
603 |       - uses: actions/checkout@v4
604 |       - name: Set up Python
605 |         uses: actions/setup-python@v5
606 |         with:
607 |           python-version: "3.8"
608 |       
609 |       - name: Install dependencies
610 |         run: |
611 |           pip install -e .[test]
612 |       
613 |       - name: Run unit tests
614 |         run: |
615 |           python scripts/run_tests.py unit
616 |       
617 |       - name: Run coverage tests
618 |         run: |
619 |           python scripts/run_tests.py coverage
620 |       
621 |       - name: Upload coverage reports
622 |         uses: codecov/codecov-action@v1
623 |         with:
624 |           file: ./coverage.xml
625 | ```
626 | 
627 | ### Local Pre-commit Hooks
628 | 
629 | ```bash
630 | # Install pre-commit
631 | pip install pre-commit
632 | 
633 | # Set up hooks
634 | pre-commit install
635 | 
636 | # Run manually
637 | pre-commit run --all-files
638 | ```
639 | 
640 | ## 📈 Test Metrics
641 | 
642 | ### Success Criteria
643 | 
644 | - ✅ **All tests passing**: **143/143 (100%)**
645 | - ✅ **Coverage target**: >85% on main modules
646 | - ✅ **Performance targets**: All benchmarks within limits
647 | - ✅ **No linting errors**: Clean code quality
648 | 
649 | ### Monitoring
650 | 
651 | ```bash
652 | # Daily test run
653 | python scripts/run_tests.py all > test_results.log 2>&1
654 | 
655 | # Coverage tracking
656 | pytest --cov-report=json
657 | # Parse coverage.json for metrics
658 | 
659 | # Performance monitoring
660 | python scripts/run_tests.py performance | grep "Duration:"
661 | ```
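
The JSON report written by `pytest --cov-report=json` (coverage.py's format) carries the overall percentage under `totals.percent_covered`. A small sketch for pulling that metric out, assuming the default `coverage.json` output path:

```python
import json
from pathlib import Path


def coverage_percent(report: dict) -> float:
    """Extract the overall percent covered from a coverage.py JSON report."""
    return float(report["totals"]["percent_covered"])


# In practice, load the real report:
#   report = json.loads(Path("coverage.json").read_text())
# Inline sample shown here so the snippet is self-contained:
sample = {"totals": {"percent_covered": 87.5}}
print(f"Coverage: {coverage_percent(sample):.1f}%")  # Coverage: 87.5%
```

This pairs naturally with the >85% coverage target above: fail the pipeline when `coverage_percent(report) < 85.0`.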
662 | 
663 | ---
664 | 
665 | **🎯 Remember**: Good tests are your safety net for refactoring and new features. Keep them fast, reliable, and thorough! 
```

--------------------------------------------------------------------------------
/examples/batch_operations.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Batch Operations Example for Alfresco MCP Server
  4 | 
  5 | This example demonstrates efficient batch processing patterns:
  6 | - Bulk document uploads
  7 | - Parallel search operations
  8 | - Batch metadata updates
  9 | - Concurrent folder creation
 10 | - Performance optimization techniques
 11 | 
 12 | Useful for processing large numbers of documents or automating
 13 | repetitive tasks.
 14 | """
 15 | 
 16 | import asyncio
 17 | import base64
 18 | import time
 19 | import uuid
 21 | from typing import List, Dict, Any
 22 | from fastmcp import Client
 23 | from alfresco_mcp_server.fastmcp_server import mcp
 24 | 
 25 | 
 26 | class BatchOperationsDemo:
 27 |     """Demonstrates efficient batch processing with Alfresco MCP Server."""
 28 |     
 29 |     def __init__(self):
 30 |         self.session_id = uuid.uuid4().hex[:8]
 31 |         self.batch_size = 5  # Number of operations per batch
 32 |         
 33 |     async def run_batch_demo(self):
 34 |         """Run comprehensive batch operations demonstration."""
 35 |         
 36 |         print("📦 Alfresco MCP Server - Batch Operations Demo")
 37 |         print("=" * 60)
 38 |         print(f"Session ID: {self.session_id}")
 39 |         print(f"Batch Size: {self.batch_size}")
 40 |         
 41 |         async with Client(mcp) as client:
 42 |             # Demo 1: Bulk Document Upload
 43 |             await self._demo_bulk_upload(client)
 44 |             
 45 |             # Demo 2: Parallel Search Operations
 46 |             await self._demo_parallel_search(client)
 47 |             
 48 |             # Demo 3: Batch Folder Creation
 49 |             await self._demo_batch_folders(client)
 50 |             
 51 |             # Demo 4: Concurrent Property Updates
 52 |             await self._demo_batch_properties(client)
 53 |             
 54 |             # Demo 5: Performance Comparison
 55 |             await self._demo_performance_comparison(client)
 56 |             
 57 |             print("\n✅ Batch Operations Demo Complete!")
 58 |     
 59 |     async def _demo_bulk_upload(self, client):
 60 |         """Demonstrate bulk document upload with progress tracking."""
 61 |         
 62 |         print("\n" + "="*60)
 63 |         print("📤 Demo 1: Bulk Document Upload")
 64 |         print("="*60)
 65 |         
 66 |         # Generate sample documents
 67 |         documents = self._generate_sample_documents(10)
 68 |         
 69 |         print(f"\n📋 Uploading {len(documents)} documents...")
 70 |         print("   Strategy: Async batch processing with progress tracking")
 71 |         
 72 |         start_time = time.time()
 73 |         
 74 |         # Method 1: Sequential upload (for comparison)
 75 |         print("\n1️⃣ Sequential Upload:")
 76 |         sequential_start = time.time()
 77 |         
 78 |         for i, doc in enumerate(documents[:3], 1):  # Only 3 for demo
 79 |             print(f"   📄 Uploading document {i}/3: {doc['name']}")
 80 |             
 81 |             result = await client.call_tool("upload_document", {
 82 |                 "filename": doc['name'],
 83 |                 "content_base64": doc['content_b64'],
 84 |                 "parent_id": "-root-",
 85 |                 "description": doc['description']
 86 |             })
 87 |             
 88 |             if "✅" in result[0].text:
 89 |                 print(f"   ✅ Document {i} uploaded successfully")
 90 |             else:
 91 |                 print(f"   ❌ Document {i} failed")
 92 |         
 93 |         sequential_time = time.time() - sequential_start
 94 |         print(f"   ⏱️  Sequential time: {sequential_time:.2f}s")
 95 |         
 96 |         # Method 2: Batch upload with semaphore
 97 |         print("\n2️⃣ Concurrent Upload (with rate limiting):")
 98 |         concurrent_start = time.time()
 99 |         
100 |         semaphore = asyncio.Semaphore(3)  # Limit concurrent uploads
101 |         
102 |         async def upload_with_limit(doc, index):
103 |             async with semaphore:
104 |                 print(f"   📄 Starting upload {index}: {doc['name']}")
105 |                 
106 |                 result = await client.call_tool("upload_document", {
107 |                     "filename": doc['name'],
108 |                     "content_base64": doc['content_b64'],
109 |                     "parent_id": "-root-",
110 |                     "description": doc['description']
111 |                 })
112 |                 
113 |                 success = "✅" in result[0].text
114 |                 print(f"   {'✅' if success else '❌'} Upload {index} completed")
115 |                 return success
116 |         
117 |         # Upload remaining documents concurrently
118 |         remaining_docs = documents[3:8]  # Next 5 documents
119 |         tasks = [
120 |             upload_with_limit(doc, i+4) 
121 |             for i, doc in enumerate(remaining_docs)
122 |         ]
123 |         
124 |         results = await asyncio.gather(*tasks, return_exceptions=True)
125 |         
126 |         concurrent_time = time.time() - concurrent_start
127 |         successful = sum(1 for r in results if r is True)
128 |         
129 |         print(f"   ⏱️  Concurrent time: {concurrent_time:.2f}s")
130 |         print(f"   📊 Success rate: {successful}/{len(remaining_docs)}")
131 |         print(f"   🚀 Per-document speedup: {(sequential_time/3)/(concurrent_time/len(remaining_docs)):.1f}x faster")
132 |     
133 |     async def _demo_parallel_search(self, client):
134 |         """Demonstrate parallel search operations."""
135 |         
136 |         print("\n" + "="*60)
137 |         print("🔍 Demo 2: Parallel Search Operations")
138 |         print("="*60)
139 |         
140 |         # Different search queries to run in parallel
141 |         search_queries = [
142 |             ("Content search", "*", 10),
143 |             ("Session docs", self.session_id, 5),
144 |             ("Test files", "test", 8),
145 |             ("Documents", "document", 12),
146 |             ("Recent items", "2024", 15)
147 |         ]
148 |         
149 |         print(f"\n📋 Running {len(search_queries)} searches in parallel...")
150 |         
151 |         start_time = time.time()
152 |         
153 |         async def parallel_search(query_info):
154 |             name, query, max_results = query_info
155 |             print(f"   🔎 Starting: {name} ('{query}')")
156 |             
157 |             try:
158 |                 result = await client.call_tool("search_content", {
159 |                     "query": query,
160 |                     "max_results": max_results
161 |                 })
162 |                 
163 |                 # Extract result count from response
164 |                 response_text = result[0].text
165 |                 if "Found" in response_text:
166 |                     print(f"   ✅ {name}: Completed")
167 |                 else:
168 |                     print(f"   📝 {name}: No results")
169 |                 
170 |                 return name, True, response_text
171 |                 
172 |             except Exception as e:
173 |                 print(f"   ❌ {name}: Failed - {e}")
174 |                 return name, False, str(e)
175 |         
176 |         # Execute all searches in parallel
177 |         search_tasks = [parallel_search(query) for query in search_queries]
178 |         search_results = await asyncio.gather(*search_tasks)
179 |         
180 |         parallel_time = time.time() - start_time
181 |         
182 |         print(f"\n📊 Parallel Search Results:")
183 |         print(f"   ⏱️  Total time: {parallel_time:.2f}s")
184 |         print(f"   🎯 Searches completed: {len(search_results)}")
185 |         
186 |         successful = sum(1 for _, success, _ in search_results if success)
187 |         print(f"   ✅ Success rate: {successful}/{len(search_results)}")
188 |         
189 |         # Show estimated sequential time
190 |         avg_search_time = 0.5  # Estimate 500ms per search
191 |         estimated_sequential = len(search_queries) * avg_search_time
192 |         print(f"   🚀 vs Sequential (~{estimated_sequential:.1f}s): {estimated_sequential/parallel_time:.1f}x faster")
193 |     
194 |     async def _demo_batch_folders(self, client):
195 |         """Demonstrate batch folder creation with hierarchical structure."""
196 |         
197 |         print("\n" + "="*60)
198 |         print("📁 Demo 3: Batch Folder Creation")
199 |         print("="*60)
200 |         
201 |         # Define folder structure
202 |         folder_structure = [
203 |             ("Projects", "Main projects folder"),
204 |             ("Archives", "Archived projects"),
205 |             ("Templates", "Document templates"),
206 |             ("Reports", "Generated reports"),
207 |             ("Temp", "Temporary workspace")
208 |         ]
209 |         
210 |         print(f"\n📋 Creating {len(folder_structure)} folders concurrently...")
211 |         
212 |         async def create_folder_async(folder_info, index):
213 |             name, description = folder_info
214 |             folder_name = f"{name}_{self.session_id}"
215 |             
216 |             print(f"   📂 Creating folder {index+1}: {folder_name}")
217 |             
218 |             try:
219 |                 result = await client.call_tool("create_folder", {
220 |                     "folder_name": folder_name,
221 |                     "parent_id": "-root-",
222 |                     "description": f"{description} - Batch demo {self.session_id}"
223 |                 })
224 |                 
225 |                 success = "✅" in result[0].text
226 |                 print(f"   {'✅' if success else '❌'} Folder {index+1}: {folder_name}")
227 |                 return success
228 |                 
229 |             except Exception as e:
230 |                 print(f"   ❌ Folder {index+1} failed: {e}")
231 |                 return False
232 |         
233 |         start_time = time.time()
234 |         
235 |         # Create all folders concurrently
236 |         folder_tasks = [
237 |             create_folder_async(folder, i) 
238 |             for i, folder in enumerate(folder_structure)
239 |         ]
240 |         
241 |         folder_results = await asyncio.gather(*folder_tasks, return_exceptions=True)
242 |         
243 |         creation_time = time.time() - start_time
244 |         successful_folders = sum(1 for r in folder_results if r is True)
245 |         
246 |         print(f"\n📊 Batch Folder Creation Results:")
247 |         print(f"   ⏱️  Creation time: {creation_time:.2f}s")
248 |         print(f"   ✅ Folders created: {successful_folders}/{len(folder_structure)}")
249 |         print(f"   📈 Average time per folder: {creation_time/len(folder_structure):.2f}s")
250 |     
251 |     async def _demo_batch_properties(self, client):
252 |         """Demonstrate batch property updates."""
253 |         
254 |         print("\n" + "="*60)
255 |         print("⚙️ Demo 4: Batch Property Updates")
256 |         print("="*60)
257 |         
258 |         # Simulate updating properties on multiple nodes
259 |         node_updates = [
260 |             ("-root-", {"cm:title": f"Root Updated {self.session_id}", "cm:description": "Batch update demo"}),
261 |             ("-root-", {"custom:project": "Batch Demo", "custom:session": self.session_id}),
262 |             ("-root-", {"cm:tags": "demo,batch,mcp", "custom:timestamp": str(int(time.time()))}),
263 |         ]
264 |         
265 |         print(f"\n📋 Updating properties on {len(node_updates)} nodes...")
266 |         
267 |         async def update_properties_async(update_info, index):
268 |             node_id, properties = update_info
269 |             
270 |             print(f"   ⚙️ Updating properties {index+1}: {len(properties)} properties")
271 |             
272 |             try:
273 |                 result = await client.call_tool("update_node_properties", {
274 |                     "node_id": node_id,
275 |                     "properties": properties
276 |                 })
277 |                 
278 |                 success = "✅" in result[0].text
279 |                 print(f"   {'✅' if success else '❌'} Properties {index+1} updated")
280 |                 return success
281 |                 
282 |             except Exception as e:
283 |                 print(f"   ❌ Properties {index+1} failed: {e}")
284 |                 return False
285 |         
286 |         start_time = time.time()
287 |         
288 |         # Update all properties concurrently
289 |         update_tasks = [
290 |             update_properties_async(update, i) 
291 |             for i, update in enumerate(node_updates)
292 |         ]
293 |         
294 |         update_results = await asyncio.gather(*update_tasks, return_exceptions=True)
295 |         
296 |         update_time = time.time() - start_time
297 |         successful_updates = sum(1 for r in update_results if r is True)
298 |         
299 |         print(f"\n📊 Batch Property Update Results:")
300 |         print(f"   ⏱️  Update time: {update_time:.2f}s")
301 |         print(f"   ✅ Updates completed: {successful_updates}/{len(node_updates)}")
302 |     
303 |     async def _demo_performance_comparison(self, client):
304 |         """Compare sequential vs concurrent operation performance."""
305 |         
306 |         print("\n" + "="*60)
307 |         print("⚡ Demo 5: Performance Comparison")
308 |         print("="*60)
309 |         
310 |         # Test operations
311 |         operations = [
312 |             ("search", "search_content", {"query": f"test_{i}", "max_results": 3})
313 |             for i in range(5)
314 |         ]
315 |         
316 |         print(f"\n📊 Comparing sequential vs concurrent execution...")
317 |         print(f"   Operations: {len(operations)} search operations")
318 |         
319 |         # Sequential execution
320 |         print("\n1️⃣ Sequential Execution:")
321 |         sequential_start = time.time()
322 |         
323 |         for i, (op_type, tool_name, params) in enumerate(operations):
324 |             print(f"   🔄 Operation {i+1}/{len(operations)}")
325 |             try:
326 |                 await client.call_tool(tool_name, params)
327 |                 print(f"   ✅ Operation {i+1} completed")
328 |             except Exception as e:
329 |                 print(f"   ❌ Operation {i+1} failed: {e}")
330 |         
331 |         sequential_time = time.time() - sequential_start
332 |         
333 |         # Concurrent execution
334 |         print("\n2️⃣ Concurrent Execution:")
335 |         concurrent_start = time.time()
336 |         
337 |         async def execute_operation(op_info, index):
338 |             op_type, tool_name, params = op_info
339 |             print(f"   🔄 Starting operation {index+1}")
340 |             
341 |             try:
342 |                 await client.call_tool(tool_name, params)
343 |                 print(f"   ✅ Operation {index+1} completed")
344 |                 return True
345 |             except Exception as e:
346 |                 print(f"   ❌ Operation {index+1} failed: {e}")
347 |                 return False
348 |         
349 |         concurrent_tasks = [
350 |             execute_operation(op, i) 
351 |             for i, op in enumerate(operations)
352 |         ]
353 |         
354 |         concurrent_results = await asyncio.gather(*concurrent_tasks, return_exceptions=True)
355 |         concurrent_time = time.time() - concurrent_start
356 |         
357 |         # Performance summary
358 |         print(f"\n📈 Performance Comparison Results:")
359 |         print(f"   Sequential time: {sequential_time:.2f}s")
360 |         print(f"   Concurrent time: {concurrent_time:.2f}s")
361 |         print(f"   Speed improvement: {sequential_time/concurrent_time:.1f}x")
362 |         print(f"   Time saved: {sequential_time-concurrent_time:.2f}s ({(1-concurrent_time/sequential_time)*100:.1f}%)")
363 |         
364 |         print(f"\n💡 Batch Processing Best Practices:")
365 |         print(f"   • Use async/await for I/O bound operations")
366 |         print(f"   • Implement rate limiting with semaphores")
367 |         print(f"   • Handle exceptions gracefully in batch operations")
368 |         print(f"   • Monitor progress with appropriate logging")
369 |         print(f"   • Consider memory usage for large batches")
370 |     
371 |     def _generate_sample_documents(self, count: int) -> List[Dict[str, Any]]:
372 |         """Generate sample documents for testing."""
373 |         
374 |         documents = []
375 |         
376 |         for i in range(count):
377 |             content = f"""Document {i+1}
378 |             
379 | Session: {self.session_id}
380 | Created: {time.strftime('%Y-%m-%d %H:%M:%S')}
381 | Type: Batch Demo Document
382 | Index: {i+1} of {count}
383 | 
384 | This is a sample document created during the batch operations demo.
385 | It contains some sample content for testing purposes.
386 | 
387 | Content sections:
388 | - Introduction
389 | - Main content  
390 | - Conclusion
391 | 
392 | Document properties:
393 | - Unique ID: {uuid.uuid4()}
394 | - Processing batch: {self.session_id}
395 | - Creation timestamp: {int(time.time())}
396 | """
397 |             
398 |             documents.append({
399 |                 "name": f"batch_doc_{self.session_id}_{i+1:03d}.txt",
400 |                 "content": content,
401 |                 "content_b64": base64.b64encode(content.encode('utf-8')).decode('utf-8'),
402 |                 "description": f"Batch demo document {i+1} from session {self.session_id}"
403 |             })
404 |         
405 |         return documents
406 | 
407 | 
408 | async def main():
409 |     """Main function to run batch operations demo."""
410 |     
411 |     print("Starting Batch Operations Demo...")
412 |     
413 |     try:
414 |         demo = BatchOperationsDemo()
415 |         await demo.run_batch_demo()
416 |         
417 |         print("\n🎉 Batch Operations Demo Complete!")
418 |         print("\n📚 What you learned:")
419 |         print("• Efficient batch document upload patterns")
420 |         print("• Parallel search operation techniques")
421 |         print("• Concurrent folder creation strategies")
422 |         print("• Batch property update methods")
423 |         print("• Performance optimization approaches")
424 |         print("• Rate limiting and error handling")
425 |         
426 |     except Exception as e:
427 |         print(f"\n💥 Batch demo failed: {e}")
428 | 
429 | 
430 | if __name__ == "__main__":
431 |     asyncio.run(main()) 
```