This is page 3 of 3. Use http://codebase.md/stevereiner/python-alfresco-mcp-server?page={x} to view the full context.

# Directory Structure

```
├── .gitattributes
├── .gitignore
├── .vscode
│   ├── mcp.json
│   └── settings.json
├── alfresco_mcp_server
│   ├── __init__.py
│   ├── config.py
│   ├── fastmcp_server.py
│   ├── prompts
│   │   ├── __init__.py
│   │   └── search_and_analyze.py
│   ├── resources
│   │   ├── __init__.py
│   │   └── repository_resources.py
│   ├── tools
│   │   ├── __init__.py
│   │   ├── core
│   │   │   ├── __init__.py
│   │   │   ├── browse_repository.py
│   │   │   ├── cancel_checkout.py
│   │   │   ├── checkin_document.py
│   │   │   ├── checkout_document.py
│   │   │   ├── create_folder.py
│   │   │   ├── delete_node.py
│   │   │   ├── download_document.py
│   │   │   ├── get_node_properties.py
│   │   │   ├── update_node_properties.py
│   │   │   └── upload_document.py
│   │   └── search
│   │       ├── __init__.py
│   │       ├── advanced_search.py
│   │       ├── cmis_search.py
│   │       ├── search_by_metadata.py
│   │       └── search_content.py
│   └── utils
│       ├── __init__.py
│       ├── connection.py
│       ├── file_type_analysis.py
│       └── json_utils.py
├── CHANGELOG.md
├── claude-desktop-config-pipx-macos.json
├── claude-desktop-config-pipx-windows.json
├── claude-desktop-config-uv-macos.json
├── claude-desktop-config-uv-windows.json
├── claude-desktop-config-uvx-macos.json
├── claude-desktop-config-uvx-windows.json
├── config.yaml
├── docs
│   ├── api_reference.md
│   ├── claude_desktop_setup.md
│   ├── client_configurations.md
│   ├── configuration_guide.md
│   ├── install_with_pip_pipx.md
│   ├── mcp_inspector_setup.md
│   ├── quick_start_guide.md
│   ├── README.md
│   ├── testing_guide.md
│   └── troubleshooting.md
├── examples
│   ├── batch_operations.py
│   ├── document_lifecycle.py
│   ├── error_handling.py
│   ├── examples_summary.md
│   ├── quick_start.py
│   ├── README.md
│   └── transport_examples.py
├── LICENSE
├── MANIFEST.in
├── mcp-inspector-http-pipx-config.json
├── mcp-inspector-http-uv-config.json
├── mcp-inspector-http-uvx-config.json
├── mcp-inspector-stdio-pipx-config.json
├── mcp-inspector-stdio-uv-config.json
├── mcp-inspector-stdio-uvx-config.json
├── prompts-for-claude.md
├── pyproject.toml
├── pytest.ini
├── README.md
├── run_server.py
├── sample-dot-env.txt
├── scripts
│   ├── run_tests.py
│   └── test.bat
├── tests
│   ├── __init__.py
│   ├── conftest.py
│   ├── mcp_specific
│   │   ├── MCP_INSPECTOR_CONNECTION.md
│   │   ├── mcp_testing_guide.md
│   │   ├── START_HTTP_SERVER.md
│   │   ├── START_MCP_INSPECTOR.md
│   │   ├── test_http_server.ps1
│   │   ├── test_with_mcp_inspector.md
│   │   └── TESTING_INSTRUCTIONS.md
│   ├── README.md
│   ├── test_coverage.py
│   ├── test_fastmcp_2_0.py
│   ├── test_integration.py
│   └── test_unit_tools.py
├── tests-debug
│   └── README.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/docs/testing_guide.md:
--------------------------------------------------------------------------------

```markdown
# Testing Guide

Guide for running, maintaining, and extending the Alfresco MCP Server test suite. This document covers unit tests, integration tests, coverage analysis, and best practices.

## 📋 Test Suite Overview

The test suite includes:
- ✅ **143 Total Tests** (122 unit + 21 integration) - **100% passed**
- ✅ **51% Code Coverage** on main implementation
- ✅ **Mocked Unit Tests** for fast feedback
- ✅ **Live Integration Tests** with real Alfresco
- ✅ **Edge Case Coverage** for production readiness

## 🚀 Quick Start

### Run All Tests
```bash
# Run complete test suite
python scripts/run_tests.py all

# Run with coverage report
python scripts/run_tests.py coverage
```

### Run Specific Test Types
```bash
# Unit tests only (fast)
python scripts/run_tests.py unit

# Integration tests (requires Alfresco)
python scripts/run_tests.py integration

# Performance benchmarks
python scripts/run_tests.py performance

# Code quality checks
python scripts/run_tests.py lint
```

## 🏗️ Test Structure

### Test Categories

| Test Type | Purpose | Count | Duration | Prerequisites |
|-----------|---------|-------|----------|---------------|
| **Unit** | Fast feedback, mocked dependencies | 122 | ~5s | None |
| **Integration** | Real Alfresco server testing | 21 | ~30s | Live Alfresco |
| **Coverage** | Edge cases and error paths | 17 | ~10s | None |

### Test Files

```
tests/
├── conftest.py              # Shared fixtures and configuration
├── test_unit_tools.py       # Unit tests with mocked dependencies
├── test_fastmcp_2_0.py      # FastMCP 2.0 server tests
├── test_integration.py      # Live Alfresco integration tests
└── test_coverage.py         # Edge cases and coverage tests
```

## 🔧 Environment Setup

### Prerequisites
```bash
# Install test dependencies
pip install -e .[test]

# Or install all dev dependencies
pip install -e .[all]
```

### Alfresco Configuration

For integration tests, configure your Alfresco connection:

```bash
# Environment variables (recommended)
export ALFRESCO_URL="http://localhost:8080"
export ALFRESCO_USERNAME="admin"
export ALFRESCO_PASSWORD="admin"

# Or set in config.yaml
alfresco:
  url: "http://localhost:8080"
  username: "admin"
  password: "admin"
```

### Test Configuration

Pytest configuration is in `pytest.ini`:

```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts = 
    --cov=alfresco_mcp_server
    --cov-report=html
    --cov-report=xml
    --cov-report=term
    --cov-branch
    --cov-fail-under=85
markers =
    unit: Unit tests with mocked dependencies
    integration: Integration tests requiring live Alfresco
    slow: Tests that take longer than usual
    performance: Performance and benchmark tests
```

## 🧪 Running Tests

### Basic Test Commands

```bash
# Run all tests
pytest

# Run integration tests with live server
pytest tests/test_integration.py

# Run specific test function  
pytest tests/test_fastmcp_2_0.py::test_search_content_tool

# Run tests with specific markers
pytest -m unit
pytest -m integration
pytest -m "not slow"
```

### Advanced Options

```bash
# Verbose output
pytest -v

# Stop on first failure
pytest -x

# Run in parallel (requires pytest-xdist)
pytest -n auto

# Show coverage in terminal
pytest --cov-report=term-missing

# Generate HTML coverage report
pytest --cov-report=html
```

### Using the Test Runner

The `scripts/run_tests.py` provides convenient test execution:

```bash
# Show help
python scripts/run_tests.py --help

# Run unit tests only
python scripts/run_tests.py unit

# Run with custom pytest args
python scripts/run_tests.py unit --verbose --stop-on-failure

# Run integration tests with timeout
python scripts/run_tests.py integration --timeout 60

# Skip Alfresco availability check
python scripts/run_tests.py integration --skip-alfresco-check
```

## 🔍 Test Details

### Unit Tests (122 tests) - **100% passed**

Fast tests with mocked Alfresco dependencies:

```python
# Example unit test structure
async def test_search_content_tool():
    """Test search tool with mocked Alfresco client."""
    
    # Arrange: Set up mock
    mock_alfresco = Mock()
    mock_search_results = create_mock_search_results(3)
    mock_alfresco.search_content.return_value = mock_search_results
    
    # Act: Execute tool
    result = await search_tool.execute(mock_alfresco, {
        "query": "test query",
        "max_results": 10
    })
    
    # Assert: Verify behavior
    assert "Found 3 results" in result
    mock_alfresco.search_content.assert_called_once()
```

**Covers:**
- ✅ All 15 MCP tools with success scenarios
- ✅ Error handling and edge cases
- ✅ Parameter validation
- ✅ Response formatting
- ✅ Tool availability and schemas

### Integration Tests (21 tests) - **100% passed**

Real Alfresco server integration:

```python
# Example integration test
async def test_live_search_integration(alfresco_client):
    """Test search against live Alfresco server."""
    
    # Execute search on live server
    async with Client(mcp) as client:
        result = await client.call_tool("search_content", {
            "query": "*",
            "max_results": 5
        })
    
    # Verify real response structure
    assert result is not None
    assert len(result) > 0
```

**Covers:**
- ✅ Live server connectivity
- ✅ Tool functionality with real data
- ✅ End-to-end workflows
- ✅ Resource access
- ✅ Prompt generation
- ✅ Performance benchmarks

### Coverage Tests (17 tests)

Edge cases and error paths:

```python
# Example coverage test
async def test_invalid_base64_handling():
    """Test handling of malformed base64 content."""
    
    # Test with clearly invalid base64
    invalid_content = "not-valid-base64!!!"
    
    result = await upload_tool.execute(mock_client, {
        "filename": "test.txt",
        "content_base64": invalid_content,
        "parent_id": "-root-"
    })
    
    assert "❌ Error: Invalid base64 content" in result
```

**Covers:**
- ✅ Invalid inputs and malformed data
- ✅ Connection failures and timeouts
- ✅ Authentication errors
- ✅ Edge case parameter values
- ✅ Error message formatting

## 📊 Test Reports & Coverage

The test suite generates **reports** in multiple formats:

### **📈 Coverage Reports**

The test framework automatically generates detailed coverage reports:

```bash
# Generate full coverage report
python scripts/run_tests.py coverage

# Generate with specific output formats
python -m pytest --cov=alfresco_mcp_server --cov-report=html --cov-report=xml --cov-report=term
```

**Report Formats Generated:**
- **📊 HTML Report**: `htmlcov/index.html` - Interactive visual coverage report
- **📋 XML Report**: `coverage.xml` - Machine-readable coverage data (166KB)
- **🖥️ Terminal Report**: Immediate coverage summary in console

### **🎯 Current Coverage Metrics**
From latest test run:
- **Files Covered**: 25+ source files
- **Coverage Percentage**: 20% across all source files, 51% on the main implementation (improving with modular architecture)
- **Main Server**: `fastmcp_server.py` - 91% coverage  
- **Configuration**: `config.py` - 93% coverage
- **Prompts**: `search_and_analyze.py` - 100% coverage

### **📁 Report Locations**

After running tests, reports are available at:
```
📊 htmlcov/index.html          # Interactive HTML coverage report
📋 coverage.xml               # XML coverage data (166KB)
🗂️ htmlcov/                   # Detailed per-file coverage analysis
   ├── index.html             # Main coverage dashboard
   ├── function_index.html    # Function-level coverage
   ├── class_index.html       # Class-level coverage
   └── [file]_py.html         # Individual file coverage
```

### **🔍 Viewing Reports**

```bash
# Open HTML coverage report in browser
python -c "import webbrowser; webbrowser.open('htmlcov/index.html')"

# View coverage summary in terminal
python -m pytest --cov=alfresco_mcp_server --cov-report=term-missing

# Generate report with all formats
python scripts/run_tests.py coverage
```

### **📝 Test Execution Reports**

Each test run provides:
- **✅ Pass/Fail Status**: Detailed results for all test categories
- **⏱️ Performance Metrics**: Execution times and performance benchmarks  
- **🔍 Error Details**: Full stack traces and failure analysis
- **📊 Coverage Analysis**: Line-by-line code coverage with missing lines highlighted

### **🚀 Integration Test Reports**

The integration tests generate detailed execution logs:
- **Live Alfresco Validation**: Real server connectivity and response analysis
- **Tool Parameter Verification**: Automatic schema validation and error detection
- **Search Method Comparison**: AFTS vs CMIS performance and result analysis
- **End-to-End Workflows**: Complete document lifecycle validation

### **💡 Using Reports for Development**

1. **📊 HTML Coverage Report**: Visual identification of untested code paths
2. **📋 Function Coverage**: Find specific functions needing test coverage
3. **🎯 Missing Lines**: Direct links to uncovered code lines
4. **📈 Trend Analysis**: Track coverage improvements over time

The reports help identify areas needing additional testing and validate the test suite effectiveness.

## 📊 Coverage Analysis

### Viewing Coverage Reports

```bash
# Generate HTML report
pytest --cov-report=html
open htmlcov/index.html

# Terminal report with missing lines
pytest --cov-report=term-missing

# XML report for CI/CD
pytest --cov-report=xml
```

### Coverage Targets

| Module | Target | Current |
|--------|---------|---------|
| `fastmcp_server.py` | 74% | 91% |
| `config.py` | 90% | 96% |
| **Overall** | 80% | 82% |

### Improving Coverage

To improve test coverage:

1. **Identify uncovered lines:**
   ```bash
   pytest --cov-report=term-missing | grep "TOTAL"
   ```

2. **Add tests for missing paths:**
   - Error conditions
   - Edge cases
   - Exception handling

3. **Run coverage-specific tests:**
   ```bash
   pytest tests/test_coverage.py -v
   ```

## ⚡ Performance Testing

### Benchmark Tests

Performance tests validate response times:

```python
# Example performance test
async def test_search_performance():
    """Verify search performance under 10 seconds."""
    
    start_time = time.time()
    
    async with Client(mcp) as client:
        await client.call_tool("search_content", {
            "query": "*",
            "max_results": 10
        })
    
    duration = time.time() - start_time
    assert duration < 10.0, f"Search took {duration:.2f}s, expected <10s"
```

### Performance Targets

| Operation | Target | Typical |
|-----------|---------|---------|
| Search | <10s | 2-5s |
| Upload | <30s | 5-15s |
| Download | <15s | 3-8s |
| Properties | <5s | 1-3s |
| Concurrent (5x) | <15s | 8-12s |

### Running Performance Tests

```bash
# Run performance suite
python scripts/run_tests.py performance

# Run with timing details
pytest -m performance --durations=10
```

## 🔨 Test Development

### Adding New Tests

1. **Choose the right test type:**
   - Unit: Fast feedback, mocked dependencies
   - Integration: Real server interaction
   - Coverage: Edge cases and errors

2. **Follow naming conventions:**
   ```python
   # Unit tests
   async def test_tool_name_success():
   async def test_tool_name_error_case():
   
   # Integration tests  
   async def test_live_tool_integration():
   
   # Coverage tests
   async def test_edge_case_handling():
   ```

3. **Use appropriate fixtures** (see the sketch after this list):
   ```python
   # Mock fixtures for unit tests
   def test_with_mock_client(mock_alfresco_client):
       pass
   
   # Real client for integration
   def test_with_real_client(alfresco_client):
       pass
   ```
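
For reference, here is a minimal sketch of what the mocked fixture might look like. The real implementations live in `tests/conftest.py`; the fixture name matches the examples above, but the details here are illustrative:

```python
# Illustrative conftest.py fixture sketch -- not the repo's actual code.
import pytest
from unittest.mock import MagicMock


@pytest.fixture
def mock_alfresco_client():
    """Mocked Alfresco client for fast unit tests."""
    client = MagicMock()
    client.search_content.return_value = []  # Default: empty search results
    return client
```

The live `alfresco_client` fixture would instead build a real client from the environment variables described in Environment Setup above.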

### Test Patterns

**Arrange-Act-Assert Pattern:**
```python
async def test_example():
    # Arrange: Set up test data
    mock_client = create_mock_client()
    test_params = {"query": "test"}
    
    # Act: Execute the function
    result = await tool.execute(mock_client, test_params)
    
    # Assert: Verify the outcome
    assert "expected result" in result
    mock_client.method.assert_called_once()
```

**Error Testing Pattern:**
```python
async def test_error_handling():
    # Arrange: Set up error condition
    mock_client = Mock()
    mock_client.method.side_effect = ConnectionError("Network error")
    
    # Act & Assert: Verify error handling
    result = await tool.execute(mock_client, {})
    assert "❌ Error:" in result
    assert "Network error" in result
```

### Mocking Best Practices

```python
# Good: Mock at the right level
@patch('alfresco_mcp_server.fastmcp_server.ClientFactory')
async def test_with_proper_mock(mock_client_class):
    mock_instance = mock_client_class.return_value
    mock_instance.search.return_value = test_data
    
    # Test uses mocked instance
    result = await search_tool.execute(mock_instance, params)

# Good: Use realistic test data
def create_mock_search_results(count=3):
    return [
        {
            "entry": {
                "id": f"test-id-{i}",
                "name": f"test-doc-{i}.txt",
                "nodeType": "cm:content",
                "properties": {
                    "cm:title": f"Test Document {i}",
                    "cm:created": "2024-01-15T10:30:00.000Z"
                }
            }
        }
        for i in range(count)
    ]
```

## 🚨 Troubleshooting Tests

### Common Issues

**Test Failures:**

1. **Connection Errors in Integration Tests:**
   ```bash
   # Check Alfresco is running
   curl -u admin:admin http://localhost:8080/alfresco/api/-default-/public/alfresco/versions/1/nodes/-root-
   
   # Verify environment variables
   echo $ALFRESCO_URL
   echo $ALFRESCO_USERNAME
   ```

2. **Import Errors:**
   ```bash
   # Reinstall in development mode
   pip install -e .
   
   # Check Python path
   python -c "import alfresco_mcp_server; print(alfresco_mcp_server.__file__)"
   ```

3. **Coverage Too Low:**
   ```bash
   # Run coverage tests specifically
   pytest tests/test_coverage.py
   
   # Check what's missing
   pytest --cov-report=term-missing
   ```

**Performance Issues:**

1. **Slow Tests:**
   ```bash
   # Profile test execution time
pytest --durations=10
   
   # Run only fast tests
   pytest -m "not slow"
   ```

2. **Timeout Errors:**
   ```bash
   # Increase timeout for integration tests
   pytest --timeout=60 tests/test_integration.py
   ```

### Debugging Tests

```bash
# Run with pdb debugger
pytest --pdb tests/test_file.py::test_function

# Show full output (don't capture)
pytest -s tests/test_file.py

# Show full tracebacks and local variables on failure
pytest --tb=long --showlocals

# Run single test with maximum verbosity
pytest -vvv tests/test_file.py::test_function
```

## 🔄 Continuous Integration

### GitHub Actions Integration

Example CI configuration:

```yaml
name: Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      
      - name: Install dependencies
        run: |
          pip install -e .[test]
      
      - name: Run unit tests
        run: |
          python scripts/run_tests.py unit
      
      - name: Run coverage tests
        run: |
          python scripts/run_tests.py coverage
      
      - name: Upload coverage reports
        uses: codecov/codecov-action@v4
        with:
          files: ./coverage.xml
```

### Local Pre-commit Hooks

```bash
# Install pre-commit
pip install pre-commit

# Set up hooks
pre-commit install

# Run manually
pre-commit run --all-files
```

## 📈 Test Metrics

### Success Criteria

- ✅ **All tests passing**: **143/143 (100%)**
- ✅ **Coverage target**: >85% on main modules
- ✅ **Performance targets**: All benchmarks within limits
- ✅ **No linting errors**: Clean code quality

### Monitoring

```bash
# Daily test run
python scripts/run_tests.py all > test_results.log 2>&1

# Coverage tracking
pytest --cov-report=json
# Parse coverage.json for metrics

# Performance monitoring
python scripts/run_tests.py performance | grep "Duration:"
```
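
The `coverage.json` file referenced above can be parsed with a few lines of Python. A minimal sketch, assuming the standard coverage.py JSON report format (a top-level `totals` object with a `percent_covered` field):

```python
#!/usr/bin/env python3
"""Check total coverage from coverage.json against a threshold."""
import json
import sys

THRESHOLD = 85.0  # Matches --cov-fail-under in pytest.ini

with open("coverage.json") as f:
    report = json.load(f)

percent = report["totals"]["percent_covered"]
print(f"Total coverage: {percent:.1f}%")

if percent < THRESHOLD:
    print(f"FAIL: coverage below {THRESHOLD}% target")
    sys.exit(1)
```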

---

**🎯 Remember**: Good tests are your safety net for refactoring and new features. Keep them fast, reliable, and thorough! 
```

--------------------------------------------------------------------------------
/examples/batch_operations.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Batch Operations Example for Alfresco MCP Server

This example demonstrates efficient batch processing patterns:
- Bulk document uploads
- Parallel search operations
- Batch metadata updates
- Concurrent folder creation
- Performance optimization techniques

Useful for processing large numbers of documents or automating
repetitive tasks.
"""

import asyncio
import base64
import time
import uuid
from typing import List, Dict, Any
from fastmcp import Client
from alfresco_mcp_server.fastmcp_server import mcp


class BatchOperationsDemo:
    """Demonstrates efficient batch processing with Alfresco MCP Server."""
    
    def __init__(self):
        self.session_id = uuid.uuid4().hex[:8]
        self.batch_size = 5  # Number of operations per batch
        
    async def run_batch_demo(self):
        """Run comprehensive batch operations demonstration."""
        
        print("📦 Alfresco MCP Server - Batch Operations Demo")
        print("=" * 60)
        print(f"Session ID: {self.session_id}")
        print(f"Batch Size: {self.batch_size}")
        
        async with Client(mcp) as client:
            # Demo 1: Bulk Document Upload
            await self._demo_bulk_upload(client)
            
            # Demo 2: Parallel Search Operations
            await self._demo_parallel_search(client)
            
            # Demo 3: Batch Folder Creation
            await self._demo_batch_folders(client)
            
            # Demo 4: Concurrent Property Updates
            await self._demo_batch_properties(client)
            
            # Demo 5: Performance Comparison
            await self._demo_performance_comparison(client)
            
            print("\n✅ Batch Operations Demo Complete!")
    
    async def _demo_bulk_upload(self, client):
        """Demonstrate bulk document upload with progress tracking."""
        
        print("\n" + "="*60)
        print("📤 Demo 1: Bulk Document Upload")
        print("="*60)
        
        # Generate sample documents
        documents = self._generate_sample_documents(10)
        
        print(f"\n📋 Uploading {len(documents)} documents...")
        print("   Strategy: Async batch processing with progress tracking")
        
        # Method 1: Sequential upload (for comparison)
        print("\n1️⃣ Sequential Upload:")
        sequential_start = time.time()
        
        for i, doc in enumerate(documents[:3], 1):  # Only 3 for demo
            print(f"   📄 Uploading document {i}/3: {doc['name']}")
            
            result = await client.call_tool("upload_document", {
                "filename": doc['name'],
                "content_base64": doc['content_b64'],
                "parent_id": "-root-",
                "description": doc['description']
            })
            
            if "✅" in result[0].text:
                print(f"   ✅ Document {i} uploaded successfully")
            else:
                print(f"   ❌ Document {i} failed")
        
        sequential_time = time.time() - sequential_start
        print(f"   ⏱️  Sequential time: {sequential_time:.2f}s")
        
        # Method 2: Batch upload with semaphore
        print("\n2️⃣ Concurrent Upload (with rate limiting):")
        concurrent_start = time.time()
        
        semaphore = asyncio.Semaphore(3)  # Limit concurrent uploads
        
        async def upload_with_limit(doc, index):
            async with semaphore:
                print(f"   📄 Starting upload {index}: {doc['name']}")
                
                result = await client.call_tool("upload_document", {
                    "filename": doc['name'],
                    "content_base64": doc['content_b64'],
                    "parent_id": "-root-",
                    "description": doc['description']
                })
                
                success = "✅" in result[0].text
                print(f"   {'✅' if success else '❌'} Upload {index} completed")
                return success
        
        # Upload remaining documents concurrently
        remaining_docs = documents[3:8]  # Next 5 documents
        tasks = [
            upload_with_limit(doc, i+4) 
            for i, doc in enumerate(remaining_docs)
        ]
        
        results = await asyncio.gather(*tasks, return_exceptions=True)
        
        concurrent_time = time.time() - concurrent_start
        successful = sum(1 for r in results if r is True)
        
        print(f"   ⏱️  Concurrent time: {concurrent_time:.2f}s")
        print(f"   📊 Success rate: {successful}/{len(remaining_docs)}")
        print(f"   🚀 Speed improvement: {sequential_time/concurrent_time:.1f}x faster")
    
    async def _demo_parallel_search(self, client):
        """Demonstrate parallel search operations."""
        
        print("\n" + "="*60)
        print("🔍 Demo 2: Parallel Search Operations")
        print("="*60)
        
        # Different search queries to run in parallel
        search_queries = [
            ("Content search", "*", 10),
            ("Session docs", self.session_id, 5),
            ("Test files", "test", 8),
            ("Documents", "document", 12),
            ("Recent items", "2024", 15)
        ]
        
        print(f"\n📋 Running {len(search_queries)} searches in parallel...")
        
        start_time = time.time()
        
        async def parallel_search(query_info):
            name, query, max_results = query_info
            print(f"   🔎 Starting: {name} ('{query}')")
            
            try:
                result = await client.call_tool("search_content", {
                    "query": query,
                    "max_results": max_results
                })
                
                # Extract result count from response
                response_text = result[0].text
                if "Found" in response_text:
                    print(f"   ✅ {name}: Completed")
                else:
                    print(f"   📝 {name}: No results")
                
                return name, True, response_text
                
            except Exception as e:
                print(f"   ❌ {name}: Failed - {e}")
                return name, False, str(e)
        
        # Execute all searches in parallel
        search_tasks = [parallel_search(query) for query in search_queries]
        search_results = await asyncio.gather(*search_tasks)
        
        parallel_time = time.time() - start_time
        
        print(f"\n📊 Parallel Search Results:")
        print(f"   ⏱️  Total time: {parallel_time:.2f}s")
        print(f"   🎯 Searches completed: {len(search_results)}")
        
        successful = sum(1 for _, success, _ in search_results if success)
        print(f"   ✅ Success rate: {successful}/{len(search_results)}")
        
        # Show estimated sequential time
        avg_search_time = 0.5  # Estimate 500ms per search
        estimated_sequential = len(search_queries) * avg_search_time
        print(f"   🚀 vs Sequential (~{estimated_sequential:.1f}s): {estimated_sequential/parallel_time:.1f}x faster")
    
    async def _demo_batch_folders(self, client):
        """Demonstrate batch folder creation with hierarchical structure."""
        
        print("\n" + "="*60)
        print("📁 Demo 3: Batch Folder Creation")
        print("="*60)
        
        # Define folder structure
        folder_structure = [
            ("Projects", "Main projects folder"),
            ("Archives", "Archived projects"),
            ("Templates", "Document templates"),
            ("Reports", "Generated reports"),
            ("Temp", "Temporary workspace")
        ]
        
        print(f"\n📋 Creating {len(folder_structure)} folders concurrently...")
        
        async def create_folder_async(folder_info, index):
            name, description = folder_info
            folder_name = f"{name}_{self.session_id}"
            
            print(f"   📂 Creating folder {index+1}: {folder_name}")
            
            try:
                result = await client.call_tool("create_folder", {
                    "folder_name": folder_name,
                    "parent_id": "-root-",
                    "description": f"{description} - Batch demo {self.session_id}"
                })
                
                success = "✅" in result[0].text
                print(f"   {'✅' if success else '❌'} Folder {index+1}: {folder_name}")
                return success
                
            except Exception as e:
                print(f"   ❌ Folder {index+1} failed: {e}")
                return False
        
        start_time = time.time()
        
        # Create all folders concurrently
        folder_tasks = [
            create_folder_async(folder, i) 
            for i, folder in enumerate(folder_structure)
        ]
        
        folder_results = await asyncio.gather(*folder_tasks, return_exceptions=True)
        
        creation_time = time.time() - start_time
        successful_folders = sum(1 for r in folder_results if r is True)
        
        print(f"\n📊 Batch Folder Creation Results:")
        print(f"   ⏱️  Creation time: {creation_time:.2f}s")
        print(f"   ✅ Folders created: {successful_folders}/{len(folder_structure)}")
        print(f"   📈 Average time per folder: {creation_time/len(folder_structure):.2f}s")
    
    async def _demo_batch_properties(self, client):
        """Demonstrate batch property updates."""
        
        print("\n" + "="*60)
        print("⚙️ Demo 4: Batch Property Updates")
        print("="*60)
        
        # Simulate updating properties on multiple nodes
        node_updates = [
            ("-root-", {"cm:title": f"Root Updated {self.session_id}", "cm:description": "Batch update demo"}),
            ("-root-", {"custom:project": "Batch Demo", "custom:session": self.session_id}),
            ("-root-", {"cm:tags": "demo,batch,mcp", "custom:timestamp": str(int(time.time()))}),
        ]
        
        print(f"\n📋 Updating properties on {len(node_updates)} nodes...")
        
        async def update_properties_async(update_info, index):
            node_id, properties = update_info
            
            print(f"   ⚙️ Updating properties {index+1}: {len(properties)} properties")
            
            try:
                result = await client.call_tool("update_node_properties", {
                    "node_id": node_id,
                    "properties": properties
                })
                
                success = "✅" in result[0].text
                print(f"   {'✅' if success else '❌'} Properties {index+1} updated")
                return success
                
            except Exception as e:
                print(f"   ❌ Properties {index+1} failed: {e}")
                return False
        
        start_time = time.time()
        
        # Update all properties concurrently
        update_tasks = [
            update_properties_async(update, i) 
            for i, update in enumerate(node_updates)
        ]
        
        update_results = await asyncio.gather(*update_tasks, return_exceptions=True)
        
        update_time = time.time() - start_time
        successful_updates = sum(1 for r in update_results if r is True)
        
        print(f"\n📊 Batch Property Update Results:")
        print(f"   ⏱️  Update time: {update_time:.2f}s")
        print(f"   ✅ Updates completed: {successful_updates}/{len(node_updates)}")
    
    async def _demo_performance_comparison(self, client):
        """Compare sequential vs concurrent operation performance."""
        
        print("\n" + "="*60)
        print("⚡ Demo 5: Performance Comparison")
        print("="*60)
        
        # Test operations
        operations = [
            ("search", "search_content", {"query": f"test_{i}", "max_results": 3})
            for i in range(5)
        ]
        
        print(f"\n📊 Comparing sequential vs concurrent execution...")
        print(f"   Operations: {len(operations)} search operations")
        
        # Sequential execution
        print("\n1️⃣ Sequential Execution:")
        sequential_start = time.time()
        
        for i, (op_type, tool_name, params) in enumerate(operations):
            print(f"   🔄 Operation {i+1}/{len(operations)}")
            try:
                await client.call_tool(tool_name, params)
                print(f"   ✅ Operation {i+1} completed")
            except Exception as e:
                print(f"   ❌ Operation {i+1} failed: {e}")
        
        sequential_time = time.time() - sequential_start
        
        # Concurrent execution
        print("\n2️⃣ Concurrent Execution:")
        concurrent_start = time.time()
        
        async def execute_operation(op_info, index):
            op_type, tool_name, params = op_info
            print(f"   🔄 Starting operation {index+1}")
            
            try:
                await client.call_tool(tool_name, params)
                print(f"   ✅ Operation {index+1} completed")
                return True
            except Exception as e:
                print(f"   ❌ Operation {index+1} failed: {e}")
                return False
        
        concurrent_tasks = [
            execute_operation(op, i) 
            for i, op in enumerate(operations)
        ]
        
        concurrent_results = await asyncio.gather(*concurrent_tasks, return_exceptions=True)
        concurrent_time = time.time() - concurrent_start
        
        # Performance summary
        print(f"\n📈 Performance Comparison Results:")
        print(f"   Sequential time: {sequential_time:.2f}s")
        print(f"   Concurrent time: {concurrent_time:.2f}s")
        print(f"   Speed improvement: {sequential_time/concurrent_time:.1f}x")
        print(f"   Time saved: {sequential_time-concurrent_time:.2f}s ({(1-concurrent_time/sequential_time)*100:.1f}%)")
        
        print(f"\n💡 Batch Processing Best Practices:")
        print(f"   • Use async/await for I/O bound operations")
        print(f"   • Implement rate limiting with semaphores")
        print(f"   • Handle exceptions gracefully in batch operations")
        print(f"   • Monitor progress with appropriate logging")
        print(f"   • Consider memory usage for large batches")
    
    def _generate_sample_documents(self, count: int) -> List[Dict[str, Any]]:
        """Generate sample documents for testing."""
        
        documents = []
        
        for i in range(count):
            content = f"""Document {i+1}
            
Session: {self.session_id}
Created: {time.strftime('%Y-%m-%d %H:%M:%S')}
Type: Batch Demo Document
Index: {i+1} of {count}

This is a sample document created during the batch operations demo.
It contains some sample content for testing purposes.

Content sections:
- Introduction
- Main content  
- Conclusion

Document properties:
- Unique ID: {uuid.uuid4()}
- Processing batch: {self.session_id}
- Creation timestamp: {int(time.time())}
"""
            
            documents.append({
                "name": f"batch_doc_{self.session_id}_{i+1:03d}.txt",
                "content": content,
                "content_b64": base64.b64encode(content.encode('utf-8')).decode('utf-8'),
                "description": f"Batch demo document {i+1} from session {self.session_id}"
            })
        
        return documents


async def main():
    """Main function to run batch operations demo."""
    
    print("Starting Batch Operations Demo...")
    
    try:
        demo = BatchOperationsDemo()
        await demo.run_batch_demo()
        
        print("\n🎉 Batch Operations Demo Complete!")
        print("\n📚 What you learned:")
        print("• Efficient batch document upload patterns")
        print("• Parallel search operation techniques")
        print("• Concurrent folder creation strategies")
        print("• Batch property update methods")
        print("• Performance optimization approaches")
        print("• Rate limiting and error handling")
        
    except Exception as e:
        print(f"\n💥 Batch demo failed: {e}")


if __name__ == "__main__":
    asyncio.run(main()) 
```

--------------------------------------------------------------------------------
/docs/api_reference.md:
--------------------------------------------------------------------------------

```markdown
# API Reference

Complete reference for all Alfresco MCP Server tools, resources, and prompts. This document provides detailed information about parameters, responses, and usage examples for our modular FastMCP 2.0 architecture.

## 📋 Overview

The Alfresco MCP Server provides 15 tools for document management, 1 repository resource, and 1 AI-powered prompt for analysis.

### Quick Reference

**🔍 Search Tools (4)**
| Tool | Purpose | Input | Output |
|------|---------|-------|--------|
| [`search_content`](#search_content) | Search documents/folders | query, max_results, node_type | Search results with nodes |
| [`advanced_search`](#advanced_search) | Advanced search with filters | query, content_type, created_after, etc. | Filtered search results |
| [`search_by_metadata`](#search_by_metadata) | Search by metadata properties | property_name, property_value, comparison | Property-based results |
| [`cmis_search`](#cmis_search) | CMIS SQL queries | cmis_query, preset, max_results | SQL query results |

**🛠️ Core Tools (11)**
| Tool | Purpose | Input | Output |
|------|---------|-------|--------|
| [`browse_repository`](#browse_repository) | Browse repository folders | node_id | Folder contents |
| [`repository_info`](#repository_info) | Get repository information | None | Repository status/info |
| [`upload_document`](#upload_document) | Upload new document | filename, content_base64, parent_id | Upload status |
| [`download_document`](#download_document) | Download document content | node_id, save_to_disk | Base64 encoded content |
| [`create_folder`](#create_folder) | Create new folder | folder_name, parent_id, description | Creation status |
| [`get_node_properties`](#get_node_properties) | Get node metadata | node_id | Properties object |
| [`update_node_properties`](#update_node_properties) | Update metadata | node_id, name, title, description, author | Update status |
| [`delete_node`](#delete_node) | Delete document/folder | node_id, permanent | Deletion status |
| [`checkout_document`](#checkout_document) | Lock document for editing | node_id, download_for_editing | Checkout status |
| [`checkin_document`](#checkin_document) | Save new version | node_id, comment, major_version, file_path | Checkin status |
| [`cancel_checkout`](#cancel_checkout) | Cancel checkout/unlock | node_id | Cancel status |

**📄 Resources (1)**
| Resource | Purpose | URI | Output |
|----------|---------|-----|--------|
| [`repository_info`](#repository_info_resource) | Repository status and configuration | alfresco://repository/info | Repository details |

## 🔍 Search Tools

### `search_content`

Search for documents and folders in the Alfresco repository.

**Parameters:**
```json
{
  "query": "string",          // Search query (required)
  "max_results": "integer"    // Maximum results to return (optional, default: 25)
}
```

**Response:**
```json
{
  "results": [
    {
      "id": "node-id",
      "name": "document.pdf",
      "nodeType": "cm:content",
      "isFile": true,
      "isFolder": false,
      "properties": {
        "cm:title": "Document Title",
        "cm:description": "Document description",
        "cm:created": "2024-01-15T10:30:00.000Z",
        "cm:modified": "2024-01-15T15:45:00.000Z",
        "cm:creator": "admin",
        "cm:modifier": "user1"
      },
      "path": "/Company Home/Sites/example/documentLibrary/document.pdf"
    }
  ],
  "totalCount": 1
}
```

**Example:**
```python
# Basic search
result = await client.call_tool("search_content", {
    "query": "financial report",
    "max_results": 10
})

# Wildcard search
result = await client.call_tool("search_content", {
    "query": "*",
    "max_results": 5
})

# Specific term search
result = await client.call_tool("search_content", {
    "query": "budget 2024"
})
```

### `advanced_search`

Advanced search with filters, sorting, and AFTS query language support.

**Parameters:**
```json
{
  "query": "string",            // AFTS query (required)
  "content_type": "string",     // Content type filter (optional)
  "created_after": "string",    // Date filter YYYY-MM-DD (optional)
  "created_before": "string",   // Date filter YYYY-MM-DD (optional)
  "sort_field": "string",       // Sort field (optional, default: "score")
  "sort_order": "string",       // Sort order "ASC" or "DESC" (optional)
  "max_results": "integer"      // Maximum results (optional, default: 25)
}
```

**Example:**
```python
# Advanced search with filters
result = await client.call_tool("advanced_search", {
    "query": "TYPE:cm:content AND cm:title:financial",
    "content_type": "pdf",
    "created_after": "2024-01-01",
    "sort_field": "cm:modified",
    "sort_order": "DESC",
    "max_results": 20
})
```

### `search_by_metadata`

Search by specific metadata properties with comparison operators.

**Parameters:**
```json
{
  "property_name": "string",    // Property name (required) e.g., "cm:title"
  "property_value": "string",   // Property value to search for (required)
  "comparison": "string",       // Comparison operator (optional, default: "equals")
  "max_results": "integer"      // Maximum results (optional, default: 25)
}
```

**Comparison operators:** `equals`, `contains`, `starts_with`, `ends_with`, `greater_than`, `less_than`

**Example:**
```python
# Search by title containing text
result = await client.call_tool("search_by_metadata", {
    "property_name": "cm:title",
    "property_value": "Annual Report",
    "comparison": "contains",
    "max_results": 15
})

# Search by creation date
result = await client.call_tool("search_by_metadata", {
    "property_name": "cm:created",
    "property_value": "2024-01-01",
    "comparison": "greater_than"
})
```

### `cmis_search`

Execute CMIS SQL queries for complex content discovery.

**Parameters:**
```json
{
  "cmis_query": "string",       // CMIS SQL query (required)
  "preset": "string",           // Preset query type (optional)
  "max_results": "integer"      // Maximum results (optional, default: 25)
}
```

**Preset options:** `all_documents`, `all_folders`, `recent_content`

**Example:**
```python
# CMIS SQL query
result = await client.call_tool("cmis_search", {
    "cmis_query": "SELECT * FROM cmis:document WHERE cmis:name LIKE '%report%'",
    "max_results": 30
})

# Using preset
result = await client.call_tool("cmis_search", {
    "preset": "recent_content",
    "max_results": 10
})
```

## 🗂️ Repository Operations

### `browse_repository`

Browse repository folders and their contents.

**Parameters:**
```json
{
  "node_id": "string"           // Folder node ID (optional, default: "-root-")
}
```

**Example:**
```python
# Browse root folder
result = await client.call_tool("browse_repository", {
    "node_id": "-root-"
})

# Browse specific folder
result = await client.call_tool("browse_repository", {
    "node_id": "folder-abc123-def456"
})
```

### `repository_info`

Get repository information, version, and configuration details.

**Parameters:** None

**Example:**
```python
# Get repository information
result = await client.call_tool("repository_info", {})
```

## 📤 Document Upload

### `upload_document`

Upload a new document to the Alfresco repository.

**Parameters:**
```json
{
  "filename": "string",         // Document filename (required)
  "content_base64": "string",   // Base64 encoded content (required)
  "parent_id": "string",        // Parent folder ID (optional, default: "-root-")
  "description": "string"       // Document description (optional)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "abc123-def456-ghi789",
  "filename": "document.pdf",
  "parentId": "-root-",
  "path": "/Company Home/document.pdf"
}
```

**Example:**
```python
import base64

# Prepare content
content = "This is my document content"
content_b64 = base64.b64encode(content.encode()).decode()

# Upload to root
result = await client.call_tool("upload_document", {
    "filename": "my_document.txt",
    "content_base64": content_b64,
    "parent_id": "-root-",
    "description": "My first document"
})

# Upload to specific folder
result = await client.call_tool("upload_document", {
    "filename": "report.pdf",
    "content_base64": pdf_content_b64,
    "parent_id": "folder-node-id",
    "description": "Monthly report"
})
```

## 📥 Document Download

### `download_document`

Download the content of a document from the repository.

**Parameters:**
```json
{
  "node_id": "string",        // Document node ID (required)
  "save_to_disk": "boolean"   // Save the content to disk instead of returning it inline (optional)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "abc123-def456-ghi789",
  "filename": "document.pdf",
  "mimeType": "application/pdf",
  "size": 1024,
  "content_base64": "JVBERi0xLjQKJ..."
}
```

**Example:**
```python
# Download document
result = await client.call_tool("download_document", {
    "node_id": "abc123-def456-ghi789"
})

# Decode content from the JSON response
import base64
import json

data = json.loads(result[0].text)
content = base64.b64decode(data["content_base64"])
print(content.decode())

# Or save the content directly to disk
result = await client.call_tool("download_document", {
    "node_id": "abc123-def456-ghi789",
    "save_to_disk": True
})
```

## 🔄 Version Control

### `checkout_document`

Check out a document for editing (locks the document).

**Parameters:**
```json
{
  "node_id": "string",               // Document node ID (required)
  "download_for_editing": "boolean"  // Also download a working copy for editing (optional)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "abc123-def456-ghi789",
  "workingCopyId": "abc123-def456-ghi789-wc",
  "status": "checked_out"
}
```

### `checkin_document`

Check in a document with a new version.

**Parameters:**
```json
{
  "node_id": "string",          // Document node ID (required)
  "comment": "string",          // Version comment (optional)
  "major_version": "boolean",   // Major version increment (optional, default: false)
  "file_path": "string"         // Path to the updated content file (optional)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "abc123-def456-ghi789",
  "version": "1.1",
  "comment": "Updated content",
  "isMajorVersion": false
}
```

**Example:**
```python
# Checkout document
checkout_result = await client.call_tool("checkout_document", {
    "node_id": "doc-node-id"
})

# Make changes (simulated)
# ... edit the document ...

# Checkin as minor version
checkin_result = await client.call_tool("checkin_document", {
    "node_id": "doc-node-id",
    "comment": "Fixed typos and updated content",
    "major_version": False
})

# Checkin as major version
major_checkin = await client.call_tool("checkin_document", {
    "node_id": "doc-node-id", 
    "comment": "Major content overhaul",
    "major_version": True
})
```

### `cancel_checkout`

Cancel the checkout of a document, unlocking it without saving changes.

**Parameters:**
```json
{
  "node_id": "string"           // Document node ID (required)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "doc-node-id",
  "message": "Checkout cancelled successfully",
  "unlocked": true
}
```

**Example:**
```python
# Cancel checkout (unlock without saving)
cancel_result = await client.call_tool("cancel_checkout", {
    "node_id": "doc-node-id"
})

# Typical workflow: checkout -> cancel if needed
checkout_result = await client.call_tool("checkout_document", {
    "node_id": "doc-node-id"
})

# If you decide not to make changes
cancel_result = await client.call_tool("cancel_checkout", {
    "node_id": "doc-node-id"
})
```

## 🗑️ Node Deletion

### `delete_node`

Delete a document or folder from the repository.

**Parameters:**
```json
{
  "node_id": "string",      // Node ID to delete (required)
  "permanent": "boolean"    // Permanent deletion (optional, default: false)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "abc123-def456-ghi789",
  "permanent": false,
  "status": "moved_to_trash"
}
```

**Example:**
```python
# Move to trash (soft delete)
result = await client.call_tool("delete_node", {
    "node_id": "node-to-delete",
    "permanent": False
})

# Permanent deletion
result = await client.call_tool("delete_node", {
    "node_id": "node-to-delete",
    "permanent": True
})
```

## ⚙️ Property Management

### `get_node_properties`

Retrieve all properties and metadata for a node.

**Parameters:**
```json
{
  "node_id": "string"   // Node ID (required)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "abc123-def456-ghi789",
  "properties": {
    "cm:name": "document.pdf",
    "cm:title": "Important Document",
    "cm:description": "This is an important document",
    "cm:created": "2024-01-15T10:30:00.000Z",
    "cm:modified": "2024-01-15T15:45:00.000Z",
    "cm:creator": "admin",
    "cm:modifier": "user1",
    "cm:owner": "admin",
    "sys:node-uuid": "abc123-def456-ghi789",
    "sys:store-protocol": "workspace",
    "sys:store-identifier": "SpacesStore"
  }
}
```

### `update_node_properties`

Update properties and metadata for a node.

**Parameters:**
```json
{
  "node_id": "string",      // Node ID (required)
  "properties": "object"    // Properties to update (required)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "abc123-def456-ghi789",
  "updatedProperties": {
    "cm:title": "Updated Title",
    "cm:description": "Updated description"
  }
}
```

**Example:**
```python
# Get current properties
props_result = await client.call_tool("get_node_properties", {
    "node_id": "abc123-def456-ghi789"
})

print("Current properties:", props_result[0].text)

# Update properties
update_result = await client.call_tool("update_node_properties", {
    "node_id": "abc123-def456-ghi789",
    "properties": {
        "cm:title": "Updated Document Title",
        "cm:description": "This document has been updated",
        "custom:project": "Project Alpha",
        "custom:status": "approved"
    }
})
```

## 📁 Folder Operations

### `create_folder`

Create a new folder in the repository.

**Parameters:**
```json
{
  "folder_name": "string",    // Folder name (required)
  "parent_id": "string",      // Parent folder ID (optional, default: "-root-")
  "description": "string"     // Folder description (optional)
}
```

**Response:**
```json
{
  "success": true,
  "nodeId": "folder-abc123-def456",
  "folderName": "New Folder",
  "parentId": "-root-",
  "path": "/Company Home/New Folder"
}
```

**Example:**
```python
# Create folder in root
result = await client.call_tool("create_folder", {
    "folder_name": "Project Documents",
    "parent_id": "-root-",
    "description": "Documents for the current project"
})

# Create nested folder
result = await client.call_tool("create_folder", {
    "folder_name": "Reports",
    "parent_id": "parent-folder-id",
    "description": "Monthly and quarterly reports"
})
```

## 📚 Resources

### Repository Resources

Access repository information and status through the repository info resource (documented in full in the [`repository_info` resource](#repository_info_resource) reference below):

```python
# Repository information
info = await client.read_resource("alfresco://repository/info")
```

**Resource Response:**
```json
{
  "repository": {
    "edition": "Community",
    "version": "7.4.0",
    "status": "healthy",
    "modules": ["content-services", "search-services"]
  }
}
```

## 💭 Prompts

### `search_and_analyze`

Generate AI-powered analysis prompts for search results.

**Parameters:**
```json
{
  "query": "string",            // Search query (required)
  "analysis_type": "string"     // Analysis type: summary, detailed, trends, compliance (required)
}
```

**Response:**
```json
{
  "messages": [
    {
      "role": "user", 
      "content": {
        "type": "text",
        "text": "Based on the Alfresco search results for 'financial reports', provide a detailed analysis..."
      }
    }
  ]
}
```

**Example:**
```python
# Generate analysis prompt
prompt_result = await client.get_prompt("search_and_analyze", {
    "query": "quarterly reports 2024",
    "analysis_type": "summary"
})

print("Generated prompt:")
print(prompt_result.messages[0].content.text)
```

## 🔍 Error Handling

All tools return consistent error responses:

```json
{
  "success": false,
  "error": {
    "code": "ALFRESCO_ERROR",
    "message": "Authentication failed",
    "details": "Invalid username or password"
  }
}
```

Common error codes (see the handling sketch after this list):
- `AUTHENTICATION_ERROR`: Invalid credentials
- `NODE_NOT_FOUND`: Specified node doesn't exist
- `PERMISSION_DENIED`: Insufficient permissions
- `INVALID_PARAMETER`: Missing or invalid parameters
- `CONNECTION_ERROR`: Cannot connect to Alfresco server
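
Client code can inspect responses for these codes and retry transient failures. A minimal sketch, assuming the tool returns the JSON error envelope shown above as its text content (the retry policy and helper name are illustrative):

```python
import json

from fastmcp import Client
from alfresco_mcp_server.fastmcp_server import mcp

RETRYABLE = {"CONNECTION_ERROR"}  # Illustrative retry policy


async def call_with_error_check(tool: str, params: dict, retries: int = 2):
    """Call a tool, raising on documented error envelopes."""
    async with Client(mcp) as client:
        for attempt in range(retries + 1):
            result = await client.call_tool(tool, params)
            payload = json.loads(result[0].text)
            if payload.get("success", True):
                return payload
            code = payload.get("error", {}).get("code")
            if code in RETRYABLE and attempt < retries:
                continue  # Transient failure: retry
            raise RuntimeError(f"{code}: {payload['error']['message']}")
```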

## 📄 Resources

Resources provide access to repository information and metadata without modifying content.

### `repository_info` (Resource) {#repository_info_resource}

Get comprehensive repository information including version, license, and configuration.

**URI:** `alfresco://repository/info`

**Response:**
```json
{
  "repository": {
    "id": "alfresco-community-2023.2",
    "edition": "Community",
    "version": {
      "major": "23",
      "minor": "2", 
      "patch": "0",
      "hotfix": "0",
      "schema": "14005",
      "label": "r135432-b14",
      "display": "Community v23.2.0 (r135432-b14) schema 14005"
    },
    "status": {
      "readOnly": false,
      "auditEnabled": true,
      "quickShareEnabled": true,
      "thumbnailGenerationEnabled": true
    },
    "license": {
      "issuedAt": "2024-01-01T00:00:00.000Z",
      "expiresAt": "2024-12-31T23:59:59.000Z",
      "remainingDays": 365,
      "holder": "Community Edition",
      "mode": "ENTERPRISE",
      "entitlements": {
        "maxUsers": 0,
        "maxDocs": 0
      }
    },
    "modules": [
      {
        "id": "alfresco-share-services",
        "title": "Alfresco Share Services",
        "version": "23.2.0",
        "installState": "INSTALLED",
        "installDate": "2024-01-15T10:30:00.000Z"
      }
    ]
  }
}
```

**Example:**
```python
# Read repository information
info = await client.read_resource("alfresco://repository/info")
print(info[0].text)

# Repository info is also available as a tool
tool_result = await client.call_tool("repository_info", {})
```

**Use Cases:**
- Health checks and monitoring (see the sketch below)
- Version compatibility verification  
- License compliance checking
- System status reporting
- Administrative dashboards
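
For the health-check use case, a minimal sketch that reads the resource and checks the documented status fields (assumes the JSON structure shown above is returned as text):

```python
import json

from fastmcp import Client
from alfresco_mcp_server.fastmcp_server import mcp


async def repository_is_healthy() -> bool:
    """Return True if the repository reports a writable, healthy state."""
    async with Client(mcp) as client:
        info = await client.read_resource("alfresco://repository/info")
    data = json.loads(info[0].text)
    status = data["repository"]["status"]
    return status.get("readOnly") is False
```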

## 📊 Rate Limits and Performance

- **Default timeout**: 30 seconds per operation
- **Concurrent operations**: Up to 10 simultaneous requests (see the throttling sketch below)
- **File size limits**: 100MB per upload
- **Search limits**: Maximum 1000 results per search
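
To stay within the concurrency limit, client code can throttle with a semaphore. A minimal sketch (the helper name and call-list shape are illustrative):

```python
import asyncio

from fastmcp import Client
from alfresco_mcp_server.fastmcp_server import mcp

MAX_CONCURRENT = 10  # Documented concurrent-operation limit


async def run_throttled(calls: list[tuple[str, dict]]):
    """Run many tool calls without exceeding the concurrency limit."""
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)

    async with Client(mcp) as client:
        async def one(tool: str, params: dict):
            async with semaphore:
                return await client.call_tool(tool, params)

        return await asyncio.gather(
            *(one(tool, params) for tool, params in calls)
        )
```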

## 🔐 Security Considerations

- All communications use HTTPS when available
- Credentials are passed securely via environment variables
- Base64 encoding for document content transfer
- Node IDs are validated before operations

---

**📝 Note**: This API reference covers version 1.1.0 of the Alfresco MCP Server. This release includes all 15 tools with FastMCP 2.0 implementation. 
```