#
tokens: 43521/50000 5/146 files (page 5/7)
lines: off (toggle) GitHub
raw markdown copy
This is page 5 of 7. Use http://codebase.md/tosin2013/mcp-codebase-insight?lines=false&page={x} to view the full context.

# Directory Structure

```
├── .bumpversion.cfg
├── .codecov.yml
├── .compile-venv-py3.11
│   ├── bin
│   │   ├── activate
│   │   ├── activate.csh
│   │   ├── activate.fish
│   │   ├── Activate.ps1
│   │   ├── coverage
│   │   ├── coverage-3.11
│   │   ├── coverage3
│   │   ├── pip
│   │   ├── pip-compile
│   │   ├── pip-sync
│   │   ├── pip3
│   │   ├── pip3.11
│   │   ├── py.test
│   │   ├── pyproject-build
│   │   ├── pytest
│   │   ├── python
│   │   ├── python3
│   │   ├── python3.11
│   │   └── wheel
│   └── pyvenv.cfg
├── .env.example
├── .github
│   ├── agents
│   │   ├── DebugAgent.agent.md
│   │   ├── DocAgent.agent.md
│   │   ├── README.md
│   │   ├── TestAgent.agent.md
│   │   └── VectorStoreAgent.agent.md
│   ├── copilot-instructions.md
│   └── workflows
│       ├── build-verification.yml
│       ├── publish.yml
│       └── tdd-verification.yml
├── .gitignore
├── async_fixture_wrapper.py
├── CHANGELOG.md
├── CLAUDE.md
├── codebase_structure.txt
├── component_test_runner.py
├── CONTRIBUTING.md
├── core_workflows.txt
├── create_release_issues.sh
├── debug_tests.md
├── Dockerfile
├── docs
│   ├── adrs
│   │   └── 001_use_docker_for_qdrant.md
│   ├── api.md
│   ├── components
│   │   └── README.md
│   ├── cookbook.md
│   ├── development
│   │   ├── CODE_OF_CONDUCT.md
│   │   ├── CONTRIBUTING.md
│   │   └── README.md
│   ├── documentation_map.md
│   ├── documentation_summary.md
│   ├── features
│   │   ├── adr-management.md
│   │   ├── code-analysis.md
│   │   └── documentation.md
│   ├── getting-started
│   │   ├── configuration.md
│   │   ├── docker-setup.md
│   │   ├── installation.md
│   │   ├── qdrant_setup.md
│   │   └── quickstart.md
│   ├── qdrant_setup.md
│   ├── README.md
│   ├── SSE_INTEGRATION.md
│   ├── system_architecture
│   │   └── README.md
│   ├── templates
│   │   └── adr.md
│   ├── testing_guide.md
│   ├── troubleshooting
│   │   ├── common-issues.md
│   │   └── faq.md
│   ├── vector_store_best_practices.md
│   └── workflows
│       └── README.md
├── error_logs.txt
├── examples
│   └── use_with_claude.py
├── github-actions-documentation.md
├── Makefile
├── module_summaries
│   ├── backend_summary.txt
│   ├── database_summary.txt
│   └── frontend_summary.txt
├── output.txt
├── package-lock.json
├── package.json
├── PLAN.md
├── prepare_codebase.sh
├── PULL_REQUEST.md
├── pyproject.toml
├── pytest.ini
├── README.md
├── requirements-3.11.txt
├── requirements-3.11.txt.backup
├── requirements-dev.txt
├── requirements.in
├── requirements.txt
├── run_build_verification.sh
├── run_fixed_tests.sh
├── run_test_with_path_fix.sh
├── run_tests.py
├── scripts
│   ├── check_qdrant_health.sh
│   ├── compile_requirements.sh
│   ├── load_example_patterns.py
│   ├── macos_install.sh
│   ├── README.md
│   ├── setup_qdrant.sh
│   ├── start_mcp_server.sh
│   ├── store_code_relationships.py
│   ├── store_report_in_mcp.py
│   ├── validate_knowledge_base.py
│   ├── validate_poc.py
│   ├── validate_vector_store.py
│   └── verify_build.py
├── server.py
├── setup_qdrant_collection.py
├── setup.py
├── src
│   └── mcp_codebase_insight
│       ├── __init__.py
│       ├── __main__.py
│       ├── asgi.py
│       ├── core
│       │   ├── __init__.py
│       │   ├── adr.py
│       │   ├── cache.py
│       │   ├── component_status.py
│       │   ├── config.py
│       │   ├── debug.py
│       │   ├── di.py
│       │   ├── documentation.py
│       │   ├── embeddings.py
│       │   ├── errors.py
│       │   ├── health.py
│       │   ├── knowledge.py
│       │   ├── metrics.py
│       │   ├── prompts.py
│       │   ├── sse.py
│       │   ├── state.py
│       │   ├── task_tracker.py
│       │   ├── tasks.py
│       │   └── vector_store.py
│       ├── models.py
│       ├── server_test_isolation.py
│       ├── server.py
│       ├── utils
│       │   ├── __init__.py
│       │   └── logger.py
│       └── version.py
├── start-mcpserver.sh
├── summary_document.txt
├── system-architecture.md
├── system-card.yml
├── test_fix_helper.py
├── test_fixes.md
├── test_function.txt
├── test_imports.py
├── tests
│   ├── components
│   │   ├── conftest.py
│   │   ├── test_core_components.py
│   │   ├── test_embeddings.py
│   │   ├── test_knowledge_base.py
│   │   ├── test_sse_components.py
│   │   ├── test_stdio_components.py
│   │   ├── test_task_manager.py
│   │   └── test_vector_store.py
│   ├── config
│   │   └── test_config_and_env.py
│   ├── conftest.py
│   ├── integration
│   │   ├── fixed_test2.py
│   │   ├── test_api_endpoints.py
│   │   ├── test_api_endpoints.py-e
│   │   ├── test_communication_integration.py
│   │   └── test_server.py
│   ├── README.md
│   ├── README.test.md
│   ├── test_build_verifier.py
│   └── test_file_relationships.py
└── trajectories
    └── tosinakinosho
        ├── anthropic_filemap__claude-3-sonnet-20240229__t-0.00__p-1.00__c-3.00___db62b9
        │   └── db62b9
        │       └── config.yaml
        ├── default__claude-3-5-sonnet-20240620__t-0.00__p-1.00__c-3.00___03565e
        │   └── 03565e
        │       ├── 03565e.traj
        │       └── config.yaml
        └── default__openrouter
            └── anthropic
                └── claude-3.5-sonnet-20240620:beta__t-0.00__p-1.00__c-3.00___03565e
                    └── 03565e
                        ├── 03565e.pred
                        ├── 03565e.traj
                        └── config.yaml
```

# Files

--------------------------------------------------------------------------------
/tests/integration/test_server.py:
--------------------------------------------------------------------------------

```python
"""Test server API endpoints."""

import sys
import os

# Ensure the src directory is in the Python path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))

import pytest
import pytest_asyncio
from httpx import AsyncClient
import uuid
import logging
import time
from pathlib import Path
from datetime import datetime, timezone
from typing import Dict, List, Any, Optional

from src.mcp_codebase_insight.core.config import ServerConfig
from src.mcp_codebase_insight.core.vector_store import VectorStore
from src.mcp_codebase_insight.core.knowledge import Pattern
from src.mcp_codebase_insight.core.embeddings import SentenceTransformerEmbedding
from src.mcp_codebase_insight.server import CodebaseAnalysisServer
from src.mcp_codebase_insight.server_test_isolation import get_isolated_server_state

# Setup logger
logger = logging.getLogger(__name__)

# Environment variables or defaults for vector store testing
QDRANT_URL = os.environ.get("QDRANT_URL", "http://localhost:6333") 
TEST_COLLECTION_NAME = os.environ.get("TEST_COLLECTION_NAME", "test_vector_search")
EMBEDDING_MODEL = os.environ.get("EMBEDDING_MODEL", "all-MiniLM-L6-v2")

# Path to test repository
TEST_REPO_PATH = Path("tests/fixtures/test_repo")

@pytest_asyncio.fixture
async def setup_test_vector_store(test_server_client):
    """Set up a test vector store with sample patterns for the server tests.
    
    This fixture initializes the vector store component in the server with test patterns,
    allowing the vector store search endpoint to be tested properly.
    """
    # Get server state from the test client
    logger.info("Attempting to get server health status")
    request = await test_server_client.get("/health")
    if request.status_code != 200:
        logger.warning(f"Server health check failed with status code {request.status_code}")
        yield None
        return
    
    # Get the server state through test isolation utilities
    logger.info("Getting isolated server state")
    server_state = get_isolated_server_state()
    if not server_state:
        logger.warning("Could not get isolated server state, server_state is None")
        yield None
        return
    
    logger.info(f"Got server state, instance ID: {server_state.instance_id}")
    logger.info(f"Server state components: {server_state.list_components()}")
        
    # Create and initialize a test vector store
    try:
        # Create the embedder first
        logger.info(f"Creating embedding model with model name: {EMBEDDING_MODEL}")
        embedder = SentenceTransformerEmbedding(model_name=EMBEDDING_MODEL)
        await embedder.initialize()
        
        # Now create the vector store with the embedder
        logger.info(f"Creating vector store with URL: {QDRANT_URL}, collection: {TEST_COLLECTION_NAME}")
        vector_store = VectorStore(
            url=QDRANT_URL,
            embedder=embedder,
            collection_name=TEST_COLLECTION_NAME
        )
        
        # Delete any existing collection with this name
        try:
            logger.info("Cleaning up vector store before use")
            await vector_store.cleanup()
            logger.info("Vector store cleaned up")
        except Exception as e:
            logger.warning(f"Error during vector store cleanup: {str(e)}")
            
        # Initialize the vector store
        logger.info("Initializing vector store")
        await vector_store.initialize()
        logger.info(f"Initialized vector store with collection: {TEST_COLLECTION_NAME}")
        
        # Add test patterns
        logger.info("Adding test patterns to vector store")
        await add_test_patterns(vector_store, embedder)
        
        # Register the vector store in the server state
        logger.info("Registering vector store component in server state")
        server_state.register_component("vector_store", vector_store)
        logger.info("Registered vector store component in server state")
        
        yield vector_store
        
        # Cleanup
        try:
            logger.info("Closing vector store")
            await vector_store.close()
            logger.info("Vector store closed")
        except Exception as e:
            logger.warning(f"Error during vector store closure: {str(e)}")
            
    except Exception as e:
        logger.error(f"Error setting up test vector store: {str(e)}", exc_info=True)
        yield None

async def add_test_patterns(store: VectorStore, embedder: SentenceTransformerEmbedding):
    """Add test patterns to the vector store for testing."""
    patterns = []
    
    # Add sample patterns for testing
    patterns.append(Pattern(
        id=str(uuid.uuid4()),
        text="""class SearchResult:
    \"\"\"Represents a search result from the vector store.\"\"\"
    def __init__(self, id: str, score: float, metadata: Optional[Dict] = None):
        self.id = id
        self.score = score
        self.metadata = metadata or {}
        
    def to_dict(self):
        \"\"\"Convert to dictionary.\"\"\"
        return {
            "id": self.id,
            "score": self.score,
            "metadata": self.metadata
        }""",
        title="SearchResult Class",
        description="A class for vector store search results",
        pattern_type="code",
        tags=["python", "class", "search", "vector-store"],
        metadata={
            "language": "python",
            "file_path": "src/core/models.py",
            "line_range": "10-25",
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "type": "code"
        }
    ))
    
    patterns.append(Pattern(
        id=str(uuid.uuid4()),
        text="""async def search(
    self,
    query: str,
    limit: int = 5,
    threshold: float = 0.7,
    file_type: Optional[str] = None,
    path_pattern: Optional[str] = None
) -> List[Dict]:
    \"\"\"Search for patterns matching the query.\"\"\"
    # Generate embedding for the query
    embedding = await self.embedding_model.embed(query)
    
    # Prepare filter conditions
    filter_conditions = {}
    if file_type:
        filter_conditions["language"] = file_type
    if path_pattern:
        filter_conditions["file_path"] = {"$like": path_pattern}
        
    # Perform the search
    results = await self.vector_store.search(
        embedding=embedding,
        limit=limit,
        filter_conditions=filter_conditions
    )
    
    # Filter by threshold
    filtered_results = [r for r in results if r.score >= threshold]
    
    return filtered_results""",
        title="Vector Store Search Method",
        description="Async method to search the vector store with filters",
        pattern_type="code",
        tags=["python", "async", "function", "search"],
        metadata={
            "language": "python",
            "file_path": "src/core/search.py", 
            "line_range": "50-75",
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "type": "code"
        }
    ))
    
    patterns.append(Pattern(
        id=str(uuid.uuid4()),
        text="""# Vector Store Configuration
        
## Search Parameters

- **query**: The text to search for similar patterns
- **threshold**: Similarity score threshold (0.0 to 1.0)
- **limit**: Maximum number of results to return
- **file_type**: Filter by programming language/file type
- **path_pattern**: Filter by file path pattern

## Recommended Threshold Values

- **0.9-1.0**: Very high precision, almost exact matches
- **0.8-0.9**: High precision, strongly similar
- **0.7-0.8**: Good balance (default)
- **0.6-0.7**: Higher recall, more results
- **0.5-0.6**: Very high recall, may include less relevant matches""",
        title="Vector Store Documentation",
        description="Documentation on vector store search parameters",
        pattern_type="documentation",
        tags=["documentation", "markdown", "search", "parameters"],
        metadata={
            "language": "markdown",
            "file_path": "docs/vector_store.md",
            "line_range": "50-70",
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "type": "documentation"
        }
    ))
    
    # Store patterns with embeddings
    for pattern in patterns:
        # Generate embedding for the pattern text
        embedding = await embedder.embed(pattern.text)
        
        # Store the pattern
        await store.store_pattern(
            id=pattern.id,
            text=pattern.text,
            title=pattern.title,
            description=pattern.description,
            pattern_type=pattern.pattern_type,
            tags=pattern.tags,
            metadata=pattern.metadata,
            embedding=embedding
        )
        logger.info(f"Added pattern: {pattern.title}")
    
    logger.info(f"Added {len(patterns)} patterns to the test vector store")
    return patterns

# Use the test_client fixture from conftest.py
@pytest_asyncio.fixture(scope="function")
async def test_server_client(httpx_test_client):
    """Get a test client for server API testing.
    
    This uses the httpx_test_client from conftest.py to ensure
    proper event loop and resource management.
    """
    yield httpx_test_client

@pytest.fixture
def test_code():
    """Return a sample code snippet for testing."""
    return """
def example_function(x: int) -> int:
    return x * 2
"""

@pytest.fixture
def test_issue():
    """Return a sample issue description for testing."""
    return "Error in function: example_function returns incorrect results for negative values"

@pytest.fixture
def test_adr():
    """Return a sample ADR structure for testing."""
    return {
        "title": "Test ADR",
        "status": "Proposed",
        "context": "This is a test ADR for automated testing purposes.",
        "decision": "We've decided to use this test ADR format.",
        "consequences": {
            "positive": ["Test positive consequence"],
            "negative": ["Test negative consequence"]
        },
        "options": [
            {
                "title": "Test option",
                "description": "Test description",
                "pros": ["Test pro"],
                "cons": ["Test con"]
            }
        ]
    }

@pytest.mark.asyncio
async def test_health_check(test_server_client: AsyncClient):
    """Test health check endpoint."""
    response = await test_server_client.get("/health")
    assert response.status_code == 200
    data = response.json()
    assert "status" in data

@pytest.mark.asyncio
async def test_metrics(test_server_client: AsyncClient):
    """Test metrics endpoint."""
    response = await test_server_client.get("/metrics")
    # Some test servers may not have metrics enabled
    if response.status_code == 200:
        data = response.json()
        assert "metrics" in data
    else:
        logger.info(f"Metrics endpoint not available (status: {response.status_code})")
        assert response.status_code in [404, 503]  # Not found or service unavailable

@pytest.mark.asyncio
async def test_analyze_code(test_server_client: AsyncClient, test_code: str):
    """Test code analysis endpoint."""
    response = await test_server_client.post(
        "/tools/analyze-code",
        json={
            "name": "analyze-code",
            "arguments": {
                "code": test_code,
                "context": {}
            }
        }
    )
    # Component might not be available in test server
    if response.status_code == 200:
        data = response.json()
        assert "content" in data
    else:
        logger.info(f"Code analysis endpoint not available (status: {response.status_code})")
        assert response.status_code in [404, 503]  # Not found or service unavailable

@pytest.mark.asyncio
async def test_create_adr(test_server_client: AsyncClient, test_adr: dict):
    """Test ADR creation endpoint."""
    response = await test_server_client.post(
        "/tools/create-adr",
        json={
            "name": "create-adr",
            "arguments": test_adr
        }
    )
    # Component might not be available in test server
    if response.status_code == 200:
        data = response.json()
        assert "content" in data
    else:
        logger.info(f"ADR creation endpoint not available (status: {response.status_code})")
        assert response.status_code in [404, 503]  # Not found or service unavailable

@pytest.mark.asyncio
async def test_debug_issue(test_server_client: AsyncClient, test_issue: str):
    """Test issue debugging endpoint."""
    response = await test_server_client.post(
        "/tools/debug-issue",
        json={
            "name": "debug-issue",
            "arguments": {
                "issue": test_issue,
                "context": {}
            }
        }
    )
    # Component might not be available in test server
    if response.status_code == 200:
        data = response.json()
        assert "content" in data
    else:
        logger.info(f"Debug issue endpoint not available (status: {response.status_code})")
        assert response.status_code in [404, 503]  # Not found or service unavailable

@pytest.mark.asyncio
async def test_search_knowledge(test_server_client: AsyncClient):
    """Test knowledge search endpoint."""
    response = await test_server_client.post(
        "/tools/search-knowledge",
        json={
            "name": "search-knowledge", 
            "arguments": {
                "query": "test query",
                "limit": 5
            }
        }
    )
    # Component might not be available in test server
    if response.status_code == 200:
        data = response.json()
        assert "content" in data
    else:
        logger.info(f"Knowledge search endpoint not available (status: {response.status_code})")
        assert response.status_code in [404, 503]  # Not found or service unavailable

@pytest.mark.asyncio
async def test_get_task(test_server_client: AsyncClient):
    """Test task endpoint."""
    # Create a test task ID
    test_id = f"test_task_{uuid.uuid4().hex}"
    
    response = await test_server_client.post(
        "/task",
        json={
            "task_id": test_id,
            "status": "pending",
            "result": None
        }
    )
    assert response.status_code in [200, 404, 503]  # Allow various responses depending on component availability

@pytest.mark.asyncio
async def test_invalid_request(test_server_client: AsyncClient):
    """Test invalid request handling."""
    response = await test_server_client.post(
        "/tools/invalid-tool",
        json={
            "name": "invalid-tool",
            "arguments": {}
        }
    )
    assert response.status_code in [404, 400]  # Either not found or bad request

@pytest.mark.asyncio
async def test_not_found(test_server_client: AsyncClient):
    """Test 404 handling."""
    response = await test_server_client.get("/nonexistent-endpoint")
    assert response.status_code == 404

@pytest.mark.asyncio
async def test_server_lifecycle():
    """Test server lifecycle."""
    # This is a safety check to ensure we're not breaking anything
    # The actual server lifecycle is tested by the conftest fixtures
    assert True  # Replace with real checks if needed

@pytest.mark.asyncio
async def test_vector_store_search_threshold_validation(test_server_client: AsyncClient, setup_test_vector_store):
    """Test that the vector store search endpoint validates threshold values."""
    # Skip if vector store setup failed
    if setup_test_vector_store is None:
        pytest.skip("Vector store setup failed, skipping test")
    
    # Test invalid threshold greater than 1.0
    response = await test_server_client.get("/api/vector-store/search?query=test&threshold=1.5")
    assert response.status_code == 422
    assert "threshold" in response.text
    assert "less than or equal to" in response.text

    # Test invalid threshold less than 0.0
    response = await test_server_client.get("/api/vector-store/search?query=test&threshold=-0.5")
    assert response.status_code == 422
    assert "threshold" in response.text
    assert "greater than or equal to" in response.text

    # Test boundary value 0.0 (should be valid)
    response = await test_server_client.get("/api/vector-store/search?query=test&threshold=0.0")
    assert response.status_code == 200
    data = response.json()
    assert "results" in data
    assert data["threshold"] == 0.0

    # Test boundary value 1.0 (should be valid)
    response = await test_server_client.get("/api/vector-store/search?query=test&threshold=1.0")
    assert response.status_code == 200
    data = response.json()
    assert "results" in data
    assert data["threshold"] == 1.0

    # Test with valid filter parameters
    response = await test_server_client.get("/api/vector-store/search?query=test&threshold=0.7&file_type=python&path_pattern=src/*")
    assert response.status_code == 200
    data = response.json()
    assert "results" in data
    assert "query" in data
    assert "total_results" in data
    assert "limit" in data
    assert "threshold" in data
    assert data["threshold"] == 0.7

    # If we have results, check their format
    if data["results"]:
        result = data["results"][0]
        assert "id" in result
        assert "score" in result
        assert "text" in result
        assert "file_path" in result
        assert "line_range" in result
        assert "type" in result
        assert "language" in result
        assert "timestamp" in result

@pytest.mark.asyncio
async def test_vector_store_search_functionality(test_server_client: AsyncClient, setup_test_vector_store):
    """Test comprehensive vector store search functionality.
    
    This test validates the full functionality of the vector store search endpoint,
    including result format, filtering, and metadata handling.
    
    The test checks:
    1. Basic search returns properly formatted results
    2. File type filtering works correctly
    3. Path pattern filtering works correctly
    4. Limit parameter controls result count
    5. Results contain all required metadata fields
    """
    # Skip if vector store setup failed
    if setup_test_vector_store is None:
        pytest.skip("Vector store setup failed, skipping test")
    
    # Test basic search functionality
    response = await test_server_client.get(
        "/api/vector-store/search",
        params={
            "query": "test query",
            "threshold": 0.7,
            "limit": 5
        }
    )
    
    # We should have a successful response now that the vector store is initialized
    assert response.status_code == 200
    data = response.json()
    
    # Validate response structure
    assert "query" in data
    assert data["query"] == "test query"
    assert "results" in data
    assert "threshold" in data
    assert data["threshold"] == 0.7
    assert "total_results" in data
    assert "limit" in data
    assert data["limit"] == 5
    
    # Test with file type filter
    response = await test_server_client.get(
        "/api/vector-store/search",
        params={
            "query": "test query",
            "threshold": 0.7,
            "limit": 5,
            "file_type": "python"
        }
    )
    assert response.status_code == 200
    data = response.json()
    assert "file_type" in data
    assert data["file_type"] == "python"
    
    # Test with path pattern filter
    response = await test_server_client.get(
        "/api/vector-store/search",
        params={
            "query": "test query",
            "threshold": 0.7,
            "limit": 5,
            "path_pattern": "src/**/*.py"
        }
    )
    assert response.status_code == 200
    data = response.json()
    assert "path_pattern" in data
    assert data["path_pattern"] == "src/**/*.py"
    
    # Test with limit=1
    response = await test_server_client.get(
        "/api/vector-store/search",
        params={
            "query": "test query",
            "threshold": 0.7,
            "limit": 1
        }
    )
    assert response.status_code == 200
    data = response.json()
    assert data["limit"] == 1
    
    # If we have results, verify the result format
    if data["results"]:
        result = data["results"][0]
        # Check all required fields are present
        assert "id" in result
        assert "score" in result
        assert "text" in result
        assert "file_path" in result
        assert "line_range" in result
        assert "type" in result
        assert "language" in result
        assert "timestamp" in result
        
        # Validate data types
        assert isinstance(result["id"], str)
        assert isinstance(result["score"], (int, float))
        assert isinstance(result["text"], str)
        assert isinstance(result["file_path"], str)
        assert isinstance(result["line_range"], str)
        assert isinstance(result["type"], str)
        assert isinstance(result["language"], str)
        assert isinstance(result["timestamp"], str)

@pytest.mark.asyncio
async def test_vector_store_search_error_handling(test_server_client: AsyncClient, setup_test_vector_store):
    """Test error handling for vector store search endpoint.
    
    This test validates the error handling capabilities of the vector store search endpoint
    when provided with invalid or missing required parameters.
    
    The test checks:
    1. Missing query parameter returns appropriate error
    2. Invalid limit parameter (negative/zero) returns appropriate error
    """
    # Skip if vector store setup failed
    if setup_test_vector_store is None:
        pytest.skip("Vector store setup failed, skipping test")
    
    # Test missing query parameter
    response = await test_server_client.get(
        "/api/vector-store/search",
        params={
            "threshold": 0.7,
            "limit": 5
        }
    )
    
    # Missing required query parameter should return 422
    assert response.status_code == 422
    data = response.json()
    assert "detail" in data
    assert any("query" in error["loc"] for error in data["detail"])
    
    # Test invalid limit parameter (negative)
    response = await test_server_client.get(
        "/api/vector-store/search",
        params={
            "query": "test query",
            "threshold": 0.7,
            "limit": -5
        }
    )
    assert response.status_code == 422
    data = response.json()
    assert "detail" in data
    assert any("limit" in error["loc"] for error in data["detail"])
    
    # Test invalid limit parameter (zero)
    response = await test_server_client.get(
        "/api/vector-store/search",
        params={
            "query": "test query",
            "threshold": 0.7,
            "limit": 0
        }
    )
    assert response.status_code == 422
    data = response.json()
    assert "detail" in data
    assert any("limit" in error["loc"] for error in data["detail"])

@pytest.mark.asyncio
async def test_vector_store_search_performance(test_server_client: AsyncClient, setup_test_vector_store):
    """Test performance of vector store search endpoint.
    
    This test measures the response time of the vector store search endpoint
    to ensure it meets performance requirements.
    
    The test checks:
    1. Search response time is within acceptable limits (< 1000ms)
    2. Multiple consecutive searches maintain performance
    """
    # Skip if vector store setup failed
    if setup_test_vector_store is None:
        pytest.skip("Vector store setup failed, skipping test")
        
    # Define performance thresholds
    max_response_time_ms = 1000  # 1 second maximum response time
    
    # Perform timed search tests
    for i in range(3):  # Test 3 consecutive searches
        start_time = time.time()
        
        response = await test_server_client.get(
            "/api/vector-store/search",
            params={
                "query": f"test performance query {i}",
                "threshold": 0.7,
                "limit": 5
            }
        )
        
        end_time = time.time()
        response_time_ms = (end_time - start_time) * 1000
        
        assert response.status_code == 200
        logger.info(f"Search {i+1} response time: {response_time_ms:.2f}ms")
        
        # Assert performance is within acceptable limits
        assert response_time_ms < max_response_time_ms, \
            f"Search response time ({response_time_ms:.2f}ms) exceeds threshold ({max_response_time_ms}ms)"
        
        # Verify we got a valid response
        data = response.json()
        assert "results" in data
        assert "query" in data

@pytest.mark.asyncio
async def test_vector_store_search_threshold_validation_mock(test_server_client: AsyncClient):
    """Test that the vector store search endpoint validates threshold values using mock approach.
    
    This test isolates FastAPI's parameter validation from the actual server initialization.
    It doesn't test the vector store implementation but only the parameter validation logic.
    """
    # First, check if server is responding at all by checking health endpoint
    health_response = await test_server_client.get("/health")
    
    # If we can't even reach the server, skip the test
    if health_response.status_code >= 500:
        pytest.skip(f"Server is not responding (status: {health_response.status_code})")
    
    # Create a list of test cases: (threshold, expected_validation_error)
    # None for expected_validation_error means we expect validation to pass
    test_cases = [
        # Invalid thresholds (should fail validation)
        (1.5, "less than or equal to 1.0"),
        (-0.5, "greater than or equal to 0.0"),
        # Valid thresholds (should pass validation)
        (0.0, None),
        (1.0, None),
        (0.7, None),
    ]
    
    # Try each test case
    for threshold, expected_validation_error in test_cases:
        # Skip testing health check which will never have parameter validation errors
        # Here we're just testing the static validation in the FastAPI route definition
        # This will trigger validation errors regardless of server state
        response = await test_server_client.get(f"/api/vector-store/search?query=test&threshold={threshold}")
        
        # Check response based on expected validation
        if expected_validation_error:
            # If validation error is expected, check for 422 status
            # Note: If we got 503, parameter validation didn't even happen
            # In some test environments this is normal, so we'll skip the assertion
            if response.status_code == 503:
                logger.info(f"Server returned 503 for threshold={threshold}, "
                           f"parameter validation couldn't be tested due to server state")
                continue
                
            # If we get here, we should have a 422 validation error
            assert response.status_code == 422, \
                f"Expected 422 for invalid threshold {threshold}, got {response.status_code}: {response.text}"
            
            # Check if validation error message contains expected text
            assert expected_validation_error in response.text, \
                f"Expected validation error to contain '{expected_validation_error}', got: {response.text}"
            
            logger.info(f"Threshold {threshold} correctly failed validation with message containing '{expected_validation_error}'")
        else:
            # For valid thresholds, skip assertion if server returned 503
            if response.status_code == 503:
                logger.info(f"Server returned 503 for valid threshold={threshold}, "
                           f"but parameter validation passed (otherwise would be 422)")
                continue
                
            # If we get a non-503 response for a valid threshold, it should be 200
            # (or 404 if the endpoint doesn't exist in test server)
            assert response.status_code in [200, 404], \
                f"Expected 200 for valid threshold {threshold}, got {response.status_code}: {response.text}"
            
            logger.info(f"Threshold {threshold} correctly passed validation")
    
    logger.info("Completed threshold parameter validation tests")

```

--------------------------------------------------------------------------------
/output.txt:
--------------------------------------------------------------------------------

```
============================= test session starts ==============================
platform darwin -- Python 3.13.2, pytest-8.3.5, pluggy-1.5.0 -- /Users/tosinakinosho/workspaces/mcp-codebase-insight/.venv/bin/python3.13
cachedir: .pytest_cache
rootdir: /Users/tosinakinosho/workspaces/mcp-codebase-insight
configfile: pytest.ini
plugins: cov-6.0.0, anyio-4.9.0, asyncio-0.26.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=session, asyncio_default_test_loop_scope=function
collecting ... collected 106 items

tests/components/test_core_components.py::test_adr_manager PASSED        [  0%]
tests/components/test_core_components.py::test_knowledge_base PASSED     [  1%]
tests/components/test_core_components.py::test_task_manager PASSED       [  2%]
tests/components/test_core_components.py::test_metrics_manager PASSED    [  3%]
tests/components/test_core_components.py::test_health_manager PASSED     [  4%]
tests/components/test_core_components.py::test_cache_manager PASSED      [  5%]
tests/components/test_core_components.py::test_documentation_manager PASSED [  6%]
tests/components/test_core_components.py::test_debug_system PASSED       [  7%]
tests/components/test_embeddings.py::test_embedder_initialization PASSED [  8%]
tests/components/test_embeddings.py::test_embedder_embedding PASSED      [  9%]
tests/components/test_knowledge_base.py::test_knowledge_base_initialization PASSED [ 10%]
tests/components/test_knowledge_base.py::test_add_and_get_pattern PASSED [ 11%]
tests/components/test_knowledge_base.py::test_find_similar_patterns PASSED [ 12%]
tests/components/test_knowledge_base.py::test_update_pattern PASSED      [ 13%]
tests/components/test_sse_components.py::test_mcp_server_initialization PASSED [ 14%]
tests/components/test_sse_components.py::test_register_tools PASSED      [ 15%]
tests/components/test_sse_components.py::test_get_starlette_app FAILED   [ 16%]
tests/components/test_sse_components.py::test_create_sse_server FAILED   [ 16%]
tests/components/test_sse_components.py::test_vector_search_tool PASSED  [ 17%]
tests/components/test_sse_components.py::test_knowledge_search_tool PASSED [ 18%]
tests/components/test_sse_components.py::test_adr_list_tool FAILED       [ 19%]
tests/components/test_sse_components.py::test_task_status_tool FAILED    [ 20%]
tests/components/test_sse_components.py::test_sse_handle_connect FAILED  [ 21%]

=================================== FAILURES ===================================
____________________________ test_get_starlette_app ____________________________

mock_create_sse = <MagicMock name='create_sse_server' id='5349118976'>
mcp_server = <src.mcp_codebase_insight.core.sse.MCP_CodebaseInsightServer object at 0x13ed274d0>

    @patch('mcp_codebase_insight.core.sse.create_sse_server')
    async def test_get_starlette_app(mock_create_sse, mcp_server):
        """Test getting the Starlette app for the MCP server."""
        # Set up the mock
        mock_app = MagicMock()
        mock_create_sse.return_value = mock_app
    
        # Reset the cached app to force a new creation
        mcp_server._starlette_app = None
    
        # Get the Starlette app
        app = mcp_server.get_starlette_app()
    
        # Verify tools were registered
        assert mcp_server.tools_registered is True
    
        # Verify create_sse_server was called with the MCP server
>       mock_create_sse.assert_called_once_with(mcp_server.mcp_server)

tests/components/test_sse_components.py:178: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <MagicMock name='create_sse_server' id='5349118976'>
args = (<mcp.server.fastmcp.server.FastMCP object at 0x13ed24410>,), kwargs = {}
msg = "Expected 'create_sse_server' to be called once. Called 0 times."

    def assert_called_once_with(self, /, *args, **kwargs):
        """assert that the mock was called exactly once and that that call was
        with the specified arguments."""
        if not self.call_count == 1:
            msg = ("Expected '%s' to be called once. Called %s times.%s"
                   % (self._mock_name or 'mock',
                      self.call_count,
                      self._calls_repr()))
>           raise AssertionError(msg)
E           AssertionError: Expected 'create_sse_server' to be called once. Called 0 times.

/opt/homebrew/Cellar/[email protected]/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/mock.py:988: AssertionError
---------------------------- Captured stdout setup -----------------------------
{"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.688819Z"}
------------------------------ Captured log setup ------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.688819Z"}
----------------------------- Captured stdout call -----------------------------
{"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.693189Z"}
{"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.693272Z"}
{"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.693321Z"}
{"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.697672Z"}
{"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.697772Z"}
{"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.698263Z"}
{"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.698395Z"}
{"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.698465Z"}
{"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.698514Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.693189Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.693272Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.693321Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.697672Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.697772Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.698263Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.698395Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.698465Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.698514Z"}
____________________________ test_create_sse_server ____________________________

mock_starlette = <MagicMock name='Starlette' id='5349123680'>
mock_transport = <MagicMock name='CodebaseInsightSseTransport' id='5349125024'>

    @patch('mcp_codebase_insight.core.sse.CodebaseInsightSseTransport')
    @patch('mcp_codebase_insight.core.sse.Starlette')
    async def test_create_sse_server(mock_starlette, mock_transport):
        """Test creating the SSE server."""
        # Set up mocks
        mock_mcp = MagicMock(spec=FastMCP)
        mock_transport_instance = MagicMock()
        mock_transport.return_value = mock_transport_instance
        mock_app = MagicMock()
        mock_starlette.return_value = mock_app
    
        # Create the SSE server
        app = create_sse_server(mock_mcp)
    
        # Verify CodebaseInsightSseTransport was initialized correctly
>       mock_transport.assert_called_once_with("/sse")

tests/components/test_sse_components.py:199: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <MagicMock name='CodebaseInsightSseTransport' id='5349125024'>
args = ('/sse',), kwargs = {}
msg = "Expected 'CodebaseInsightSseTransport' to be called once. Called 0 times."

    def assert_called_once_with(self, /, *args, **kwargs):
        """assert that the mock was called exactly once and that that call was
        with the specified arguments."""
        if not self.call_count == 1:
            msg = ("Expected '%s' to be called once. Called %s times.%s"
                   % (self._mock_name or 'mock',
                      self.call_count,
                      self._calls_repr()))
>           raise AssertionError(msg)
E           AssertionError: Expected 'CodebaseInsightSseTransport' to be called once. Called 0 times.

/opt/homebrew/Cellar/[email protected]/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/mock.py:988: AssertionError
----------------------------- Captured stdout call -----------------------------
{"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754343Z"}
{"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754481Z"}
{"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754566Z"}
{"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754606Z"}
{"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754640Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754343Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754481Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754566Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754606Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.754640Z"}
______________________________ test_adr_list_tool ______________________________

mcp_server = <src.mcp_codebase_insight.core.sse.MCP_CodebaseInsightServer object at 0x13ed7ef90>

    async def test_adr_list_tool(mcp_server):
        """Test the ADR list tool."""
        # Make sure tools are registered
        if not mcp_server.tools_registered:
            mcp_server.register_tools()
    
        # Mock the FastMCP add_tool method to capture calls
        with patch.object(mcp_server.mcp_server, 'add_tool') as mock_add_tool:
            # Re-register the ADR list tool
            mcp_server._register_adr()
    
            # Verify tool was registered with correct parameters
            mock_add_tool.assert_called_once()
            args = mock_add_tool.call_args[0]
>           assert args[0] == "list-adrs"  # Tool name
E           IndexError: tuple index out of range

tests/components/test_sse_components.py:319: IndexError
---------------------------- Captured stdout setup -----------------------------
{"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.796820Z"}
------------------------------ Captured log setup ------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.796820Z"}
----------------------------- Captured stdout call -----------------------------
{"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.797106Z"}
{"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.797158Z"}
{"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.797197Z"}
{"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.799588Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.797106Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.797158Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.797197Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.799588Z"}
____________________________ test_task_status_tool _____________________________

mcp_server = <src.mcp_codebase_insight.core.sse.MCP_CodebaseInsightServer object at 0x13ef72030>

    async def test_task_status_tool(mcp_server):
        """Test the task status tool."""
        # Make sure tools are registered
        if not mcp_server.tools_registered:
            mcp_server.register_tools()
    
        # Mock the FastMCP add_tool method to capture calls
        with patch.object(mcp_server.mcp_server, 'add_tool') as mock_add_tool:
            # Re-register the task status tool
            mcp_server._register_task()
    
            # Verify tool was registered with correct parameters
            mock_add_tool.assert_called_once()
            args = mock_add_tool.call_args[0]
>           assert args[0] == "get-task-status"  # Tool name
E           IndexError: tuple index out of range

tests/components/test_sse_components.py:338: IndexError
---------------------------- Captured stdout setup -----------------------------
{"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.806759Z"}
------------------------------ Captured log setup ------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.806759Z"}
----------------------------- Captured stdout call -----------------------------
{"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.807096Z"}
{"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.807156Z"}
{"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.807197Z"}
{"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.810043Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.807096Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.807156Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:51:43.807197Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.810043Z"}
___________________________ test_sse_handle_connect ____________________________

mock_starlette = <MagicMock name='Starlette' id='5349128384'>
mock_transport = <MagicMock name='SseServerTransport' id='5349128720'>

    @patch('mcp_codebase_insight.core.sse.SseServerTransport')
    @patch('mcp_codebase_insight.core.sse.Starlette')
    async def test_sse_handle_connect(mock_starlette, mock_transport):
        """Test the SSE connection handling functionality."""
        # Set up mocks
        mock_transport_instance = MagicMock()
        mock_transport.return_value = mock_transport_instance
    
        mock_mcp = MagicMock(spec=FastMCP)
        # For MCP v1.5.0, create a mock run method instead of initialization options
        mock_mcp.run = AsyncMock()
    
        mock_request = MagicMock()
        mock_request.client = "127.0.0.1"
        mock_request.scope = {"type": "http"}
    
        # Mock the transport's connect_sse method
        mock_streams = (AsyncMock(), AsyncMock())
        mock_cm = MagicMock()
        mock_cm.__aenter__ = AsyncMock(return_value=mock_streams)
        mock_cm.__aexit__ = AsyncMock()
        mock_transport_instance.connect_sse.return_value = mock_cm
    
        # Create a mock handler and add it to our mock app instance
        handle_sse = AsyncMock()
        mock_app = MagicMock()
        mock_starlette.return_value = mock_app
    
        # Set up a mock route that we can access
        mock_route = MagicMock()
        mock_route.path = "/sse/"
        mock_route.endpoint = handle_sse
        mock_app.routes = [mock_route]
    
        # Create the SSE server
        app = create_sse_server(mock_mcp)
    
        # Extract the actual handler from the route configuration
>       routes_kwarg = mock_starlette.call_args.kwargs.get('routes', [])
E       AttributeError: 'NoneType' object has no attribute 'kwargs'

tests/components/test_sse_components.py:381: AttributeError
----------------------------- Captured stdout call -----------------------------
{"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817486Z"}
{"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817591Z"}
{"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817674Z"}
{"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817714Z"}
{"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817749Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817486Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817591Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817674Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817714Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:51:43.817749Z"}
--------------------------- Captured stdout teardown ---------------------------
Cleaning up test collection: test_collection_a41f92f0
HTTP Request: DELETE http://localhost:6333/collections/test_collection_a41f92f0 "HTTP/1.1 200 OK"
Found 0 server states at end of session
---------------------------- Captured log teardown -----------------------------
INFO     conftest:conftest.py:169 Cleaning up test collection: test_collection_a41f92f0
INFO     httpx:_client.py:1025 HTTP Request: DELETE http://localhost:6333/collections/test_collection_a41f92f0 "HTTP/1.1 200 OK"
INFO     conftest:conftest.py:530 Found 0 server states at end of session

---------- coverage: platform darwin, python 3.13.2-final-0 ----------
Name                                                Stmts   Miss Branch BrPart  Cover   Missing
-----------------------------------------------------------------------------------------------
src/mcp_codebase_insight/__init__.py                    3      0      0      0   100%
src/mcp_codebase_insight/__main__.py                   28     28      0      0     0%   3-76
src/mcp_codebase_insight/asgi.py                        5      5      0      0     0%   3-11
src/mcp_codebase_insight/core/__init__.py               2      0      0      0   100%
src/mcp_codebase_insight/core/adr.py                  127     50     26      5    54%   75-111, 118-134, 186, 202, 204->206, 207, 209, 220-227
src/mcp_codebase_insight/core/cache.py                168     42     68     26    68%   33, 36, 42->exit, 70-71, 77-78, 90, 97->exit, 102-103, 109, 124-125, 142-143, 160-161, 167-169, 173-176, 181, 187, 193, 199, 205, 217, 220, 225, 228->exit, 234, 236->238, 238->exit, 243-249, 254, 258, 261->265, 265->270, 267-268, 274
src/mcp_codebase_insight/core/component_status.py       8      0      0      0   100%
src/mcp_codebase_insight/core/config.py                63     23     14      4    60%   38, 44-45, 47-51, 64-67, 91-105, 109, 117, 121-122
src/mcp_codebase_insight/core/debug.py                122     69     34      0    34%   58-78, 82-97, 122-128, 138-153, 161-168, 172-205
src/mcp_codebase_insight/core/di.py                    99     62     14      0    33%   40, 53-76, 80-82, 86-97, 101-106, 110-112, 116-120, 124-132, 136-144, 148-156, 160-169
src/mcp_codebase_insight/core/documentation.py        165    111     52      1    25%   53-77, 84-100, 134, 150-167, 175-189, 201-214, 228-316
src/mcp_codebase_insight/core/embeddings.py            77     28     18      3    61%   29->exit, 48-58, 79-83, 88, 104-106, 114-128, 132
src/mcp_codebase_insight/core/errors.py                96     27      2      0    70%   55-58, 62, 77, 88, 99, 110, 121, 132, 143, 154, 165, 176, 187, 198, 209, 220, 231, 242, 253, 264, 275, 279-282
src/mcp_codebase_insight/core/health.py               140     58     26      8    54%   52-71, 75-98, 111, 113, 128, 146, 156-162, 168->178, 170-171, 180-181, 190-191, 215-216, 232-233, 235-236, 259-260, 262-263
src/mcp_codebase_insight/core/knowledge.py            253    100     74     25    55%   95, 105->109, 114, 119-124, 129->exit, 131-138, 143->exit, 145-151, 155, 167, 170->175, 172-173, 208->223, 230, 250, 252->254, 254->256, 257, 258->260, 261, 263, 265, 270->285, 298, 303, 305, 307, 320->318, 335-351, 361-379, 404-421, 432-445, 457-470, 479-488, 496-503, 507-514, 518-524
src/mcp_codebase_insight/core/metrics.py              108     41     38     11    58%   43, 47, 58-59, 62-65, 70, 74, 80-83, 89-100, 111, 122, 127-128, 138, 145, 151, 153, 165-183
src/mcp_codebase_insight/core/prompts.py               72     72     16      0     0%   3-262
src/mcp_codebase_insight/core/sse.py                  220    116     40      9    46%   29-37, 62-108, 130-141, 153-154, 162, 171-178, 186-188, 202-207, 239, 280-285, 293, 302-303, 315->321, 330-331, 338-339, 343-344, 349-380, 393-394, 398-419, 432-433, 437-458, 471-472, 476-483, 502->504
src/mcp_codebase_insight/core/state.py                168    120     54      0    22%   48-53, 63-77, 84-93, 97-98, 102, 106-144, 148, 161-162, 167, 171, 175, 179, 183-335
src/mcp_codebase_insight/core/task_tracker.py          48     28     12      0    33%   29-37, 45-52, 60-78, 86, 94, 102, 106-107
src/mcp_codebase_insight/core/tasks.py                259    172     74      1    26%   89-113, 117-134, 138-140, 144-162, 203, 217-233, 237-245, 254-264, 268-318, 323-341, 349-357, 363-377, 384-397, 404-415, 422-432, 439-462
src/mcp_codebase_insight/core/vector_store.py         177     73     26      5    58%   62->67, 78->93, 84-90, 99-100, 119-122, 127-129, 145-146, 158-159, 164-165, 170-184, 200-201, 233-235, 264-266, 270, 290, 327-393, 411
src/mcp_codebase_insight/models.py                     18      0      0      0   100%
src/mcp_codebase_insight/server.py                    630    536    128      0    12%   55-109, 121-138, 142-1491, 1549-1550, 1554-1561, 1585-1590, 1595, 1599-1616, 1620-1622, 1626, 1638-1664, 1668-1688
src/mcp_codebase_insight/server_test_isolation.py      48     38     18      0    15%   31-39, 44-99
src/mcp_codebase_insight/utils/__init__.py              2      0      0      0   100%
src/mcp_codebase_insight/utils/logger.py               29      5      0      0    83%   52-53, 82, 89, 97
src/mcp_codebase_insight/version.py                    14     14      2      0     0%   3-22
-----------------------------------------------------------------------------------------------
TOTAL                                                3149   1818    736     98    38%

=========================== short test summary info ============================
FAILED tests/components/test_sse_components.py::test_get_starlette_app - AssertionError: Expected 'create_sse_server' to be called once. Called 0 times.
FAILED tests/components/test_sse_components.py::test_create_sse_server - AssertionError: Expected 'CodebaseInsightSseTransport' to be called once. Called 0 times.
FAILED tests/components/test_sse_components.py::test_adr_list_tool - IndexError: tuple index out of range
FAILED tests/components/test_sse_components.py::test_task_status_tool - IndexError: tuple index out of range
FAILED tests/components/test_sse_components.py::test_sse_handle_connect - AttributeError: 'NoneType' object has no attribute 'kwargs'
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 5 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
================== 5 failed, 18 passed, 34 warnings in 7.50s ===================

```

--------------------------------------------------------------------------------
/scripts/verify_build.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python
"""
Automated End-to-End Build Verification Script

This script automates the process of verifying an end-to-end build by:
1. Triggering the build process
2. Gathering verification criteria from the vector database
3. Analyzing build results against success criteria
4. Contextual verification using the vector database
5. Determining build status and generating a report
"""

import os
import sys
import json
import logging
import asyncio
import argparse
import subprocess
from datetime import datetime
from pathlib import Path
from typing import Dict, List, Any, Optional, Tuple
import uuid

from qdrant_client import QdrantClient
from qdrant_client.http.models import Filter, FieldCondition, MatchValue

# Add the project root to the Python path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))

from src.mcp_codebase_insight.core.vector_store import VectorStore, SearchResult
from src.mcp_codebase_insight.core.embeddings import SentenceTransformerEmbedding
from src.mcp_codebase_insight.core.config import ServerConfig

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.StreamHandler(),
        logging.FileHandler(Path('logs/build_verification.log'))
    ]
)
logger = logging.getLogger('build_verification')

class BuildVerifier:
    """Automated build verification system."""
    
    def __init__(self, config_path: Optional[str] = None):
        """Initialize the build verifier.
        
        Args:
            config_path: Path to the configuration file (optional)
        """
        self.config = self._load_config(config_path)
        self.vector_store = None
        self.embedder = None
        self.build_output = ""
        self.build_logs = []
        self.success_criteria = []
        self.build_start_time = None
        self.build_end_time = None
        self.test_results = {}
        self.critical_components = []
        self.dependency_map = {}
    
    def _load_config(self, config_path: Optional[str]) -> Dict[str, Any]:
        """Load configuration from file or environment variables.
        
        Args:
            config_path: Path to the configuration file
            
        Returns:
            Configuration dictionary
        """
        config = {
            'qdrant_url': os.environ.get('QDRANT_URL', 'http://localhost:6333'),
            'qdrant_api_key': os.environ.get('QDRANT_API_KEY', ''),
            'collection_name': os.environ.get('COLLECTION_NAME', 'mcp-codebase-insight'),
            'embedding_model': os.environ.get('EMBEDDING_MODEL', 'sentence-transformers/all-MiniLM-L6-v2'),
            'build_command': os.environ.get('BUILD_COMMAND', 'make build'),
            'test_command': os.environ.get('TEST_COMMAND', 'make test'),
            'success_criteria': {
                'min_test_coverage': float(os.environ.get('MIN_TEST_COVERAGE', '80.0')),
                'max_allowed_failures': int(os.environ.get('MAX_ALLOWED_FAILURES', '0')),
                'critical_modules': os.environ.get('CRITICAL_MODULES', '').split(','),
                'performance_threshold_ms': int(os.environ.get('PERFORMANCE_THRESHOLD_MS', '500'))
            }
        }
        
        # Override with config file if provided
        if config_path:
            try:
                with open(config_path, 'r') as f:
                    file_config = json.load(f)
                    config.update(file_config)
            except Exception as e:
                logger.error(f"Failed to load config from {config_path}: {e}")
        
        return config
    
    async def initialize(self):
        """Initialize the build verifier."""
        logger.info("Initializing build verifier...")
        
        # Initialize embedder if not already initialized
        if self.embedder is None or not getattr(self.embedder, 'initialized', False):
            logger.info("Initializing embedder...")
            self.embedder = SentenceTransformerEmbedding(model_name=self.config['embedding_model'])
            await self.embedder.initialize()
        else:
            logger.info("Using pre-initialized embedder")
        
        # Initialize vector store
        logger.info(f"Connecting to vector store at {self.config['qdrant_url']}...")
        self.vector_store = VectorStore(
            url=self.config['qdrant_url'],
            embedder=self.embedder,
            collection_name=self.config['collection_name'],
            api_key=self.config['qdrant_api_key'],
            vector_name="default"  # Specify a vector name for the collection
        )
        await self.vector_store.initialize()
        
        # Load dependency map from vector database
        await self._load_dependency_map()
        
        # Load critical components
        await self._load_critical_components()
        
        logger.info("Build verifier initialized successfully")
    
    async def _load_dependency_map(self):
        """Load dependency map from vector database."""
        logger.info("Loading dependency map from vector database...")
        
        # Query for dependency information
        dependencies = await self.vector_store.search(
            text="dependency map between components",
            filter_conditions={"must": [{"key": "type", "match": {"value": "architecture"}}]},
            limit=10
        )
        
        if dependencies:
            for result in dependencies:
                if "dependencies" in result.metadata:
                    self.dependency_map.update(result.metadata["dependencies"])
                    
        if not self.dependency_map:
            # Try to load from file as fallback
            try:
                with open('dependency_map.txt', 'r') as f:
                    for line in f:
                        if '->' in line:
                            source, target = line.strip().split('->')
                            source = source.strip()
                            target = target.strip()
                            if source not in self.dependency_map:
                                self.dependency_map[source] = []
                            self.dependency_map[source].append(target)
            except FileNotFoundError:
                logger.warning("Dependency map file not found")
        
        logger.info(f"Loaded dependency map with {len(self.dependency_map)} entries")
    
    async def _load_critical_components(self):
        """Load critical components from vector database or config."""
        logger.info("Loading critical components...")
        
        # Load from vector database
        critical_components = await self.vector_store.search(
            text="critical system components",
            filter_conditions={"must": [{"key": "type", "match": {"value": "architecture"}}]},
            limit=5
        )
        
        if critical_components:
            for result in critical_components:
                if "critical_components" in result.metadata:
                    # Extend the list instead of updating
                    self.critical_components.extend(result.metadata["critical_components"])
        
        # Add from config as fallback
        config_critical = self.config.get('success_criteria', {}).get('critical_modules', [])
        if config_critical:
            self.critical_components.extend(config_critical)
        
        # Remove duplicates while preserving order
        self.critical_components = list(dict.fromkeys(self.critical_components))
        
        logger.info(f"Loaded {len(self.critical_components)} critical components")
    
    async def trigger_build(self) -> bool:
        """Trigger the end-to-end build process.
        
        Returns:
            True if build command executed successfully, False otherwise
        """
        logger.info("Triggering end-to-end build...")
        self.build_start_time = datetime.now()
        
        try:
            # Execute build command
            logger.info(f"Running build command: {self.config['build_command']}")
            build_process = subprocess.Popen(
                self.config['build_command'],
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True
            )
            
            stdout, stderr = build_process.communicate()
            self.build_output = stdout
            
            # Store build logs
            self.build_logs = [line for line in stdout.split('\n') if line.strip()]
            if stderr:
                self.build_logs.extend([f"ERROR: {line}" for line in stderr.split('\n') if line.strip()])
            
            build_success = build_process.returncode == 0
            build_status = "SUCCESS" if build_success else "FAILURE"
            logger.info(f"Build {build_status} (exit code: {build_process.returncode})")
            
            self.build_end_time = datetime.now()
            return build_success
            
        except Exception as e:
            logger.error(f"Failed to execute build command: {e}")
            self.build_end_time = datetime.now()
            self.build_logs.append(f"ERROR: Failed to execute build command: {e}")
            return False
    
    async def run_tests(self) -> bool:
        """Run the test suite.
        
        Returns:
            True if tests passed successfully, False otherwise
        """
        logger.info("Running tests...")
        
        try:
            # Execute test command
            logger.info(f"Running test command: {self.config['test_command']}")
            test_process = subprocess.Popen(
                self.config['test_command'],
                shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
                text=True
            )
            
            stdout, stderr = test_process.communicate()
            
            # Parse and store test results
            self._parse_test_results(stdout)
            
            # Store test logs
            self.build_logs.extend([line for line in stdout.split('\n') if line.strip()])
            if stderr:
                self.build_logs.extend([f"ERROR: {line}" for line in stderr.split('\n') if line.strip()])
            
            tests_success = test_process.returncode == 0
            test_status = "SUCCESS" if tests_success else "FAILURE"
            logger.info(f"Tests {test_status} (exit code: {test_process.returncode})")
            
            return tests_success
            
        except Exception as e:
            logger.error(f"Failed to execute test command: {e}")
            self.build_logs.append(f"ERROR: Failed to execute test command: {e}")
            return False
    
    def _parse_test_results(self, test_output: str):
        """Parse test results from test output.
        
        Args:
            test_output: Output from the test command
        """
        # Initialize test summary
        self.test_results = {
            "total": 0,
            "passed": 0,
            "failed": 0,
            "skipped": 0,
            "coverage": 0.0,
            "duration_ms": 0,
            "failures": []
        }
        
        # Parse pytest output
        for line in test_output.split('\n'):
            # Count total tests
            if "collected " in line:
                try:
                    total_part = line.split("collected ")[1].split()[0]
                    self.test_results["total"] = int(total_part)
                except (IndexError, ValueError):
                    pass
            
            # Parse test failures - extract just the test path and name
            if "FAILED " in line:
                # Full line format is typically like "......FAILED tests/test_module.py::test_function [70%]"
                # Extract just the "FAILED tests/test_module.py::test_function" part
                try:
                    failure_part = line.split("FAILED ")[1].split("[")[0].strip()
                    failure = f"FAILED {failure_part}"
                    self.test_results["failures"].append(failure)
                    self.test_results["failed"] += 1
                except (IndexError, ValueError):
                    # If splitting fails, add the whole line as a fallback
                    self.test_results["failures"].append(line.strip())
                    self.test_results["failed"] += 1
            
            # Check for coverage percentage in the TOTAL line
            if "TOTAL" in line and "%" in line:
                try:
                    # Extract coverage from line like "TOTAL 600 100 83%"
                    parts = line.split()
                    for i, part in enumerate(parts):
                        if "%" in part:
                            coverage_percent = part.replace("%", "").strip()
                            self.test_results["coverage"] = float(coverage_percent)
                            break
                except (IndexError, ValueError):
                    pass
        
        # Calculate passed tests - if we have total but no failed or skipped,
        # assume all tests passed
        if self.test_results["total"] > 0:
            self.test_results["passed"] = self.test_results["total"] - self.test_results.get("failed", 0) - self.test_results.get("skipped", 0)
        
        logger.info(f"Parsed test results: {self.test_results['passed']}/{self.test_results['total']} tests passed, "
                   f"{self.test_results['coverage']}% coverage")
    
    async def gather_verification_criteria(self):
        """Gather verification criteria from the vector database."""
        logger.info("Gathering verification criteria...")
        
        # Query for success criteria
        results = await self.vector_store.search(
            text="build verification success criteria",
            filter_conditions={"must": [{"key": "type", "match": {"value": "build_verification"}}]},
            limit=5
        )
        
        if results:
            criteria = []
            for result in results:
                if "criteria" in result.metadata:
                    criteria.extend(result.metadata["criteria"])
            
            if criteria:
                self.success_criteria = criteria
                logger.info(f"Loaded {len(criteria)} success criteria from vector database")
                return
        
        # Use default criteria if none found in the vector database
        logger.info("Using default success criteria")
        self.success_criteria = [
            f"All tests must pass (maximum {self.config['success_criteria']['max_allowed_failures']} failures allowed)",
            f"Test coverage must be at least {self.config['success_criteria']['min_test_coverage']}%",
            "Build process must complete without errors",
            f"Critical modules ({', '.join(self.critical_components)}) must pass all tests",
            f"Performance tests must complete within {self.config['success_criteria']['performance_threshold_ms']}ms"
        ]
    
    def _detect_build_success(self) -> bool:
        """Detect if the build was successful based on build logs.
        
        Returns:
            bool: True if build succeeded, False otherwise
        """
        # Check logs for serious build errors
        for log in self.build_logs:
            if log.startswith("ERROR: Build failed") or "BUILD FAILED" in log.upper():
                logger.info("Detected build failure in logs")
                return False
        
        # Consider build successful if no serious errors found
        return True
    
    async def analyze_build_results(self) -> Tuple[bool, Dict[str, Any]]:
        """Analyze build results against success criteria.
        
        Returns:
            Tuple of (build_passed, results_dict)
        """
        logger.info("Analyzing build results...")
        
        # Initialize analysis results
        results = {
            "build_success": False,
            "tests_success": False,
            "coverage_success": False,
            "critical_modules_success": False,
            "performance_success": False,
            "overall_success": False,
            "criteria_results": {},
            "failure_analysis": [],
        }
        
        # Check if the build was successful
        results["build_success"] = self._detect_build_success()
        
        # Check test results
        max_failures = self.config['success_criteria']['max_allowed_failures']
        results["tests_success"] = self.test_results.get("failed", 0) <= max_failures
        
        # Check coverage
        min_coverage = self.config['success_criteria']['min_test_coverage']
        current_coverage = self.test_results.get("coverage", 0.0)
        
        # For development purposes, we might want to temporarily ignore coverage requirements
        # if there are tests passing but coverage reporting is not working properly
        if self.test_results.get("total", 0) > 0 and self.test_results.get("passed", 0) > 0:
            # If tests are passing but coverage is 0, assume coverage tool issues and pass this check
            results["coverage_success"] = current_coverage >= min_coverage
        else:
            results["coverage_success"] = current_coverage >= min_coverage
        
        # Check critical modules
        critical_module_failures = []
        for failure in self.test_results.get("failures", []):
            for module in self.critical_components:
                if module in failure:
                    critical_module_failures.append(failure)
                    break
        
        results["critical_modules_success"] = len(critical_module_failures) == 0
        if not results["critical_modules_success"]:
            results["failure_analysis"].append({
                "type": "critical_module_failure",
                "description": f"Failures in critical modules: {len(critical_module_failures)}",
                "details": critical_module_failures
            })
        
        # Check performance (if available)
        performance_threshold = self.config['success_criteria']['performance_threshold_ms']
        current_performance = self.test_results.get("duration_ms", 0)
        if current_performance > 0:  # Only check if we have performance data
            results["performance_success"] = current_performance <= performance_threshold
            if not results["performance_success"]:
                results["failure_analysis"].append({
                    "type": "performance_issue",
                    "description": f"Performance threshold exceeded: {current_performance}ms > {performance_threshold}ms",
                    "details": f"Tests took {current_performance}ms, threshold is {performance_threshold}ms"
                })
        else:
            # No performance data available, assume success
            results["performance_success"] = True
        
        # Evaluate each criterion
        for criterion in self.success_criteria:
            criterion_result = {
                "criterion": criterion,
                "passed": False,
                "details": ""
            }
            
            if "All tests must pass" in criterion:
                criterion_result["passed"] = results["tests_success"]
                criterion_result["details"] = (
                    f"{self.test_results.get('passed', 0)}/{self.test_results.get('total', 0)} tests passed, "
                    f"{self.test_results.get('failed', 0)} failed"
                )
                
            elif "coverage" in criterion.lower():
                criterion_result["passed"] = results["coverage_success"]
                
                if self.test_results.get("total", 0) > 0 and self.test_results.get("passed", 0) > 0 and current_coverage == 0.0:
                    criterion_result["details"] = (
                        f"Coverage tool may not be working correctly. {self.test_results.get('passed', 0)} tests passing, ignoring coverage requirement during development."
                    )
                else:
                    criterion_result["details"] = (
                        f"Coverage: {current_coverage}%, required: {min_coverage}%"
                    )
                
            elif "build process" in criterion.lower():
                criterion_result["passed"] = results["build_success"]
                criterion_result["details"] = "Build completed successfully" if results["build_success"] else "Build errors detected"
                
            elif "critical modules" in criterion.lower():
                criterion_result["passed"] = results["critical_modules_success"]
                criterion_result["details"] = (
                    "All critical modules passed tests" if results["critical_modules_success"] 
                    else f"{len(critical_module_failures)} failures in critical modules"
                )
                
            elif "performance" in criterion.lower():
                criterion_result["passed"] = results["performance_success"]
                if current_performance > 0:
                    criterion_result["details"] = (
                        f"Performance: {current_performance}ms, threshold: {performance_threshold}ms"
                    )
                else:
                    criterion_result["details"] = "No performance data available"
            
            results["criteria_results"][criterion] = criterion_result
        
        # Determine overall success
        results["overall_success"] = all([
            results["build_success"],
            results["tests_success"],
            results["coverage_success"],
            results["critical_modules_success"],
            results["performance_success"]
        ])
        
        logger.info(f"Build analysis complete: {'PASS' if results['overall_success'] else 'FAIL'}")
        return results["overall_success"], results
    
    async def contextual_verification(self, analysis_results: Dict[str, Any]) -> Dict[str, Any]:
        """Perform contextual verification using the vector database.
        
        Args:
            analysis_results: Results from the build analysis
            
        Returns:
            Updated analysis results with contextual verification
        """
        logger.info("Performing contextual verification...")
        
        # Only perform detailed analysis if there are failures
        if analysis_results["overall_success"]:
            logger.info("Build successful, skipping detailed contextual verification")
            return analysis_results
        
        # Identify failed tests
        failed_tests = self.test_results.get("failures", [])
        
        if not failed_tests:
            logger.info("No test failures to analyze")
            return analysis_results
        
        logger.info(f"Analyzing {len(failed_tests)} test failures...")
        
        # Initialize contextual verification results
        contextual_results = []
        
        # Analyze each failure
        for failure in failed_tests:
            # Extract module name from failure
            module_name = self._extract_module_from_failure(failure)
            
            if not module_name:
                continue
                
            # Get dependencies for the module
            dependencies = self.dependency_map.get(module_name, [])
            
            # Query vector database for relevant information
            query = f"common issues and solutions for {module_name} failures"
            results = await self.vector_store.search(
                text=query,
                filter_conditions={"must": [{"key": "type", "match": {"value": "troubleshooting"}}]},
                limit=3
            )
            
            failure_analysis = {
                "module": module_name,
                "failure": failure,
                "dependencies": dependencies,
                "potential_causes": [],
                "recommended_actions": []
            }
            
            if results:
                for result in results:
                    if "potential_causes" in result.metadata:
                        failure_analysis["potential_causes"].extend(result.metadata["potential_causes"])
                    if "recommended_actions" in result.metadata:
                        failure_analysis["recommended_actions"].extend(result.metadata["recommended_actions"])
            
            # If no specific guidance found, provide general advice
            if not failure_analysis["potential_causes"]:
                failure_analysis["potential_causes"] = [
                    f"Recent changes to {module_name}",
                    f"Changes in dependencies: {', '.join(dependencies)}",
                    "Integration issues between components"
                ]
                
            if not failure_analysis["recommended_actions"]:
                failure_analysis["recommended_actions"] = [
                    f"Review recent changes to {module_name}",
                    f"Check integration with dependencies: {', '.join(dependencies)}",
                    "Run tests in isolation to identify specific failure points"
                ]
            
            contextual_results.append(failure_analysis)
        
        # Add contextual verification results to analysis
        analysis_results["contextual_verification"] = contextual_results
        
        logger.info(f"Contextual verification complete: {len(contextual_results)} failures analyzed")
        return analysis_results
    
    def _extract_module_from_failure(self, failure: str) -> Optional[str]:
        """Extract module name from a test failure.
        
        Args:
            failure: Test failure message
            
        Returns:
            Module name or None if not found
        """
        # This is a simple implementation that assumes the module name
        # is in the format: "FAILED path/to/module.py::test_function"
        
        if "FAILED " in failure:
            try:
                path = failure.split("FAILED ")[1].split("::")[0]
                # Convert path to module name
                module_name = path.replace("/", ".").replace(".py", "")
                return module_name
            except IndexError:
                pass
        
        return None
    
    def generate_report(self, results: Dict[str, Any]) -> Dict[str, Any]:
        """Generate a build verification report.
        
        Args:
            results: Analysis results
            
        Returns:
            Report dictionary
        """
        logger.info("Generating build verification report...")
        
        build_duration = (self.build_end_time - self.build_start_time).total_seconds() if self.build_end_time else 0
        
        report = {
            "build_verification_report": {
                "timestamp": datetime.now().isoformat(),
                "build_info": {
                    "start_time": self.build_start_time.isoformat() if self.build_start_time else None,
                    "end_time": self.build_end_time.isoformat() if self.build_end_time else None,
                    "duration_seconds": build_duration,
                    "build_command": self.config["build_command"],
                    "test_command": self.config["test_command"]
                },
                "test_summary": {
                    "total": self.test_results.get("total", 0),
                    "passed": self.test_results.get("passed", 0),
                    "failed": self.test_results.get("failed", 0),
                    "skipped": self.test_results.get("skipped", 0),
                    "coverage": self.test_results.get("coverage", 0.0)
                },
                "verification_results": {
                    "overall_status": "PASS" if results["overall_success"] else "FAIL",
                    "criteria_results": results["criteria_results"]
                }
            }
        }
        
        # Add failure analysis if available
        if "failure_analysis" in results and results["failure_analysis"]:
            report["build_verification_report"]["failure_analysis"] = results["failure_analysis"]
        
        # Add contextual verification if available
        if "contextual_verification" in results:
            report["build_verification_report"]["contextual_verification"] = results["contextual_verification"]
        
        # Add a summary field for quick review
        criteria_count = len(results["criteria_results"])
        passed_criteria = sum(1 for c in results["criteria_results"].values() if c["passed"])
        report["build_verification_report"]["summary"] = (
            f"Build verification: {report['build_verification_report']['verification_results']['overall_status']}. "
            f"{passed_criteria}/{criteria_count} criteria passed. "
            f"{self.test_results.get('passed', 0)}/{self.test_results.get('total', 0)} tests passed with "
            f"{self.test_results.get('coverage', 0.0)}% coverage."
        )
        
        logger.info(f"Report generated: {report['build_verification_report']['summary']}")
        return report
    
    async def save_report(self, report: Dict[str, Any], report_file: str = "build_verification_report.json"):
        """Save build verification report to file and vector database.
        
        Args:
            report: Build verification report
            report_file: Path to save the report file
        """
        logger.info(f"Saving report to {report_file}...")
        
        # Save to file
        try:
            with open(report_file, 'w') as f:
                json.dump(report, f, indent=2)
            logger.info(f"Report saved to {report_file}")
        except Exception as e:
            logger.error(f"Failed to save report to file: {e}")
        
        # Store in vector database
        try:
            # Extract report data for metadata
            build_info = report.get("build_verification_report", {})
            verification_results = build_info.get("verification_results", {})
            overall_status = verification_results.get("overall_status", "UNKNOWN")
            timestamp = build_info.get("timestamp", datetime.now().isoformat())
            
            # Generate a consistent ID with prefix
            report_id = f"build-verification-{uuid.uuid4()}"
            report_text = json.dumps(report)
            
            # Store report in vector database with separate parameters instead of using id
            # This avoids the 'tuple' object has no attribute 'id' error
            await self.vector_store.add_vector(
                text=report_text,
                metadata={
                    "id": report_id,  # Include ID in metadata
                    "type": "build_verification_report",
                    "timestamp": timestamp,
                    "overall_status": overall_status
                }
            )
            logger.info(f"Report stored in vector database with ID: {report_id}")
        except Exception as e:
            logger.error(f"Failed to store report in vector database: {e}")
    
    async def cleanup(self):
        """Clean up resources."""
        logger.info("Cleaning up resources...")
        
        if self.vector_store:
            await self.vector_store.cleanup()
            await self.vector_store.close()
    
    async def verify_build(self, output_file: str = "logs/build_verification_report.json") -> bool:
        """Verify the build process and generate a report.
        
        Args:
            output_file: Output file path for the report
            
        Returns:
            True if build verification passed, False otherwise
        """
        try:
            # Initialize components
            await self.initialize()
            
            # Trigger build
            build_success = await self.trigger_build()
            
            # Run tests if build was successful
            if build_success:
                await self.run_tests()
            
            # Gather verification criteria
            await self.gather_verification_criteria()
            
            # Analyze build results
            success, results = await self.analyze_build_results()
            
            # Perform contextual verification
            results = await self.contextual_verification(results)
            
            # Generate report
            report = self.generate_report(results)
            
            # Save report
            await self.save_report(report, output_file)
            
            return success
            
        except Exception as e:
            logger.error(f"Build verification failed: {e}")
            return False
            
        finally:
            # Clean up resources
            await self.cleanup()

async def main():
    """Main function."""
    parser = argparse.ArgumentParser(description="Build Verification Script")
    parser.add_argument("--config", help="Path to configuration file")
    parser.add_argument("--output", default="logs/build_verification_report.json", help="Output file path for report")
    args = parser.parse_args()
    
    # Create logs directory if it doesn't exist
    os.makedirs("logs", exist_ok=True)
    
    verifier = BuildVerifier(args.config)
    success = await verifier.verify_build(args.output)
    
    print(f"\nBuild verification {'PASSED' if success else 'FAILED'}")
    print(f"Report saved to {args.output}")
    
    # Exit with status code based on verification result
    sys.exit(0 if success else 1)

if __name__ == "__main__":
    asyncio.run(main()) 
```

--------------------------------------------------------------------------------
/prepare_codebase.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
set -x  # Enable debugging

# Set output files
STRUCTURE_FILE="codebase_structure.txt"
DEPENDENCY_MAP_FILE="dependency_map.txt"
DOC_NODES_FILE="documentation_nodes.txt"
USER_DOC_MAP_FILE="user_doc_mapping.txt"
VECTOR_GRAPH_FILE="vector_relationship_graph.txt"
LLM_PROMPT_FILE="llm_prompts.txt"
SYSTEM_ARCHITECTURE_FILE="system_architecture.txt"
TECHNICAL_DEBT_FILE="technical_debt.txt"
README_CONTEXT_FILE="readme_context.txt"

# Create prompts directory structure
PROMPTS_DIR="./prompts"
mkdir -p "$PROMPTS_DIR"/{system,technical,dependency,custom}

# Check if project_environment.txt exists and source it if it does
if [ -f "project_environment.txt" ]; then
  echo "Loading environment information from project_environment.txt..."
  # Source the environment info
  source project_environment.txt
else
  echo "No project_environment.txt found. Running capture_env_info.sh to generate it..."
  # Check if capture_env_info.sh exists and run it
  if [ -f "./capture_env_info.sh" ]; then
    bash ./capture_env_info.sh
    source project_environment.txt
  else
    echo "Warning: capture_env_info.sh not found. Environment information will be limited."
  fi
fi

# Define directories to ignore for the file search
IGNORE_DIRS=("node_modules" ".venv" "venv" "vendor" "test_env")

# Create directory for module summaries
mkdir -p module_summaries

# Construct the 'find' command to exclude ignored directories
FIND_CMD="find ."
for dir in "${IGNORE_DIRS[@]}"; do
    FIND_CMD+=" -path ./$dir -prune -o"
done
FIND_CMD+=" -type f \( -name '*.js' -o -name '*.jsx' -o -name '*.ts' -o -name '*.tsx' -o -name '*.py' -o -name '*.md' -o -name '*.mdx' -o -name '*.sh' -o -name '*.yaml' -o -name '*.yml' -o -name '*.json' -o -name '*.cfg' -o -name '*.conf' -o -name '*.tfvars' -o -name '*.tf' \) -print | sort"

# Debugging: Show the generated find command
echo "Executing command: $FIND_CMD"

# Execute and store results
eval "$FIND_CMD" > "$STRUCTURE_FILE"

# Check if files were captured
if [ ! -s "$STRUCTURE_FILE" ]; then
    echo "⚠️ Warning: No matching files found. Please check directory paths."
fi

# Count the number of files found.
FILE_COUNT=$(wc -l < "$STRUCTURE_FILE")

# 1. Code Dependency Graph
echo "Generating code dependency graph..."
echo "# Code Dependency Graph" > "$DEPENDENCY_MAP_FILE"
echo "# Generated on $(date)" >> "$DEPENDENCY_MAP_FILE"
echo "# Environment: $OPERATING_SYSTEM" >> "$DEPENDENCY_MAP_FILE"
if [ -n "$PYTHON_VERSION" ]; then
  echo "# Python: $PYTHON_VERSION" >> "$DEPENDENCY_MAP_FILE"
fi
if [ -n "$NODE_VERSION" ]; then
  echo "# Node.js: $NODE_VERSION" >> "$DEPENDENCY_MAP_FILE"
fi
if [ -n "$ANSIBLE_VERSION" ]; then
  echo "# Ansible: $ANSIBLE_VERSION" >> "$DEPENDENCY_MAP_FILE"
fi
echo "" >> "$DEPENDENCY_MAP_FILE"

# Function to extract dependencies, tailored for graph generation
extract_dependencies() {
    local file="$1"
    local file_type="$2"
    
    # Add "./" prefix for consistency
    local current_dir="./"
    file="${current_dir}${file#./}"

    if [[ "$file_type" == "python" ]]; then
        while IFS= read -r line; do
            if [[ "$line" =~ ^(import|from) ]]; then
                line=$(echo "$line" | sed 's/#.*$//' | tr -s ' ')
                if [[ "$line" != *'"'* && "$line" != *"'"* ]]; then
                    # Capture module/file being imported
                    imported_module=$(echo "$line" | sed -e 's/import //g' -e 's/from //g' -e 's/ .*//g' | tr -d ' ')
                    echo "$file -> $imported_module (Python)" >> "$DEPENDENCY_MAP_FILE"
                fi
            fi
        done < "$file"
    elif [[ "$file_type" == "js" || "$file_type" == "jsx" || "$file_type" == "ts" || "$file_type" == "tsx" ]]; then
        while IFS= read -r line; do
            if [[ "$line" =~ (import|require) ]]; then
                line=$(echo "$line" | sed 's/\/\/.*$//' | sed 's/\/\*.*\*\///g' | tr -s ' ')
                 if [[ "$line" != *'"'* && "$line" != *"'"* ]]; then
                    # Capture module/file being imported
                    imported_module=$(echo "$line" | sed -n "s/.*\(import\|require\).*\(('|\"\)\([^'\"]*\)\('|\"\).*/\3/p" | tr -d ' ')
                    echo "$file -> $imported_module (JavaScript/TypeScript)" >> "$DEPENDENCY_MAP_FILE"
                fi
            fi
        done < "$file"
    elif [[ "$file_type" == "sh" ]]; then
        while IFS= read -r line; do
            if [[ "$line" =~ ^(source|.) ]]; then
               line=$(echo "$line" | sed 's/#.*$//' | tr -s ' ')
                if [[ "$line" != *'"'* && "$line" != *"'"* ]]; then
                    imported_module=$(echo "$line" | sed -n "s/source \([^ ]*\).*/\1/p" | tr -d ' ')
                    echo "$file -> $imported_module (Shell)" >> "$DEPENDENCY_MAP_FILE"
                fi
            fi
        done < "$file"
     elif [[ "$file_type" == "yaml" || "$file_type" == "yml" ]]; then
        while IFS= read -r line; do
             if [[ "$line" =~ ^(\ *[a-zA-Z0-9_-]+\:) ]]; then
                echo "$file -> $line (YAML)" >> "$DEPENDENCY_MAP_FILE"
             fi
        done < "$file"
    elif [[ "$file_type" == "tf" ]]; then
        while IFS= read -r line; do
            if [[ "$line" =~ resource|module|data ]]; then
                line=$(echo "$line" | sed 's/#.*$//' | tr -s ' ')
                echo "$file -> $line (Terraform)" >> "$DEPENDENCY_MAP_FILE"
            fi
        done < "$file"
    fi
}

# Process each file from the structure file
while IFS= read -r file; do
    if [ -f "$file" ]; then
        extension="${file##*.}"
        case "$extension" in
            py)      file_type="python";;
            js|jsx)  file_type="js";;
            ts|tsx)  file_type="ts";;
            sh)      file_type="sh";;
            yaml)    file_type="yaml";;
            yml)     file_type="yml";;
            *)       file_type="other";;
        esac
        if [[ "$file_type" == "python" || "$file_type" == "js" || "$file_type" == "ts" || "$file_type" == "sh" || "$file_type" == "yaml" || "$file_type" == "yml" ]]; then
            extract_dependencies "$file" "$file_type"
        fi
    fi
done < "$STRUCTURE_FILE"

# 2. Documentation Linking
echo "Generating documentation nodes..."
echo "# Documentation Nodes" > "$DOC_NODES_FILE"

# Function to extract function/class signatures (for documentation linking)
extract_doc_nodes() {
    local file="$1"
    local file_type="$2"
    
     # Add "./" prefix for consistency
    local current_dir="./"
    file="${current_dir}${file#./}"

    if [[ "$file_type" == "python" ]]; then
        while IFS= read -r line; do
            if [[ "$line" =~ ^(def|class) ]]; then
                # Extract function/class name and signature
                signature=$(echo "$line" | sed 's/#.*$//' | tr -s ' ')
                echo "$file: $signature (Python)" >> "$DOC_NODES_FILE"
            fi
        done < "$file"
    elif [[ "$file_type" == "js" || "$file_type" == "jsx" || "$file_type" == "ts" || "$file_type" == "tsx" ]]; then
        while IFS= read -r line; do
            if [[ "$line" =~ ^(function|class) ]]; then
                signature=$(echo "$line" | sed 's/\/\/.*$//' | sed 's/\/\*.*\*\///g' | tr -s ' ')
                echo "$file: $signature (JavaScript/TypeScript)" >> "$DOC_NODES_FILE"
            fi
        done < "$file"
     elif [[ "$file_type" == "sh" ]]; then
        while IFS= read -r line; do
            if [[ "$line" =~ ^(function ) ]]; then
                signature=$(echo "$line" | sed 's/#.*$//' | tr -s ' ')
                echo "$file: $signature (Shell)" >> "$DOC_NODES_FILE"
            fi
        done < "$file"
    fi
}

# Process each file to extract documentation nodes
while IFS= read -r file; do
    if [ -f "$file" ]; then
        extension="${file##*.}"
        case "$extension" in
            py)      file_type="python";;
            js|jsx)  file_type="js";;
            ts|tsx)  file_type="ts";;
            sh)      file_type="sh";;
            yaml)    file_type="yaml";;
            yml)     file_type="yml";;
            *)       file_type="other";;
        esac
        if [[ "$file_type" == "python" || "$file_type" == "js" || "$file_type" == "ts" || "$file_type" == "sh" ]]; then
            extract_doc_nodes "$file" "$file_type"
        fi
    fi
done < "$STRUCTURE_FILE"

# 3. User Documentation Mapping
echo "Generating user documentation mapping..."
echo "# User Documentation Mapping" > "$USER_DOC_MAP_FILE"

#  Function to map user documentation (Markdown files) to code elements.
map_user_docs() {
    local file="$1"
     # Add "./" prefix for consistency
    local current_dir="./"
    file="${current_dir}${file#./}"
    
    # Very basic mapping:  Look for code element names in Markdown
    if [[ "$file" =~ \.md$ || "$file" =~ \.mdx$ ]]; then # Only process Markdown files
        while IFS= read -r line; do
            # This is a simplified approach.  A real tool would use AST parsing.
            if [[ "$line" =~ (def |class |function ) ]]; then  # very rough
                echo "$file contains: $line" >> "$USER_DOC_MAP_FILE"
            fi
        done < "$file"
    fi
}

# Process each file to map user documentation
while IFS= read -r file; do
    if [ -f "$file" ]; then
        extension="${file##*.}"
        case "$extension" in
            md|mdx)  file_type="md";;
            *)       file_type="other";;
        esac
        if [[ "$file_type" == "md" ]]; then
            map_user_docs "$file" >> "$USER_DOC_MAP_FILE"
        fi
    fi
done < "$STRUCTURE_FILE"

# Extract key information from README.md
echo "Analyzing README.md for project context..."
echo "# README.md Analysis" > "$README_CONTEXT_FILE"
echo "# Generated on $(date)" >> "$README_CONTEXT_FILE"
echo "" >> "$README_CONTEXT_FILE"

if [ -f "README.md" ]; then
  # Extract project name and description
  echo "## Project Information" >> "$README_CONTEXT_FILE"
  # Look for a title (# Title)
  PROJECT_TITLE=$(grep "^# " README.md | head -1 | sed 's/^# //')
  echo "Project Title: $PROJECT_TITLE" >> "$README_CONTEXT_FILE"
  
  # Extract what appears to be a project description (first paragraph after title)
  PROJECT_DESCRIPTION=$(sed -n '/^# /,/^$/{/^# /d; /^$/d; p}' README.md | head -3)
  echo "Project Description: $PROJECT_DESCRIPTION" >> "$README_CONTEXT_FILE"
  
  # Look for architecture information
  echo -e "\n## Architecture Information" >> "$README_CONTEXT_FILE"
  grep -A 10 -i "architecture\|structure\|design\|overview" README.md >> "$README_CONTEXT_FILE" 2>/dev/null || echo "No explicit architecture information found." >> "$README_CONTEXT_FILE"
  
  # Extract documentation links
  echo -e "\n## Documentation Links" >> "$README_CONTEXT_FILE"
  grep -o "\[.*\](.*)" README.md | grep -i "doc\|guide\|tutorial\|wiki" >> "$README_CONTEXT_FILE" 2>/dev/null || echo "No documentation links found." >> "$README_CONTEXT_FILE"
  
  # Check for setup instructions
  echo -e "\n## Setup Instructions" >> "$README_CONTEXT_FILE"
  grep -A 15 -i "setup\|install\|getting started\|prerequisites" README.md >> "$README_CONTEXT_FILE" 2>/dev/null || echo "No setup instructions found." >> "$README_CONTEXT_FILE"
  
  # Prepare a summary for prompts
  README_SUMMARY=$(echo "$PROJECT_DESCRIPTION" | tr '\n' ' ' | cut -c 1-200)
  
  echo "README.md analysis saved to $README_CONTEXT_FILE"
else
  echo "No README.md found at the root of the project." >> "$README_CONTEXT_FILE"
  # Try to find READMEs in subdirectories
  READMES=$(find . -name "README.md" -not -path "*/node_modules/*" -not -path "*/.git/*" -not -path "*/dist/*" -not -path "*/build/*")
  if [ -n "$READMES" ]; then
    echo "Found README.md files in subdirectories: $READMES" >> "$README_CONTEXT_FILE"
    # Process the first README found
    FIRST_README=$(echo "$READMES" | head -1)
    echo "Analyzing $FIRST_README as fallback..." >> "$README_CONTEXT_FILE"
    
    # Extract project name and description
    echo -e "\n## Project Information (from $FIRST_README)" >> "$README_CONTEXT_FILE"
    PROJECT_TITLE=$(grep "^# " "$FIRST_README" | head -1 | sed 's/^# //')
    echo "Project Title: $PROJECT_TITLE" >> "$README_CONTEXT_FILE"
    
    PROJECT_DESCRIPTION=$(sed -n '/^# /,/^$/{/^# /d; /^$/d; p}' "$FIRST_README" | head -3)
    echo "Project Description: $PROJECT_DESCRIPTION" >> "$README_CONTEXT_FILE"
    
    # Prepare a summary for prompts
    README_SUMMARY=$(echo "$PROJECT_DESCRIPTION" | tr '\n' ' ' | cut -c 1-200)
  else
    echo "No README.md files found in the project." >> "$README_CONTEXT_FILE"
    README_SUMMARY="No README.md found in the project."
  fi
fi

# Copy README context file to prompts directory
cp "$README_CONTEXT_FILE" "$PROMPTS_DIR/system/"

# NEW: System Architecture Analysis 
echo "Analyzing system architecture..."
echo "# System Architecture Analysis" > "$SYSTEM_ARCHITECTURE_FILE"
echo "# Generated on $(date)" >> "$SYSTEM_ARCHITECTURE_FILE"
echo "# Environment: $OPERATING_SYSTEM" >> "$SYSTEM_ARCHITECTURE_FILE"
echo "" >> "$SYSTEM_ARCHITECTURE_FILE"

# Identify key system components based on directory structure and file types
echo "## System Components" >> "$SYSTEM_ARCHITECTURE_FILE"

# Count files by type to identify primary languages/frameworks
echo "### Primary Languages/Frameworks" >> "$SYSTEM_ARCHITECTURE_FILE"
echo "Counting files by extension to identify primary technologies..." >> "$SYSTEM_ARCHITECTURE_FILE"
grep -o '\.[^./]*$' "$STRUCTURE_FILE" | sort | uniq -c | sort -nr >> "$SYSTEM_ARCHITECTURE_FILE"

# Identify architectural patterns based on directory names and file content
echo "" >> "$SYSTEM_ARCHITECTURE_FILE"
echo "### Detected Architectural Patterns" >> "$SYSTEM_ARCHITECTURE_FILE"

# Look for common architectural clues in directory names
echo "Directory structure analysis:" >> "$SYSTEM_ARCHITECTURE_FILE"
for pattern in "api" "service" "controller" "model" "view" "component" "middleware" "util" "helper" "config" "test" "frontend" "backend" "client" "server"; do
  count=$(find . -type d -name "*$pattern*" | wc -l)
  if [ "$count" -gt 0 ]; then
    echo "- Found $count directories matching pattern '$pattern'" >> "$SYSTEM_ARCHITECTURE_FILE"
  fi
done

# Check for deployment and infrastructure files
echo "" >> "$SYSTEM_ARCHITECTURE_FILE"
echo "### Infrastructure and Deployment" >> "$SYSTEM_ARCHITECTURE_FILE"
for file in "Dockerfile" "docker-compose.yml" ".github/workflows" "Jenkinsfile" "terraform" "k8s" "helm"; do
  if [ -e "$file" ]; then
    echo "- Found $file" >> "$SYSTEM_ARCHITECTURE_FILE"
  fi
done

# NEW: Technical Debt Analysis
echo "Gathering technical debt indicators..."
TECH_DEBT_DATA_FILE="technical_debt_data.txt"
TECH_DEBT_PROMPT_FILE="$PROMPTS_DIR/technical/technical_debt_prompt.txt"
echo "# Technical Debt Indicators" > "$TECH_DEBT_DATA_FILE"
echo "# Generated on $(date)" >> "$TECH_DEBT_DATA_FILE"
echo "" >> "$TECH_DEBT_DATA_FILE"

# Count files by type for primary languages
echo "## Primary Languages" >> "$TECH_DEBT_DATA_FILE"
LANGUAGE_COUNTS=$(grep -o '\.[^./]*$' "$STRUCTURE_FILE" | sort | uniq -c | sort -nr)
echo "$LANGUAGE_COUNTS" >> "$TECH_DEBT_DATA_FILE"
PRIMARY_LANGUAGES=$(echo "$LANGUAGE_COUNTS" | head -5 | awk '{print $2}' | tr '\n' ', ' | sed 's/,$//' | sed 's/\.//')
LANGUAGE_COUNT=$(echo "$LANGUAGE_COUNTS" | wc -l)

# Look for code comments indicating technical debt
echo -e "\n## TODO, FIXME, and HACK Comments" >> "$TECH_DEBT_DATA_FILE"
TODO_COMMENTS=$(grep -r --include="*.py" --include="*.js" --include="*.jsx" --include="*.ts" --include="*.tsx" --include="*.sh" --include="*.yml" --include="*.yaml" --include="*.tf" "TODO\|FIXME\|HACK" . 2>/dev/null | grep -v "node_modules\|venv\|.git" | sort)
TODO_COUNT=$(echo "$TODO_COMMENTS" | grep -v '^$' | wc -l)
echo "Found $TODO_COUNT TODO/FIXME/HACK comments" >> "$TECH_DEBT_DATA_FILE"
# Sample up to 10 TODO comments
TODO_SAMPLES=$(echo "$TODO_COMMENTS" | head -10)
echo "$TODO_SAMPLES" >> "$TECH_DEBT_DATA_FILE"

# Check for deprecated dependencies if we have package.json or requirements.txt
echo -e "\n## Dependency Analysis" >> "$TECH_DEBT_DATA_FILE"
NODE_DEPS=""
if [ -f "package.json" ]; then
  echo "### Node.js Dependencies" >> "$TECH_DEBT_DATA_FILE"
  NODE_DEPS=$(grep -A 100 "dependencies" package.json | grep -B 100 "}" | grep ":" | head -15)
  echo "$NODE_DEPS" >> "$TECH_DEBT_DATA_FILE"
fi

PYTHON_DEPS=""
if [ -f "requirements.txt" ]; then
  echo -e "\n### Python Dependencies" >> "$TECH_DEBT_DATA_FILE"
  PYTHON_DEPS=$(cat requirements.txt | head -15)
  echo "$PYTHON_DEPS" >> "$TECH_DEBT_DATA_FILE"
fi

# Look for large files that might indicate complexity issues
echo -e "\n## Potentially Complex Files (> 500 lines)" >> "$TECH_DEBT_DATA_FILE"
LARGE_FILES=$(find . -type f \( -name "*.py" -o -name "*.js" -o -name "*.jsx" -o -name "*.ts" -o -name "*.tsx" \) -not -path "*/node_modules/*" -not -path "*/venv/*" -not -path "*/.git/*" -exec wc -l {} \; | awk '$1 > 500' | sort -nr)
LARGE_FILES_COUNT=$(echo "$LARGE_FILES" | grep -v '^$' | wc -l)
echo "Found $LARGE_FILES_COUNT large files (>500 lines)" >> "$TECH_DEBT_DATA_FILE"
LARGE_FILES_SAMPLES=$(echo "$LARGE_FILES" | head -10)
echo "$LARGE_FILES_SAMPLES" >> "$TECH_DEBT_DATA_FILE"

# Check for potential circular dependencies
echo -e "\n## Potential Circular Dependencies" >> "$TECH_DEBT_DATA_FILE"
# This is a very basic check that could be improved
if [ -f "$DEPENDENCY_MAP_FILE" ]; then
  DEPENDENCY_SAMPLES=$(grep " -> " "$DEPENDENCY_MAP_FILE" | head -15)
  IMPORT_COUNT=$(grep -c " -> " "$DEPENDENCY_MAP_FILE")
  # Find modules that are both imported and import others
  HIGH_COUPLING=$(grep " -> " "$DEPENDENCY_MAP_FILE" | awk '{print $1; print $3}' | sort | uniq -c | sort -nr | head -10)
  echo "Found $IMPORT_COUNT import relationships" >> "$TECH_DEBT_DATA_FILE"
  echo -e "\nHighly coupled components:" >> "$TECH_DEBT_DATA_FILE"
  echo "$HIGH_COUPLING" >> "$TECH_DEBT_DATA_FILE"
fi

# Now create the technical debt prompt for LLM
echo "Generating technical debt analysis prompt for LLM..."

cat > "$TECH_DEBT_PROMPT_FILE" << EOL
# Technical Debt Analysis Prompt

## Context
You are analyzing the technical debt in a codebase with the following characteristics:
- ${FILE_COUNT} files across ${LANGUAGE_COUNT} languages/frameworks
- Primary languages: ${PRIMARY_LANGUAGES}
- Environment: ${OPERATING_SYSTEM:-Unknown OS}, Python ${PYTHON_VERSION:-Unknown}, Node.js ${NODE_VERSION:-Unknown}
- Project summary: ${README_SUMMARY:-No project description available}

## Available Data
The following data has been collected to assist your analysis:
1. TODO/FIXME/HACK comments (count: ${TODO_COUNT})
2. Large files exceeding 500 lines (count: ${LARGE_FILES_COUNT})
3. Dependency information (${IMPORT_COUNT} import relationships found)
4. Directory structure patterns and architectural indicators

## Sample Data Points
### TODO/FIXME Examples:
${TODO_SAMPLES}

### Large Files:
${LARGE_FILES_SAMPLES}

### Dependency Data:
${DEPENDENCY_SAMPLES}

### Highly Coupled Components:
${HIGH_COUPLING}

## Instructions
Please analyze the technical debt in this codebase by:

1. **Categorizing the technical debt** into these types:
   - Code quality issues
   - Architectural problems
   - Outdated dependencies 
   - Testing gaps
   - Documentation shortfalls

2. **Identifying potential root causes** of the technical debt:
   - Time pressure and deadlines
   - Knowledge gaps
   - Changing requirements
   - Architectural erosion over time
   - Legacy code integration

3. **Assessing the potential impact** of the technical debt:
   - On system stability
   - On maintainability
   - On performance
   - On security
   - On team productivity

4. **Recommending a prioritized remediation plan** that:
   - Addresses high-impact issues first
   - Considers interdependencies between components
   - Provides realistic, incremental steps
   - Balances short-term fixes with long-term improvements
   - Suggests preventative measures to avoid future debt

5. **Creating a high-level technical debt map** showing:
   - Which components contain the most concerning debt
   - How the debt in one area affects other parts of the system
   - Which areas would provide the highest ROI if addressed

Please format your response as a structured technical debt analysis report with clear sections, actionable insights, and system-level thinking.
EOL

# Generate a minimal technical debt file that points to the prompt
cat > "$TECHNICAL_DEBT_FILE" << EOL
# Technical Debt Analysis
# Generated on $(date)

This file contains basic technical debt indicators. For a comprehensive analysis,
copy the contents of "$TECH_DEBT_PROMPT_FILE" and submit it to an LLM like Claude,
ChatGPT, or use it with Cursor's AI capabilities.

## Summary of Technical Debt Indicators
- TODO/FIXME/HACK comments: ${TODO_COUNT}
- Large files (>500 lines): ${LARGE_FILES_COUNT}
- Import relationships: ${IMPORT_COUNT:-Unknown}
- Primary languages: ${PRIMARY_LANGUAGES}

For full data points, see: ${TECH_DEBT_DATA_FILE}
For LLM analysis prompt, see: ${TECH_DEBT_PROMPT_FILE}

To get a complete analysis, run:
cat ${TECH_DEBT_PROMPT_FILE} | pbcopy  # On macOS
# or
cat ${TECH_DEBT_PROMPT_FILE} | xclip -selection clipboard  # On Linux with xclip
# Then paste into your preferred LLM interface
EOL

# Update project_environment.txt with technical debt indicators
if [ -f "project_environment.txt" ]; then
  echo -e "\n# Technical Debt Indicators" >> project_environment.txt
  echo "TECH_DEBT_TODO_COUNT=\"$TODO_COUNT\"" >> project_environment.txt
  echo "TECH_DEBT_LARGE_FILES_COUNT=\"$LARGE_FILES_COUNT\"" >> project_environment.txt
  echo "TECH_DEBT_PROMPT_FILE=\"$TECH_DEBT_PROMPT_FILE\"" >> project_environment.txt
  echo "TECH_DEBT_DATA_FILE=\"$TECH_DEBT_DATA_FILE\"" >> project_environment.txt
fi

# Generate Dependency Analysis Prompt
echo "Generating dependency analysis prompt for LLM..."
DEPENDENCY_ANALYSIS_FILE="dependency_analysis.txt"
DEPENDENCY_PROMPT_FILE="$PROMPTS_DIR/dependency/dependency_analysis_prompt.txt"

# Get some key metrics for the prompt
MODULE_COUNT=$(grep " -> " "$DEPENDENCY_MAP_FILE" | awk '{print $1}' | sort | uniq | wc -l)
IMPORT_COUNT=$(grep -c " -> " "$DEPENDENCY_MAP_FILE")
# Find highly coupled modules
HIGH_COUPLING=$(grep " -> " "$DEPENDENCY_MAP_FILE" | awk '{print $1}' | sort | uniq -c | sort -nr | head -10)
# Find modules with most incoming dependencies
HIGH_INCOMING=$(grep " -> " "$DEPENDENCY_MAP_FILE" | awk '{print $3}' | sort | uniq -c | sort -nr | head -10)

cat > "$DEPENDENCY_PROMPT_FILE" << EOL
# Dependency Graph Analysis Prompt

## Context
You are analyzing the dependency structure in a codebase with the following characteristics:
- ${FILE_COUNT} files across ${LANGUAGE_COUNT} languages/frameworks
- ${MODULE_COUNT} modules with dependencies
- ${IMPORT_COUNT} total import relationships
- Primary languages: ${PRIMARY_LANGUAGES}
- Environment: ${OPERATING_SYSTEM:-Unknown OS}, Python ${PYTHON_VERSION:-Unknown}, Node.js ${NODE_VERSION:-Unknown}
- Project summary: ${README_SUMMARY:-No project description available}

## Available Data
The dependency map shows how modules depend on each other. Here are some key metrics:

### Modules with most outgoing dependencies (highest coupling):
${HIGH_COUPLING}

### Modules with most incoming dependencies (highest dependency):
${HIGH_INCOMING}

### Sample dependencies:
$(grep " -> " "$DEPENDENCY_MAP_FILE" | head -20)

## Instructions
Please analyze the dependency structure of this codebase by:

1. **Identifying problematic dependency patterns**:
   - Modules with excessive coupling (too many dependencies)
   - Core modules that too many other modules depend on (high risk)
   - Potential circular dependencies or dependency chains
   - Architectural layering violations (if detectable)

2. **Evaluating the modularity of the system**:
   - Is the codebase well-modularized or tightly coupled?
   - Are there clear boundaries between subsystems?
   - Does the dependency structure reflect good architecture?
   - Are there signs of "spaghetti code" in the dependencies?

3. **Recommending improvements to the dependency structure**:
   - Which modules should be refactored to reduce coupling?
   - How could dependencies be better organized?
   - Are there opportunities to introduce abstractions/interfaces?
   - What architectural patterns might help improve the structure?

4. **Creating a dependency health assessment**:
   - Rate the overall health of the dependency structure
   - Identify the highest priority areas for improvement
   - Suggest metrics to track dependency health over time
   - Estimate the long-term maintainability based on dependencies

Please format your response as a structured dependency analysis report with clear sections, 
visualizations (described in text if needed), and specific, actionable recommendations.
EOL

# Generate a minimal dependency analysis file that points to the prompt
cat > "$DEPENDENCY_ANALYSIS_FILE" << EOL
# Dependency Analysis
# Generated on $(date)

This file contains basic dependency metrics. For a comprehensive analysis,
copy the contents of "$DEPENDENCY_PROMPT_FILE" and submit it to an LLM like Claude,
ChatGPT, or use it with Cursor's AI capabilities.

## Summary of Dependency Metrics
- Modules with dependencies: ${MODULE_COUNT}
- Import relationships: ${IMPORT_COUNT}
- Primary languages: ${PRIMARY_LANGUAGES}

For the dependency map, see: ${DEPENDENCY_MAP_FILE}
For LLM analysis prompt, see: ${DEPENDENCY_PROMPT_FILE}

To get a complete analysis, run:
cat ${DEPENDENCY_PROMPT_FILE} | pbcopy  # On macOS
# or
cat ${DEPENDENCY_PROMPT_FILE} | xclip -selection clipboard  # On Linux with xclip
# Then paste into your preferred LLM interface
EOL

# Update project_environment.txt with dependency analysis references
if [ -f "project_environment.txt" ]; then
  echo -e "\n# Dependency Analysis Information" >> project_environment.txt
  echo "DEPENDENCY_PROMPT_FILE=\"$DEPENDENCY_PROMPT_FILE\"" >> project_environment.txt
  echo "DEPENDENCY_ANALYSIS_FILE=\"$DEPENDENCY_ANALYSIS_FILE\"" >> project_environment.txt
  echo "MODULE_COUNT=\"$MODULE_COUNT\"" >> project_environment.txt
  echo "IMPORT_COUNT=\"$IMPORT_COUNT\"" >> project_environment.txt
fi

# Generate a meta-prompt to create custom analysis prompts
echo "Creating meta-prompt for generating custom analysis prompts..."
META_PROMPT_FILE="$PROMPTS_DIR/meta_prompt_generator.txt"

cat > "$META_PROMPT_FILE" << EOL
# Meta-Prompt: Generate Custom Codebase Analysis Prompts

## Context
You've been given information about a codebase with these characteristics:
- ${FILE_COUNT} files across ${LANGUAGE_COUNT} languages/frameworks
- Primary languages: ${PRIMARY_LANGUAGES}
- Environment: ${OPERATING_SYSTEM:-Unknown OS}, Python ${PYTHON_VERSION:-Unknown}, Node.js ${NODE_VERSION:-Unknown}
- Project summary: ${README_SUMMARY:-No project description available}
- Detected architectural patterns: $(grep "Found" "$SYSTEM_ARCHITECTURE_FILE" | head -5 | tr '\n' ', ' | sed 's/,$//')

## Task
Generate a specialized analysis prompt that will help developers understand and improve this codebase. The prompt should be tailored to this specific codebase's characteristics and the developer's goal.

## Developer's Goal
[REPLACE THIS WITH YOUR SPECIFIC GOAL, e.g., "Improve test coverage", "Refactor for better performance", "Prepare for cloud migration"]

## Instructions
1. Create a prompt that guides an LLM to analyze the codebase specifically for the stated goal
2. Include relevant context from the codebase metrics above
3. Structure the prompt with clear sections including:
   - Background information about the codebase
   - Specific questions to address about the goal
   - Instructions for formatting the response
4. Focus on systems thinking principles that consider the entire codebase, not just isolated components
5. Include specific metrics or artifacts the LLM should look for in its analysis

## Output
Provide the complete text of the new analysis prompt, ready to be saved to a file and used with an LLM.
EOL

echo "Meta-prompt generator created at $META_PROMPT_FILE"

# Create a README for the prompts directory
cat > "$PROMPTS_DIR/README.md" << EOL
# Analysis Prompts

This directory contains prompts for analyzing the codebase using LLMs:

- **system/**: Prompts related to overall system architecture
- **technical/**: Prompts for analyzing technical debt and code quality
- **dependency/**: Prompts for analyzing dependencies and module relationships
- **custom/**: Location for your custom analysis prompts

## Usage

1. Select a prompt relevant to your analysis needs
2. Copy its contents to your clipboard: \`cat prompts/technical/technical_debt_prompt.txt | pbcopy\`
3. Paste into an LLM like Claude or ChatGPT
4. Review the analysis and insights

## Creating Custom Prompts

Use the meta-prompt generator to create custom analysis prompts:
\`\`\`
cat prompts/meta_prompt_generator.txt | pbcopy
# Then paste into an LLM, replace the [GOAL] placeholder, and follow the instructions
\`\`\`

## Available Prompts

- **Meta-Prompt Generator**: Generate custom analysis prompts for specific goals
- **Technical Debt Analysis**: Analyze and prioritize technical debt in the codebase
- **Dependency Structure Analysis**: Evaluate modularity and identify problematic dependencies
- **System Architecture Analysis**: Understand overall system design and architecture
EOL

# Create .gitignore entry for the prompts directory
if [ -f ".gitignore" ]; then
    if ! grep -q "^prompts/" ".gitignore"; then
        echo "prompts/" >> ".gitignore"
        echo "Added prompts/ to .gitignore"
    fi
else
    echo "prompts/" > ".gitignore"
    echo "Created .gitignore with prompts/ entry"
fi

# Move LLM prompts to the system directory
LLM_PROMPT_FILE="$PROMPTS_DIR/system/llm_prompts.txt"

# 4. Vector Graph Generation (Modified to include system architecture insights)
echo "Generating vector relationship graph prompt..."
cat > "$LLM_PROMPT_FILE" << 'EOL'
# LLM Prompts for Codebase Analysis

## 1. Code Dependency Graph Generation
Generate a code dependency graph using the following data:
-   `'"$STRUCTURE_FILE"'`: Lists all files.
-   `'"$DEPENDENCY_MAP_FILE"'`: Shows dependencies between files.

## 2. Documentation Linking Analysis
Analyze documentation links using:
-   `'"$STRUCTURE_FILE"'`: Lists all files.
-   `'"$DOC_NODES_FILE"'`: Lists code elements (functions, classes).
-    `'"$USER_DOC_MAP_FILE"'`: Maps documentation to code elements.

## 3. System Architecture Analysis
Apply systems thinking to analyze the application architecture using:
-   `'"$STRUCTURE_FILE"'`: Lists all files
-   `'"$DEPENDENCY_MAP_FILE"'`: Shows dependencies between files
-   `'"$SYSTEM_ARCHITECTURE_FILE"'`: System components and patterns analysis
-   `'"$TECH_DEBT_DATA_FILE"'`: Technical debt indicators

### Task:
Analyze the codebase as a complete system, including:
1. Identify system boundaries and integration points
2. Detect feedback loops and circular dependencies
3. Identify potential bottlenecks and single points of failure
4. Assess emergent behavior that may arise from component interactions
5. Analyze technical debt impact on overall system health

### Output Format:
Provide a systems thinking analysis that includes:
```
{
  "system_boundaries": [
    {"name": "Frontend", "components": ["component1", "component2"]},
    {"name": "Backend", "components": ["component3", "component4"]},
    {"name": "Data Layer", "components": ["component5"]}
  ],
  "integration_points": [
    {"name": "API Gateway", "type": "external_boundary", "risk_level": "medium"},
    {"name": "Database Access", "type": "internal", "risk_level": "high"}
  ],
  "feedback_loops": [
    {"components": ["componentA", "componentB", "componentC"], "type": "circular_dependency", "impact": "high"}
  ],
  "bottlenecks": [
    {"component": "componentX", "reason": "High coupling with 15 other components", "impact": "critical"}
  ],
  "technical_debt_hotspots": [
    {"component": "legacy_module", "type": "obsolete_dependencies", "impact": "high", "remediation_cost": "medium"}
  ]
}
```

## 5. Technical Debt Analysis
For a detailed technical debt analysis, use the prompt in `'"$TECH_DEBT_PROMPT_FILE"'`. 
This prompt will guide you through:
1. Categorizing technical debt types
2. Identifying root causes
3. Assessing impact on the system
4. Creating a prioritized remediation plan
5. Mapping debt across the system

## 6. Dependency Structure Analysis
For a detailed analysis of the dependency structure, use the prompt in `'"$DEPENDENCY_PROMPT_FILE"'`.
This prompt will guide you through:
1. Identifying problematic dependency patterns
2. Evaluating system modularity
3. Recommending structural improvements
4. Creating a dependency health assessment
EOL

echo "Directory structure saved to $STRUCTURE_FILE."
echo "Code dependency graph data saved to $DEPENDENCY_MAP_FILE."
echo "Documentation nodes data saved to $DOC_NODES_FILE."
echo "User documentation mapping data saved to $USER_DOC_MAP_FILE."
echo "System architecture analysis saved to $SYSTEM_ARCHITECTURE_FILE."
echo "Technical debt data saved to $TECH_DEBT_DATA_FILE."
echo "Technical debt analysis prompt saved to $TECH_DEBT_PROMPT_FILE."
echo "Dependency analysis data saved to $DEPENDENCY_ANALYSIS_FILE."
echo "Dependency analysis prompt saved to $DEPENDENCY_PROMPT_FILE."
echo "README.md analysis saved to $README_CONTEXT_FILE."
echo "Meta-prompt generator saved to $META_PROMPT_FILE."
echo "Prompts directory created at $PROMPTS_DIR with README.md"
echo "LLM prompts saved to $LLM_PROMPT_FILE."

# Update project_environment.txt with analysis results
if [ -f "project_environment.txt" ]; then
  echo -e "\n# Codebase Analysis Results" >> project_environment.txt
  echo "FILE_COUNT=\"$FILE_COUNT\"" >> project_environment.txt
  echo "SYSTEM_ARCHITECTURE_FILE=\"$SYSTEM_ARCHITECTURE_FILE\"" >> project_environment.txt
  echo "TECHNICAL_DEBT_FILE=\"$TECHNICAL_DEBT_FILE\"" >> project_environment.txt
  echo "DEPENDENCY_MAP_FILE=\"$DEPENDENCY_MAP_FILE\"" >> project_environment.txt
  echo "README_CONTEXT_FILE=\"$README_CONTEXT_FILE\"" >> project_environment.txt
  echo "PROMPTS_DIR=\"$PROMPTS_DIR\"" >> project_environment.txt
  
  # README.md context
  if [ -n "$PROJECT_TITLE" ]; then
    echo "PROJECT_TITLE=\"$PROJECT_TITLE\"" >> project_environment.txt
  fi
  if [ -n "$README_SUMMARY" ]; then
    echo "PROJECT_DESCRIPTION=\"$README_SUMMARY\"" >> project_environment.txt
  fi
  
  # Count number of TODO/FIXME comments as a technical debt indicator
  TECH_DEBT_COUNT=$(grep -c "TODO\|FIXME\|HACK" "$TECHNICAL_DEBT_FILE")
  echo "TECHNICAL_DEBT_INDICATORS=\"$TECH_DEBT_COUNT\"" >> project_environment.txt
  
  echo "Updated project_environment.txt with codebase analysis results."
fi

echo "✅ Codebase analysis complete!"
echo "📊 To use the analysis prompts with an LLM, see $PROMPTS_DIR/README.md"

```

--------------------------------------------------------------------------------
/error_logs.txt:
--------------------------------------------------------------------------------

```
============================= test session starts ==============================
platform darwin -- Python 3.13.2, pytest-8.3.5, pluggy-1.5.0 -- /Users/tosinakinosho/workspaces/mcp-codebase-insight/.venv/bin/python3.13
cachedir: .pytest_cache
rootdir: /Users/tosinakinosho/workspaces/mcp-codebase-insight
configfile: pytest.ini
testpaths: tests
plugins: cov-6.0.0, anyio-4.9.0, asyncio-0.26.0
asyncio: mode=Mode.STRICT, asyncio_default_fixture_loop_scope=session, asyncio_default_test_loop_scope=function
collecting ... collected 97 items

tests/components/test_core_components.py::test_adr_manager FAILED        [  1%]
tests/components/test_core_components.py::test_knowledge_base PASSED     [  2%]
tests/components/test_core_components.py::test_task_manager PASSED       [  3%]
tests/components/test_core_components.py::test_metrics_manager PASSED    [  4%]
tests/components/test_core_components.py::test_health_manager PASSED     [  5%]
tests/components/test_core_components.py::test_cache_manager PASSED      [  6%]
tests/components/test_core_components.py::test_documentation_manager PASSED [  7%]
tests/components/test_core_components.py::test_debug_system PASSED       [  8%]
tests/components/test_embeddings.py::test_embedder_initialization PASSED [  9%]
tests/components/test_embeddings.py::test_embedder_embedding PASSED      [ 10%]
tests/components/test_knowledge_base.py::test_knowledge_base_initialization PASSED [ 11%]
tests/components/test_knowledge_base.py::test_add_and_get_pattern PASSED [ 12%]
tests/components/test_knowledge_base.py::test_find_similar_patterns PASSED [ 13%]
tests/components/test_knowledge_base.py::test_update_pattern PASSED      [ 14%]
tests/components/test_sse_components.py::test_mcp_server_initialization PASSED [ 15%]
tests/components/test_sse_components.py::test_register_tools PASSED      [ 16%]
tests/components/test_sse_components.py::test_get_starlette_app FAILED   [ 17%]
tests/components/test_sse_components.py::test_create_sse_server FAILED   [ 18%]
tests/components/test_sse_components.py::test_vector_search_tool FAILED  [ 19%]
tests/components/test_sse_components.py::test_knowledge_search_tool FAILED [ 20%]
tests/components/test_sse_components.py::test_adr_list_tool FAILED       [ 21%]
tests/components/test_sse_components.py::test_task_status_tool FAILED    [ 22%]
tests/components/test_sse_components.py::test_sse_handle_connect FAILED  [ 23%]
tests/components/test_sse_components.py::test_sse_backpressure_handling PASSED [ 24%]
tests/components/test_sse_components.py::test_sse_connection_management PASSED [ 25%]
tests/components/test_sse_components.py::test_sse_keep_alive PASSED      [ 26%]
tests/components/test_sse_components.py::test_sse_error_handling PASSED  [ 27%]
tests/components/test_stdio_components.py::test_stdio_tool_registration SKIPPED [ 28%]
tests/components/test_stdio_components.py::test_stdio_message_streaming SKIPPED [ 29%]
tests/components/test_stdio_components.py::test_stdio_error_handling SKIPPED [ 30%]
tests/components/test_stdio_components.py::test_stdio_message_ordering SKIPPED [ 31%]
tests/components/test_stdio_components.py::test_stdio_large_message_handling SKIPPED [ 32%]
tests/components/test_task_manager.py::test_task_manager_initialization FAILED [ 34%]
tests/components/test_task_manager.py::test_create_and_get_task FAILED   [ 35%]

=================================== FAILURES ===================================
_______________________________ test_adr_manager _______________________________

test_config = ServerConfig(host='localhost', port=8000, log_level='DEBUG', qdrant_url='http://localhost:6333', qdrant_api_key=None, ...cache_dir=PosixPath('.test_cache/cache'), _state={'initialized': False, 'components': {}, 'metrics': {}, 'errors': []})
test_adr = {'consequences': 'Testing will be successful', 'context': 'This is a test ADR for testing', 'decision': 'We decided to...ion ready'], 'description': 'A test option for the ADR.', 'pros': ['Easy to implement'], 'title': 'Test Option'}], ...}

    @pytest.mark.asyncio
    async def test_adr_manager(test_config: ServerConfig, test_adr: dict):
        """Test ADR manager functions."""
        manager = ADRManager(test_config)
    
        # Test creation
>       adr = await manager.create_adr(
            title=test_adr["title"],
            context=test_adr["context"],
            options=test_adr["options"],
            decision=test_adr["decision"]
        )

tests/components/test_core_components.py:31: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <src.mcp_codebase_insight.core.adr.ADRManager object at 0x14793c050>
title = 'Test ADR', context = 'This is a test ADR for testing'
options = [{'cons': ['Not production ready'], 'description': 'A test option for the ADR.', 'pros': ['Easy to implement'], 'title': 'Test Option'}]
decision = 'We decided to test the ADR system', consequences = None

    async def create_adr(
        self,
        title: str,
        context: dict,
        options: List[dict],
        decision: str,
        consequences: Optional[Dict[str, List[str]]] = None
    ) -> ADR:
        """Create a new ADR."""
        adr_id = uuid4()
        now = datetime.utcnow()
    
        # Convert context dict to ADRContext
        adr_context = ADRContext(
>           problem=context["problem"],
            constraints=context["constraints"],
            assumptions=context.get("assumptions"),
            background=context.get("background")
        )
E       TypeError: string indices must be integers, not 'str'

src/mcp_codebase_insight/core/adr.py:150: TypeError
---------------------------- Captured stdout setup -----------------------------
Creating session-scoped event loop for process 8089
------------------------------ Captured log setup ------------------------------
INFO     conftest:conftest.py:49 Creating session-scoped event loop for process 8089
____________________________ test_get_starlette_app ____________________________

mock_create_sse = <MagicMock name='create_sse_server' id='6027601504'>
mcp_server = <src.mcp_codebase_insight.core.sse.MCP_CodebaseInsightServer object at 0x16763ee90>

    @patch('mcp_codebase_insight.core.sse.create_sse_server')
    async def test_get_starlette_app(mock_create_sse, mcp_server):
        """Test getting the Starlette app for the MCP server."""
        # Set up the mock
        mock_app = MagicMock()
        mock_create_sse.return_value = mock_app
    
        # Get the Starlette app
        app = mcp_server.get_starlette_app()
    
        # Verify tools were registered
        assert mcp_server.tools_registered is True
    
        # Verify create_sse_server was called with the MCP server
>       mock_create_sse.assert_called_once_with(mcp_server.mcp_server)

tests/components/test_sse_components.py:175: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <MagicMock name='create_sse_server' id='6027601504'>
args = (<mcp.server.fastmcp.server.FastMCP object at 0x16763d310>,), kwargs = {}
msg = "Expected 'create_sse_server' to be called once. Called 0 times."

    def assert_called_once_with(self, /, *args, **kwargs):
        """assert that the mock was called exactly once and that that call was
        with the specified arguments."""
        if not self.call_count == 1:
            msg = ("Expected '%s' to be called once. Called %s times.%s"
                   % (self._mock_name or 'mock',
                      self.call_count,
                      self._calls_repr()))
>           raise AssertionError(msg)
E           AssertionError: Expected 'create_sse_server' to be called once. Called 0 times.

/opt/homebrew/Cellar/[email protected]/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/mock.py:988: AssertionError
---------------------------- Captured stdout setup -----------------------------
{"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.416925Z"}
------------------------------ Captured log setup ------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.416925Z"}
----------------------------- Captured stdout call -----------------------------
{"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.421638Z"}
{"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.421754Z"}
{"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.421801Z"}
{"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.426367Z"}
{"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.426490Z"}
{"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.427035Z"}
{"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.427173Z"}
{"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.427221Z"}
{"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.427268Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.421638Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.421754Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.421801Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.426367Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.426490Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.427035Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.427173Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.427221Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.427268Z"}
____________________________ test_create_sse_server ____________________________

mock_starlette = <MagicMock name='Starlette' id='6027603184'>
mock_transport = <MagicMock name='SseServerTransport' id='6027604192'>

    @patch('mcp_codebase_insight.core.sse.SseServerTransport')
    @patch('mcp_codebase_insight.core.sse.Starlette')
    async def test_create_sse_server(mock_starlette, mock_transport):
        """Test creating the SSE server."""
        # Set up mocks
        mock_mcp = MagicMock(spec=FastMCP)
        mock_transport_instance = MagicMock()
        mock_transport.return_value = mock_transport_instance
        mock_app = MagicMock()
        mock_starlette.return_value = mock_app
    
        # Create the SSE server
        app = create_sse_server(mock_mcp)
    
        # Verify SseServerTransport was initialized correctly
>       mock_transport.assert_called_once_with("/messages/")

tests/components/test_sse_components.py:196: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <MagicMock name='SseServerTransport' id='6027604192'>
args = ('/messages/',), kwargs = {}
msg = "Expected 'SseServerTransport' to be called once. Called 0 times."

    def assert_called_once_with(self, /, *args, **kwargs):
        """assert that the mock was called exactly once and that that call was
        with the specified arguments."""
        if not self.call_count == 1:
            msg = ("Expected '%s' to be called once. Called %s times.%s"
                   % (self._mock_name or 'mock',
                      self.call_count,
                      self._calls_repr()))
>           raise AssertionError(msg)
E           AssertionError: Expected 'SseServerTransport' to be called once. Called 0 times.

/opt/homebrew/Cellar/[email protected]/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/unittest/mock.py:988: AssertionError
----------------------------- Captured stdout call -----------------------------
{"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463132Z"}
{"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463323Z"}
{"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463437Z"}
{"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463486Z"}
{"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463527Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463132Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463323Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463437Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463486Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.463527Z"}
___________________________ test_vector_search_tool ____________________________

mcp_server = <src.mcp_codebase_insight.core.sse.MCP_CodebaseInsightServer object at 0x1676368b0>

    async def test_vector_search_tool(mcp_server):
        """Test the vector search tool."""
        # Make sure tools are registered
        if not mcp_server.tools_registered:
            mcp_server.register_tools()
    
        # Mock the FastMCP add_tool method to capture calls
        with patch.object(mcp_server.mcp_server, 'add_tool') as mock_add_tool:
            # Re-register the vector search tool
            mcp_server._register_vector_search()
    
            # Verify tool was registered with correct parameters
            mock_add_tool.assert_called_once()
            args, kwargs = mock_add_tool.call_args
>           assert args[0] in ("vector-search", "search-vector", "vector_search")  # Accept possible variants
E           IndexError: tuple index out of range

tests/components/test_sse_components.py:219: IndexError
---------------------------- Captured stdout setup -----------------------------
{"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.501717Z"}
------------------------------ Captured log setup ------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.501717Z"}
----------------------------- Captured stdout call -----------------------------
{"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.502070Z"}
{"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.502127Z"}
{"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.502166Z"}
{"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.504726Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.502070Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.502127Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.502166Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.504726Z"}
__________________________ test_knowledge_search_tool __________________________

mcp_server = <src.mcp_codebase_insight.core.sse.MCP_CodebaseInsightServer object at 0x167634640>

    async def test_knowledge_search_tool(mcp_server):
        """Test the knowledge search tool."""
        # Make sure tools are registered
        if not mcp_server.tools_registered:
            mcp_server.register_tools()
    
        # Mock the FastMCP add_tool method to capture calls
        with patch.object(mcp_server.mcp_server, 'add_tool') as mock_add_tool:
            # Re-register the knowledge search tool
            mcp_server._register_knowledge()
    
            # Verify tool was registered with correct parameters
            mock_add_tool.assert_called_once()
            args = mock_add_tool.call_args[0]
>           assert args[0] == "search-knowledge"  # Tool name
E           IndexError: tuple index out of range

tests/components/test_sse_components.py:239: IndexError
---------------------------- Captured stdout setup -----------------------------
{"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.510921Z"}
------------------------------ Captured log setup ------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.510921Z"}
----------------------------- Captured stdout call -----------------------------
{"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.511246Z"}
{"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.511300Z"}
{"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.511339Z"}
{"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.513969Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.511246Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.511300Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.511339Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.513969Z"}
______________________________ test_adr_list_tool ______________________________

mcp_server = <src.mcp_codebase_insight.core.sse.MCP_CodebaseInsightServer object at 0x16760f1d0>

    async def test_adr_list_tool(mcp_server):
        """Test the ADR list tool."""
        # Make sure tools are registered
        if not mcp_server.tools_registered:
            mcp_server.register_tools()
    
        # Mock the FastMCP add_tool method to capture calls
        with patch.object(mcp_server.mcp_server, 'add_tool') as mock_add_tool:
            # Re-register the ADR list tool
            mcp_server._register_adr()
    
            # Verify tool was registered with correct parameters
            mock_add_tool.assert_called_once()
            args = mock_add_tool.call_args[0]
>           assert args[0] == "list-adrs"  # Tool name
E           IndexError: tuple index out of range

tests/components/test_sse_components.py:258: IndexError
---------------------------- Captured stdout setup -----------------------------
{"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.520244Z"}
------------------------------ Captured log setup ------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.520244Z"}
----------------------------- Captured stdout call -----------------------------
{"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.520568Z"}
{"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.520642Z"}
{"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.520687Z"}
{"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.523206Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.520568Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.520642Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.520687Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.523206Z"}
____________________________ test_task_status_tool _____________________________

mcp_server = <src.mcp_codebase_insight.core.sse.MCP_CodebaseInsightServer object at 0x167427350>

    async def test_task_status_tool(mcp_server):
        """Test the task status tool."""
        # Make sure tools are registered
        if not mcp_server.tools_registered:
            mcp_server.register_tools()
    
        # Mock the FastMCP add_tool method to capture calls
        with patch.object(mcp_server.mcp_server, 'add_tool') as mock_add_tool:
            # Re-register the task status tool
            mcp_server._register_task()
    
            # Verify tool was registered with correct parameters
            mock_add_tool.assert_called_once()
            args = mock_add_tool.call_args[0]
>           assert args[0] == "get-task-status"  # Tool name
E           IndexError: tuple index out of range

tests/components/test_sse_components.py:277: IndexError
---------------------------- Captured stdout setup -----------------------------
{"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.529946Z"}
------------------------------ Captured log setup ------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP Codebase Insight server initialized", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.529946Z"}
----------------------------- Captured stdout call -----------------------------
{"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.530262Z"}
{"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.530316Z"}
{"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.530356Z"}
{"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.533000Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Registering tools with MCP server", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.530262Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Some critical dependencies are not available: task_manager", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.530316Z"}
WARNING  src.mcp_codebase_insight.core.sse:logger.py:75 {"event": "Tools requiring these dependencies will not be registered", "logger": "src.mcp_codebase_insight.core.sse", "level": "warning", "timestamp": "2025-04-18T06:19:06.530356Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "MCP tools registration completed", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.533000Z"}
___________________________ test_sse_handle_connect ____________________________

mock_starlette = <MagicMock name='Starlette' id='6027603856'>
mock_transport = <MagicMock name='SseServerTransport' id='6027607216'>

    @patch('mcp_codebase_insight.core.sse.SseServerTransport')
    @patch('mcp_codebase_insight.core.sse.Starlette')
    async def test_sse_handle_connect(mock_starlette, mock_transport):
        """Test the SSE connection handling functionality."""
        # Set up mocks
        mock_transport_instance = MagicMock()
        mock_transport.return_value = mock_transport_instance
    
        mock_mcp = MagicMock(spec=FastMCP)
        # For MCP v1.5.0, create a mock run method instead of initialization options
        mock_mcp.run = AsyncMock()
    
        mock_request = MagicMock()
        mock_request.client = "127.0.0.1"
        mock_request.scope = {"type": "http"}
    
        # Mock the transport's connect_sse method
        mock_streams = (AsyncMock(), AsyncMock())
        mock_cm = MagicMock()
        mock_cm.__aenter__ = AsyncMock(return_value=mock_streams)
        mock_cm.__aexit__ = AsyncMock()
        mock_transport_instance.connect_sse.return_value = mock_cm
    
        # Create a mock handler and add it to our mock app instance
        handle_sse = AsyncMock()
        mock_app = MagicMock()
        mock_starlette.return_value = mock_app
    
        # Set up a mock route that we can access
        mock_route = MagicMock()
        mock_route.path = "/sse/"
        mock_route.endpoint = handle_sse
        mock_app.routes = [mock_route]
    
        # Create the SSE server
        app = create_sse_server(mock_mcp)
    
        # Extract the actual handler from the route configuration
>       routes_kwarg = mock_starlette.call_args.kwargs.get('routes', [])
E       AttributeError: 'NoneType' object has no attribute 'kwargs'

tests/components/test_sse_components.py:320: AttributeError
----------------------------- Captured stdout call -----------------------------
{"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.543689Z"}
{"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.543845Z"}
{"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.543945Z"}
{"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.543987Z"}
{"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.544024Z"}
------------------------------ Captured log call -------------------------------
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Initializing SSE transport with endpoint: /sse", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.543689Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Created SSE server with routes:", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.543845Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /health, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.543945Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /sse, methods: {'HEAD', 'GET'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.543987Z"}
INFO     src.mcp_codebase_insight.core.sse:logger.py:68 {"event": "Route: /message, methods: {'POST'}", "logger": "src.mcp_codebase_insight.core.sse", "level": "info", "timestamp": "2025-04-18T06:19:06.544024Z"}
_______________________ test_task_manager_initialization _______________________

task_manager = <async_generator object task_manager at 0x1675fac20>

    @pytest.mark.asyncio
    async def test_task_manager_initialization(task_manager: TaskManager):
        """Test that task manager initializes correctly."""
        assert task_manager is not None
>       assert task_manager.config is not None
E       AttributeError: 'async_generator' object has no attribute 'config'

tests/components/test_task_manager.py:25: AttributeError
___________________________ test_create_and_get_task ___________________________

task_manager = <async_generator object task_manager at 0x113b71be0>
test_code = '\ndef example_function():\n    """This is a test function for task manager tests."""\n    return "Hello, world!"\n\nc...Class:\n    def __init__(self):\n        self.value = 42\n        \n    def method(self):\n        return self.value\n'

    @pytest.mark.asyncio
    async def test_create_and_get_task(task_manager: TaskManager, test_code: str):
        """Test creating and retrieving tasks."""
        # Create task
>       task = await task_manager.create_task(
            type="code_analysis",
            title="Test task",
            description="Test task description",
            context={"code": test_code}
        )
E       AttributeError: 'async_generator' object has no attribute 'create_task'

tests/components/test_task_manager.py:31: AttributeError
--------------------------- Captured stdout teardown ---------------------------
Cleaning up test collection: test_collection_d3b69ea7
HTTP Request: DELETE http://localhost:6333/collections/test_collection_d3b69ea7 "HTTP/1.1 200 OK"
Found 0 server states at end of session
---------------------------- Captured log teardown -----------------------------
INFO     conftest:conftest.py:169 Cleaning up test collection: test_collection_d3b69ea7
INFO     httpx:_client.py:1025 HTTP Request: DELETE http://localhost:6333/collections/test_collection_d3b69ea7 "HTTP/1.1 200 OK"
INFO     conftest:conftest.py:525 Found 0 server states at end of session

---------- coverage: platform darwin, python 3.13.2-final-0 ----------
Name                                                Stmts   Miss Branch BrPart  Cover   Missing
-----------------------------------------------------------------------------------------------
src/mcp_codebase_insight/__init__.py                    3      0      0      0   100%
src/mcp_codebase_insight/__main__.py                   28     28      0      0     0%   3-76
src/mcp_codebase_insight/asgi.py                        5      5      0      0     0%   3-11
src/mcp_codebase_insight/core/__init__.py               2      0      0      0   100%
src/mcp_codebase_insight/core/adr.py                  127     71     26      0    37%   75-111, 118-134, 157-180, 184-190, 200-213, 220-227, 231-233
src/mcp_codebase_insight/core/cache.py                168     42     68     26    68%   33, 36, 42->exit, 70-71, 77-78, 90, 97->exit, 102-103, 109, 124-125, 142-143, 160-161, 167-169, 173-176, 181, 187, 193, 199, 205, 217, 220, 225, 228->exit, 234, 236->238, 238->exit, 243-249, 254, 258, 261->265, 265->270, 267-268, 274
src/mcp_codebase_insight/core/component_status.py       8      0      0      0   100%
src/mcp_codebase_insight/core/config.py                63     23     14      4    60%   38, 44-45, 47-51, 64-67, 91-105, 109, 117, 121-122
src/mcp_codebase_insight/core/debug.py                122     69     34      0    34%   58-78, 82-97, 122-128, 138-153, 161-168, 172-205
src/mcp_codebase_insight/core/di.py                    99     62     14      0    33%   40, 53-76, 80-82, 86-97, 101-106, 110-112, 116-120, 124-132, 136-144, 148-156, 160-169
src/mcp_codebase_insight/core/documentation.py        165    111     52      1    25%   53-77, 84-100, 134, 150-167, 175-189, 201-214, 228-316
src/mcp_codebase_insight/core/embeddings.py            77     28     18      3    61%   29->exit, 48-58, 79-83, 88, 104-106, 114-128, 132
src/mcp_codebase_insight/core/errors.py                96     27      2      0    70%   55-58, 62, 77, 88, 99, 110, 121, 132, 143, 154, 165, 176, 187, 198, 209, 220, 231, 242, 253, 264, 275, 279-282
src/mcp_codebase_insight/core/health.py               140     58     26      8    54%   52-71, 75-98, 111, 113, 128, 146, 156-162, 168->178, 170-171, 180-181, 190-191, 215-216, 232-233, 235-236, 259-260, 262-263
src/mcp_codebase_insight/core/knowledge.py            253    100     74     25    55%   95, 105->109, 114, 119-124, 129->exit, 131-138, 143->exit, 145-151, 155, 167, 170->175, 172-173, 208->223, 230, 250, 252->254, 254->256, 257, 258->260, 261, 263, 265, 270->285, 298, 303, 305, 307, 320->318, 335-351, 361-379, 404-421, 432-445, 457-470, 479-488, 496-503, 507-514, 518-524
src/mcp_codebase_insight/core/metrics.py              108     41     38     11    58%   43, 47, 58-59, 62-65, 70, 74, 80-83, 89-100, 111, 122, 127-128, 138, 145, 151, 153, 165-183
src/mcp_codebase_insight/core/prompts.py               72     72     16      0     0%   3-262
src/mcp_codebase_insight/core/sse.py                  220    116     40      9    46%   29-37, 62-108, 130-141, 153-154, 162, 171-178, 186-188, 202-207, 239, 280-285, 293, 302-303, 315->321, 330-331, 338-339, 343-344, 349-380, 393-394, 398-419, 432-433, 437-458, 471-472, 476-483, 502->504
src/mcp_codebase_insight/core/state.py                168    120     54      0    22%   48-53, 63-77, 84-93, 97-98, 102, 106-144, 148, 161-162, 167, 171, 175, 179, 183-335
src/mcp_codebase_insight/core/task_tracker.py          48     28     12      0    33%   29-37, 45-52, 60-78, 86, 94, 102, 106-107
src/mcp_codebase_insight/core/tasks.py                259    172     74      1    26%   89-113, 117-134, 138-140, 144-162, 203, 217-233, 237-245, 254-264, 268-318, 323-341, 349-357, 363-377, 384-397, 404-415, 422-432, 439-462
src/mcp_codebase_insight/core/vector_store.py         177     73     26      5    58%   62->67, 78->93, 84-90, 99-100, 119-122, 127-129, 145-146, 158-159, 164-165, 170-184, 200-201, 233-235, 264-266, 270, 290, 327-393, 411
src/mcp_codebase_insight/models.py                     18      0      0      0   100%
src/mcp_codebase_insight/server.py                    630    536    128      0    12%   55-109, 121-138, 142-1491, 1549-1550, 1554-1561, 1585-1590, 1595, 1599-1616, 1620-1622, 1626, 1638-1664, 1668-1688
src/mcp_codebase_insight/server_test_isolation.py      48     38     18      0    15%   31-39, 44-99
src/mcp_codebase_insight/utils/__init__.py              2      0      0      0   100%
src/mcp_codebase_insight/utils/logger.py               29      5      0      0    83%   52-53, 82, 89, 97
src/mcp_codebase_insight/version.py                    14     14      2      0     0%   3-22
-----------------------------------------------------------------------------------------------
TOTAL                                                3149   1839    736     93    37%

=========================== short test summary info ============================
FAILED tests/components/test_core_components.py::test_adr_manager - TypeError: string indices must be integers, not 'str'
FAILED tests/components/test_sse_components.py::test_get_starlette_app - AssertionError: Expected 'create_sse_server' to be called once. Called 0 times.
FAILED tests/components/test_sse_components.py::test_create_sse_server - AssertionError: Expected 'SseServerTransport' to be called once. Called 0 times.
FAILED tests/components/test_sse_components.py::test_vector_search_tool - IndexError: tuple index out of range
FAILED tests/components/test_sse_components.py::test_knowledge_search_tool - IndexError: tuple index out of range
FAILED tests/components/test_sse_components.py::test_adr_list_tool - IndexError: tuple index out of range
FAILED tests/components/test_sse_components.py::test_task_status_tool - IndexError: tuple index out of range
FAILED tests/components/test_sse_components.py::test_sse_handle_connect - AttributeError: 'NoneType' object has no attribute 'kwargs'
FAILED tests/components/test_task_manager.py::test_task_manager_initialization - AttributeError: 'async_generator' object has no attribute 'config'
FAILED tests/components/test_task_manager.py::test_create_and_get_task - AttributeError: 'async_generator' object has no attribute 'create_task'
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 10 failures !!!!!!!!!!!!!!!!!!!!!!!!!!
============ 10 failed, 19 passed, 5 skipped, 35 warnings in 9.24s =============

```
Page 5/7FirstPrevNextLast