This is page 2 of 11. Use http://codebase.md/oraios/serena?lines=false&page={x} to view the full context.

# Directory Structure

```
├── .devcontainer
│   └── devcontainer.json
├── .dockerignore
├── .env.example
├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── config.yml
│   │   ├── feature_request.md
│   │   └── issue--bug--performance-problem--question-.md
│   └── workflows
│       ├── codespell.yml
│       ├── docker.yml
│       ├── junie.yml
│       ├── lint_and_docs.yaml
│       ├── publish.yml
│       └── pytest.yml
├── .gitignore
├── .serena
│   ├── memories
│   │   ├── adding_new_language_support_guide.md
│   │   ├── serena_core_concepts_and_architecture.md
│   │   ├── serena_repository_structure.md
│   │   └── suggested_commands.md
│   └── project.yml
├── .vscode
│   └── settings.json
├── CHANGELOG.md
├── CLAUDE.md
├── compose.yaml
├── CONTRIBUTING.md
├── docker_build_and_run.sh
├── DOCKER.md
├── Dockerfile
├── docs
│   ├── custom_agent.md
│   └── serena_on_chatgpt.md
├── flake.lock
├── flake.nix
├── lessons_learned.md
├── LICENSE
├── llms-install.md
├── public
│   └── .gitignore
├── pyproject.toml
├── README.md
├── resources
│   ├── serena-icons.cdr
│   ├── serena-logo-dark-mode.svg
│   ├── serena-logo.cdr
│   ├── serena-logo.svg
│   └── vscode_sponsor_logo.png
├── roadmap.md
├── scripts
│   ├── agno_agent.py
│   ├── demo_run_tools.py
│   ├── gen_prompt_factory.py
│   ├── mcp_server.py
│   ├── print_mode_context_options.py
│   └── print_tool_overview.py
├── src
│   ├── interprompt
│   │   ├── __init__.py
│   │   ├── .syncCommitId.remote
│   │   ├── .syncCommitId.this
│   │   ├── jinja_template.py
│   │   ├── multilang_prompt.py
│   │   ├── prompt_factory.py
│   │   └── util
│   │       ├── __init__.py
│   │       └── class_decorators.py
│   ├── README.md
│   ├── serena
│   │   ├── __init__.py
│   │   ├── agent.py
│   │   ├── agno.py
│   │   ├── analytics.py
│   │   ├── cli.py
│   │   ├── code_editor.py
│   │   ├── config
│   │   │   ├── __init__.py
│   │   │   ├── context_mode.py
│   │   │   └── serena_config.py
│   │   ├── constants.py
│   │   ├── dashboard.py
│   │   ├── generated
│   │   │   └── generated_prompt_factory.py
│   │   ├── gui_log_viewer.py
│   │   ├── mcp.py
│   │   ├── project.py
│   │   ├── prompt_factory.py
│   │   ├── resources
│   │   │   ├── config
│   │   │   │   ├── contexts
│   │   │   │   │   ├── agent.yml
│   │   │   │   │   ├── chatgpt.yml
│   │   │   │   │   ├── codex.yml
│   │   │   │   │   ├── context.template.yml
│   │   │   │   │   ├── desktop-app.yml
│   │   │   │   │   ├── ide-assistant.yml
│   │   │   │   │   └── oaicompat-agent.yml
│   │   │   │   ├── internal_modes
│   │   │   │   │   └── jetbrains.yml
│   │   │   │   ├── modes
│   │   │   │   │   ├── editing.yml
│   │   │   │   │   ├── interactive.yml
│   │   │   │   │   ├── mode.template.yml
│   │   │   │   │   ├── no-onboarding.yml
│   │   │   │   │   ├── onboarding.yml
│   │   │   │   │   ├── one-shot.yml
│   │   │   │   │   └── planning.yml
│   │   │   │   └── prompt_templates
│   │   │   │       ├── simple_tool_outputs.yml
│   │   │   │       └── system_prompt.yml
│   │   │   ├── dashboard
│   │   │   │   ├── dashboard.js
│   │   │   │   ├── index.html
│   │   │   │   ├── jquery.min.js
│   │   │   │   ├── serena-icon-16.png
│   │   │   │   ├── serena-icon-32.png
│   │   │   │   ├── serena-icon-48.png
│   │   │   │   ├── serena-logs-dark-mode.png
│   │   │   │   └── serena-logs.png
│   │   │   ├── project.template.yml
│   │   │   └── serena_config.template.yml
│   │   ├── symbol.py
│   │   ├── text_utils.py
│   │   ├── tools
│   │   │   ├── __init__.py
│   │   │   ├── cmd_tools.py
│   │   │   ├── config_tools.py
│   │   │   ├── file_tools.py
│   │   │   ├── jetbrains_plugin_client.py
│   │   │   ├── jetbrains_tools.py
│   │   │   ├── memory_tools.py
│   │   │   ├── symbol_tools.py
│   │   │   ├── tools_base.py
│   │   │   └── workflow_tools.py
│   │   └── util
│   │       ├── class_decorators.py
│   │       ├── exception.py
│   │       ├── file_system.py
│   │       ├── general.py
│   │       ├── git.py
│   │       ├── inspection.py
│   │       ├── logging.py
│   │       ├── shell.py
│   │       └── thread.py
│   └── solidlsp
│       ├── __init__.py
│       ├── .gitignore
│       ├── language_servers
│       │   ├── al_language_server.py
│       │   ├── bash_language_server.py
│       │   ├── clangd_language_server.py
│       │   ├── clojure_lsp.py
│       │   ├── common.py
│       │   ├── csharp_language_server.py
│       │   ├── dart_language_server.py
│       │   ├── eclipse_jdtls.py
│       │   ├── elixir_tools
│       │   │   ├── __init__.py
│       │   │   ├── elixir_tools.py
│       │   │   └── README.md
│       │   ├── erlang_language_server.py
│       │   ├── gopls.py
│       │   ├── intelephense.py
│       │   ├── jedi_server.py
│       │   ├── kotlin_language_server.py
│       │   ├── lua_ls.py
│       │   ├── marksman.py
│       │   ├── nixd_ls.py
│       │   ├── omnisharp
│       │   │   ├── initialize_params.json
│       │   │   ├── runtime_dependencies.json
│       │   │   └── workspace_did_change_configuration.json
│       │   ├── omnisharp.py
│       │   ├── perl_language_server.py
│       │   ├── pyright_server.py
│       │   ├── r_language_server.py
│       │   ├── ruby_lsp.py
│       │   ├── rust_analyzer.py
│       │   ├── solargraph.py
│       │   ├── sourcekit_lsp.py
│       │   ├── terraform_ls.py
│       │   ├── typescript_language_server.py
│       │   ├── vts_language_server.py
│       │   └── zls.py
│       ├── ls_config.py
│       ├── ls_exceptions.py
│       ├── ls_handler.py
│       ├── ls_logger.py
│       ├── ls_request.py
│       ├── ls_types.py
│       ├── ls_utils.py
│       ├── ls.py
│       ├── lsp_protocol_handler
│       │   ├── lsp_constants.py
│       │   ├── lsp_requests.py
│       │   ├── lsp_types.py
│       │   └── server.py
│       ├── settings.py
│       └── util
│           ├── subprocess_util.py
│           └── zip.py
├── test
│   ├── __init__.py
│   ├── conftest.py
│   ├── resources
│   │   └── repos
│   │       ├── al
│   │       │   └── test_repo
│   │       │       ├── app.json
│   │       │       └── src
│   │       │           ├── Codeunits
│   │       │           │   ├── CustomerMgt.Codeunit.al
│   │       │           │   └── PaymentProcessorImpl.Codeunit.al
│   │       │           ├── Enums
│   │       │           │   └── CustomerType.Enum.al
│   │       │           ├── Interfaces
│   │       │           │   └── IPaymentProcessor.Interface.al
│   │       │           ├── Pages
│   │       │           │   ├── CustomerCard.Page.al
│   │       │           │   └── CustomerList.Page.al
│   │       │           ├── TableExtensions
│   │       │           │   └── Item.TableExt.al
│   │       │           └── Tables
│   │       │               └── Customer.Table.al
│   │       ├── bash
│   │       │   └── test_repo
│   │       │       ├── config.sh
│   │       │       ├── main.sh
│   │       │       └── utils.sh
│   │       ├── clojure
│   │       │   └── test_repo
│   │       │       ├── deps.edn
│   │       │       └── src
│   │       │           └── test_app
│   │       │               ├── core.clj
│   │       │               └── utils.clj
│   │       ├── csharp
│   │       │   └── test_repo
│   │       │       ├── .gitignore
│   │       │       ├── Models
│   │       │       │   └── Person.cs
│   │       │       ├── Program.cs
│   │       │       ├── serena.sln
│   │       │       └── TestProject.csproj
│   │       ├── dart
│   │       │   └── test_repo
│   │       │       ├── .gitignore
│   │       │       ├── lib
│   │       │       │   ├── helper.dart
│   │       │       │   ├── main.dart
│   │       │       │   └── models.dart
│   │       │       └── pubspec.yaml
│   │       ├── elixir
│   │       │   └── test_repo
│   │       │       ├── .gitignore
│   │       │       ├── lib
│   │       │       │   ├── examples.ex
│   │       │       │   ├── ignored_dir
│   │       │       │   │   └── ignored_module.ex
│   │       │       │   ├── models.ex
│   │       │       │   ├── services.ex
│   │       │       │   ├── test_repo.ex
│   │       │       │   └── utils.ex
│   │       │       ├── mix.exs
│   │       │       ├── mix.lock
│   │       │       ├── scripts
│   │       │       │   └── build_script.ex
│   │       │       └── test
│   │       │           ├── models_test.exs
│   │       │           └── test_repo_test.exs
│   │       ├── erlang
│   │       │   └── test_repo
│   │       │       ├── hello.erl
│   │       │       ├── ignored_dir
│   │       │       │   └── ignored_module.erl
│   │       │       ├── include
│   │       │       │   ├── records.hrl
│   │       │       │   └── types.hrl
│   │       │       ├── math_utils.erl
│   │       │       ├── rebar.config
│   │       │       ├── src
│   │       │       │   ├── app.erl
│   │       │       │   ├── models.erl
│   │       │       │   ├── services.erl
│   │       │       │   └── utils.erl
│   │       │       └── test
│   │       │           ├── models_tests.erl
│   │       │           └── utils_tests.erl
│   │       ├── go
│   │       │   └── test_repo
│   │       │       └── main.go
│   │       ├── java
│   │       │   └── test_repo
│   │       │       ├── pom.xml
│   │       │       └── src
│   │       │           └── main
│   │       │               └── java
│   │       │                   └── test_repo
│   │       │                       ├── Main.java
│   │       │                       ├── Model.java
│   │       │                       ├── ModelUser.java
│   │       │                       └── Utils.java
│   │       ├── kotlin
│   │       │   └── test_repo
│   │       │       ├── .gitignore
│   │       │       ├── build.gradle.kts
│   │       │       └── src
│   │       │           └── main
│   │       │               └── kotlin
│   │       │                   └── test_repo
│   │       │                       ├── Main.kt
│   │       │                       ├── Model.kt
│   │       │                       ├── ModelUser.kt
│   │       │                       └── Utils.kt
│   │       ├── lua
│   │       │   └── test_repo
│   │       │       ├── .gitignore
│   │       │       ├── main.lua
│   │       │       ├── src
│   │       │       │   ├── calculator.lua
│   │       │       │   └── utils.lua
│   │       │       └── tests
│   │       │           └── test_calculator.lua
│   │       ├── markdown
│   │       │   └── test_repo
│   │       │       ├── api.md
│   │       │       ├── CONTRIBUTING.md
│   │       │       ├── guide.md
│   │       │       └── README.md
│   │       ├── nix
│   │       │   └── test_repo
│   │       │       ├── .gitignore
│   │       │       ├── default.nix
│   │       │       ├── flake.nix
│   │       │       ├── lib
│   │       │       │   └── utils.nix
│   │       │       ├── modules
│   │       │       │   └── example.nix
│   │       │       └── scripts
│   │       │           └── hello.sh
│   │       ├── perl
│   │       │   └── test_repo
│   │       │       ├── helper.pl
│   │       │       └── main.pl
│   │       ├── php
│   │       │   └── test_repo
│   │       │       ├── helper.php
│   │       │       ├── index.php
│   │       │       └── simple_var.php
│   │       ├── python
│   │       │   └── test_repo
│   │       │       ├── .gitignore
│   │       │       ├── custom_test
│   │       │       │   ├── __init__.py
│   │       │       │   └── advanced_features.py
│   │       │       ├── examples
│   │       │       │   ├── __init__.py
│   │       │       │   └── user_management.py
│   │       │       ├── ignore_this_dir_with_postfix
│   │       │       │   └── ignored_module.py
│   │       │       ├── scripts
│   │       │       │   ├── __init__.py
│   │       │       │   └── run_app.py
│   │       │       └── test_repo
│   │       │           ├── __init__.py
│   │       │           ├── complex_types.py
│   │       │           ├── models.py
│   │       │           ├── name_collisions.py
│   │       │           ├── nested_base.py
│   │       │           ├── nested.py
│   │       │           ├── overloaded.py
│   │       │           ├── services.py
│   │       │           ├── utils.py
│   │       │           └── variables.py
│   │       ├── r
│   │       │   └── test_repo
│   │       │       ├── .Rbuildignore
│   │       │       ├── DESCRIPTION
│   │       │       ├── examples
│   │       │       │   └── analysis.R
│   │       │       ├── NAMESPACE
│   │       │       └── R
│   │       │           ├── models.R
│   │       │           └── utils.R
│   │       ├── ruby
│   │       │   └── test_repo
│   │       │       ├── .solargraph.yml
│   │       │       ├── examples
│   │       │       │   └── user_management.rb
│   │       │       ├── lib.rb
│   │       │       ├── main.rb
│   │       │       ├── models.rb
│   │       │       ├── nested.rb
│   │       │       ├── services.rb
│   │       │       └── variables.rb
│   │       ├── rust
│   │       │   ├── test_repo
│   │       │   │   ├── Cargo.lock
│   │       │   │   ├── Cargo.toml
│   │       │   │   └── src
│   │       │   │       ├── lib.rs
│   │       │   │       └── main.rs
│   │       │   └── test_repo_2024
│   │       │       ├── Cargo.lock
│   │       │       ├── Cargo.toml
│   │       │       └── src
│   │       │           ├── lib.rs
│   │       │           └── main.rs
│   │       ├── swift
│   │       │   └── test_repo
│   │       │       ├── Package.swift
│   │       │       └── src
│   │       │           ├── main.swift
│   │       │           └── utils.swift
│   │       ├── terraform
│   │       │   └── test_repo
│   │       │       ├── data.tf
│   │       │       ├── main.tf
│   │       │       ├── outputs.tf
│   │       │       └── variables.tf
│   │       ├── typescript
│   │       │   └── test_repo
│   │       │       ├── .serena
│   │       │       │   └── project.yml
│   │       │       ├── index.ts
│   │       │       ├── tsconfig.json
│   │       │       └── use_helper.ts
│   │       └── zig
│   │           └── test_repo
│   │               ├── .gitignore
│   │               ├── build.zig
│   │               ├── src
│   │               │   ├── calculator.zig
│   │               │   ├── main.zig
│   │               │   └── math_utils.zig
│   │               └── zls.json
│   ├── serena
│   │   ├── __init__.py
│   │   ├── __snapshots__
│   │   │   └── test_symbol_editing.ambr
│   │   ├── config
│   │   │   ├── __init__.py
│   │   │   └── test_serena_config.py
│   │   ├── test_edit_marker.py
│   │   ├── test_mcp.py
│   │   ├── test_serena_agent.py
│   │   ├── test_symbol_editing.py
│   │   ├── test_symbol.py
│   │   ├── test_text_utils.py
│   │   ├── test_tool_parameter_types.py
│   │   └── util
│   │       ├── test_exception.py
│   │       └── test_file_system.py
│   └── solidlsp
│       ├── al
│       │   └── test_al_basic.py
│       ├── bash
│       │   ├── __init__.py
│       │   └── test_bash_basic.py
│       ├── clojure
│       │   ├── __init__.py
│       │   └── test_clojure_basic.py
│       ├── csharp
│       │   └── test_csharp_basic.py
│       ├── dart
│       │   ├── __init__.py
│       │   └── test_dart_basic.py
│       ├── elixir
│       │   ├── __init__.py
│       │   ├── conftest.py
│       │   ├── test_elixir_basic.py
│       │   ├── test_elixir_ignored_dirs.py
│       │   ├── test_elixir_integration.py
│       │   └── test_elixir_symbol_retrieval.py
│       ├── erlang
│       │   ├── __init__.py
│       │   ├── conftest.py
│       │   ├── test_erlang_basic.py
│       │   ├── test_erlang_ignored_dirs.py
│       │   └── test_erlang_symbol_retrieval.py
│       ├── go
│       │   └── test_go_basic.py
│       ├── java
│       │   └── test_java_basic.py
│       ├── kotlin
│       │   └── test_kotlin_basic.py
│       ├── lua
│       │   └── test_lua_basic.py
│       ├── markdown
│       │   ├── __init__.py
│       │   └── test_markdown_basic.py
│       ├── nix
│       │   └── test_nix_basic.py
│       ├── perl
│       │   └── test_perl_basic.py
│       ├── php
│       │   └── test_php_basic.py
│       ├── python
│       │   ├── test_python_basic.py
│       │   ├── test_retrieval_with_ignored_dirs.py
│       │   └── test_symbol_retrieval.py
│       ├── r
│       │   ├── __init__.py
│       │   └── test_r_basic.py
│       ├── ruby
│       │   ├── test_ruby_basic.py
│       │   └── test_ruby_symbol_retrieval.py
│       ├── rust
│       │   ├── test_rust_2024_edition.py
│       │   └── test_rust_basic.py
│       ├── swift
│       │   └── test_swift_basic.py
│       ├── terraform
│       │   └── test_terraform_basic.py
│       ├── typescript
│       │   └── test_typescript_basic.py
│       ├── util
│       │   └── test_zip.py
│       └── zig
│           └── test_zig_basic.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/test/solidlsp/java/test_java_basic.py:
--------------------------------------------------------------------------------

```python
import os

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language
from solidlsp.ls_utils import SymbolUtils


@pytest.mark.java
class TestJavaLanguageServer:
    @pytest.mark.parametrize("language_server", [Language.JAVA], indirect=True)
    def test_find_symbol(self, language_server: SolidLanguageServer) -> None:
        symbols = language_server.request_full_symbol_tree()
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Main"), "Main class not found in symbol tree"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Utils"), "Utils class not found in symbol tree"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Model"), "Model class not found in symbol tree"

    @pytest.mark.parametrize("language_server", [Language.JAVA], indirect=True)
    def test_find_referencing_symbols(self, language_server: SolidLanguageServer) -> None:
        # Use correct Maven/Java file paths
        file_path = os.path.join("src", "main", "java", "test_repo", "Utils.java")
        refs = language_server.request_references(file_path, 4, 20)
        assert any("Main.java" in ref.get("relativePath", "") for ref in refs), "Main should reference Utils.printHello"

        # Dynamically determine the correct line/column for the 'Model' class name
        file_path = os.path.join("src", "main", "java", "test_repo", "Model.java")
        symbols = language_server.request_document_symbols(file_path)
        model_symbol = None
        for sym in symbols[0]:
            if sym.get("name") == "Model" and sym.get("kind") == 5:  # 5 = Class
                model_symbol = sym
                break
        assert model_symbol is not None, "Could not find 'Model' class symbol in Model.java"
        # Use selectionRange if present, otherwise fall back to range
        if "selectionRange" in model_symbol:
            sel_start = model_symbol["selectionRange"]["start"]
        else:
            sel_start = model_symbol["range"]["start"]
        refs = language_server.request_references(file_path, sel_start["line"], sel_start["character"])
        assert any(
            "Main.java" in ref.get("relativePath", "") for ref in refs
        ), "Main should reference Model (tried all positions in selectionRange)"

    @pytest.mark.parametrize("language_server", [Language.JAVA], indirect=True)
    def test_overview_methods(self, language_server: SolidLanguageServer) -> None:
        symbols = language_server.request_full_symbol_tree()
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Main"), "Main missing from overview"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Utils"), "Utils missing from overview"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Model"), "Model missing from overview"

```

--------------------------------------------------------------------------------
/test/solidlsp/kotlin/test_kotlin_basic.py:
--------------------------------------------------------------------------------

```python
import os

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language
from solidlsp.ls_utils import SymbolUtils


@pytest.mark.kotlin
class TestKotlinLanguageServer:
    @pytest.mark.parametrize("language_server", [Language.KOTLIN], indirect=True)
    def test_find_symbol(self, language_server: SolidLanguageServer) -> None:
        symbols = language_server.request_full_symbol_tree()
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Main"), "Main class not found in symbol tree"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Utils"), "Utils class not found in symbol tree"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Model"), "Model class not found in symbol tree"

    @pytest.mark.parametrize("language_server", [Language.KOTLIN], indirect=True)
    def test_find_referencing_symbols(self, language_server: SolidLanguageServer) -> None:
        # Use correct Kotlin file paths
        file_path = os.path.join("src", "main", "kotlin", "test_repo", "Utils.kt")
        refs = language_server.request_references(file_path, 3, 12)
        assert any("Main.kt" in ref.get("relativePath", "") for ref in refs), "Main should reference Utils.printHello"

        # Dynamically determine the correct line/column for the 'Model' class name
        file_path = os.path.join("src", "main", "kotlin", "test_repo", "Model.kt")
        symbols = language_server.request_document_symbols(file_path)
        model_symbol = None
        for sym in symbols[0]:
            if sym.get("name") == "Model" and sym.get("kind") == 23:  # kind 23 is what this server reports for Kotlin classes
                model_symbol = sym
                break
        assert model_symbol is not None, "Could not find 'Model' class symbol in Model.kt"
        # Use selectionRange if present, otherwise fall back to range
        if "selectionRange" in model_symbol:
            sel_start = model_symbol["selectionRange"]["start"]
        else:
            sel_start = model_symbol["range"]["start"]
        refs = language_server.request_references(file_path, sel_start["line"], sel_start["character"])
        assert any(
            "Main.kt" in ref.get("relativePath", "") for ref in refs
        ), "Main should reference Model (tried all positions in selectionRange)"

    @pytest.mark.parametrize("language_server", [Language.KOTLIN], indirect=True)
    def test_overview_methods(self, language_server: SolidLanguageServer) -> None:
        symbols = language_server.request_full_symbol_tree()
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Main"), "Main missing from overview"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Utils"), "Utils missing from overview"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Model"), "Model missing from overview"

```

--------------------------------------------------------------------------------
/test/resources/repos/python/test_repo/test_repo/variables.py:
--------------------------------------------------------------------------------

```python
"""
Test module for variable declarations and usage.

This module tests various types of variable declarations and usages including:
- Module-level variables
- Class-level variables
- Instance variables
- Variable reassignments
"""

from dataclasses import dataclass, field

# Module-level variables
module_var = "Initial module value"

reassignable_module_var = 10
reassignable_module_var = 20  # Reassigned

# Module-level variable with type annotation
typed_module_var: int = 42


# Regular class with class and instance variables
class VariableContainer:
    """Class that contains various variables."""

    # Class-level variables
    class_var = "Initial class value"

    reassignable_class_var = True
    reassignable_class_var = False  # Reassigned #noqa: PIE794

    # Class-level variable with type annotation
    typed_class_var: str = "typed value"

    def __init__(self):
        # Instance variables
        self.instance_var = "Initial instance value"
        self.reassignable_instance_var = 100

        # Instance variable with type annotation
        self.typed_instance_var: list[str] = ["item1", "item2"]

    def modify_instance_var(self):
        # Reassign instance variable
        self.instance_var = "Modified instance value"
        self.reassignable_instance_var = 200  # Reassigned

    def use_module_var(self):
        # Use module-level variables
        result = module_var + " used in method"
        other_result = reassignable_module_var + 5
        return result, other_result

    def use_class_var(self):
        # Use class-level variables
        result = VariableContainer.class_var + " used in method"
        other_result = VariableContainer.reassignable_class_var
        return result, other_result


# Dataclass with variables
@dataclass
class VariableDataclass:
    """Dataclass that contains various fields."""

    # Field variables with type annotations
    id: int
    name: str
    items: list[str] = field(default_factory=list)
    metadata: dict[str, str] = field(default_factory=dict)
    optional_value: float | None = None

    # This will be reassigned in various places
    status: str = "pending"


# Function that uses the module variables
def use_module_variables():
    """Function that uses module-level variables."""
    result = module_var + " used in function"
    other_result = reassignable_module_var * 2
    return result, other_result


# Create instances and use variables
dataclass_instance = VariableDataclass(id=1, name="Test")
dataclass_instance.status = "active"  # Reassign dataclass field

# Use variables at module level
module_result = module_var + " used at module level"
other_module_result = reassignable_module_var + 30

# Create a second dataclass instance with different status
second_dataclass = VariableDataclass(id=2, name="Another Test")
second_dataclass.status = "completed"  # Another reassignment of status

```

--------------------------------------------------------------------------------
/test/solidlsp/terraform/test_terraform_basic.py:
--------------------------------------------------------------------------------

```python
"""
Basic integration tests for the Terraform language server functionality.

These tests validate the functionality of the language server APIs
like request_references using the test repository.
"""

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language


@pytest.mark.terraform
class TestLanguageServerBasics:
    """Test basic functionality of the Terraform language server."""

    @pytest.mark.parametrize("language_server", [Language.TERRAFORM], indirect=True)
    def test_basic_definition(self, language_server: SolidLanguageServer) -> None:
        """Test basic definition lookup functionality."""
        # Simple test to verify the language server is working
        file_path = "main.tf"
        # Just try to get document symbols - this should work without hanging
        symbols = language_server.request_document_symbols(file_path)
        assert len(symbols) > 0, "Should find at least some symbols in main.tf"

    @pytest.mark.parametrize("language_server", [Language.TERRAFORM], indirect=True)
    def test_request_references_aws_instance(self, language_server: SolidLanguageServer) -> None:
        """Test request_references on an aws_instance resource."""
        # Get references to an aws_instance resource in main.tf
        file_path = "main.tf"
        # Find aws_instance resources
        symbols = language_server.request_document_symbols(file_path)
        aws_instance_symbol = next((s for s in symbols[0] if s.get("name") == 'resource "aws_instance" "web_server"'), None)
        if not aws_instance_symbol or "selectionRange" not in aws_instance_symbol:
            raise AssertionError("aws_instance symbol or its selectionRange not found")
        sel_start = aws_instance_symbol["selectionRange"]["start"]
        references = language_server.request_references(file_path, sel_start["line"], sel_start["character"])
        assert len(references) >= 1, "aws_instance should be referenced at least once"

    @pytest.mark.parametrize("language_server", [Language.TERRAFORM], indirect=True)
    def test_request_references_variable(self, language_server: SolidLanguageServer) -> None:
        """Test request_references on a variable."""
        # Get references to a variable in variables.tf
        file_path = "variables.tf"
        # Find variable definitions
        symbols = language_server.request_document_symbols(file_path)
        var_symbol = next((s for s in symbols[0] if s.get("name") == 'variable "instance_type"'), None)
        if not var_symbol or "selectionRange" not in var_symbol:
            raise AssertionError("variable symbol or its selectionRange not found")
        sel_start = var_symbol["selectionRange"]["start"]
        references = language_server.request_references(file_path, sel_start["line"], sel_start["character"])
        assert len(references) >= 1, "variable should be referenced at least once"

```

--------------------------------------------------------------------------------
/test/solidlsp/rust/test_rust_basic.py:
--------------------------------------------------------------------------------

```python
import os

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language
from solidlsp.ls_utils import SymbolUtils


@pytest.mark.rust
class TestRustLanguageServer:
    @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
    def test_find_references_raw(self, language_server: SolidLanguageServer) -> None:
        # Directly test the request_references method for the add function
        file_path = os.path.join("src", "lib.rs")
        symbols = language_server.request_document_symbols(file_path)
        add_symbol = None
        for sym in symbols[0]:
            if sym.get("name") == "add":
                add_symbol = sym
                break
        assert add_symbol is not None, "Could not find 'add' function symbol in lib.rs"
        sel_start = add_symbol["selectionRange"]["start"]
        refs = language_server.request_references(file_path, sel_start["line"], sel_start["character"])
        assert any(
            "main.rs" in ref.get("relativePath", "") for ref in refs
        ), "main.rs should reference add (raw, tried all positions in selectionRange)"

    @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
    def test_find_symbol(self, language_server: SolidLanguageServer) -> None:
        symbols = language_server.request_full_symbol_tree()
        assert SymbolUtils.symbol_tree_contains_name(symbols, "main"), "main function not found in symbol tree"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "add"), "add function not found in symbol tree"
        # Add more as needed based on test_repo

    @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
    def test_find_referencing_symbols(self, language_server: SolidLanguageServer) -> None:
        # Find references to 'add' defined in lib.rs, should be referenced from main.rs
        file_path = os.path.join("src", "lib.rs")
        symbols = language_server.request_document_symbols(file_path)
        add_symbol = None
        for sym in symbols[0]:
            if sym.get("name") == "add":
                add_symbol = sym
                break
        assert add_symbol is not None, "Could not find 'add' function symbol in lib.rs"
        sel_start = add_symbol["selectionRange"]["start"]
        refs = language_server.request_references(file_path, sel_start["line"], sel_start["character"])
        assert any(
            "main.rs" in ref.get("relativePath", "") for ref in refs
        ), "main.rs should reference add (tried all positions in selectionRange)"

    @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
    def test_overview_methods(self, language_server: SolidLanguageServer) -> None:
        symbols = language_server.request_full_symbol_tree()
        assert SymbolUtils.symbol_tree_contains_name(symbols, "main"), "main missing from overview"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "add"), "add missing from overview"

```

--------------------------------------------------------------------------------
/docs/serena_on_chatgpt.md:
--------------------------------------------------------------------------------

```markdown

# Connecting Serena MCP Server to ChatGPT via MCPO & Cloudflare Tunnel

This guide explains how to expose a **locally running Serena MCP server** (powered by MCPO) to the internet using **Cloudflare Tunnel**, and how to connect it to **ChatGPT as a Custom GPT with tool access**.

Once configured, ChatGPT becomes a powerful **coding agent** with direct access to your codebase, shell, and file system — so **read the security notes carefully**.

---
## Prerequisites

Make sure you have [uv](https://docs.astral.sh/uv/getting-started/installation/) 
and [cloudflared](https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/) installed.

## 1. Start the Serena MCP Server via MCPO

Run the following command to launch Serena as an HTTP server (here on port 8000):

```bash
uvx mcpo --port 8000 --api-key <YOUR_SECRET_KEY> -- \
  uvx --from git+https://github.com/oraios/serena \
  serena start-mcp-server --context chatgpt --project $(pwd)
```

- `--api-key` is required to secure the server.
- `--project` should point to the root of your codebase.

You can also use other options, and you don't have to pass `--project` if you want to work on multiple projects
or want to activate a project later. See the full list of options with:

```shell
uvx --from git+https://github.com/oraios/serena serena start-mcp-server --help
```

---

## 2. Expose the Server Using Cloudflare Tunnel

Run:

```bash
cloudflared tunnel --url http://localhost:8000
```

This will give you a **public HTTPS URL** like:

```
https://serena-agent-tunnel.trycloudflare.com
```

Your server is now securely exposed to the internet.
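
As a quick sanity check, you can fetch the OpenAPI schema through the tunnel (the URL below is illustrative; substitute the one cloudflared printed and the API key you chose above):

```bash
curl -H "Authorization: Bearer <YOUR_SECRET_KEY>" \
  https://serena-agent-tunnel.trycloudflare.com/openapi.json
```

If the JSON schema comes back, both the MCPO server and the tunnel are working.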

---

## 3. Connect It to ChatGPT (Custom GPT)

### Steps:

1. Go to [ChatGPT → Explore GPTs → Create](https://chat.openai.com/gpts/editor)
2. During setup, click **“Add APIs”**
3. Set up **API Key authentication** with the auth type as **Bearer** and enter the API key you used to start the MCPO server.
4. In the **Schema** section, click on **import from URL** and paste `<cloudflared_url>/openapi.json`, where `<cloudflared_url>` is the URL you got in the previous step.
5. Add the following line to the top of the imported JSON schema:
    ```
     "servers": ["url": "<cloudflared_url>"],
    ```
   **Important**: don't include a trailing slash at the end of the URL!

ChatGPT will read the schema and create functions automatically.
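
For orientation, the top of the imported schema should then look roughly like this (the `servers` URL is illustrative; `info` and `paths` are filled in from what MCPO generated):

```json
{
  "openapi": "3.1.0",
  "info": { "title": "mcpo", "version": "0.1.0" },
  "servers": [{ "url": "https://serena-agent-tunnel.trycloudflare.com" }],
  "paths": {}
}
```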

---

## Security Warning — Read Carefully

Depending on your configuration and enabled tools, Serena's MCP server may:
- Execute **arbitrary shell commands**
- Read, write, and modify **files in your codebase**

This gives ChatGPT the same powers as a remote developer on your machine.

### ⚠️ Key Rules:
- **NEVER expose your API key**
- **Only expose this server when needed**, and monitor its use.

In your project’s `.serena/project.yml` or global config, you can disable tools like:

```yaml
excluded_tools:
  - execute_shell_command
  - ...
read_only: true
```

This is strongly recommended if you want a read-only or safer agent.


---

## Final Thoughts

With this setup, ChatGPT becomes a coding assistant **running on your local code** — able to index, search, edit, and even run shell commands depending on your configuration.

Use responsibly, and keep security in mind.

```

--------------------------------------------------------------------------------
/test/resources/repos/ruby/test_repo/variables.rb:
--------------------------------------------------------------------------------

```ruby
require './models.rb'

# Global variables for testing references
$global_counter = 0
$global_config = {
  debug: true,
  timeout: 30
}

class DataContainer
  attr_accessor :status, :data, :metadata

  def initialize
    @status = "pending"
    @data = {}
    @metadata = {
      created_at: Time.now,
      version: "1.0"
    }
  end

  def update_status(new_status)
    old_status = @status
    @status = new_status
    log_status_change(old_status, new_status)
  end

  def process_data(input_data)
    @data = input_data
    @status = "processing"
    
    # Process the data
    result = @data.transform_values { |v| v.to_s.upcase }
    @status = "completed"
    
    result
  end

  def get_metadata_info
    info = "Status: #{@status}, Version: #{@metadata[:version]}"
    info += ", Created: #{@metadata[:created_at]}"
    info
  end

  private

  def log_status_change(old_status, new_status)
    puts "Status changed from #{old_status} to #{new_status}"
  end
end

class StatusTracker
  def initialize
    @tracked_items = []
  end

  def add_item(item)
    @tracked_items << item
    item.status = "tracked" if item.respond_to?(:status=)
  end

  def find_by_status(target_status)
    @tracked_items.select { |item| item.status == target_status }
  end

  def update_all_status(new_status)
    @tracked_items.each do |item|
      item.status = new_status if item.respond_to?(:status=)
    end
  end
end

# Module level variables and functions
module ProcessingHelper
  PROCESSING_MODES = ["sync", "async", "batch"].freeze
  
  @@instance_count = 0
  
  def self.create_processor(mode = "sync")
    @@instance_count += 1
    {
      id: @@instance_count,
      mode: mode,
      created_at: Time.now
    }
  end
  
  def self.get_instance_count
    @@instance_count
  end
end

# Test instances for reference testing
dataclass_instance = DataContainer.new
dataclass_instance.status = "initialized"

second_dataclass = DataContainer.new  
second_dataclass.update_status("ready")

tracker = StatusTracker.new
tracker.add_item(dataclass_instance)
tracker.add_item(second_dataclass)

# Function that uses the variables
def demonstrate_variable_usage
  puts "Global counter: #{$global_counter}"
  
  container = DataContainer.new
  container.status = "demo"
  
  processor = ProcessingHelper.create_processor("async")
  puts "Created processor #{processor[:id]} in #{processor[:mode]} mode"
  
  container
end

# More complex variable interactions
class VariableInteractionTest
  def initialize
    @internal_status = "created"
    @data_containers = []
  end
  
  def add_container(container)
    @data_containers << container
    container.status = "added_to_collection"
    @internal_status = "modified"
  end
  
  def process_all_containers
    @data_containers.each do |container|
      container.status = "batch_processed"
    end
    @internal_status = "processing_complete"
  end
  
  def get_status_summary
    statuses = @data_containers.map(&:status)
    {
      internal: @internal_status,
      containers: statuses,
      count: @data_containers.length
    }
  end
end

# Create instances for testing
interaction_test = VariableInteractionTest.new
interaction_test.add_container(dataclass_instance)
interaction_test.add_container(second_dataclass)
```

--------------------------------------------------------------------------------
/test/resources/repos/python/test_repo/test_repo/utils.py:
--------------------------------------------------------------------------------

```python
"""
Utility functions and classes demonstrating various Python features.
"""

import logging
from collections.abc import Callable
from typing import Any, TypeVar

# Type variables for generic functions
T = TypeVar("T")
U = TypeVar("U")


def setup_logging(level: str = "INFO") -> logging.Logger:
    """Set up and return a configured logger"""
    levels = {
        "DEBUG": logging.DEBUG,
        "INFO": logging.INFO,
        "WARNING": logging.WARNING,
        "ERROR": logging.ERROR,
        "CRITICAL": logging.CRITICAL,
    }

    logger = logging.getLogger("test_repo")
    logger.setLevel(levels.get(level.upper(), logging.INFO))

    handler = logging.StreamHandler()
    formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    handler.setFormatter(formatter)
    logger.addHandler(handler)

    return logger


# Decorator example
def log_execution(func: Callable) -> Callable:
    """Decorator to log function execution"""

    def wrapper(*args, **kwargs):
        logger = logging.getLogger("test_repo")
        logger.info(f"Executing function: {func.__name__}")
        result = func(*args, **kwargs)
        logger.info(f"Completed function: {func.__name__}")
        return result

    return wrapper


# Higher-order function
def map_list(items: list[T], mapper: Callable[[T], U]) -> list[U]:
    """Map a function over a list of items"""
    return [mapper(item) for item in items]


# Class with various Python features
class ConfigManager:
    """Manages configuration with various access patterns"""

    _instance = None

    # Singleton pattern
    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self, initial_config: dict[str, Any] | None = None):
        if not hasattr(self, "initialized"):
            self.config = initial_config or {}
            self.initialized = True

    def __getitem__(self, key: str) -> Any:
        """Allow dictionary-like access"""
        return self.config.get(key)

    def __setitem__(self, key: str, value: Any) -> None:
        """Allow dictionary-like setting"""
        self.config[key] = value

    @property
    def debug_mode(self) -> bool:
        """Property example"""
        return self.config.get("debug", False)

    @debug_mode.setter
    def debug_mode(self, value: bool) -> None:
        self.config["debug"] = value


# Context manager example
class Timer:
    """Context manager for timing code execution"""

    def __init__(self, name: str = "Timer"):
        self.name = name
        self.start_time = None
        self.end_time = None

    def __enter__(self):
        import time

        self.start_time = time.time()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        import time

        self.end_time = time.time()
        print(f"{self.name} took {self.end_time - self.start_time:.6f} seconds")


# Functions with default arguments
def retry(func: Callable, max_attempts: int = 3, delay: float = 1.0) -> Any:
    """Retry a function with backoff"""
    import time

    for attempt in range(max_attempts):
        try:
            return func()
        except Exception as e:
            if attempt == max_attempts - 1:
                raise e
            time.sleep(delay * (2**attempt))

```

--------------------------------------------------------------------------------
/test/solidlsp/markdown/test_markdown_basic.py:
--------------------------------------------------------------------------------

```python
"""
Basic integration tests for the markdown language server functionality.

These tests validate the functionality of the language server APIs
like request_document_symbols using the markdown test repository.
"""

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language


@pytest.mark.markdown
class TestMarkdownLanguageServerBasics:
    """Test basic functionality of the markdown language server."""

    @pytest.mark.parametrize("language_server", [Language.MARKDOWN], indirect=True)
    def test_markdown_language_server_initialization(self, language_server: SolidLanguageServer) -> None:
        """Test that markdown language server can be initialized successfully."""
        assert language_server is not None
        assert language_server.language == Language.MARKDOWN

    @pytest.mark.parametrize("language_server", [Language.MARKDOWN], indirect=True)
    def test_markdown_request_document_symbols(self, language_server: SolidLanguageServer) -> None:
        """Test request_document_symbols for markdown files."""
        # Test getting symbols from README.md
        all_symbols, _root_symbols = language_server.request_document_symbols("README.md", include_body=False)

        # Extract heading names from the returned symbols.
        # Note: Different markdown LSPs may use different symbol kinds for headings;
        # Marksman typically reports headings as kind 15 (String).
        heading_names = [symbol["name"] for symbol in all_symbols]

        # Should detect headings from README.md
        assert "Test Repository" in heading_names or len(all_symbols) > 0, "Should find at least one heading"

    @pytest.mark.parametrize("language_server", [Language.MARKDOWN], indirect=True)
    def test_markdown_request_symbols_from_guide(self, language_server: SolidLanguageServer) -> None:
        """Test symbol detection in guide.md file."""
        all_symbols, _root_symbols = language_server.request_document_symbols("guide.md", include_body=False)

        # At least some headings should be found
        assert len(all_symbols) > 0, f"Should find headings in guide.md, found {len(all_symbols)}"

    @pytest.mark.parametrize("language_server", [Language.MARKDOWN], indirect=True)
    def test_markdown_request_symbols_from_api(self, language_server: SolidLanguageServer) -> None:
        """Test symbol detection in api.md file."""
        all_symbols, _root_symbols = language_server.request_document_symbols("api.md", include_body=False)

        # Should detect headings from api.md
        assert len(all_symbols) > 0, f"Should find headings in api.md, found {len(all_symbols)}"

    @pytest.mark.parametrize("language_server", [Language.MARKDOWN], indirect=True)
    def test_markdown_request_document_symbols_with_body(self, language_server: SolidLanguageServer) -> None:
        """Test request_document_symbols with body extraction."""
        # Test with include_body=True
        all_symbols, _root_symbols = language_server.request_document_symbols("README.md", include_body=True)

        # Should have found some symbols
        assert len(all_symbols) > 0, "Should find symbols in README.md"

        # Note: Not all markdown LSPs provide body information for symbols
        # This test is more lenient and just verifies the API works
        assert all_symbols is not None, "Should return symbols even if body extraction is limited"

```

--------------------------------------------------------------------------------
/src/serena/tools/config_tools.py:
--------------------------------------------------------------------------------

```python
import json

from serena.config.context_mode import SerenaAgentMode
from serena.tools import Tool, ToolMarkerDoesNotRequireActiveProject, ToolMarkerOptional


class ActivateProjectTool(Tool, ToolMarkerDoesNotRequireActiveProject):
    """
    Activates a project by name.
    """

    def apply(self, project: str) -> str:
        """
        Activates the project with the given name.

        :param project: the name of a registered project to activate or a path to a project directory
        """
        active_project = self.agent.activate_project_from_path_or_name(project)
        if active_project.is_newly_created:
            result_str = (
                f"Created and activated a new project with name '{active_project.project_name}' at {active_project.project_root}, language: {active_project.project_config.language.value}. "
                "You can activate this project later by name.\n"
                f"The project's Serena configuration is in {active_project.path_to_project_yml()}. In particular, you may want to edit the project name and the initial prompt."
            )
        else:
            result_str = f"Activated existing project with name '{active_project.project_name}' at {active_project.project_root}, language: {active_project.project_config.language.value}"

        if active_project.project_config.initial_prompt:
            result_str += f"\nAdditional project information:\n {active_project.project_config.initial_prompt}"
        result_str += (
            f"\nAvailable memories:\n {json.dumps(list(self.memories_manager.list_memories()))}"
            + "\nYou should not read these memories directly, but rather use the `read_memory` tool to read them later if needed for the task."
        )
        result_str += f"\nAvailable tools:\n {json.dumps(self.agent.get_active_tool_names())}"
        return result_str


class RemoveProjectTool(Tool, ToolMarkerDoesNotRequireActiveProject, ToolMarkerOptional):
    """
    Removes a project from the Serena configuration.
    """

    def apply(self, project_name: str) -> str:
        """
        Removes a project from the Serena configuration.

        :param project_name: Name of the project to remove
        """
        self.agent.serena_config.remove_project(project_name)
        return f"Successfully removed project '{project_name}' from configuration."


class SwitchModesTool(Tool, ToolMarkerOptional):
    """
    Activates modes by providing a list of their names
    """

    def apply(self, modes: list[str]) -> str:
        """
        Activates the desired modes, like ["editing", "interactive"] or ["planning", "one-shot"]

        :param modes: the names of the modes to activate
        """
        mode_instances = [SerenaAgentMode.load(mode) for mode in modes]
        self.agent.set_modes(mode_instances)

        # Inform the Agent about the activated modes and the currently active tools
        result_str = f"Successfully activated modes: {', '.join([mode.name for mode in mode_instances])}" + "\n"
        result_str += "\n".join([mode_instance.prompt for mode_instance in mode_instances]) + "\n"
        result_str += f"Currently active tools: {', '.join(self.agent.get_active_tool_names())}"
        return result_str


class GetCurrentConfigTool(Tool):
    """
    Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
    """

    def apply(self) -> str:
        """
        Print the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
        """
        return self.agent.get_current_config_overview()

```

--------------------------------------------------------------------------------
/test/solidlsp/util/test_zip.py:
--------------------------------------------------------------------------------

```python
import sys
import zipfile
from pathlib import Path

import pytest

from solidlsp.util.zip import SafeZipExtractor


@pytest.fixture
def temp_zip_file(tmp_path: Path) -> Path:
    """Create a temporary ZIP file for testing."""
    zip_path = tmp_path / "test.zip"
    with zipfile.ZipFile(zip_path, "w") as zipf:
        zipf.writestr("file1.txt", "Hello World 1")
        zipf.writestr("file2.txt", "Hello World 2")
        zipf.writestr("folder/file3.txt", "Hello World 3")
    return zip_path


def test_extract_all_success(temp_zip_file: Path, tmp_path: Path) -> None:
    """All files should extract without error."""
    dest_dir = tmp_path / "extracted"
    extractor = SafeZipExtractor(temp_zip_file, dest_dir, verbose=False)
    extractor.extract_all()

    assert (dest_dir / "file1.txt").read_text() == "Hello World 1"
    assert (dest_dir / "file2.txt").read_text() == "Hello World 2"
    assert (dest_dir / "folder" / "file3.txt").read_text() == "Hello World 3"


def test_include_patterns(temp_zip_file: Path, tmp_path: Path) -> None:
    """Only files matching include_patterns should be extracted."""
    dest_dir = tmp_path / "extracted"
    extractor = SafeZipExtractor(temp_zip_file, dest_dir, verbose=False, include_patterns=["*.txt"])
    extractor.extract_all()

    assert (dest_dir / "file1.txt").exists()
    assert (dest_dir / "file2.txt").exists()
    assert (dest_dir / "folder" / "file3.txt").exists()


def test_exclude_patterns(temp_zip_file: Path, tmp_path: Path) -> None:
    """Files matching exclude_patterns should be skipped."""
    dest_dir = tmp_path / "extracted"
    extractor = SafeZipExtractor(temp_zip_file, dest_dir, verbose=False, exclude_patterns=["file2.txt"])
    extractor.extract_all()

    assert (dest_dir / "file1.txt").exists()
    assert not (dest_dir / "file2.txt").exists()
    assert (dest_dir / "folder" / "file3.txt").exists()


def test_include_and_exclude_patterns(temp_zip_file: Path, tmp_path: Path) -> None:
    """Exclude should override include if both match."""
    dest_dir = tmp_path / "extracted"
    extractor = SafeZipExtractor(
        temp_zip_file,
        dest_dir,
        verbose=False,
        include_patterns=["*.txt"],
        exclude_patterns=["file1.txt"],
    )
    extractor.extract_all()

    assert not (dest_dir / "file1.txt").exists()
    assert (dest_dir / "file2.txt").exists()
    assert (dest_dir / "folder" / "file3.txt").exists()


def test_skip_on_error(monkeypatch, temp_zip_file: Path, tmp_path: Path) -> None:
    """Should skip a file that raises an error and continue extracting others."""
    dest_dir = tmp_path / "extracted"

    original_open = zipfile.ZipFile.open

    def failing_open(self, member, *args, **kwargs):
        if member.filename == "file2.txt":
            raise OSError("Simulated failure")
        return original_open(self, member, *args, **kwargs)

    # Patch the method on the class, not on an instance
    monkeypatch.setattr(zipfile.ZipFile, "open", failing_open)

    extractor = SafeZipExtractor(temp_zip_file, dest_dir, verbose=False)
    extractor.extract_all()

    assert (dest_dir / "file1.txt").exists()
    assert not (dest_dir / "file2.txt").exists()
    assert (dest_dir / "folder" / "file3.txt").exists()


@pytest.mark.skipif(not sys.platform.startswith("win"), reason="Windows-only test")
def test_long_path_normalization(temp_zip_file: Path, tmp_path: Path) -> None:
    r"""Ensure _normalize_path adds the \\?\ long-path prefix on Windows."""
    dest_dir = tmp_path / ("a" * 250)  # Simulate long path
    extractor = SafeZipExtractor(temp_zip_file, dest_dir, verbose=False)
    norm_path = extractor._normalize_path(dest_dir / "file.txt")
    assert str(norm_path).startswith("\\\\?\\")

```

--------------------------------------------------------------------------------
/roadmap.md:
--------------------------------------------------------------------------------

```markdown
# Roadmap

This document gives an overview of the ongoing and future development of Serena.
If you have a proposal or want to discuss something, feel free to open a discussion
on GitHub. For a summary of the past development, see the [changelog](/CHANGELOG.md).

Want to see us reach our goals faster? You can help out with an issue, start a discussion, or 
inform us about funding opportunities so that we can devote more time to the project.

## Overall Goals

Serena has the potential to be the go-to tool for most LLM coding tasks, since it is
unique in its ability to be used as an MCP server in any kind of environment
while still being a capable agent. We want to achieve the following functional goals:

1. Top performance (comparable to API-based coding agents) when used through official (free) clients like Claude Desktop.
1. Lowering API costs and potentially improving performance of coding clients (Claude Code, Codex, Cline, Roo, Cursor/Windsurf/VSCode etc).
1. Transparency and simplicity of use. Achieved through the dashboard/logging GUI.
1. Integrations with major frameworks that don't accept MCP. Usable as a library.

Apart from the functional goals, we aim for excellent code design, so that Serena can serve
as a reference for how to implement MCP servers. Such servers are an emerging technology, and
best practices are yet to be determined. We will share our experiences in [lessons learned](/lessons_learned.md).


## Immediate/Ongoing

- Support for projects using multiple programming languages.
- Evaluate whether `ReplaceLinesTool` can be removed in favor of a more reliable and performant editing approach.
- Generally experiment with various approaches to editing tools
- Manual evaluation on selected tasks from SWE-bench Verified
- Manual evaluation of cost-lowering and performance when used within popular non-MCP agents
- Improvements in prompts, in particular giving examples and extending modes and contexts

## Upcoming

- Publishing Serena as a package that can also be used as library
- Use linting and type-hierarchy from the LSP in tools
- Tools for refactoring (rename, move) - speculative, maybe won't do this.
- Tracking edits and rolling them back with the dashboard
- Improve configurability and safety of shell tool. Maybe autogeneration of tools from a list of commands and descriptions.
- Transparent comparison with DesktopCommander and ...
- Automatic evaluation using OpenHands, submission to SWE-Bench
- Evaluation whether incorporating other MCPs increases performance or usability (memory bank is a candidate)
- More documentation and best practices

## Stretch

- Allow for sandboxing and parallel instances of Serena, maybe using OpenHands or Codex for that
- Incorporate a verifier model or generally a second model (maybe for applying edits) as a tool.
- Building on the above, allow for the second model itself to be reachable through an MCP server, so it can be used for free
- Tracking edits performed with shell tools

## Beyond Serena

The technologies and approaches taken in Serena can be used for various research and service ideas. Some thoughts we have had are:

- PR and issue assistant working with GitHub, similar to how [OpenHands](https://github.com/All-Hands-AI/OpenHands) 
  and [qodo](https://github.com/qodo-ai/pr-agent) operate. Should be callable through @serena
- Tuning a coding LLM to use Serena's tools via RL on one-shot tasks. We would need compute funding for that
- Develop a web app to quantitatively compare the performance of various agents by scraping PRs and manually crafted metadata.
  The main metric for coding agents should be *developer experience*, which is hard to measure and poorly correlated with
  performance on current benchmarks.
```

--------------------------------------------------------------------------------
/src/interprompt/prompt_factory.py:
--------------------------------------------------------------------------------

```python
import logging
import os
from typing import Any

from .multilang_prompt import DEFAULT_LANG_CODE, LanguageFallbackMode, MultiLangPromptCollection, PromptList

log = logging.getLogger(__name__)


class PromptFactoryBase:
    """Base class for auto-generated prompt factory classes."""

    def __init__(self, prompts_dir: str | list[str], lang_code: str = DEFAULT_LANG_CODE, fallback_mode=LanguageFallbackMode.EXCEPTION):
        """
        :param prompts_dir: the directory containing the prompt templates and prompt lists.
            If a list is provided, will look for prompt templates in the dirs from left to right
            (first one containing the desired template wins).
        :param lang_code: the language code to use for retrieving the prompt templates and prompt lists.
            Leave as `default` for single-language use cases.
        :param fallback_mode: the fallback mode to use when a prompt template or prompt list is not found for the requested language.
            Irrelevant for single-language use cases.
        """
        self.lang_code = lang_code
        self._prompt_collection = MultiLangPromptCollection(prompts_dir, fallback_mode=fallback_mode)

    def _render_prompt(self, prompt_name: str, params: dict[str, Any]) -> str:
        del params["self"]
        return self._prompt_collection.render_prompt_template(prompt_name, params, lang_code=self.lang_code)

    def _get_prompt_list(self, prompt_name: str) -> PromptList:
        return self._prompt_collection.get_prompt_list(prompt_name, self.lang_code)


def autogenerate_prompt_factory_module(prompts_dir: str, target_module_path: str) -> None:
    """
    Auto-generates a prompt factory module for the given prompt directory.
    The generated `PromptFactory` class is meant to be the central entry class for retrieving and rendering prompt templates and prompt
    lists in your application.
    It will contain one method per prompt template and prompt list, and is useful for both single- and multi-language use cases.

    :param prompts_dir: the directory containing the prompt templates and prompt lists
    :param target_module_path: the path to the target module file (.py). Important: The module will be overwritten!
    """
    generated_code = """
# ruff: noqa
# black: skip
# mypy: ignore-errors

# NOTE: This module is auto-generated from interprompt.autogenerate_prompt_factory_module, do not edit manually!

from interprompt.multilang_prompt import PromptList
from interprompt.prompt_factory import PromptFactoryBase
from typing import Any


class PromptFactory(PromptFactoryBase):
    \"""
    A class for retrieving and rendering prompt templates and prompt lists.
    \"""
"""
    # ---- add methods based on prompt template names and parameters and prompt list names ----
    prompt_collection = MultiLangPromptCollection(prompts_dir)

    for template_name in prompt_collection.get_prompt_template_names():
        template_parameters = prompt_collection.get_prompt_template_parameters(template_name)
        if len(template_parameters) == 0:
            method_params_str = ""
        else:
            method_params_str = ", *, " + ", ".join([f"{param}: Any" for param in template_parameters])
        generated_code += f"""
    def create_{template_name}(self{method_params_str}) -> str:
        return self._render_prompt('{template_name}', locals())
"""
    for prompt_list_name in prompt_collection.get_prompt_list_names():
        generated_code += f"""
    def get_list_{prompt_list_name}(self) -> PromptList:
        return self._get_prompt_list('{prompt_list_name}')
"""
    os.makedirs(os.path.dirname(target_module_path), exist_ok=True)
    with open(target_module_path, "w", encoding="utf-8") as f:
        f.write(generated_code)
    log.info(f"Prompt factory generated successfully in {target_module_path}")

```

--------------------------------------------------------------------------------
/src/solidlsp/lsp_protocol_handler/server.py:
--------------------------------------------------------------------------------

```python
"""
This file provides the implementation of the JSON-RPC client, that launches and
communicates with the language server.

The initial implementation of this file was obtained from
https://github.com/predragnikolic/OLSP under the MIT License with the following terms:

MIT License

Copyright (c) 2023 Предраг Николић

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""

import dataclasses
import json
import logging
import os
from typing import Any, Union

from .lsp_types import ErrorCodes

StringDict = dict[str, Any]
PayloadLike = Union[list[StringDict], StringDict, None]
CONTENT_LENGTH = "Content-Length: "
ENCODING = "utf-8"
log = logging.getLogger(__name__)


@dataclasses.dataclass
class ProcessLaunchInfo:
    """
    This class is used to store the information required to launch a process.
    """

    # The command to launch the process
    cmd: str | list[str]

    # The environment variables to set for the process
    env: dict[str, str] = dataclasses.field(default_factory=dict)

    # The working directory for the process
    cwd: str = os.getcwd()


class LSPError(Exception):
    def __init__(self, code: ErrorCodes, message: str) -> None:
        super().__init__(message)
        self.code = code

    def to_lsp(self) -> StringDict:
        return {"code": self.code, "message": super().__str__()}

    @classmethod
    def from_lsp(cls, d: StringDict) -> "LSPError":
        return LSPError(d["code"], d["message"])

    def __str__(self) -> str:
        return f"{super().__str__()} ({self.code})"


def make_response(request_id: Any, params: PayloadLike) -> StringDict:
    return {"jsonrpc": "2.0", "id": request_id, "result": params}


def make_error_response(request_id: Any, err: LSPError) -> StringDict:
    return {"jsonrpc": "2.0", "id": request_id, "error": err.to_lsp()}


def make_notification(method: str, params: PayloadLike) -> StringDict:
    return {"jsonrpc": "2.0", "method": method, "params": params}


def make_request(method: str, request_id: Any, params: PayloadLike) -> StringDict:
    return {"jsonrpc": "2.0", "method": method, "id": request_id, "params": params}


class StopLoopException(Exception):
    pass


def create_message(payload: PayloadLike):
    body = json.dumps(payload, check_circular=False, ensure_ascii=False, separators=(",", ":")).encode(ENCODING)
    return (
        f"Content-Length: {len(body)}\r\n".encode(ENCODING),
        "Content-Type: application/vscode-jsonrpc; charset=utf-8\r\n\r\n".encode(ENCODING),
        body,
    )


class MessageType:
    error = 1
    warning = 2
    info = 3
    log = 4


def content_length(line: bytes) -> int | None:
    if line.startswith(b"Content-Length: "):
        _, value = line.split(b"Content-Length: ")
        value = value.strip()
        try:
            return int(value)
        except ValueError:
            raise ValueError(f"Invalid Content-Length header: {value}")
    return None

```

--------------------------------------------------------------------------------
/.serena/memories/serena_repository_structure.md:
--------------------------------------------------------------------------------

```markdown
# Serena Repository Structure

## Overview
Serena is a multi-language code assistant that combines two main components:
1. **Serena Core** - The main agent framework with tools and MCP server
2. **SolidLSP** - A unified Language Server Protocol wrapper for multiple programming languages

## Top-Level Structure

```
serena/
├── src/                          # Main source code
│   ├── serena/                   # Serena agent framework
│   ├── solidlsp/                 # LSP wrapper library  
│   └── interprompt/              # Multi-language prompt templates
├── test/                         # Test suites
│   ├── serena/                   # Serena agent tests
│   ├── solidlsp/                 # Language server tests
│   └── resources/repos/          # Test repositories for each language
├── scripts/                      # Build and utility scripts
├── resources/                    # Static resources and configurations
├── pyproject.toml               # Python project configuration
├── README.md                    # Project documentation
└── CHANGELOG.md                 # Version history
```

## Source Code Organization

### Serena Core (`src/serena/`)
- **`agent.py`** - Main SerenaAgent class that orchestrates everything
- **`tools/`** - MCP tools for file operations, symbols, memory, etc.
  - `file_tools.py` - File system operations (read, write, search)
  - `symbol_tools.py` - Symbol-based code operations (find, edit)
  - `memory_tools.py` - Knowledge persistence and retrieval
  - `config_tools.py` - Project and mode management
  - `workflow_tools.py` - Onboarding and meta-operations
- **`config/`** - Configuration management
  - `serena_config.py` - Main configuration classes
  - `context_mode.py` - Context and mode definitions
- **`util/`** - Utility modules
- **`mcp.py`** - MCP server implementation
- **`cli.py`** - Command-line interface

### SolidLSP (`src/solidlsp/`)
- **`ls.py`** - Main SolidLanguageServer class
- **`language_servers/`** - Language-specific implementations
  - `csharp_language_server.py` - C# (Microsoft.CodeAnalysis.LanguageServer)
  - `python_server.py` - Python (Pyright)
  - `typescript_language_server.py` - TypeScript
  - `rust_analyzer.py` - Rust
  - `gopls.py` - Go
  - And many more...
- **`ls_config.py`** - Language server configuration
- **`ls_types.py`** - LSP type definitions
- **`ls_utils.py`** - Utilities for working with LSP data

### Interprompt (`src/interprompt/`)
- Multi-language prompt template system
- Jinja2-based templating with language fallbacks

## Test Structure

### Language Server Tests (`test/solidlsp/`)
Each language has its own test directory:
```
test/solidlsp/
├── csharp/
│   └── test_csharp_basic.py
├── python/
│   └── test_python_basic.py
├── typescript/
│   └── test_typescript_basic.py
└── ...
```

### Test Resources (`test/resources/repos/`)
Contains minimal test projects for each language:
```
test/resources/repos/
├── csharp/test_repo/
│   ├── serena.sln
│   ├── TestProject.csproj
│   ├── Program.cs
│   └── Models/Person.cs
├── python/test_repo/
├── typescript/test_repo/
└── ...
```

### Test Infrastructure
- **`test/conftest.py`** - Shared test fixtures and utilities
- **`create_ls()`** function - Creates language server instances for testing
- **`language_server` fixture** - Parametrized fixture for multi-language tests

## Key Configuration Files

- **`pyproject.toml`** - Python dependencies, build config, and tool settings
- **`.serena/`** directories - Project-specific Serena configuration and memories
- **`CLAUDE.md`** - Instructions for AI assistants working on the project

## Dependencies Management

The project uses modern Python tooling:
- **uv** for fast dependency resolution and virtual environments
- **pytest** for testing with language-specific markers (`@pytest.mark.csharp`)
- **ruff** for linting and formatting
- **mypy** for type checking

## Build and Development

- **Docker support** - Full containerized development environment
- **GitHub Actions** - CI/CD with language server testing
- **Development scripts** in `scripts/` directory
```

--------------------------------------------------------------------------------
/src/solidlsp/language_servers/omnisharp/workspace_did_change_configuration.json:
--------------------------------------------------------------------------------

```json
{
    "RoslynExtensionsOptions": {
        "EnableDecompilationSupport": false,
        "EnableAnalyzersSupport": true,
        "EnableImportCompletion": true,
        "EnableAsyncCompletion": false,
        "DocumentAnalysisTimeoutMs": 30000,
        "DiagnosticWorkersThreadCount": 18,
        "AnalyzeOpenDocumentsOnly": true,
        "InlayHintsOptions": {
            "EnableForParameters": false,
            "ForLiteralParameters": false,
            "ForIndexerParameters": false,
            "ForObjectCreationParameters": false,
            "ForOtherParameters": false,
            "SuppressForParametersThatDifferOnlyBySuffix": false,
            "SuppressForParametersThatMatchMethodIntent": false,
            "SuppressForParametersThatMatchArgumentName": false,
            "EnableForTypes": false,
            "ForImplicitVariableTypes": false,
            "ForLambdaParameterTypes": false,
            "ForImplicitObjectCreation": false
        },
        "LocationPaths": null
    },
    "FormattingOptions": {
        "OrganizeImports": false,
        "EnableEditorConfigSupport": true,
        "NewLine": "\n",
        "UseTabs": false,
        "TabSize": 4,
        "IndentationSize": 4,
        "SpacingAfterMethodDeclarationName": false,
        "SeparateImportDirectiveGroups": false,
        "SpaceWithinMethodDeclarationParenthesis": false,
        "SpaceBetweenEmptyMethodDeclarationParentheses": false,
        "SpaceAfterMethodCallName": false,
        "SpaceWithinMethodCallParentheses": false,
        "SpaceBetweenEmptyMethodCallParentheses": false,
        "SpaceAfterControlFlowStatementKeyword": true,
        "SpaceWithinExpressionParentheses": false,
        "SpaceWithinCastParentheses": false,
        "SpaceWithinOtherParentheses": false,
        "SpaceAfterCast": false,
        "SpaceBeforeOpenSquareBracket": false,
        "SpaceBetweenEmptySquareBrackets": false,
        "SpaceWithinSquareBrackets": false,
        "SpaceAfterColonInBaseTypeDeclaration": true,
        "SpaceAfterComma": true,
        "SpaceAfterDot": false,
        "SpaceAfterSemicolonsInForStatement": true,
        "SpaceBeforeColonInBaseTypeDeclaration": true,
        "SpaceBeforeComma": false,
        "SpaceBeforeDot": false,
        "SpaceBeforeSemicolonsInForStatement": false,
        "SpacingAroundBinaryOperator": "single",
        "IndentBraces": false,
        "IndentBlock": true,
        "IndentSwitchSection": true,
        "IndentSwitchCaseSection": true,
        "IndentSwitchCaseSectionWhenBlock": true,
        "LabelPositioning": "oneLess",
        "WrappingPreserveSingleLine": true,
        "WrappingKeepStatementsOnSingleLine": true,
        "NewLinesForBracesInTypes": true,
        "NewLinesForBracesInMethods": true,
        "NewLinesForBracesInProperties": true,
        "NewLinesForBracesInAccessors": true,
        "NewLinesForBracesInAnonymousMethods": true,
        "NewLinesForBracesInControlBlocks": true,
        "NewLinesForBracesInAnonymousTypes": true,
        "NewLinesForBracesInObjectCollectionArrayInitializers": true,
        "NewLinesForBracesInLambdaExpressionBody": true,
        "NewLineForElse": true,
        "NewLineForCatch": true,
        "NewLineForFinally": true,
        "NewLineForMembersInObjectInit": true,
        "NewLineForMembersInAnonymousTypes": true,
        "NewLineForClausesInQuery": true
    },
    "FileOptions": {
        "SystemExcludeSearchPatterns": [
            "**/node_modules/**/*",
            "**/bin/**/*",
            "**/obj/**/*",
            "**/.git/**/*",
            "**/.git",
            "**/.svn",
            "**/.hg",
            "**/CVS",
            "**/.DS_Store",
            "**/Thumbs.db"
        ],
        "ExcludeSearchPatterns": []
    },
    "RenameOptions": {
        "RenameOverloads": false,
        "RenameInStrings": false,
        "RenameInComments": false
    },
    "ImplementTypeOptions": {
        "InsertionBehavior": 0,
        "PropertyGenerationBehavior": 0
    },
    "DotNetCliOptions": {
        "LocationPaths": null
    },
    "Plugins": {
        "LocationPaths": null
    }
}
```
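Settings files like the one above are delivered to the server wrapped in a `workspace/didChangeConfiguration` notification. A hedged sketch of that envelope (the loading path is an assumption; only the JSON-RPC notification shape comes from the LSP specification):

```python
import json

# Abbreviated stand-in for the full OmniSharp settings file above.
settings_json = '{"RoslynExtensionsOptions": {"EnableAnalyzersSupport": true}}'
settings = json.loads(settings_json)

# Notifications carry a method and params but no "id", so no response is expected.
notification = {
    "jsonrpc": "2.0",
    "method": "workspace/didChangeConfiguration",
    "params": {"settings": settings},
}
assert "id" not in notification
print(notification["method"])  # -> workspace/didChangeConfiguration
```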

--------------------------------------------------------------------------------
/test/solidlsp/perl/test_perl_basic.py:
--------------------------------------------------------------------------------

```python
import platform
from pathlib import Path

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language


@pytest.mark.perl
@pytest.mark.skipif(platform.system() == "Windows", reason="Perl::LanguageServer does not support native Windows operation")
class TestPerlLanguageServer:
    """
    Tests for Perl::LanguageServer integration.

    Perl::LanguageServer provides comprehensive LSP support for Perl including:
    - Document symbols (functions, variables)
    - Go to definition (including cross-file)
    - Find references (including cross-file) - this was not available in PLS
    """

    @pytest.mark.parametrize("language_server", [Language.PERL], indirect=True)
    @pytest.mark.parametrize("repo_path", [Language.PERL], indirect=True)
    def test_ls_is_running(self, language_server: SolidLanguageServer, repo_path: Path) -> None:
        """Test that the language server starts and stops successfully."""
        # The fixture already handles start and stop
        assert language_server.is_running()
        assert Path(language_server.language_server.repository_root_path).resolve() == repo_path.resolve()

    @pytest.mark.parametrize("language_server", [Language.PERL], indirect=True)
    def test_document_symbols(self, language_server: SolidLanguageServer) -> None:
        """Test that document symbols are correctly identified."""
        # Request document symbols
        all_symbols, _ = language_server.request_document_symbols("main.pl", include_body=False)

        assert all_symbols, "Expected to find symbols in main.pl"
        assert len(all_symbols) > 0, "Expected at least one symbol"

        # DEBUG: Print all symbols
        print("\n=== All symbols in main.pl ===")
        for s in all_symbols:
            line = s.get("range", {}).get("start", {}).get("line", "?")
            print(f"Line {line}: {s.get('name')} (kind={s.get('kind')})")

        # Check that we can find function symbols
        function_symbols = [s for s in all_symbols if s.get("kind") == 12]  # 12 = Function/Method
        assert len(function_symbols) >= 2, f"Expected at least 2 functions (greet, use_helper_function), found {len(function_symbols)}"

        function_names = [s.get("name") for s in function_symbols]
        assert "greet" in function_names, f"Expected 'greet' function in symbols, found: {function_names}"
        assert "use_helper_function" in function_names, f"Expected 'use_helper_function' in symbols, found: {function_names}"

    # @pytest.mark.skip(reason="Perl::LanguageServer cross-file definition tracking needs configuration")
    @pytest.mark.parametrize("language_server", [Language.PERL], indirect=True)
    def test_find_definition_across_files(self, language_server: SolidLanguageServer) -> None:
        definition_location_list = language_server.request_definition("main.pl", 17, 0)

        assert len(definition_location_list) == 1
        definition_location = definition_location_list[0]
        print(f"Found definition: {definition_location}")
        assert definition_location["uri"].endswith("helper.pl")
        assert definition_location["range"]["start"]["line"] == 4  # helper function defined on line 5 (0-indexed 4)


    @pytest.mark.parametrize("language_server", [Language.PERL], indirect=True)
    def test_find_references_across_files(self, language_server: SolidLanguageServer) -> None:
        """Test finding references to a function across multiple files."""
        reference_locations = language_server.request_references("helper.pl", 4, 5)

        assert len(reference_locations) >= 2, f"Expected at least 2 references to helper_function, found {len(reference_locations)}"

        main_pl_refs = [ref for ref in reference_locations if ref["uri"].endswith("main.pl")]
        assert len(main_pl_refs) >= 2, f"Expected at least 2 references in main.pl, found {len(main_pl_refs)}"

        main_pl_lines = sorted([ref["range"]["start"]["line"] for ref in main_pl_refs])
        assert 17 in main_pl_lines, f"Expected reference at line 18 (0-indexed 17), found: {main_pl_lines}"
        assert 20 in main_pl_lines, f"Expected reference at line 21 (0-indexed 20), found: {main_pl_lines}"

```

--------------------------------------------------------------------------------
/src/solidlsp/util/zip.py:
--------------------------------------------------------------------------------

```python
import fnmatch
import logging
import os
import sys
import zipfile
from pathlib import Path
from typing import Optional

log = logging.getLogger(__name__)


class SafeZipExtractor:
    """
    A utility class for extracting ZIP archives safely.

    Features:
    - Handles long file paths on Windows
    - Skips files that fail to extract, continuing with the rest
    - Creates necessary directories automatically
    - Optional include/exclude pattern filters
    """

    def __init__(
        self,
        archive_path: Path,
        extract_dir: Path,
        verbose: bool = True,
        include_patterns: Optional[list[str]] = None,
        exclude_patterns: Optional[list[str]] = None,
    ) -> None:
        """
        Initialize the SafeZipExtractor.

        :param archive_path: Path to the ZIP archive file
        :param extract_dir: Directory where files will be extracted
        :param verbose: Whether to log status messages
        :param include_patterns: List of glob patterns for files to extract (None = all files)
        :param exclude_patterns: List of glob patterns for files to skip
        """
        self.archive_path = Path(archive_path)
        self.extract_dir = Path(extract_dir)
        self.verbose = verbose
        self.include_patterns = include_patterns or []
        self.exclude_patterns = exclude_patterns or []

    def extract_all(self) -> None:
        """
        Extract all files from the archive, skipping any that fail.
        """
        if not self.archive_path.exists():
            raise FileNotFoundError(f"Archive not found: {self.archive_path}")

        if self.verbose:
            log.info(f"Extracting from: {self.archive_path} to {self.extract_dir}")

        with zipfile.ZipFile(self.archive_path, "r") as zip_ref:
            for member in zip_ref.infolist():
                if self._should_extract(member.filename):
                    self._extract_member(zip_ref, member)
                elif self.verbose:
                    log.info(f"Skipped: {member.filename}")

    def _should_extract(self, filename: str) -> bool:
        """
        Determine whether a file should be extracted based on include/exclude patterns.

        :param filename: The file name from the archive
        :return: True if the file should be extracted
        """
        # If include_patterns is set, only extract if it matches at least one pattern
        if self.include_patterns:
            if not any(fnmatch.fnmatch(filename, pattern) for pattern in self.include_patterns):
                return False

        # If exclude_patterns is set, skip if it matches any pattern
        if self.exclude_patterns:
            if any(fnmatch.fnmatch(filename, pattern) for pattern in self.exclude_patterns):
                return False

        return True

    def _extract_member(self, zip_ref: zipfile.ZipFile, member: zipfile.ZipInfo) -> None:
        """
        Extract a single member from the archive with error handling.

        :param zip_ref: Open ZipFile object
        :param member: ZipInfo object representing the file
        """
        try:
            target_path = self.extract_dir / member.filename

            # Ensure directory structure exists
            target_path.parent.mkdir(parents=True, exist_ok=True)

            # Handle long paths on Windows
            final_path = self._normalize_path(target_path)

            # Extract file
            with zip_ref.open(member) as source, open(final_path, "wb") as target:
                target.write(source.read())

            if self.verbose:
                log.info(f"Extracted: {member.filename}")

        except Exception as e:
            log.error(f"Failed to extract {member.filename}: {e}")

    @staticmethod
    def _normalize_path(path: Path) -> Path:
        """
        Adjust path to handle long paths on Windows.

        :param path: Original path
        :return: Normalized path
        """
        if sys.platform.startswith("win"):
            return Path(rf"\\?\{os.path.abspath(path)}")
        return path


# Example usage:
# extractor = SafeZipExtractor(
#     archive_path=Path("file.nupkg"),
#     extract_dir=Path("extract_dir"),
#     include_patterns=["*.dll", "*.xml"],
#     exclude_patterns=["*.pdb"]
# )
# extractor.extract_all()

```
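The include/exclude decision in `_should_extract` can be exercised on its own. A standalone version of that filter logic (extracted here for illustration, with the same semantics: include patterns narrow, exclude patterns veto):

```python
import fnmatch


def should_extract(filename, include_patterns=None, exclude_patterns=None):
    # Include filter: when given, at least one pattern must match.
    if include_patterns and not any(fnmatch.fnmatch(filename, p) for p in include_patterns):
        return False
    # Exclude filter: any match rejects the file.
    if exclude_patterns and any(fnmatch.fnmatch(filename, p) for p in exclude_patterns):
        return False
    return True


assert should_extract("lib/net6.0/server.dll", include_patterns=["*.dll", "*.xml"])
assert not should_extract("lib/net6.0/server.pdb", exclude_patterns=["*.pdb"])
assert should_extract("anything.txt")  # no patterns: extract everything
```

Note that `fnmatch` patterns do not treat `/` specially, so `*.dll` also matches files nested in subdirectories of the archive.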

--------------------------------------------------------------------------------
/.serena/project.yml:
--------------------------------------------------------------------------------

```yaml
# language of the project (csharp, python, rust, java, typescript, javascript, go, cpp, or ruby)
# Special requirements:
#  * csharp: Requires the presence of a .sln file in the project folder.
language: python

# whether to use the project's gitignore file to ignore files
# Added on 2025-04-07
ignore_all_files_in_gitignore: true
# list of additional paths to ignore
# same syntax as gitignore, so you can use * and **
# Was previously called `ignored_dirs`, please update your config if you are using that.
# Added (renamed) on 2025-04-07
ignored_paths: []

# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false


# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions, 
# execute `uv run scripts/print_tool_overview.py`.
#
#  * `activate_project`: Activates a project by name.
#  * `check_onboarding_performed`: Checks whether project onboarding was already performed.
#  * `create_text_file`: Creates/overwrites a file in the project directory.
#  * `delete_lines`: Deletes a range of lines within a file.
#  * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
#  * `execute_shell_command`: Executes a shell command.
#  * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
#  * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
#  * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
#  * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
#  * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
#  * `initial_instructions`: Gets the initial instructions for the current project.
#     Should only be used in settings where the system prompt cannot be set,
#     e.g. in clients you have no control over, like Claude Desktop.
#  * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
#  * `insert_at_line`: Inserts content at a given line in a file.
#  * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
#  * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
#  * `list_memories`: Lists memories in Serena's project-specific memory store.
#  * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
#  * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
#  * `read_file`: Reads a file within the project directory.
#  * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
#  * `remove_project`: Removes a project from the Serena configuration.
#  * `replace_lines`: Replaces a range of lines within a file with new content.
#  * `replace_symbol_body`: Replaces the full definition of a symbol.
#  * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
#  * `search_for_pattern`: Performs a search for a pattern in the project.
#  * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
#  * `switch_modes`: Activates modes by providing a list of their names
#  * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
#  * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
#  * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
#  * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []

# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""

project_name: "serena"

```

--------------------------------------------------------------------------------
/src/serena/resources/project.template.yml:
--------------------------------------------------------------------------------

```yaml
# language of the project (csharp, python, rust, java, typescript, go, cpp, or ruby)
#  * For C, use cpp
#  * For JavaScript, use typescript
# Special requirements:
#  * csharp: Requires the presence of a .sln file in the project folder.
language: python

# whether to use the project's gitignore file to ignore files
# Added on 2025-04-07
ignore_all_files_in_gitignore: true
# list of additional paths to ignore
# same syntax as gitignore, so you can use * and **
# Was previously called `ignored_dirs`, please update your config if you are using that.
# Added (renamed) on 2025-04-07
ignored_paths: []

# whether the project is in read-only mode
# If set to true, all editing tools will be disabled and attempts to use them will result in an error
# Added on 2025-04-18
read_only: false

# list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
# Below is the complete list of tools for convenience.
# To make sure you have the latest list of tools, and to view their descriptions, 
# execute `uv run scripts/print_tool_overview.py`.
#
#  * `activate_project`: Activates a project by name.
#  * `check_onboarding_performed`: Checks whether project onboarding was already performed.
#  * `create_text_file`: Creates/overwrites a file in the project directory.
#  * `delete_lines`: Deletes a range of lines within a file.
#  * `delete_memory`: Deletes a memory from Serena's project-specific memory store.
#  * `execute_shell_command`: Executes a shell command.
#  * `find_referencing_code_snippets`: Finds code snippets in which the symbol at the given location is referenced.
#  * `find_referencing_symbols`: Finds symbols that reference the symbol at the given location (optionally filtered by type).
#  * `find_symbol`: Performs a global (or local) search for symbols with/containing a given name/substring (optionally filtered by type).
#  * `get_current_config`: Prints the current configuration of the agent, including the active and available projects, tools, contexts, and modes.
#  * `get_symbols_overview`: Gets an overview of the top-level symbols defined in a given file.
#  * `initial_instructions`: Gets the initial instructions for the current project.
#     Should only be used in settings where the system prompt cannot be set,
#     e.g. in clients you have no control over, like Claude Desktop.
#  * `insert_after_symbol`: Inserts content after the end of the definition of a given symbol.
#  * `insert_at_line`: Inserts content at a given line in a file.
#  * `insert_before_symbol`: Inserts content before the beginning of the definition of a given symbol.
#  * `list_dir`: Lists files and directories in the given directory (optionally with recursion).
#  * `list_memories`: Lists memories in Serena's project-specific memory store.
#  * `onboarding`: Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
#  * `prepare_for_new_conversation`: Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
#  * `read_file`: Reads a file within the project directory.
#  * `read_memory`: Reads the memory with the given name from Serena's project-specific memory store.
#  * `remove_project`: Removes a project from the Serena configuration.
#  * `replace_lines`: Replaces a range of lines within a file with new content.
#  * `replace_symbol_body`: Replaces the full definition of a symbol.
#  * `restart_language_server`: Restarts the language server, may be necessary when edits not through Serena happen.
#  * `search_for_pattern`: Performs a search for a pattern in the project.
#  * `summarize_changes`: Provides instructions for summarizing the changes made to the codebase.
#  * `switch_modes`: Activates modes by providing a list of their names
#  * `think_about_collected_information`: Thinking tool for pondering the completeness of collected information.
#  * `think_about_task_adherence`: Thinking tool for determining whether the agent is still on track with the current task.
#  * `think_about_whether_you_are_done`: Thinking tool for determining whether the task is truly completed.
#  * `write_memory`: Writes a named memory (for future reference) to Serena's project-specific memory store.
excluded_tools: []

# initial prompt for the project. It will always be given to the LLM upon activating the project
# (contrary to the memories, which are loaded on demand).
initial_prompt: ""

project_name: "project_name"

```

--------------------------------------------------------------------------------
/lessons_learned.md:
--------------------------------------------------------------------------------

```markdown
# Lessons Learned

In this document we briefly collect what we have learned while developing and using Serena,
what works well and what doesn't.

## What Worked

### Separate Tool Logic From MCP Implementation

MCP is just another protocol; one should not let its details creep into the application logic.
The official docs suggest using function annotations to define tools and prompts. While that may be
useful for small projects to get going fast, it is not wise for more serious projects. In Serena,
all tools are defined independently and then converted to instances of `MCPTool` using our `make_tool`
function.
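
A hypothetical sketch of that decoupling (`make_tool` exists per the text above, but this signature and the `MCPToolSketch` shape are illustrative assumptions, not Serena's actual classes):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class MCPToolSketch:
    name: str
    description: str
    handler: Callable[..., str]


class ReadFileTool:
    """Plain tool class, defined with no knowledge of MCP."""

    name = "read_file"
    description = "Reads a file within the project directory."

    def apply(self, path: str) -> str:
        return f"<contents of {path}>"


def make_tool(tool) -> MCPToolSketch:
    # The protocol adapter lives in one place instead of leaking into every tool.
    return MCPToolSketch(tool.name, tool.description, tool.apply)


mcp_tool = make_tool(ReadFileTool())
print(mcp_tool.handler("README.md"))  # -> <contents of README.md>
```

Swapping MCP for another protocol then only requires a new adapter, not changes to the tools themselves.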

### Autogenerated PromptFactory

Prompt templates are central for most LLM applications, so one needs good representations of them in the code,
while at the same time they often need to be customizable and exposed to users. In Serena we address these conflicting 
needs by defining prompt templates (in jinja format) in separate yamls that users can easily modify and by autogenerating
a `PromptFactory` class with meaningful method and parameter names from these yamls. The latter is committed to our code.
We separated out the generation logic into the [interprompt](/src/interprompt/README.md) subpackage that can be used as a library.

### Tempfiles and Snapshots for Testing of Editing Tools

We test most aspects of Serena by having a small "project" for each supported language in `test/resources`.
For the editing tools, which would change the code in these projects, we use tempfiles to copy over the code.
The pretty awesome [syrupy](https://github.com/syrupy-project/syrupy) pytest plugin helped in developing
snapshot tests.

### Dashboard and GUI for Logging

It is very useful to know what the MCP Server is doing. We collect and display logs in a GUI or a web dashboard,
which helps a lot in seeing what's going on and in identifying any issues.

### Unrestricted Bash Tool

We know it's not particularly safe to permit unlimited shell commands outside a sandbox, but we ran quite a few
evaluations and so far... nothing bad has happened. It seems the current versions of the AI overlords rarely want to execute `sudo rm -rf /`.
Still, we are working on a safer approach as well as better integration with sandboxing.

### Multilspy

The [multilspy](https://github.com/microsoft/multilspy/) project helped us a lot in getting started and stands at the core of Serena.
Many of the more well-known Python implementations of language server clients were subpar in code quality and design (for example, missing type annotations).

### Developing Serena with Serena

We clearly notice that the better the tool gets, the easier it is to make it even better.

## Prompting

### Shouting and Emotive Language May Be Needed

When developing the `ReplaceRegexTool` we were initially not able to make Claude 4 (in Claude Desktop) use wildcards to save on output tokens. Neither
examples nor explicit instructions helped. It was only after adding 

```
IMPORTANT: REMEMBER TO USE WILDCARDS WHEN APPROPRIATE! I WILL BE VERY UNHAPPY IF YOU WRITE LONG REGEXES WITHOUT USING WILDCARDS INSTEAD!
```

to the initial instructions and to the tool description that Claude finally started following them.

## What Didn't Work

### Lifespan Handling by MCP Clients

The MCP technology is clearly very green. Even though there is a lifespan context in the MCP SDK,
many clients, including Claude Desktop, fail to properly clean up, leaving zombie processes behind.
We mitigate this through the GUI window and the dashboard, so the user sees whether Serena is running
and can terminate it there.

### Trusting Asyncio

Running multiple asyncio apps in one process led to non-deterministic
event loop contamination and deadlocks, which were very hard to debug
and understand. We solved this with a large hammer: putting every asyncio app into a separate
process. This made the code more complex and slightly increased RAM requirements, but it seems
that was the only way to reliably overcome the asyncio deadlock issues.
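The isolation idea can be sketched as follows (hypothetical helper, not Serena's actual process management): run each asyncio app in a fresh interpreter so its event loop cannot interact with any loop in the parent.

```python
import subprocess
import sys


def run_async_app_isolated(code: str) -> str:
    """Run an asyncio snippet in a fresh Python process, so its event loop
    cannot contaminate or deadlock with any loop in the current process."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()


print(run_async_app_isolated(
    "import asyncio; print(asyncio.run(asyncio.sleep(0, result='done')))"
))  # → done
```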

### Cross-OS Tkinter GUI

Different OSes have different limitations when it comes to opening a window or dealing with Tkinter
installations. This was so messy to get right that we pivoted to a web dashboard instead.

### Editing Based on Line Numbers

Not only are LLMs notoriously bad at counting, but line numbers also change after edit operations,
and LLMs often fail to grasp that they should update line number information they
received earlier. We pivoted to string-matching and symbol-name-based editing.
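A minimal sketch of the content-based approach (a hypothetical example, not Serena's actual `ReplaceRegexTool` implementation):

```python
import re

source = "def greet(name):\n    message = 'Hello, ' + name\n    return message\n"

# Match on content instead of line numbers; the non-greedy wildcard keeps the
# pattern short even when many lines sit between the anchors.
edited, count = re.subn(
    r"message = .*?\n    return message",
    "return f'Hello, {name}'",
    source,
    flags=re.DOTALL,
)
assert count == 1  # refuse ambiguous or failed matches instead of guessing
print(edited)
```

Because the pattern anchors on code content, it stays valid no matter how earlier edits have shifted the file.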
```

--------------------------------------------------------------------------------
/test/conftest.py:
--------------------------------------------------------------------------------

```python
import logging
from pathlib import Path

import pytest
from sensai.util.logging import configure

from serena.constants import SERENA_MANAGED_DIR_IN_HOME, SERENA_MANAGED_DIR_NAME
from serena.project import Project
from serena.util.file_system import GitignoreParser
from solidlsp.ls import SolidLanguageServer
from solidlsp.ls_config import Language, LanguageServerConfig
from solidlsp.ls_logger import LanguageServerLogger
from solidlsp.settings import SolidLSPSettings

configure(level=logging.ERROR)


@pytest.fixture(scope="session")
def resources_dir() -> Path:
    """Path to the test resources directory."""
    current_dir = Path(__file__).parent
    return current_dir / "resources"


class LanguageParamRequest:
    param: Language


def get_repo_path(language: Language) -> Path:
    return Path(__file__).parent / "resources" / "repos" / language / "test_repo"


def create_ls(
    language: Language,
    repo_path: str | None = None,
    ignored_paths: list[str] | None = None,
    trace_lsp_communication: bool = False,
    log_level: int = logging.ERROR,
) -> SolidLanguageServer:
    ignored_paths = ignored_paths or []
    if repo_path is None:
        repo_path = str(get_repo_path(language))
    gitignore_parser = GitignoreParser(str(repo_path))
    for spec in gitignore_parser.get_ignore_specs():
        ignored_paths.extend(spec.patterns)
    config = LanguageServerConfig(code_language=language, ignored_paths=ignored_paths, trace_lsp_communication=trace_lsp_communication)
    logger = LanguageServerLogger(log_level=log_level)
    return SolidLanguageServer.create(
        config,
        logger,
        repo_path,
        solidlsp_settings=SolidLSPSettings(solidlsp_dir=SERENA_MANAGED_DIR_IN_HOME, project_data_relative_path=SERENA_MANAGED_DIR_NAME),
    )


def create_default_ls(language: Language) -> SolidLanguageServer:
    repo_path = str(get_repo_path(language))
    return create_ls(language, repo_path)


def create_default_project(language: Language) -> Project:
    repo_path = str(get_repo_path(language))
    return Project.load(repo_path)


@pytest.fixture(scope="session")
def repo_path(request: LanguageParamRequest) -> Path:
    """Get the repository path for a specific language.

    This fixture requires a language parameter via pytest.mark.parametrize:

    Example:
    ```
    @pytest.mark.parametrize("repo_path", [Language.PYTHON], indirect=True)
    def test_python_repo(repo_path):
        assert (repo_path / "src").exists()
    ```

    """
    if not hasattr(request, "param"):
        raise ValueError("Language parameter must be provided via pytest.mark.parametrize")

    language = request.param
    return get_repo_path(language)


@pytest.fixture(scope="session")
def language_server(request: LanguageParamRequest):
    """Create a language server instance configured for the specified language.

    This fixture requires a language parameter via pytest.mark.parametrize:

    Example:
    ```
    @pytest.mark.parametrize("language_server", [Language.PYTHON], indirect=True)
    def test_python_server(language_server: SolidLanguageServer) -> None:
        # Use the Python language server
        pass
    ```

    You can also test multiple languages in a single test:
    ```
    @pytest.mark.parametrize("language_server", [Language.PYTHON, Language.TYPESCRIPT], indirect=True)
    def test_multiple_languages(language_server: SolidLanguageServer) -> None:
        # This test will run once for each language
        pass
    ```

    """
    if not hasattr(request, "param"):
        raise ValueError("Language parameter must be provided via pytest.mark.parametrize")

    language = request.param
    server = create_default_ls(language)
    server.start()
    try:
        yield server
    finally:
        server.stop()


@pytest.fixture(scope="session")
def project(request: LanguageParamRequest):
    """Create a Project for the specified language.

    This fixture requires a language parameter via pytest.mark.parametrize:

    Example:
    ```
    @pytest.mark.parametrize("project", [Language.PYTHON], indirect=True)
    def test_python_project(project: Project) -> None:
        # Use the Python project to test something
        pass
    ```

    You can also test multiple languages in a single test:
    ```
    @pytest.mark.parametrize("project", [Language.PYTHON, Language.TYPESCRIPT], indirect=True)
    def test_multiple_languages(project: Project) -> None:
        # This test will run once for each language
        pass
    ```

    """
    if not hasattr(request, "param"):
        raise ValueError("Language parameter must be provided via pytest.mark.parametrize")

    language = request.param
    yield create_default_project(language)

```

--------------------------------------------------------------------------------
/src/serena/resources/serena_config.template.yml:
--------------------------------------------------------------------------------

```yaml
gui_log_window: False
# whether to open a graphical window with Serena's logs.
# This is mainly supported on Windows and (partly) on Linux; not available on macOS.
# If you want to see the logs in a web browser, use the `web_dashboard` option instead.
# Limitations: doesn't seem to work with the community version of Claude Desktop for Linux
# Might also cause problems with some MCP clients; if you have any issues, try disabling this option.

# Being able to inspect logs is useful both for troubleshooting and for monitoring the tool calls,
# especially when using the agno playground, since the tool calls are not always shown,
# and the input params are never shown in the agno UI.
# When used as MCP server for Claude Desktop, the logs are primarily for troubleshooting.
# Note: unfortunately, the various entities starting the Serena server or agent do so in
# mysterious ways, often starting multiple instances of the process without shutting down
# previous instances. This can lead to multiple log windows being opened, and only the last
# window being updated. Since we can't control how agno or Claude Desktop start Serena,
# we have to live with this limitation for now.

web_dashboard: True
# whether to enable the Serena web dashboard (accessible through your web browser), which
# shows Serena's current session logs. Unlike the GUI log window, the dashboard is
# supported on all platforms.

web_dashboard_open_on_launch: True
# whether to open a browser window with the web dashboard when Serena starts (provided that web_dashboard
# is enabled). If set to False, you can still open the dashboard manually by navigating to
# http://localhost:24282/dashboard/ in your web browser (24282 = 0x5EDA, SErena DAshboard).
# If you have multiple instances running, a higher port will be used; try port 24283, 24284, etc.

log_level: 20
# the minimum log level for the GUI log window and the dashboard (10 = debug, 20 = info, 30 = warning, 40 = error)

trace_lsp_communication: False
# whether to trace the communication between Serena and the language servers.
# This is useful for debugging language server issues.

ls_specific_settings: {}
# Added on 23.08.2025
# Advanced configuration option that allows configuring language-server-implementation-specific options. Maps the language
# (same entry as in project.yml) to its options.
# Have a look at the docstring of the constructors of the LS implementations within solidlsp (e.g., for C# or PHP) to see which options are available.
# No documentation on options means no options are available.

tool_timeout: 240
# timeout, in seconds, after which tool executions are terminated

excluded_tools: []
# list of tools to be globally excluded

included_optional_tools: []
# list of optional tools (which are disabled by default) to be included

jetbrains: False
# whether to enable JetBrains mode and use tools based on the Serena JetBrains IDE plugin
# instead of language server-based tools
# NOTE: The plugin is yet unreleased. This is for Serena developers only.


default_max_tool_answer_chars: 150000
# Used as default for tools where the apply method has a default maximal answer length.
# Even though the value of the max_answer_chars can be changed when calling the tool, it may make sense to adjust this default
# through the global configuration.

record_tool_usage_stats: False
# whether to record tool usage statistics; they will be shown in the web dashboard if recording is active.

token_count_estimator: TIKTOKEN_GPT4O
# Only relevant if `record_tool_usage_stats` is True; the name of the token count estimator to use for tool usage statistics.
# See the `RegisteredTokenCountEstimator` enum for available options.
#
# Note: some token estimators (like tiktoken) may require downloading data files
# on the first run, which can take some time and require internet access. Others, like the Anthropic ones, may require an API key
# and rate limits may apply.


# MANAGED BY SERENA, KEEP AT THE BOTTOM OF THE YAML AND DON'T EDIT WITHOUT NEED
# The list of registered projects.
# To add a project, within a chat, simply ask Serena to "activate the project /path/to/project" or,
# if the project was previously added, "activate the project <project name>".
# By default, the project's name will be the name of the directory containing the project, but you may change it
# by editing the (auto-generated) project configuration file `/path/to/project/.serena/project.yml`.
# If you want to maintain full control of the project configuration, create the project.yml file manually and then
# instruct Serena to activate the project by its path for first-time activation.
# NOTE: Make sure there are no name collisions in the names of registered projects.
projects: []

```

--------------------------------------------------------------------------------
/test/resources/repos/python/test_repo/examples/user_management.py:
--------------------------------------------------------------------------------

```python
"""
Example demonstrating user management with the test_repo module.

This example showcases:
- Creating and managing users
- Using various object types and relationships
- Type annotations and complex Python patterns
"""

import logging
from dataclasses import dataclass
from typing import Any

from test_repo.models import User, create_user_object
from test_repo.services import UserService

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


@dataclass
class UserStats:
    """Statistics about user activity."""

    user_id: str
    login_count: int = 0
    last_active_days: int = 0
    engagement_score: float = 0.0

    def is_active(self) -> bool:
        """Check if the user is considered active."""
        return self.last_active_days < 30


class UserManager:
    """Example class demonstrating complex user management."""

    def __init__(self, service: UserService):
        self.service = service
        self.active_users: dict[str, User] = {}
        self.user_stats: dict[str, UserStats] = {}

    def register_user(self, name: str, email: str, roles: list[str] | None = None) -> User:
        """Register a new user."""
        logger.info(f"Registering new user: {name} ({email})")
        user = self.service.create_user(name=name, email=email, roles=roles)
        self.active_users[user.id] = user
        self.user_stats[user.id] = UserStats(user_id=user.id)
        return user

    def get_user(self, user_id: str) -> User | None:
        """Get a user by ID."""
        if user_id in self.active_users:
            return self.active_users[user_id]

        # Try to fetch from service
        user = self.service.get_user(user_id)
        if user:
            self.active_users[user.id] = user
        return user

    def update_user_stats(self, user_id: str, login_count: int, days_since_active: int) -> None:
        """Update statistics for a user."""
        if user_id not in self.user_stats:
            self.user_stats[user_id] = UserStats(user_id=user_id)

        stats = self.user_stats[user_id]
        stats.login_count = login_count
        stats.last_active_days = days_since_active

        # Calculate engagement score based on activity
        engagement = (100 - min(days_since_active, 100)) * 0.8
        engagement += min(login_count, 20) * 0.2
        stats.engagement_score = engagement

    def get_active_users(self) -> list[User]:
        """Get all active users."""
        active_user_ids = [user_id for user_id, stats in self.user_stats.items() if stats.is_active()]
        return [self.active_users[user_id] for user_id in active_user_ids if user_id in self.active_users]

    def get_user_by_email(self, email: str) -> User | None:
        """Find a user by their email address."""
        for user in self.active_users.values():
            if user.email == email:
                return user
        return None


# Example function demonstrating type annotations
def process_user_data(users: list[User], include_inactive: bool = False, transform_func: callable | None = None) -> dict[str, Any]:
    """Process user data with optional transformations."""
    result: dict[str, Any] = {"users": [], "total": 0, "admin_count": 0}

    for user in users:
        if transform_func:
            user_data = transform_func(user.to_dict())
        else:
            user_data = user.to_dict()

        result["users"].append(user_data)
        result["total"] += 1

        if "admin" in user.roles:
            result["admin_count"] += 1

    return result


def main():
    """Main function demonstrating the usage of UserManager."""
    # Initialize service and manager
    service = UserService()
    manager = UserManager(service)

    # Register some users
    admin = manager.register_user("Admin User", "[email protected]", ["admin"])
    user1 = manager.register_user("Regular User", "[email protected]", ["user"])
    user2 = manager.register_user("Another User", "[email protected]", ["user"])

    # Update some stats
    manager.update_user_stats(admin.id, 100, 5)
    manager.update_user_stats(user1.id, 50, 10)
    manager.update_user_stats(user2.id, 10, 45)  # Inactive user

    # Get active users
    active_users = manager.get_active_users()
    logger.info(f"Active users: {len(active_users)}")

    # Process user data
    user_data = process_user_data(active_users, transform_func=lambda u: {**u, "full_name": u.get("name", "")})

    logger.info(f"Processed {user_data['total']} users, {user_data['admin_count']} admins")

    # Example of calling create_user directly
    external_user = create_user_object(id="ext123", name="External User", email="[email protected]", roles=["external"])
    logger.info(f"Created external user: {external_user.name}")


if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/test/resources/repos/python/test_repo/ignore_this_dir_with_postfix/ignored_module.py:
--------------------------------------------------------------------------------

```python
"""
Example demonstrating user management with the test_repo module.

This example showcases:
- Creating and managing users
- Using various object types and relationships
- Type annotations and complex Python patterns
"""

import logging
from dataclasses import dataclass
from typing import Any

from test_repo.models import User, create_user_object
from test_repo.services import UserService

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


@dataclass
class UserStats:
    """Statistics about user activity."""

    user_id: str
    login_count: int = 0
    last_active_days: int = 0
    engagement_score: float = 0.0

    def is_active(self) -> bool:
        """Check if the user is considered active."""
        return self.last_active_days < 30


class UserManager:
    """Example class demonstrating complex user management."""

    def __init__(self, service: UserService):
        self.service = service
        self.active_users: dict[str, User] = {}
        self.user_stats: dict[str, UserStats] = {}

    def register_user(self, name: str, email: str, roles: list[str] | None = None) -> User:
        """Register a new user."""
        logger.info(f"Registering new user: {name} ({email})")
        user = self.service.create_user(name=name, email=email, roles=roles)
        self.active_users[user.id] = user
        self.user_stats[user.id] = UserStats(user_id=user.id)
        return user

    def get_user(self, user_id: str) -> User | None:
        """Get a user by ID."""
        if user_id in self.active_users:
            return self.active_users[user_id]

        # Try to fetch from service
        user = self.service.get_user(user_id)
        if user:
            self.active_users[user.id] = user
        return user

    def update_user_stats(self, user_id: str, login_count: int, days_since_active: int) -> None:
        """Update statistics for a user."""
        if user_id not in self.user_stats:
            self.user_stats[user_id] = UserStats(user_id=user_id)

        stats = self.user_stats[user_id]
        stats.login_count = login_count
        stats.last_active_days = days_since_active

        # Calculate engagement score based on activity
        engagement = (100 - min(days_since_active, 100)) * 0.8
        engagement += min(login_count, 20) * 0.2
        stats.engagement_score = engagement

    def get_active_users(self) -> list[User]:
        """Get all active users."""
        active_user_ids = [user_id for user_id, stats in self.user_stats.items() if stats.is_active()]
        return [self.active_users[user_id] for user_id in active_user_ids if user_id in self.active_users]

    def get_user_by_email(self, email: str) -> User | None:
        """Find a user by their email address."""
        for user in self.active_users.values():
            if user.email == email:
                return user
        return None


# Example function demonstrating type annotations
def process_user_data(users: list[User], include_inactive: bool = False, transform_func: callable | None = None) -> dict[str, Any]:
    """Process user data with optional transformations."""
    result: dict[str, Any] = {"users": [], "total": 0, "admin_count": 0}

    for user in users:
        if transform_func:
            user_data = transform_func(user.to_dict())
        else:
            user_data = user.to_dict()

        result["users"].append(user_data)
        result["total"] += 1

        if "admin" in user.roles:
            result["admin_count"] += 1

    return result


def main():
    """Main function demonstrating the usage of UserManager."""
    # Initialize service and manager
    service = UserService()
    manager = UserManager(service)

    # Register some users
    admin = manager.register_user("Admin User", "[email protected]", ["admin"])
    user1 = manager.register_user("Regular User", "[email protected]", ["user"])
    user2 = manager.register_user("Another User", "[email protected]", ["user"])

    # Update some stats
    manager.update_user_stats(admin.id, 100, 5)
    manager.update_user_stats(user1.id, 50, 10)
    manager.update_user_stats(user2.id, 10, 45)  # Inactive user

    # Get active users
    active_users = manager.get_active_users()
    logger.info(f"Active users: {len(active_users)}")

    # Process user data
    user_data = process_user_data(active_users, transform_func=lambda u: {**u, "full_name": u.get("name", "")})

    logger.info(f"Processed {user_data['total']} users, {user_data['admin_count']} admins")

    # Example of calling create_user directly
    external_user = create_user_object(id="ext123", name="External User", email="[email protected]", roles=["external"])
    logger.info(f"Created external user: {external_user.name}")


if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/test/solidlsp/r/test_r_basic.py:
--------------------------------------------------------------------------------

```python
"""
Basic tests for R Language Server integration
"""

import os
from pathlib import Path

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language


@pytest.mark.r
class TestRLanguageServer:
    """Test basic functionality of the R language server."""

    @pytest.mark.parametrize("language_server", [Language.R], indirect=True)
    @pytest.mark.parametrize("repo_path", [Language.R], indirect=True)
    def test_server_initialization(self, language_server: SolidLanguageServer, repo_path: Path):
        """Test that the R language server initializes properly."""
        assert language_server is not None
        assert language_server.language_id == "r"
        assert language_server.is_running()
        assert Path(language_server.language_server.repository_root_path).resolve() == repo_path.resolve()

    @pytest.mark.parametrize("language_server", [Language.R], indirect=True)
    def test_symbol_retrieval(self, language_server: SolidLanguageServer):
        """Test R document symbol extraction."""
        all_symbols, _root_symbols = language_server.request_document_symbols(os.path.join("R", "utils.R"))

        # Should find the three exported functions
        function_symbols = [s for s in all_symbols if s.get("kind") == 12]  # Function kind
        assert len(function_symbols) >= 3

        # Check that we found the expected functions
        function_names = {s.get("name") for s in function_symbols}
        expected_functions = {"calculate_mean", "process_data", "create_data_frame"}
        assert expected_functions.issubset(function_names), f"Expected functions {expected_functions} but found {function_names}"

    @pytest.mark.parametrize("language_server", [Language.R], indirect=True)
    def test_find_definition_across_files(self, language_server: SolidLanguageServer):
        """Test finding function definitions across files."""
        analysis_file = os.path.join("examples", "analysis.R")

        # In analysis.R line 7: create_data_frame(n = 50)
        # The function create_data_frame is defined in R/utils.R
        # Find definition of create_data_frame function call (0-indexed: line 6)
        definition_location_list = language_server.request_definition(analysis_file, 6, 17)  # cursor on 'create_data_frame'

        assert definition_location_list, f"Expected non-empty definition_location_list but got {definition_location_list=}"
        assert len(definition_location_list) >= 1
        definition_location = definition_location_list[0]
        assert definition_location["uri"].endswith("utils.R")
        # Definition should be around line 37 (0-indexed: 36) where create_data_frame is defined
        assert definition_location["range"]["start"]["line"] >= 35

    @pytest.mark.parametrize("language_server", [Language.R], indirect=True)
    def test_find_references_across_files(self, language_server: SolidLanguageServer):
        """Test finding function references across files."""
        analysis_file = os.path.join("examples", "analysis.R")

        # Test from usage side: find references to calculate_mean from its usage in analysis.R
        # In analysis.R line 13: calculate_mean(clean_data$value)
        # calculate_mean function call is at line 13 (0-indexed: line 12)
        references = language_server.request_references(analysis_file, 12, 15)  # cursor on 'calculate_mean'

        assert references, f"Expected non-empty references for calculate_mean but got {references=}"

        # Must find the definition in utils.R (cross-file reference)
        reference_files = [ref["uri"] for ref in references]
        assert any(uri.endswith("utils.R") for uri in reference_files), "Cross-file reference to definition in utils.R not found"

        # Verify we actually found the right location in utils.R
        utils_refs = [ref for ref in references if ref["uri"].endswith("utils.R")]
        assert len(utils_refs) >= 1, "Should find at least one reference in utils.R"
        utils_ref = utils_refs[0]
        # Should be around line 6 where calculate_mean is defined (0-indexed: line 5)
        assert (
            utils_ref["range"]["start"]["line"] == 5
        ), f"Expected reference at line 5 in utils.R, got line {utils_ref['range']['start']['line']}"

    def test_file_matching(self):
        """Test that R files are properly matched."""
        from solidlsp.ls_config import Language

        matcher = Language.R.get_source_fn_matcher()

        assert matcher.is_relevant_filename("script.R")
        assert matcher.is_relevant_filename("analysis.r")
        assert not matcher.is_relevant_filename("script.py")
        assert not matcher.is_relevant_filename("README.md")

    def test_r_language_enum(self):
        """Test R language enum value."""
        assert Language.R == "r"
        assert str(Language.R) == "r"

```

--------------------------------------------------------------------------------
/test/resources/repos/python/test_repo/scripts/run_app.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python
"""
Main entry point script for the test_repo application.

This script demonstrates how a typical application entry point would be structured,
with command-line arguments, configuration loading, and service initialization.
"""

import argparse
import json
import logging
import os
import sys
from typing import Any

# Add parent directory to path to make imports work
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from test_repo.models import Item, User
from test_repo.services import ItemService, UserService

# Configure logging
logging.basicConfig(level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s")
logger = logging.getLogger(__name__)


def parse_args():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Test Repo Application")

    parser.add_argument("--config", type=str, default="config.json", help="Path to configuration file")

    parser.add_argument("--mode", choices=["user", "item", "both"], default="both", help="Operation mode")

    parser.add_argument("--verbose", action="store_true", help="Enable verbose logging")

    return parser.parse_args()


def load_config(config_path: str) -> dict[str, Any]:
    """Load configuration from a JSON file."""
    if not os.path.exists(config_path):
        logger.warning(f"Configuration file not found: {config_path}")
        return {}

    try:
        with open(config_path, encoding="utf-8") as f:
            return json.load(f)
    except json.JSONDecodeError:
        logger.error(f"Invalid JSON in configuration file: {config_path}")
        return {}
    except Exception as e:
        logger.error(f"Error loading configuration: {e}")
        return {}


def create_sample_users(service: UserService, count: int = 3) -> list[User]:
    """Create sample users for demonstration."""
    users = []

    # Create admin user
    admin = service.create_user(name="Admin User", email="[email protected]", roles=["admin"])
    users.append(admin)

    # Create regular users
    for i in range(count - 1):
        user = service.create_user(name=f"User {i + 1}", email=f"user{i + 1}@example.com", roles=["user"])
        users.append(user)

    return users


def create_sample_items(service: ItemService, count: int = 5) -> list[Item]:
    """Create sample items for demonstration."""
    categories = ["Electronics", "Books", "Clothing", "Food", "Other"]
    items = []

    for i in range(count):
        category = categories[i % len(categories)]
        item = service.create_item(name=f"Item {i + 1}", price=10.0 * (i + 1), category=category)
        items.append(item)

    return items


def run_user_operations(service: UserService, config: dict[str, Any]) -> None:
    """Run operations related to users."""
    logger.info("Running user operations")

    # Get configuration
    user_count = config.get("user_count", 3)

    # Create users
    users = create_sample_users(service, user_count)
    logger.info(f"Created {len(users)} users")

    # Demonstrate some operations
    for user in users:
        logger.info(f"User: {user.name} (ID: {user.id})")

        # Access a method to demonstrate method calls
        if user.has_role("admin"):
            logger.info(f"{user.name} is an admin")

    # Lookup a user
    found_user = service.get_user(users[0].id)
    if found_user:
        logger.info(f"Found user: {found_user.name}")


def run_item_operations(service: ItemService, config: dict[str, Any]) -> None:
    """Run operations related to items."""
    logger.info("Running item operations")

    # Get configuration
    item_count = config.get("item_count", 5)

    # Create items
    items = create_sample_items(service, item_count)
    logger.info(f"Created {len(items)} items")

    # Demonstrate some operations
    total_price = 0.0
    for item in items:
        price_display = item.get_display_price()
        logger.info(f"Item: {item.name}, Price: {price_display}")
        total_price += item.price

    logger.info(f"Total price of all items: ${total_price:.2f}")


def main():
    """Main entry point for the application."""
    # Parse command line arguments
    args = parse_args()

    # Configure logging level
    if args.verbose:
        logging.getLogger().setLevel(logging.DEBUG)

    logger.info("Starting Test Repo Application")

    # Load configuration
    config = load_config(args.config)
    logger.debug(f"Loaded configuration: {config}")

    # Initialize services
    user_service = UserService()
    item_service = ItemService()

    # Run operations based on mode
    if args.mode in ("user", "both"):
        run_user_operations(user_service, config)

    if args.mode in ("item", "both"):
        run_item_operations(item_service, config)

    logger.info("Application completed successfully")


item_reference = Item(id="1", name="Item 1", price=10.0, category="Electronics")

if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/src/serena/resources/config/prompt_templates/system_prompt.yml:
--------------------------------------------------------------------------------

```yaml
# The system prompt template. Note that many clients will not allow configuration of the actual system prompt,
# in which case this prompt will be given as a regular message on the call of a simple tool which the agent
# is encouraged (via the tool description) to call at the beginning of the conversation.
prompts:
  system_prompt: |
    You are a professional coding agent concerned with one particular codebase. You have
    access to semantic coding tools on which you rely heavily for all your work, as well as a collection of memory
    files containing general information about the codebase. You operate in a resource-efficient and intelligent manner,
    taking care not to read or generate content that is not needed for the task at hand.

    When reading code in order to answer a user question or task, you should try reading only the necessary code. 
    Some tasks may require you to understand the architecture of large parts of the codebase, while for others,
    it may be enough to read a small set of symbols or a single file.
    Generally, you should avoid reading entire files unless it is absolutely necessary, instead relying on
    intelligent step-by-step acquisition of information. {% if 'ToolMarkerSymbolicRead' in available_markers %}However, if you have already read a file, it does not
    make sense to analyse it further with the symbolic tools (except for the `find_referencing_symbols` tool),
    as you already have the information.{% endif %}

    I WILL BE SERIOUSLY UPSET IF YOU READ ENTIRE FILES WITHOUT NEED!
    {% if 'ToolMarkerSymbolicRead' in available_markers %}
    CONSIDER INSTEAD USING THE OVERVIEW TOOL AND SYMBOLIC TOOLS TO READ ONLY THE NECESSARY CODE FIRST!
    I WILL BE EVEN MORE UPSET IF AFTER HAVING READ AN ENTIRE FILE YOU KEEP READING THE SAME CONTENT WITH THE SYMBOLIC TOOLS!
    THE PURPOSE OF THE SYMBOLIC TOOLS IS TO HAVE TO READ LESS CODE, NOT READ THE SAME CONTENT MULTIPLE TIMES!
    {% endif %}

    You can achieve the intelligent reading of code by using the symbolic tools for getting an overview of symbols and
    the relations between them, and then only reading the bodies of symbols that are necessary to answer the question 
    or complete the task. 
    You can use the standard tools like list_dir, find_file and search_for_pattern if you need to.
    When tools allow it, you pass the `relative_path` parameter to restrict the search to a specific file or directory.
    For some tools, `relative_path` can only be a file path, so make sure to properly read the tool descriptions.
    {% if 'search_for_pattern' in available_tools %}
    If you are unsure about a symbol's name or location{% if 'find_symbol' in available_tools %} (to the extent that substring_matching for the symbol name is not enough){% endif %}, you can use the `search_for_pattern` tool, which allows fast
    and flexible search for patterns in the codebase. {% if 'ToolMarkerSymbolicRead' in available_markers %}This way you can first find candidates for symbols or files,
    and then proceed with the symbolic tools.{% endif %}
    {% endif %}

    {% if 'ToolMarkerSymbolicRead' in available_markers %}
    Symbols are identified by their `name_path` and `relative_path`; see the description of the `find_symbol` tool for more details
    on how the `name_path` matches symbols.
    You can get information about available symbols by using the `get_symbols_overview` tool for finding top-level symbols in a file,
    or by using `find_symbol` if you already know the symbol's name path. You generally try to read as little code as possible
    while still solving your task, meaning you only read the bodies when you need to, and after you have found the symbol you want to edit.
    For example, if you are working with python code and already know that you need to read the body of the constructor of the class Foo, you can directly
    use `find_symbol` with the name path `Foo/__init__` and `include_body=True`. If you don't know yet which methods in `Foo` you need to read or edit,
    you can use `find_symbol` with the name path `Foo`, `include_body=False` and `depth=1` to get all (top-level) methods of `Foo` before proceeding
    to read the desired methods with `include_body=True`.
    You can understand relationships between symbols by using the `find_referencing_symbols` tool.
    {% endif %}

    {% if 'read_memory' in available_tools %}
    You generally have access to memories and it may be useful for you to read them, but also only if they help you
    to answer the question or complete the task. You can infer which memories are relevant to the current task by reading
    the memory names and descriptions.
    {% endif %}

    The context and modes of operation are described below. From them you can infer how to interact with your user
    and which tasks and kinds of interactions are expected of you.

    Context description:
    {{ context_system_prompt }}

    Modes descriptions:
    {% for prompt in mode_system_prompts %}
    - {{ prompt }}
    {% endfor %}

```

--------------------------------------------------------------------------------
/test/solidlsp/elixir/test_elixir_basic.py:
--------------------------------------------------------------------------------

```python
"""
Basic integration tests for the Elixir language server functionality.

These tests validate the functionality of the language server APIs
like request_references using the test repository.
"""

import os

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language

from . import NEXTLS_UNAVAILABLE, NEXTLS_UNAVAILABLE_REASON

# These marks will be applied to all tests in this module
pytestmark = [pytest.mark.elixir, pytest.mark.skipif(NEXTLS_UNAVAILABLE, reason=f"Next LS not available: {NEXTLS_UNAVAILABLE_REASON}")]


class TestElixirBasic:
    """Basic Elixir language server functionality tests."""

    @pytest.mark.parametrize("language_server", [Language.ELIXIR], indirect=True)
    def test_request_references_function_definition(self, language_server: SolidLanguageServer):
        """Test finding references to a function definition."""
        file_path = os.path.join("lib", "models.ex")
        symbols = language_server.request_document_symbols(file_path)

        # Find the User module's 'new' function
        user_new_symbol = None
        for symbol in symbols[0]:  # Top level symbols
            if symbol.get("name") == "User" and symbol.get("kind") == 2:  # Module
                for child in symbol.get("children", []):
                    if child.get("name", "").startswith("def new(") and child.get("kind") == 12:  # Function
                        user_new_symbol = child
                        break
                break

        if not user_new_symbol or "selectionRange" not in user_new_symbol:
            pytest.skip("User.new function or its selectionRange not found")

        sel_start = user_new_symbol["selectionRange"]["start"]
        references = language_server.request_references(file_path, sel_start["line"], sel_start["character"])

        assert references is not None
        assert len(references) > 0

        # Should find at least one reference (the definition itself)
        found_definition = any(ref["uri"].endswith("models.ex") for ref in references)
        assert found_definition, "Should find the function definition"

    @pytest.mark.parametrize("language_server", [Language.ELIXIR], indirect=True)
    def test_request_references_create_user_function(self, language_server: SolidLanguageServer):
        """Test finding references to create_user function."""
        file_path = os.path.join("lib", "services.ex")
        symbols = language_server.request_document_symbols(file_path)

        # Find the UserService module's 'create_user' function
        create_user_symbol = None
        for symbol in symbols[0]:  # Top level symbols
            if symbol.get("name") == "UserService" and symbol.get("kind") == 2:  # Module
                for child in symbol.get("children", []):
                    if child.get("name", "").startswith("def create_user(") and child.get("kind") == 12:  # Function
                        create_user_symbol = child
                        break
                break

        if not create_user_symbol or "selectionRange" not in create_user_symbol:
            pytest.skip("UserService.create_user function or its selectionRange not found")

        sel_start = create_user_symbol["selectionRange"]["start"]
        references = language_server.request_references(file_path, sel_start["line"], sel_start["character"])

        assert references is not None
        assert len(references) > 0

    @pytest.mark.parametrize("language_server", [Language.ELIXIR], indirect=True)
    def test_request_referencing_symbols_function(self, language_server: SolidLanguageServer):
        """Test finding symbols that reference a specific function."""
        file_path = os.path.join("lib", "models.ex")
        symbols = language_server.request_document_symbols(file_path)

        # Find the User module's 'new' function
        user_new_symbol = None
        for symbol in symbols[0]:  # Top level symbols
            if symbol.get("name") == "User" and symbol.get("kind") == 2:  # Module
                for child in symbol.get("children", []):
                    if child.get("name", "").startswith("def new(") and child.get("kind") == 12:  # Function
                        user_new_symbol = child
                        break
                break

        if not user_new_symbol or "selectionRange" not in user_new_symbol:
            pytest.skip("User.new function or its selectionRange not found")

        sel_start = user_new_symbol["selectionRange"]["start"]
        referencing_symbols = language_server.request_referencing_symbols(file_path, sel_start["line"], sel_start["character"])

        assert referencing_symbols is not None

    @pytest.mark.parametrize("language_server", [Language.ELIXIR], indirect=True)
    def test_timeout_enumeration_bug(self, language_server: SolidLanguageServer):
        """Test that enumeration doesn't timeout (regression test)."""
        # This should complete without timing out
        symbols = language_server.request_document_symbols("lib/models.ex")
        assert symbols is not None

        # Test multiple symbol requests in succession
        for _ in range(3):
            symbols = language_server.request_document_symbols("lib/services.ex")
            assert symbols is not None

```

--------------------------------------------------------------------------------
/src/serena/tools/workflow_tools.py:
--------------------------------------------------------------------------------

```python
"""
Tools supporting the general workflow of the agent
"""

import json
import platform

from serena.tools import Tool, ToolMarkerDoesNotRequireActiveProject, ToolMarkerOptional


class CheckOnboardingPerformedTool(Tool):
    """
    Checks whether project onboarding was already performed.
    """

    def apply(self) -> str:
        """
        Checks whether project onboarding was already performed.
        You should always call this tool before beginning to actually work on the project/after activating a project,
        but after calling the initial instructions tool.
        """
        from .memory_tools import ListMemoriesTool

        list_memories_tool = self.agent.get_tool(ListMemoriesTool)
        memories = json.loads(list_memories_tool.apply())
        if len(memories) == 0:
            return (
                "Onboarding not performed yet (no memories available). "
                + "You should perform onboarding by calling the `onboarding` tool before proceeding with the task."
            )
        else:
            return f"""The onboarding was already performed, below is the list of available memories.
            Do not read them immediately, just remember that they exist and that you can read them later, if it is necessary
            for the current task.
            Some memories may be based on previous conversations, others may be general for the current project.
            You should be able to tell which one you need based on the name of the memory.
            
            {memories}"""


class OnboardingTool(Tool):
    """
    Performs onboarding (identifying the project structure and essential tasks, e.g. for testing or building).
    """

    def apply(self) -> str:
        """
        Call this tool if onboarding was not performed yet.
        You will call this tool at most once per conversation.

        :return: instructions on how to create the onboarding information
        """
        system = platform.system()
        return self.prompt_factory.create_onboarding_prompt(system=system)


class ThinkAboutCollectedInformationTool(Tool):
    """
    Thinking tool for pondering the completeness of collected information.
    """

    def apply(self) -> str:
        """
        Think about the collected information and whether it is sufficient and relevant.
        This tool should ALWAYS be called after you have completed a non-trivial sequence of searching steps like
        find_symbol, find_referencing_symbols, search_for_pattern, read_file, etc.
        """
        return self.prompt_factory.create_think_about_collected_information()


class ThinkAboutTaskAdherenceTool(Tool):
    """
    Thinking tool for determining whether the agent is still on track with the current task.
    """

    def apply(self) -> str:
        """
        Think about the task at hand and whether you are still on track.
        Especially important if the conversation has been going on for a while and there
        has been a lot of back and forth.

        This tool should ALWAYS be called before you insert, replace, or delete code.
        """
        return self.prompt_factory.create_think_about_task_adherence()


class ThinkAboutWhetherYouAreDoneTool(Tool):
    """
    Thinking tool for determining whether the task is truly completed.
    """

    def apply(self) -> str:
        """
        Whenever you feel that you are done with what the user has asked for, it is important to call this tool.
        """
        return self.prompt_factory.create_think_about_whether_you_are_done()


class SummarizeChangesTool(Tool, ToolMarkerOptional):
    """
    Provides instructions for summarizing the changes made to the codebase.
    """

    def apply(self) -> str:
        """
        Summarize the changes you have made to the codebase.
        This tool should always be called after you have fully completed any non-trivial coding task,
        but only after the think_about_whether_you_are_done call.
        """
        return self.prompt_factory.create_summarize_changes()


class PrepareForNewConversationTool(Tool):
    """
    Provides instructions for preparing for a new conversation (in order to continue with the necessary context).
    """

    def apply(self) -> str:
        """
        Instructions for preparing for a new conversation. This tool should only be called on explicit user request.
        """
        return self.prompt_factory.create_prepare_for_new_conversation()


class InitialInstructionsTool(Tool, ToolMarkerDoesNotRequireActiveProject, ToolMarkerOptional):
    """
    Gets the initial instructions for the current project.
    Should only be used in settings where the system prompt cannot be set,
    e.g. in clients you have no control over, like Claude Desktop.
    """

    def apply(self) -> str:
        """
        Get the initial instructions for the current coding project.
        If you haven't received instructions on how to use Serena's tools in the system prompt,
        you should always call this tool before starting to work (including using any other tool) on any programming task,
        the only exception being when you are asked to call `activate_project`, which you should then call before.
        """
        return self.agent.create_system_prompt()

```

--------------------------------------------------------------------------------
/test/serena/util/test_exception.py:
--------------------------------------------------------------------------------

```python
import os
from unittest.mock import MagicMock, Mock, patch

import pytest

from serena.util.exception import is_headless_environment, show_fatal_exception_safe


class TestHeadlessEnvironmentDetection:
    """Test class for headless environment detection functionality."""

    def test_is_headless_no_display(self):
        """Test that environment without DISPLAY is detected as headless on Linux."""
        with patch("sys.platform", "linux"):
            with patch.dict(os.environ, {}, clear=True):
                assert is_headless_environment() is True

    def test_is_headless_ssh_connection(self):
        """Test that SSH sessions are detected as headless."""
        with patch("sys.platform", "linux"):
            with patch.dict(os.environ, {"SSH_CONNECTION": "192.168.1.1 22 192.168.1.2 22", "DISPLAY": ":0"}):
                assert is_headless_environment() is True

            with patch.dict(os.environ, {"SSH_CLIENT": "192.168.1.1 22 22", "DISPLAY": ":0"}):
                assert is_headless_environment() is True

    def test_is_headless_wsl(self):
        """Test that WSL environment is detected as headless."""
        # Skip this test on Windows since os.uname doesn't exist
        if not hasattr(os, "uname"):
            pytest.skip("os.uname not available on this platform")

        with patch("sys.platform", "linux"):
            with patch("os.uname") as mock_uname:
                mock_uname.return_value = Mock(release="5.15.153.1-microsoft-standard-WSL2")
                with patch.dict(os.environ, {"DISPLAY": ":0"}):
                    assert is_headless_environment() is True

    def test_is_headless_docker(self):
        """Test that Docker containers are detected as headless."""
        with patch("sys.platform", "linux"):
            # Test with CI environment variable
            with patch.dict(os.environ, {"CI": "true", "DISPLAY": ":0"}):
                assert is_headless_environment() is True

            # Test with CONTAINER environment variable
            with patch.dict(os.environ, {"CONTAINER": "docker", "DISPLAY": ":0"}):
                assert is_headless_environment() is True

            # Test with .dockerenv file
            with patch("os.path.exists") as mock_exists:
                mock_exists.return_value = True
                with patch.dict(os.environ, {"DISPLAY": ":0"}):
                    assert is_headless_environment() is True

    def test_is_not_headless_windows(self):
        """Test that Windows is never detected as headless."""
        with patch("sys.platform", "win32"):
            # Even without DISPLAY, Windows should not be headless
            with patch.dict(os.environ, {}, clear=True):
                assert is_headless_environment() is False


class TestShowFatalExceptionSafe:
    """Test class for safe fatal exception display functionality."""

    @patch("serena.util.exception.is_headless_environment", return_value=True)
    @patch("serena.util.exception.log")
    def test_show_fatal_exception_safe_headless(self, mock_log, mock_is_headless):
        """Test that GUI is not attempted in headless environment."""
        test_exception = ValueError("Test error")

        # The import should never happen in headless mode
        with patch("serena.gui_log_viewer.show_fatal_exception") as mock_show_gui:
            show_fatal_exception_safe(test_exception)
            mock_show_gui.assert_not_called()

        # Verify debug log about skipping GUI
        mock_log.debug.assert_called_once_with("Skipping GUI error display in headless environment")

    @patch("serena.util.exception.is_headless_environment", return_value=False)
    @patch("serena.util.exception.log")
    def test_show_fatal_exception_safe_with_gui(self, mock_log, mock_is_headless):
        """Test that GUI is attempted when not in headless environment."""
        test_exception = ValueError("Test error")

        # Mock the GUI function
        with patch("serena.gui_log_viewer.show_fatal_exception") as mock_show_gui:
            show_fatal_exception_safe(test_exception)
            mock_show_gui.assert_called_once_with(test_exception)

    @patch("serena.util.exception.is_headless_environment", return_value=False)
    @patch("serena.util.exception.log")
    def test_show_fatal_exception_safe_gui_failure(self, mock_log, mock_is_headless):
        """Test graceful handling when GUI display fails."""
        test_exception = ValueError("Test error")
        gui_error = ImportError("No module named 'tkinter'")

        # Mock the GUI function to raise an exception
        with patch("serena.gui_log_viewer.show_fatal_exception", side_effect=gui_error):
            show_fatal_exception_safe(test_exception)

        # Verify debug log about GUI failure
        mock_log.debug.assert_called_with(f"Failed to show GUI error dialog: {gui_error}")

    def test_show_fatal_exception_safe_prints_to_stderr(self):
        """Test that exceptions are always printed to stderr."""
        test_exception = ValueError("Test error message")

        with patch("sys.stderr", new_callable=MagicMock) as mock_stderr:
            with patch("serena.util.exception.is_headless_environment", return_value=True):
                with patch("serena.util.exception.log"):
                    show_fatal_exception_safe(test_exception)

        # Verify print was called with the correct arguments
        mock_stderr.write.assert_any_call("Fatal exception: Test error message")

```
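
The tests above lean on two `unittest.mock` idioms: `patch.dict` for temporarily rewriting `os.environ`, and `patch` with a dotted target string for swapping module attributes such as `sys.platform`. The following standalone sketch (not part of the repo; `display_available` is a toy stand-in, not Serena's `is_headless_environment`) shows both in isolation:

```python
# Standalone sketch of the mocking patterns used in the tests above.
import os
import sys
from unittest.mock import patch


def display_available() -> bool:
    """Toy stand-in for a headless check: True if DISPLAY is set."""
    return "DISPLAY" in os.environ


# patch.dict merges the given keys into os.environ for the block's duration.
with patch.dict(os.environ, {"DISPLAY": ":0"}):
    assert display_available() is True

# clear=True empties the dict first, so DISPLAY is guaranteed absent.
with patch.dict(os.environ, {}, clear=True):
    assert display_available() is False

# patch with a dotted path replaces sys.platform, restoring it on exit.
with patch("sys.platform", "win32"):
    assert sys.platform == "win32"
```

Both context managers restore the original state even if the block raises, which is why the tests can nest them freely without cleanup code.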

--------------------------------------------------------------------------------
/src/serena/resources/config/prompt_templates/simple_tool_outputs.yml:
--------------------------------------------------------------------------------

```yaml
# Some of Serena's tools are just outputting a fixed text block without doing anything else.
# Such tools are meant to encourage the agent to think in a certain way, to stay on track
# and so on. The (templates for) outputs of these tools are contained here.
prompts:
  onboarding_prompt: |
    You are viewing the project for the first time.
    Your task is to assemble relevant high-level information about the project which
    will be saved to memory files in the following steps.
    The information should be sufficient to understand what the project is about,
    and the most important commands for developing code.
    The project is being developed on the system: {{ system }}.

    You need to identify at least the following information:
    * the project's purpose
    * the tech stack used
    * the code style and conventions used (including naming, type hints, docstrings, etc.)
    * which commands to run when a task is completed (linting, formatting, testing, etc.)
    * the rough structure of the codebase
    * the commands for testing, formatting, and linting
    * the commands for running the entrypoints of the project
    * the util commands for the system, like `git`, `ls`, `cd`, `grep`, `find`, etc. Keep in mind that the system is {{ system }},
      so the commands might be different than on a regular unix system.
    * whether there are particular guidelines, styles, design patterns, etc. that one should know about

    This list is not exhaustive; you can add more information if you think it is relevant.

    For doing that, you will need to acquire information about the project with the corresponding tools.
    Read only the necessary files and directories to avoid loading too much data into memory.
    If you cannot find everything you need from the project itself, you should ask the user for more information.

    After collecting all the information, you will use the `write_memory` tool (in multiple calls) to save it to various memory files.
    A particularly important memory file will be the `suggested_commands.md` file, which should contain
    a list of commands that the user should know about to develop code in this project.
    Moreover, you should create memory files for the style and conventions and a dedicated memory file for
    what should be done when a task is completed.
    **Important**: once done with the onboarding task, remember to call the `write_memory` tool to save the collected information!

  think_about_collected_information: |
    Have you collected all the information you need for solving the current task? If not, can the missing information be acquired by using the available tools,
    in particular the tools related to symbol discovery? Or do you need to ask the user for more information?
    Think about it step by step and give a summary of the missing information and how it could be acquired.

  think_about_task_adherence: |
    Are you deviating from the task at hand? Do you need any additional information to proceed?
    Have you loaded all relevant memory files to see whether your implementation is fully aligned with the
    code style, conventions, and guidelines of the project? If not, adjust your implementation accordingly
    before making any modifications to the codebase.
    Note that it is better to stop and ask the user for clarification
    than to perform large changes which might not be aligned with the user's intentions.
    If you feel like the conversation is deviating too much from the original task, apologize and suggest to the user
    how to proceed. If the conversation became too long, create a summary of the current progress and suggest to the user
    to start a new conversation based on that summary.

  think_about_whether_you_are_done: |
    Have you already performed all the steps required by the task? Is it appropriate to run tests and linting, and if so,
    have you done that already? Is it appropriate to adjust non-code files like documentation and config and have you done that already?
    Should new tests be written to cover the changes?
    Note that a task that is just about exploring the codebase does not require running tests or linting.
    Read the corresponding memory files to see what should be done when a task is completed. 

  summarize_changes: |
    Summarize all the changes you have made to the codebase over the course of the conversation.
    Explore the diff if needed (e.g. by using `git diff`) to ensure that you have not missed anything.
    Explain whether and how the changes are covered by tests. Explain how to best use the new code, how to understand it,
    which existing code it affects and interacts with. Are there any dangers (like potential breaking changes or potential new problems) 
    that the user should be aware of? Should any new documentation be written or existing documentation updated?
    You can use tools to explore the codebase prior to writing the summary, but don't write any new code in this step until
    the summary is complete.

  prepare_for_new_conversation: |
    You have not yet completed the current task but we are running out of context.
    {mode_prepare_for_new_conversation}
    Imagine that you are handing over the task to another person who has access to the
    same tools and memory files as you do, but has not been part of the conversation so far.
    Write a summary that can be used in the next conversation to a memory file using the `write_memory` tool.

```

--------------------------------------------------------------------------------
/DOCKER.md:
--------------------------------------------------------------------------------

```markdown
# Docker Setup for Serena (Experimental)

⚠️ **EXPERIMENTAL FEATURE**: The Docker setup for Serena is currently experimental and has several limitations. Please read this entire document before using Docker with Serena.

## Overview

Docker support allows you to run Serena in an isolated container environment, which provides better security isolation for the shell tool and consistent dependencies across different systems.

## Benefits

- **Safer shell tool execution**: Commands run in an isolated container environment
- **Consistent dependencies**: No need to manage language servers and dependencies on your host system
- **Cross-platform support**: Works consistently across Windows, macOS, and Linux

## Important Limitations and Caveats

### 1. Configuration File Conflicts

⚠️ **Critical**: Docker uses a separate configuration file (`serena_config.docker.yml`) to avoid path conflicts. When running in Docker:
- Container paths will be stored in the configuration (e.g., `/workspaces/serena/...`)
- These paths are incompatible with non-Docker usage
- After using Docker, you cannot directly switch back to non-Docker usage without manual configuration adjustment

### 2. Project Activation Limitations

- **Only mounted directories work**: Projects must be mounted as volumes to be accessible
- Projects outside the mounted directories cannot be activated or accessed
- Default setup only mounts the current directory

### 3. GUI Window Disabled

- The GUI log window option is automatically disabled in Docker environments
- Use the web dashboard instead (see below)

### 4. Dashboard Port Configuration

The web dashboard runs on port 24282 (0x5EDA) by default. You can configure this using environment variables:

```bash
# Use default ports
docker-compose up serena

# Use custom ports
SERENA_DASHBOARD_PORT=8080 docker-compose up serena
```

⚠️ **Note**: If the local port is occupied, you'll need to specify a different port using the environment variable.

### 5. Line Ending Issues on Windows

⚠️ **Windows Users**: Be aware of potential line ending inconsistencies:
- Files edited within the Docker container may use Unix line endings (LF)
- Your Windows system may expect Windows line endings (CRLF)
- This can cause issues with version control and text editors
- Configure your Git settings appropriately: `git config core.autocrlf true`

## Quick Start

### Using Docker Compose (Recommended)

1. **Production mode** (for using Serena as MCP server):
   ```bash
   docker-compose up serena
   ```

2. **Development mode** (with source code mounted):
   ```bash
   docker-compose up serena-dev
   ```

Note: Edit the `compose.yaml` file to customize volume mounts for your projects.

### Using Docker directly

```bash
# Build the image
docker build -t serena .

# Run with current directory mounted
docker run -it --rm \
  -v "$(pwd)":/workspace \
  -p 9121:9121 \
  -p 24282:24282 \
  -e SERENA_DOCKER=1 \
  serena
```

### Using Docker Compose with Merge Compose files

To use Docker Compose with merge files, you can create a `compose.override.yml` file to customize the configuration:

```yaml
services:
  serena:
    # To work with projects, you must mount them as volumes:
    volumes:
      - ./my-project:/workspace/my-project
      - /path/to/another/project:/workspace/another-project
    # Add the context for the IDE assistant option:
    command: "uv run --directory . serena-mcp-server --transport sse --port 9121 --host 0.0.0.0 --context ide-assistant"
```

See the [Docker Merge Compose files documentation](https://docs.docker.com/compose/how-tos/multiple-compose-files/merge/) for more details on using merge files.

## Accessing the Dashboard

Once running, access the web dashboard at:
- Default: http://localhost:24282/dashboard
- Custom port: http://localhost:${SERENA_DASHBOARD_PORT}/dashboard

## Volume Mounting

To work with projects, you must mount them as volumes:

```yaml
# In compose.yaml
volumes:
  - ./my-project:/workspace/my-project
  - /path/to/another/project:/workspace/another-project
```

## Environment Variables

- `SERENA_DOCKER=1`: Set automatically to indicate Docker environment
- `SERENA_PORT`: MCP server port (default: 9121)
- `SERENA_DASHBOARD_PORT`: Web dashboard port (default: 24282)
- `INTELEPHENSE_LICENSE_KEY`: License key for Intelephense PHP LSP premium features (optional)

## Troubleshooting

### Port Already in Use

If you see "port already in use" errors:
```bash
# Check what's using the port
lsof -i :24282  # macOS/Linux
netstat -ano | findstr :24282  # Windows

# Use a different port
SERENA_DASHBOARD_PORT=8080 docker-compose up serena
```

### Configuration Issues

If you need to reset Docker configuration:
```bash
# Remove Docker-specific config
rm serena_config.docker.yml

# Serena will auto-generate a new one on next run
```

### Project Access Issues

Ensure projects are properly mounted:
- Check volume mounts in `compose.yaml`
- Use absolute paths for external projects
- Verify permissions on mounted directories

## Migration Path

To switch between Docker and non-Docker usage:

1. **Docker to Non-Docker**:
   - Manually edit project paths in `serena_config.yml`
   - Change container paths to host paths
   - Or use separate config files for each environment

2. **Non-Docker to Docker**:
   - Projects will be re-registered with container paths
   - Original config remains unchanged

## Future Improvements

We're working on:
- Automatic config migration between environments
- Better project path handling
- Dynamic port allocation
- Windows line-ending handling

```

--------------------------------------------------------------------------------
/src/serena/analytics.py:
--------------------------------------------------------------------------------

```python
from __future__ import annotations

import logging
import threading
from abc import ABC, abstractmethod
from collections import defaultdict
from copy import copy
from dataclasses import asdict, dataclass
from enum import Enum

from anthropic.types import MessageParam, MessageTokensCount
from dotenv import load_dotenv

log = logging.getLogger(__name__)


class TokenCountEstimator(ABC):
    @abstractmethod
    def estimate_token_count(self, text: str) -> int:
        """
        Estimate the number of tokens in the given text.
        This is an abstract method that should be implemented by subclasses.
        """


class TiktokenCountEstimator(TokenCountEstimator):
    """
    Approximate token count using tiktoken.
    """

    def __init__(self, model_name: str = "gpt-4o"):
        """
        The tokenizer will be downloaded on the first initialization, which may take some time.

        :param model_name: see `tiktoken.model` to see available models.
        """
        import tiktoken

        log.info(f"Loading tiktoken encoding for model {model_name}, this may take a while on the first run.")
        self._encoding = tiktoken.encoding_for_model(model_name)

    def estimate_token_count(self, text: str) -> int:
        return len(self._encoding.encode(text))


class AnthropicTokenCount(TokenCountEstimator):
    """
    Exact token counting using the Anthropic API.
    Counting is free but rate-limited, and an API key is required
    (typically set through an environment variable).
    See https://docs.anthropic.com/en/docs/build-with-claude/token-counting
    """

    def __init__(self, model_name: str = "claude-sonnet-4-20250514", api_key: str | None = None):
        import anthropic

        self._model_name = model_name
        if api_key is None:
            load_dotenv()
        self._anthropic_client = anthropic.Anthropic(api_key=api_key)

    def _send_count_tokens_request(self, text: str) -> MessageTokensCount:
        return self._anthropic_client.messages.count_tokens(
            model=self._model_name,
            messages=[MessageParam(role="user", content=text)],
        )

    def estimate_token_count(self, text: str) -> int:
        return self._send_count_tokens_request(text).input_tokens


_registered_token_estimator_instances_cache: dict[RegisteredTokenCountEstimator, TokenCountEstimator] = {}


class RegisteredTokenCountEstimator(Enum):
    TIKTOKEN_GPT4O = "TIKTOKEN_GPT4O"
    ANTHROPIC_CLAUDE_SONNET_4 = "ANTHROPIC_CLAUDE_SONNET_4"

    @classmethod
    def get_valid_names(cls) -> list[str]:
        """
        Get a list of all registered token count estimator names.
        """
        return [estimator.name for estimator in cls]

    def _create_estimator(self) -> TokenCountEstimator:
        match self:
            case RegisteredTokenCountEstimator.TIKTOKEN_GPT4O:
                return TiktokenCountEstimator(model_name="gpt-4o")
            case RegisteredTokenCountEstimator.ANTHROPIC_CLAUDE_SONNET_4:
                return AnthropicTokenCount(model_name="claude-sonnet-4-20250514")
            case _:
                raise ValueError(f"Unknown token count estimator: {self.value}")

    def load_estimator(self) -> TokenCountEstimator:
        estimator_instance = _registered_token_estimator_instances_cache.get(self)
        if estimator_instance is None:
            estimator_instance = self._create_estimator()
            _registered_token_estimator_instances_cache[self] = estimator_instance
        return estimator_instance


class ToolUsageStats:
    """
    A class to record and manage tool usage statistics.
    """

    def __init__(self, token_count_estimator: RegisteredTokenCountEstimator = RegisteredTokenCountEstimator.TIKTOKEN_GPT4O):
        self._token_count_estimator = token_count_estimator.load_estimator()
        self._token_estimator_name = token_count_estimator.value
        self._tool_stats: dict[str, ToolUsageStats.Entry] = defaultdict(ToolUsageStats.Entry)
        self._tool_stats_lock = threading.Lock()

    @property
    def token_estimator_name(self) -> str:
        """
        Get the name of the registered token count estimator used.
        """
        return self._token_estimator_name

    @dataclass(kw_only=True)
    class Entry:
        num_times_called: int = 0
        input_tokens: int = 0
        output_tokens: int = 0

        def update_on_call(self, input_tokens: int, output_tokens: int) -> None:
            """
            Update the entry with the number of tokens used for a single call.
            """
            self.num_times_called += 1
            self.input_tokens += input_tokens
            self.output_tokens += output_tokens

    def _estimate_token_count(self, text: str) -> int:
        return self._token_count_estimator.estimate_token_count(text)

    def get_stats(self, tool_name: str) -> ToolUsageStats.Entry:
        """
        Get (a copy of) the current usage statistics for a specific tool.
        """
        with self._tool_stats_lock:
            return copy(self._tool_stats[tool_name])

    def record_tool_usage(self, tool_name: str, input_str: str, output_str: str) -> None:
        input_tokens = self._estimate_token_count(input_str)
        output_tokens = self._estimate_token_count(output_str)
        with self._tool_stats_lock:
            entry = self._tool_stats[tool_name]
            entry.update_on_call(input_tokens, output_tokens)

    def get_tool_stats_dict(self) -> dict[str, dict[str, int]]:
        with self._tool_stats_lock:
            return {name: asdict(entry) for name, entry in self._tool_stats.items()}

    def clear(self) -> None:
        with self._tool_stats_lock:
            self._tool_stats.clear()

```

--------------------------------------------------------------------------------
/test/solidlsp/rust/test_rust_2024_edition.py:
--------------------------------------------------------------------------------

```python
import os
from pathlib import Path

import pytest

from solidlsp.ls_config import Language
from solidlsp.ls_utils import SymbolUtils
from test.conftest import create_ls


@pytest.mark.rust
class TestRust2024EditionLanguageServer:
    @classmethod
    def setup_class(cls):
        """Set up the test class with the Rust 2024 edition test repository."""
        cls.test_repo_2024_path = Path(__file__).parent.parent.parent / "resources" / "repos" / "rust" / "test_repo_2024"

        if not cls.test_repo_2024_path.exists():
            pytest.skip("Rust 2024 edition test repository not found")

        # Create and start the language server for the 2024 edition repo
        cls.language_server = create_ls(Language.RUST, str(cls.test_repo_2024_path))
        cls.language_server.start()

    @classmethod
    def teardown_class(cls):
        """Clean up the language server."""
        if hasattr(cls, "language_server"):
            cls.language_server.stop()

    def test_find_references_raw(self) -> None:
        # Test finding references to the 'add' function defined in main.rs
        file_path = os.path.join("src", "main.rs")
        symbols = self.language_server.request_document_symbols(file_path)
        add_symbol = None
        for sym in symbols[0]:
            if sym.get("name") == "add":
                add_symbol = sym
                break
        assert add_symbol is not None, "Could not find 'add' function symbol in main.rs"
        sel_start = add_symbol["selectionRange"]["start"]
        refs = self.language_server.request_references(file_path, sel_start["line"], sel_start["character"])
        # The add function should be referenced within main.rs itself (in the main function)
        assert any("main.rs" in ref.get("relativePath", "") for ref in refs), "main.rs should reference add function"

    def test_find_symbol(self) -> None:
        symbols = self.language_server.request_full_symbol_tree()
        assert SymbolUtils.symbol_tree_contains_name(symbols, "main"), "main function not found in symbol tree"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "add"), "add function not found in symbol tree"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "multiply"), "multiply function not found in symbol tree"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Calculator"), "Calculator struct not found in symbol tree"

    def test_find_referencing_symbols_multiply(self) -> None:
        # Find references to 'multiply' function defined in lib.rs
        file_path = os.path.join("src", "lib.rs")
        symbols = self.language_server.request_document_symbols(file_path)
        multiply_symbol = None
        for sym in symbols[0]:
            if sym.get("name") == "multiply":
                multiply_symbol = sym
                break
        assert multiply_symbol is not None, "Could not find 'multiply' function symbol in lib.rs"
        sel_start = multiply_symbol["selectionRange"]["start"]
        refs = self.language_server.request_references(file_path, sel_start["line"], sel_start["character"])
        # The multiply function exists but may not be referenced anywhere, which is fine
        # This test just verifies we can find the symbol and request references without error
        assert isinstance(refs, list), "Should return a list of references (even if empty)"

    def test_find_calculator_struct_and_impl(self) -> None:
        # Test finding the Calculator struct and its impl block
        file_path = os.path.join("src", "lib.rs")
        symbols = self.language_server.request_document_symbols(file_path)

        # Find the Calculator struct
        calculator_struct = None
        calculator_impl = None
        for sym in symbols[0]:
            if sym.get("name") == "Calculator" and sym.get("kind") == 23:  # Struct kind
                calculator_struct = sym
            elif sym.get("name") == "Calculator" and sym.get("kind") == 11:  # Interface/Impl kind
                calculator_impl = sym

        assert calculator_struct is not None, "Could not find 'Calculator' struct symbol in lib.rs"

        # The struct should have the 'result' field
        struct_children = calculator_struct.get("children", [])
        field_names = [child.get("name") for child in struct_children]
        assert "result" in field_names, "Calculator struct should have 'result' field"

        # Find the impl block and check its methods
        if calculator_impl is not None:
            impl_children = calculator_impl.get("children", [])
            method_names = [child.get("name") for child in impl_children]
            assert "new" in method_names, "Calculator impl should have 'new' method"
            assert "add" in method_names, "Calculator impl should have 'add' method"
            assert "get_result" in method_names, "Calculator impl should have 'get_result' method"

    def test_overview_methods(self) -> None:
        symbols = self.language_server.request_full_symbol_tree()
        assert SymbolUtils.symbol_tree_contains_name(symbols, "main"), "main missing from overview"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "add"), "add missing from overview"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "multiply"), "multiply missing from overview"
        assert SymbolUtils.symbol_tree_contains_name(symbols, "Calculator"), "Calculator missing from overview"

    def test_rust_2024_edition_specific(self) -> None:
        # Verify we're actually working with the 2024 edition repository
        cargo_toml_path = self.test_repo_2024_path / "Cargo.toml"
        assert cargo_toml_path.exists(), "Cargo.toml should exist in test repository"

        with open(cargo_toml_path) as f:
            content = f.read()
            assert 'edition = "2024"' in content, "Should be using Rust 2024 edition"

```

--------------------------------------------------------------------------------
/test/serena/config/test_serena_config.py:
--------------------------------------------------------------------------------

```python
import shutil
import tempfile
from pathlib import Path

import pytest

from serena.config.serena_config import ProjectConfig
from solidlsp.ls_config import Language


class TestProjectConfigAutogenerate:
    """Test class for ProjectConfig autogeneration functionality."""

    def setup_method(self):
        """Set up test environment before each test method."""
        # Create a temporary directory for testing
        self.test_dir = tempfile.mkdtemp()
        self.project_path = Path(self.test_dir)

    def teardown_method(self):
        """Clean up test environment after each test method."""
        # Remove the temporary directory
        shutil.rmtree(self.test_dir)

    def test_autogenerate_empty_directory(self):
        """Test that autogenerate raises ValueError with helpful message for empty directory."""
        with pytest.raises(ValueError) as exc_info:
            ProjectConfig.autogenerate(self.project_path, save_to_disk=False)

        error_message = str(exc_info.value)
        # Check that the error message contains all the key information
        assert "No source files found" in error_message
        assert str(self.project_path.resolve()) in error_message
        assert "To use Serena with this project" in error_message
        assert "Add source files in one of the supported languages" in error_message
        assert "Create a project configuration file manually" in error_message
        assert str(Path(".serena") / "project.yml") in error_message
        assert "Example project.yml:" in error_message
        assert f"project_name: {self.project_path.name}" in error_message
        assert "language: python" in error_message

    def test_autogenerate_with_python_files(self):
        """Test successful autogeneration with Python source files."""
        # Create a Python file
        python_file = self.project_path / "main.py"
        python_file.write_text("def hello():\n    print('Hello, world!')\n")

        # Run autogenerate
        config = ProjectConfig.autogenerate(self.project_path, save_to_disk=False)

        # Verify the configuration
        assert config.project_name == self.project_path.name
        assert config.language == Language.PYTHON

    def test_autogenerate_with_multiple_languages(self):
        """Test autogeneration picks dominant language when multiple are present."""
        # Create files for multiple languages
        (self.project_path / "main.py").write_text("print('Python')")
        (self.project_path / "util.py").write_text("def util(): pass")
        (self.project_path / "small.js").write_text("console.log('JS');")

        # Run autogenerate - should pick Python as dominant
        config = ProjectConfig.autogenerate(self.project_path, save_to_disk=False)

        assert config.language == Language.PYTHON

    def test_autogenerate_saves_to_disk(self):
        """Test that autogenerate can save the configuration to disk."""
        # Create a Go file
        go_file = self.project_path / "main.go"
        go_file.write_text("package main\n\nfunc main() {}\n")

        # Run autogenerate with save_to_disk=True
        config = ProjectConfig.autogenerate(self.project_path, save_to_disk=True)

        # Verify the configuration file was created
        config_path = self.project_path / ".serena" / "project.yml"
        assert config_path.exists()

        # Verify the content
        assert config.language == Language.GO

    def test_autogenerate_nonexistent_path(self):
        """Test that autogenerate raises FileNotFoundError for non-existent path."""
        non_existent = self.project_path / "does_not_exist"

        with pytest.raises(FileNotFoundError) as exc_info:
            ProjectConfig.autogenerate(non_existent, save_to_disk=False)

        assert "Project root not found" in str(exc_info.value)

    def test_autogenerate_with_gitignored_files_only(self):
        """Test autogenerate behavior when only gitignored files exist."""
        # Create a .gitignore that ignores all Python files
        gitignore = self.project_path / ".gitignore"
        gitignore.write_text("*.py\n")

        # Create Python files that will be ignored
        (self.project_path / "ignored.py").write_text("print('ignored')")

        # Should still raise ValueError as no source files are detected
        with pytest.raises(ValueError) as exc_info:
            ProjectConfig.autogenerate(self.project_path, save_to_disk=False)

        assert "No source files found" in str(exc_info.value)

    def test_autogenerate_custom_project_name(self):
        """Test autogenerate with custom project name."""
        # Create a TypeScript file
        ts_file = self.project_path / "index.ts"
        ts_file.write_text("const greeting: string = 'Hello';\n")

        # Run autogenerate with custom name
        custom_name = "my-custom-project"
        config = ProjectConfig.autogenerate(self.project_path, project_name=custom_name, save_to_disk=False)

        assert config.project_name == custom_name
        assert config.language == Language.TYPESCRIPT

    def test_autogenerate_error_message_format(self):
        """Test the specific format of the error message for better user experience."""
        with pytest.raises(ValueError) as exc_info:
            ProjectConfig.autogenerate(self.project_path, save_to_disk=False)

        error_lines = str(exc_info.value).split("\n")

        # Verify the structure of the error message
        assert len(error_lines) >= 8  # Should have multiple lines of helpful information

        # Check for numbered instructions
        assert any("1." in line for line in error_lines)
        assert any("2." in line for line in error_lines)

        # Check for supported languages list
        assert any("Python" in line and "TypeScript" in line for line in error_lines)

        # Check example includes comment about language options
        assert any("# or typescript, java, csharp" in line for line in error_lines)

```

--------------------------------------------------------------------------------
/src/serena/dashboard.py:
--------------------------------------------------------------------------------

```python
import os
import socket
import threading
from collections.abc import Callable
from typing import TYPE_CHECKING, Any

from flask import Flask, Response, request, send_from_directory
from pydantic import BaseModel
from sensai.util import logging

from serena.analytics import ToolUsageStats
from serena.constants import SERENA_DASHBOARD_DIR
from serena.util.logging import MemoryLogHandler

if TYPE_CHECKING:
    from serena.agent import SerenaAgent

log = logging.getLogger(__name__)

# disable Werkzeug's logging to avoid cluttering the output
logging.getLogger("werkzeug").setLevel(logging.WARNING)


class RequestLog(BaseModel):
    start_idx: int = 0


class ResponseLog(BaseModel):
    messages: list[str]
    max_idx: int
    active_project: str | None = None


class ResponseToolNames(BaseModel):
    tool_names: list[str]


class ResponseToolStats(BaseModel):
    stats: dict[str, dict[str, int]]


class SerenaDashboardAPI:
    log = logging.getLogger(__qualname__)

    def __init__(
        self,
        memory_log_handler: MemoryLogHandler,
        tool_names: list[str],
        agent: "SerenaAgent",
        shutdown_callback: Callable[[], None] | None = None,
        tool_usage_stats: ToolUsageStats | None = None,
    ) -> None:
        self._memory_log_handler = memory_log_handler
        self._tool_names = tool_names
        self._agent = agent
        self._shutdown_callback = shutdown_callback
        self._app = Flask(__name__)
        self._tool_usage_stats = tool_usage_stats
        self._setup_routes()

    @property
    def memory_log_handler(self) -> MemoryLogHandler:
        return self._memory_log_handler

    def _setup_routes(self) -> None:
        # Static files
        @self._app.route("/dashboard/<path:filename>")
        def serve_dashboard(filename: str) -> Response:
            return send_from_directory(SERENA_DASHBOARD_DIR, filename)

        @self._app.route("/dashboard/")
        def serve_dashboard_index() -> Response:
            return send_from_directory(SERENA_DASHBOARD_DIR, "index.html")

        # API routes
        @self._app.route("/get_log_messages", methods=["POST"])
        def get_log_messages() -> dict[str, Any]:
            request_data = request.get_json()
            if not request_data:
                request_log = RequestLog()
            else:
                request_log = RequestLog.model_validate(request_data)

            result = self._get_log_messages(request_log)
            return result.model_dump()

        @self._app.route("/get_tool_names", methods=["GET"])
        def get_tool_names() -> dict[str, Any]:
            result = self._get_tool_names()
            return result.model_dump()

        @self._app.route("/get_tool_stats", methods=["GET"])
        def get_tool_stats_route() -> dict[str, Any]:
            result = self._get_tool_stats()
            return result.model_dump()

        @self._app.route("/clear_tool_stats", methods=["POST"])
        def clear_tool_stats_route() -> dict[str, str]:
            self._clear_tool_stats()
            return {"status": "cleared"}

        @self._app.route("/get_token_count_estimator_name", methods=["GET"])
        def get_token_count_estimator_name() -> dict[str, str]:
            estimator_name = self._tool_usage_stats.token_estimator_name if self._tool_usage_stats else "unknown"
            return {"token_count_estimator_name": estimator_name}

        @self._app.route("/shutdown", methods=["PUT"])
        def shutdown() -> dict[str, str]:
            self._shutdown()
            return {"status": "shutting down"}

    def _get_log_messages(self, request_log: RequestLog) -> ResponseLog:
        all_messages = self._memory_log_handler.get_log_messages()
        requested_messages = all_messages[request_log.start_idx :] if request_log.start_idx <= len(all_messages) else []
        project = self._agent.get_active_project()
        project_name = project.project_name if project else None
        return ResponseLog(messages=requested_messages, max_idx=len(all_messages) - 1, active_project=project_name)

    def _get_tool_names(self) -> ResponseToolNames:
        return ResponseToolNames(tool_names=self._tool_names)

    def _get_tool_stats(self) -> ResponseToolStats:
        if self._tool_usage_stats is not None:
            return ResponseToolStats(stats=self._tool_usage_stats.get_tool_stats_dict())
        else:
            return ResponseToolStats(stats={})

    def _clear_tool_stats(self) -> None:
        if self._tool_usage_stats is not None:
            self._tool_usage_stats.clear()

    def _shutdown(self) -> None:
        log.info("Shutting down Serena")
        if self._shutdown_callback:
            self._shutdown_callback()
        else:
            # noinspection PyProtectedMember
            # noinspection PyUnresolvedReferences
            os._exit(0)

    @staticmethod
    def _find_first_free_port(start_port: int) -> int:
        port = start_port
        while port <= 65535:
            try:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                    sock.bind(("0.0.0.0", port))
                    return port
            except OSError:
                port += 1

        raise RuntimeError(f"No free ports found starting from {start_port}")

    def run(self, host: str = "0.0.0.0", port: int = 0x5EDA) -> int:
        """
        Runs the dashboard on the given host and port and returns the port number.
        """
        # patch flask.cli.show_server_banner to avoid printing the server banner
        from flask import cli

        cli.show_server_banner = lambda *args, **kwargs: None

        self._app.run(host=host, port=port, debug=False, use_reloader=False, threaded=True)
        return port

    def run_in_thread(self) -> tuple[threading.Thread, int]:
        port = self._find_first_free_port(0x5EDA)
        thread = threading.Thread(target=lambda: self.run(port=port), daemon=True)
        thread.start()
        return thread, port

```

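The port-probing strategy in `_find_first_free_port` can be exercised standalone. The sketch below is a trimmed copy of that logic (not an import of the real class); it occupies an ephemeral port and confirms the finder skips past it:

```python
import socket


def find_first_free_port(start_port: int) -> int:
    # same probing strategy as SerenaDashboardAPI._find_first_free_port:
    # attempt to bind each port in turn; the first successful bind is free
    for port in range(start_port, 65536):
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.bind(("0.0.0.0", port))
                return port
        except OSError:
            continue
    raise RuntimeError(f"No free ports found starting from {start_port}")


# occupy an ephemeral port, then start the search at that port:
# the finder must skip the occupied one and return a later port
blocker = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
blocker.bind(("127.0.0.1", 0))
blocker.listen(1)
busy_port = blocker.getsockname()[1]
free_port = find_first_free_port(busy_port)
print(free_port != busy_port)  # True
blocker.close()
```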
--------------------------------------------------------------------------------
/src/solidlsp/language_servers/gopls.py:
--------------------------------------------------------------------------------

```python
import logging
import os
import pathlib
import subprocess
import threading

from overrides import override

from solidlsp.ls import SolidLanguageServer
from solidlsp.ls_config import LanguageServerConfig
from solidlsp.ls_logger import LanguageServerLogger
from solidlsp.lsp_protocol_handler.lsp_types import InitializeParams
from solidlsp.lsp_protocol_handler.server import ProcessLaunchInfo
from solidlsp.settings import SolidLSPSettings


class Gopls(SolidLanguageServer):
    """
    Provides Go specific instantiation of the LanguageServer class using gopls.
    """

    @override
    def is_ignored_dirname(self, dirname: str) -> bool:
        # For Go projects, we should ignore:
        # - vendor: third-party dependencies vendored into the project
        # - node_modules: if the project has JavaScript components
        # - dist/build: common output directories
        return super().is_ignored_dirname(dirname) or dirname in ["vendor", "node_modules", "dist", "build"]

    @staticmethod
    def _get_go_version():
        """Get the installed Go version or None if not found."""
        try:
            result = subprocess.run(["go", "version"], capture_output=True, text=True, check=False)
            if result.returncode == 0:
                return result.stdout.strip()
        except FileNotFoundError:
            return None
        return None

    @staticmethod
    def _get_gopls_version():
        """Get the installed gopls version or None if not found."""
        try:
            result = subprocess.run(["gopls", "version"], capture_output=True, text=True, check=False)
            if result.returncode == 0:
                return result.stdout.strip()
        except FileNotFoundError:
            return None
        return None

    @staticmethod
    def _setup_runtime_dependency():
        """
        Check if required Go runtime dependencies are available.
        Raises RuntimeError with helpful message if dependencies are missing.
        """
        go_version = Gopls._get_go_version()
        if not go_version:
            raise RuntimeError(
                "Go is not installed. Please install Go from https://golang.org/doc/install and make sure it is added to your PATH."
            )

        gopls_version = Gopls._get_gopls_version()
        if not gopls_version:
            raise RuntimeError(
                "Found a Go version but gopls is not installed.\n"
                "Please install gopls as described in https://pkg.go.dev/golang.org/x/tools/gopls#section-readme\n\n"
                "After installation, make sure it is added to your PATH (it might be installed in a different location than Go)."
            )

        return True

    def __init__(
        self, config: LanguageServerConfig, logger: LanguageServerLogger, repository_root_path: str, solidlsp_settings: SolidLSPSettings
    ):
        self._setup_runtime_dependency()

        super().__init__(
            config,
            logger,
            repository_root_path,
            ProcessLaunchInfo(cmd="gopls", cwd=repository_root_path),
            "go",
            solidlsp_settings,
        )
        self.server_ready = threading.Event()
        self.request_id = 0

    @staticmethod
    def _get_initialize_params(repository_absolute_path: str) -> InitializeParams:
        """
        Returns the initialize params for the Go Language Server.
        """
        root_uri = pathlib.Path(repository_absolute_path).as_uri()
        initialize_params = {
            "locale": "en",
            "capabilities": {
                "textDocument": {
                    "synchronization": {"didSave": True, "dynamicRegistration": True},
                    "definition": {"dynamicRegistration": True},
                    "documentSymbol": {
                        "dynamicRegistration": True,
                        "hierarchicalDocumentSymbolSupport": True,
                        "symbolKind": {"valueSet": list(range(1, 27))},
                    },
                },
                "workspace": {"workspaceFolders": True, "didChangeConfiguration": {"dynamicRegistration": True}},
            },
            "processId": os.getpid(),
            "rootPath": repository_absolute_path,
            "rootUri": root_uri,
            "workspaceFolders": [
                {
                    "uri": root_uri,
                    "name": os.path.basename(repository_absolute_path),
                }
            ],
        }
        return initialize_params

    def _start_server(self):
        """Start gopls server process"""

        def register_capability_handler(params):
            return

        def window_log_message(msg):
            self.logger.log(f"LSP: window/logMessage: {msg}", logging.INFO)

        def do_nothing(params):
            return

        self.server.on_request("client/registerCapability", register_capability_handler)
        self.server.on_notification("window/logMessage", window_log_message)
        self.server.on_notification("$/progress", do_nothing)
        self.server.on_notification("textDocument/publishDiagnostics", do_nothing)

        self.logger.log("Starting gopls server process", logging.INFO)
        self.server.start()
        initialize_params = self._get_initialize_params(self.repository_root_path)

        self.logger.log(
            "Sending initialize request from LSP client to LSP server and awaiting response",
            logging.INFO,
        )
        init_response = self.server.send.initialize(initialize_params)

        # Verify server capabilities
        assert "textDocumentSync" in init_response["capabilities"]
        assert "completionProvider" in init_response["capabilities"]
        assert "definitionProvider" in init_response["capabilities"]

        self.server.notify.initialized({})
        self.completions_available.set()

        # gopls server is typically ready immediately after initialization
        self.server_ready.set()

```
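
As a side note to the initialize params above: the `rootUri` and `workspaceFolders` entries are plain file URIs derived from the repository path. A minimal sketch (the project path here is hypothetical):

```python
# Sketch (not repo code): how a filesystem path becomes the rootUri and
# workspaceFolders entries sent in the LSP initialize request.
import os
import pathlib

repo = os.path.abspath("my-project")
root_uri = pathlib.Path(repo).as_uri()  # e.g. file:///home/user/my-project
workspace_folders = [{"uri": root_uri, "name": os.path.basename(repo)}]
print(workspace_folders)
```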

--------------------------------------------------------------------------------
/src/serena/agno.py:
--------------------------------------------------------------------------------

```python
import argparse
import logging
import os
import threading
from pathlib import Path
from typing import Any

from agno.agent import Agent
from agno.memory import AgentMemory
from agno.models.base import Model
from agno.storage.sqlite import SqliteStorage
from agno.tools.function import Function
from agno.tools.toolkit import Toolkit
from dotenv import load_dotenv
from sensai.util.logging import LogTime

from serena.agent import SerenaAgent, Tool
from serena.config.context_mode import SerenaAgentContext
from serena.constants import REPO_ROOT
from serena.util.exception import show_fatal_exception_safe

log = logging.getLogger(__name__)


class SerenaAgnoToolkit(Toolkit):
    def __init__(self, serena_agent: SerenaAgent):
        super().__init__("Serena")
        for tool in serena_agent.get_exposed_tool_instances():
            self.functions[tool.get_name_from_cls()] = self._create_agno_function(tool)
        log.info("Agno agent functions: %s", list(self.functions.keys()))

    @staticmethod
    def _create_agno_function(tool: Tool) -> Function:
        def entrypoint(**kwargs: Any) -> str:
            if "kwargs" in kwargs:
                # Agno sometimes passes a kwargs argument explicitly, so we merge it
                kwargs.update(kwargs["kwargs"])
                del kwargs["kwargs"]
            log.info(f"Calling tool {tool}")
            return tool.apply_ex(log_call=True, catch_exceptions=True, **kwargs)

        function = Function.from_callable(tool.get_apply_fn())
        function.name = tool.get_name_from_cls()
        function.entrypoint = entrypoint
        function.skip_entrypoint_processing = True
        return function


class SerenaAgnoAgentProvider:
    _agent: Agent | None = None
    _lock = threading.Lock()

    @classmethod
    def get_agent(cls, model: Model) -> Agent:
        """
        Returns the singleton instance of the Serena agent or creates it with the given parameters if it doesn't exist.

        NOTE: This is very ugly with poor separation of concerns, but the way in which the Agno UI works (reloading the
            module that defines the `app` variable) essentially forces us to do something like this.

        :param model: the large language model to use for the agent
        :return: the agent instance
        """
        with cls._lock:
            if cls._agent is not None:
                return cls._agent

            # change to Serena root
            os.chdir(REPO_ROOT)

            load_dotenv()

            parser = argparse.ArgumentParser(description="Serena coding assistant")

            # Create a mutually exclusive group
            group = parser.add_mutually_exclusive_group()

            # Add both argument spellings to the group; their values are merged below
            group.add_argument(
                "--project-file",
                required=False,
                help="Path to the project (or project.yml file).",
            )
            group.add_argument(
                "--project",
                required=False,
                help="Path to the project (or project.yml file).",
            )
            args = parser.parse_args()

            args_project_file = args.project or args.project_file

            if args_project_file:
                # If the project file path is relative, make it absolute by joining with the repo root
                # (resolving it first would anchor it to the current working directory instead)
                project_file = Path(args_project_file)
                if not project_file.is_absolute():
                    project_file = Path(REPO_ROOT) / project_file

                # Ensure the path is normalized and absolute
                project_file = str(project_file.resolve())
            else:
                project_file = None

            with LogTime("Loading Serena agent"):
                try:
                    serena_agent = SerenaAgent(project_file, context=SerenaAgentContext.load("agent"))
                except Exception as e:
                    show_fatal_exception_safe(e)
                    raise

            # Even though we don't want to keep history between sessions,
            # for agno-ui to work as a conversation, we use a persistent storage on disk.
            # This storage should be deleted between sessions.
            # Note that this might collide with custom options for the agent, like adding vector-search based tools.
            # See here for an explanation: https://www.reddit.com/r/agno/comments/1jk6qea/regarding_the_built_in_memory/
            sql_db_path = (Path("temp") / "agno_agent_storage.db").absolute()
            sql_db_path.parent.mkdir(exist_ok=True)
            # delete the db file if it exists
            if sql_db_path.exists():
                log.info(f"Deleting DB from PID {os.getpid()}")
                sql_db_path.unlink()

            agno_agent = Agent(
                name="Serena",
                model=model,
                # See explanation above on why storage is needed
                storage=SqliteStorage(table_name="serena_agent_sessions", db_file=str(sql_db_path)),
                description="A fully-featured coding assistant",
                tools=[SerenaAgnoToolkit(serena_agent)],
                # The tool calls will be shown in the UI anyway since whether to show them is configurable per tool
                # To see detailed logs, you should use the serena logger (configure it in the project file path)
                show_tool_calls=False,
                markdown=True,
                system_message=serena_agent.create_system_prompt(),
                telemetry=False,
                memory=AgentMemory(),
                add_history_to_messages=True,
                num_history_responses=100,  # you might want to adjust this (expense vs. history awareness)
            )
            cls._agent = agno_agent
            log.info(f"Agent instantiated: {agno_agent}")

        return agno_agent

```
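
The `entrypoint` wrapper above flattens a nested `kwargs` argument that Agno sometimes passes. Stripped of the tool dispatch, the merge step looks like this (a sketch, not repo code):

```python
# Sketch: flatten a nested "kwargs" argument before dispatching to the tool,
# mirroring the merge done in SerenaAgnoToolkit._create_agno_function.
def flatten_kwargs(**kwargs):
    if "kwargs" in kwargs:
        kwargs.update(kwargs.pop("kwargs"))
    return kwargs

print(flatten_kwargs(a=1, kwargs={"b": 2}))  # → {'a': 1, 'b': 2}
```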

--------------------------------------------------------------------------------
/test/solidlsp/bash/test_bash_basic.py:
--------------------------------------------------------------------------------

```python
"""
Basic integration tests for the bash language server functionality.

These tests validate the functionality of the language server APIs
like request_document_symbols using the bash test repository.
"""

import pytest

from solidlsp import SolidLanguageServer
from solidlsp.ls_config import Language


@pytest.mark.bash
class TestBashLanguageServerBasics:
    """Test basic functionality of the bash language server."""

    @pytest.mark.parametrize("language_server", [Language.BASH], indirect=True)
    def test_bash_language_server_initialization(self, language_server: SolidLanguageServer) -> None:
        """Test that bash language server can be initialized successfully."""
        assert language_server is not None
        assert language_server.language == Language.BASH

    @pytest.mark.parametrize("language_server", [Language.BASH], indirect=True)
    def test_bash_request_document_symbols(self, language_server: SolidLanguageServer) -> None:
        """Test request_document_symbols for bash files."""
        # Test getting symbols from main.sh
        all_symbols, _root_symbols = language_server.request_document_symbols("main.sh", include_body=False)

        # Extract function symbols (LSP Symbol Kind 12)
        function_symbols = [symbol for symbol in all_symbols if symbol.get("kind") == 12]
        function_names = [symbol["name"] for symbol in function_symbols]

        # Should detect all 3 functions from main.sh
        assert "greet_user" in function_names, "Should find greet_user function"
        assert "process_items" in function_names, "Should find process_items function"
        assert "main" in function_names, "Should find main function"
        assert len(function_symbols) >= 3, f"Should find at least 3 functions, found {len(function_symbols)}"

    @pytest.mark.parametrize("language_server", [Language.BASH], indirect=True)
    def test_bash_request_document_symbols_with_body(self, language_server: SolidLanguageServer) -> None:
        """Test request_document_symbols with body extraction."""
        # Test with include_body=True
        all_symbols, _root_symbols = language_server.request_document_symbols("main.sh", include_body=True)

        function_symbols = [symbol for symbol in all_symbols if symbol.get("kind") == 12]

        # Find greet_user function and check it has body
        greet_user_symbol = next((sym for sym in function_symbols if sym["name"] == "greet_user"), None)
        assert greet_user_symbol is not None, "Should find greet_user function"

        if "body" in greet_user_symbol:
            body = greet_user_symbol["body"]
            assert "function greet_user()" in body, "Function body should contain function definition"
            assert "case" in body.lower(), "Function body should contain case statement"

    @pytest.mark.parametrize("language_server", [Language.BASH], indirect=True)
    def test_bash_utils_functions(self, language_server: SolidLanguageServer) -> None:
        """Test function detection in utils.sh file."""
        # Test with utils.sh as well
        utils_all_symbols, _utils_root_symbols = language_server.request_document_symbols("utils.sh", include_body=False)

        utils_function_symbols = [symbol for symbol in utils_all_symbols if symbol.get("kind") == 12]
        utils_function_names = [symbol["name"] for symbol in utils_function_symbols]

        # Should detect functions from utils.sh
        expected_utils_functions = [
            "to_uppercase",
            "to_lowercase",
            "trim_whitespace",
            "backup_file",
            "contains_element",
            "log_message",
            "is_valid_email",
            "is_number",
        ]

        for func_name in expected_utils_functions:
            assert func_name in utils_function_names, f"Should find {func_name} function in utils.sh"

        assert len(utils_function_symbols) >= 8, f"Should find at least 8 functions in utils.sh, found {len(utils_function_symbols)}"

    @pytest.mark.parametrize("language_server", [Language.BASH], indirect=True)
    def test_bash_function_syntax_patterns(self, language_server: SolidLanguageServer) -> None:
        """Test that LSP detects different bash function syntax patterns correctly."""
        # Test main.sh (has both 'function' keyword and traditional syntax)
        main_all_symbols, _main_root_symbols = language_server.request_document_symbols("main.sh", include_body=False)
        main_functions = [symbol for symbol in main_all_symbols if symbol.get("kind") == 12]
        main_function_names = [func["name"] for func in main_functions]

        # Test utils.sh (all use 'function' keyword)
        utils_all_symbols, _utils_root_symbols = language_server.request_document_symbols("utils.sh", include_body=False)
        utils_functions = [symbol for symbol in utils_all_symbols if symbol.get("kind") == 12]
        utils_function_names = [func["name"] for func in utils_functions]

        # Verify LSP detects both syntax patterns
        # main() uses traditional syntax: main() {
        assert "main" in main_function_names, "LSP should detect traditional function syntax"

        # Functions with 'function' keyword: function name() {
        assert "greet_user" in main_function_names, "LSP should detect function keyword syntax"
        assert "process_items" in main_function_names, "LSP should detect function keyword syntax"

        # Verify all expected utils functions are detected by LSP
        expected_utils = [
            "to_uppercase",
            "to_lowercase",
            "trim_whitespace",
            "backup_file",
            "contains_element",
            "log_message",
            "is_valid_email",
            "is_number",
        ]

        for expected_func in expected_utils:
            assert expected_func in utils_function_names, f"LSP should detect {expected_func} function"

        # Verify total counts match expectations
        assert len(main_functions) >= 3, f"Should find at least 3 functions in main.sh, found {len(main_functions)}"
        assert len(utils_functions) >= 8, f"Should find at least 8 functions in utils.sh, found {len(utils_functions)}"

```
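
The assertions above all follow the same pattern: filter the document-symbol list down to LSP `SymbolKind` 12 (Function) and check the names. A self-contained sketch with a hand-written symbol list shaped like an LSP response:

```python
# Sketch: filtering LSP document symbols by kind, as the bash tests do.
FUNCTION_KIND = 12  # LSP SymbolKind.Function

symbols = [
    {"name": "greet_user", "kind": 12},
    {"name": "GREETING", "kind": 13},  # SymbolKind.Variable
    {"name": "main", "kind": 12},
]

function_names = [s["name"] for s in symbols if s.get("kind") == FUNCTION_KIND]
print(function_names)  # → ['greet_user', 'main']
```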

--------------------------------------------------------------------------------
/test/solidlsp/elixir/conftest.py:
--------------------------------------------------------------------------------

```python
"""
Elixir-specific test configuration and fixtures.
"""

import os
import subprocess
import time
from pathlib import Path

import pytest


def ensure_elixir_test_repo_compiled(repo_path: str) -> None:
    """Ensure the Elixir test repository dependencies are installed and project is compiled.

    Next LS requires the project to be fully compiled and indexed before providing
    complete references and symbol resolution. This function:
    1. Installs dependencies via 'mix deps.get'
    2. Compiles the project via 'mix compile'

    This is essential in CI environments where dependencies aren't pre-installed.

    Args:
        repo_path: Path to the Elixir project root directory

    """
    # Check if this looks like an Elixir project
    mix_file = os.path.join(repo_path, "mix.exs")
    if not os.path.exists(mix_file):
        return

    # Check if already compiled (optimization for repeated runs)
    build_path = os.path.join(repo_path, "_build")
    deps_path = os.path.join(repo_path, "deps")

    if os.path.exists(build_path) and os.path.exists(deps_path):
        print(f"Elixir test repository already compiled in {repo_path}")
        return

    try:
        print("Installing dependencies and compiling Elixir test repository for optimal Next LS performance...")

        # First, install dependencies with increased timeout for CI
        print("=" * 60)
        print("Step 1/2: Installing Elixir dependencies...")
        print("=" * 60)
        start_time = time.time()

        deps_result = subprocess.run(
            ["mix", "deps.get"],
            cwd=repo_path,
            capture_output=True,
            text=True,
            timeout=180,  # 3 minutes for dependency installation (CI can be slow)
            check=False,
        )

        deps_duration = time.time() - start_time
        print(f"Dependencies installation completed in {deps_duration:.2f} seconds")

        # Always log the output for transparency
        if deps_result.stdout.strip():
            print("Dependencies stdout:")
            print("-" * 40)
            print(deps_result.stdout)
            print("-" * 40)

        if deps_result.stderr.strip():
            print("Dependencies stderr:")
            print("-" * 40)
            print(deps_result.stderr)
            print("-" * 40)

        if deps_result.returncode != 0:
            print(f"⚠️  Warning: Dependencies installation failed with exit code {deps_result.returncode}")
            # Continue anyway - some projects might not have dependencies
        else:
            print("✓ Dependencies installed successfully")

        # Then compile the project with increased timeout for CI
        print("=" * 60)
        print("Step 2/2: Compiling Elixir project...")
        print("=" * 60)
        start_time = time.time()

        compile_result = subprocess.run(
            ["mix", "compile"],
            cwd=repo_path,
            capture_output=True,
            text=True,
            timeout=300,  # 5 minutes for compilation (Credo compilation can be slow in CI)
            check=False,
        )

        compile_duration = time.time() - start_time
        print(f"Compilation completed in {compile_duration:.2f} seconds")

        # Always log the output for transparency
        if compile_result.stdout.strip():
            print("Compilation stdout:")
            print("-" * 40)
            print(compile_result.stdout)
            print("-" * 40)

        if compile_result.stderr.strip():
            print("Compilation stderr:")
            print("-" * 40)
            print(compile_result.stderr)
            print("-" * 40)

        if compile_result.returncode == 0:
            print(f"✓ Elixir test repository compiled successfully in {repo_path}")
        else:
            print(f"⚠️  Warning: Compilation completed with exit code {compile_result.returncode}")
            # Still continue - warnings are often non-fatal

        print("=" * 60)
        print(f"Total setup time: {deps_duration + compile_duration:.2f} seconds")
        print("=" * 60)

    except subprocess.TimeoutExpired as e:
        print("=" * 60)
        print(f"❌ TIMEOUT: Elixir setup timed out after {e.timeout} seconds")
        print(f"Command: {' '.join(e.cmd)}")
        print("This may indicate slow CI environment - Next LS may still work but with reduced functionality")

        # Try to get partial output if available
        if hasattr(e, "stdout") and e.stdout:
            print("Partial stdout before timeout:")
            print("-" * 40)
            print(e.stdout)
            print("-" * 40)
        if hasattr(e, "stderr") and e.stderr:
            print("Partial stderr before timeout:")
            print("-" * 40)
            print(e.stderr)
            print("-" * 40)
        print("=" * 60)

    except FileNotFoundError:
        print("❌ ERROR: 'mix' command not found - Elixir test repository may not be compiled")
        print("Please ensure Elixir is installed and available in PATH")
    except Exception as e:
        print(f"❌ ERROR: Failed to prepare Elixir test repository: {e}")


@pytest.fixture(scope="session", autouse=True)
def setup_elixir_test_environment():
    """Automatically prepare Elixir test environment for all Elixir tests.

    This fixture runs once per test session and automatically:
    1. Installs dependencies via 'mix deps.get'
    2. Compiles the Elixir test repository via 'mix compile'

    It uses autouse=True so it runs automatically without needing to be explicitly
    requested by tests. This ensures Next LS has a fully prepared project to work with.

    Uses generous timeouts (3-5 minutes) to accommodate slow CI environments.
    All output is logged for transparency and debugging.
    """
    # Get the test repo path relative to this conftest.py file
    test_repo_path = Path(__file__).parent.parent.parent / "resources" / "repos" / "elixir" / "test_repo"
    ensure_elixir_test_repo_compiled(str(test_repo_path))
    return str(test_repo_path)


@pytest.fixture(scope="session")
def elixir_test_repo_path(setup_elixir_test_environment):
    """Get the path to the prepared Elixir test repository.

    This fixture depends on setup_elixir_test_environment to ensure dependencies
    are installed and compilation has completed before returning the path.
    """
    return setup_elixir_test_environment

```

--------------------------------------------------------------------------------
/test/solidlsp/erlang/conftest.py:
--------------------------------------------------------------------------------

```python
"""
Erlang-specific test configuration and fixtures.
"""

import os
import subprocess
import time
from pathlib import Path

import pytest


def ensure_erlang_test_repo_compiled(repo_path: str) -> None:
    """Ensure the Erlang test repository dependencies are installed and project is compiled.

    Erlang LS requires the project to be fully compiled and indexed before providing
    complete references and symbol resolution. This function:
    1. Installs dependencies via 'rebar3 deps'
    2. Compiles the project via 'rebar3 compile'

    This is essential in CI environments where dependencies aren't pre-installed.

    Args:
        repo_path: Path to the Erlang project root directory

    """
    # Check if this looks like an Erlang project
    rebar_config = os.path.join(repo_path, "rebar.config")
    if not os.path.exists(rebar_config):
        return

    # Check if already compiled (optimization for repeated runs)
    build_path = os.path.join(repo_path, "_build")
    deps_path = os.path.join(repo_path, "deps")

    if os.path.exists(build_path) and os.path.exists(deps_path):
        print(f"Erlang test repository already compiled in {repo_path}")
        return

    try:
        print("Installing dependencies and compiling Erlang test repository for optimal Erlang LS performance...")

        # First, install dependencies with increased timeout for CI
        print("=" * 60)
        print("Step 1/2: Installing Erlang dependencies...")
        print("=" * 60)
        start_time = time.time()

        deps_result = subprocess.run(
            ["rebar3", "deps"],
            cwd=repo_path,
            capture_output=True,
            text=True,
            timeout=180,  # 3 minutes for dependency installation (CI can be slow)
            check=False,
        )

        deps_duration = time.time() - start_time
        print(f"Dependencies installation completed in {deps_duration:.2f} seconds")

        # Always log the output for transparency
        if deps_result.stdout.strip():
            print("Dependencies stdout:")
            print("-" * 40)
            print(deps_result.stdout)
            print("-" * 40)

        if deps_result.stderr.strip():
            print("Dependencies stderr:")
            print("-" * 40)
            print(deps_result.stderr)
            print("-" * 40)

        if deps_result.returncode != 0:
            print(f"⚠️  Warning: Dependencies installation failed with exit code {deps_result.returncode}")
            # Continue anyway - some projects might not have dependencies
        else:
            print("✓ Dependencies installed successfully")

        # Then compile the project with increased timeout for CI
        print("=" * 60)
        print("Step 2/2: Compiling Erlang project...")
        print("=" * 60)
        start_time = time.time()

        compile_result = subprocess.run(
            ["rebar3", "compile"],
            cwd=repo_path,
            capture_output=True,
            text=True,
            timeout=300,  # 5 minutes for compilation (Dialyzer can be slow in CI)
            check=False,
        )

        compile_duration = time.time() - start_time
        print(f"Compilation completed in {compile_duration:.2f} seconds")

        # Always log the output for transparency
        if compile_result.stdout.strip():
            print("Compilation stdout:")
            print("-" * 40)
            print(compile_result.stdout)
            print("-" * 40)

        if compile_result.stderr.strip():
            print("Compilation stderr:")
            print("-" * 40)
            print(compile_result.stderr)
            print("-" * 40)

        if compile_result.returncode == 0:
            print(f"✓ Erlang test repository compiled successfully in {repo_path}")
        else:
            print(f"⚠️  Warning: Compilation completed with exit code {compile_result.returncode}")
            # Still continue - warnings are often non-fatal

        print("=" * 60)
        print(f"Total setup time: {deps_duration + compile_duration:.2f} seconds")
        print("=" * 60)

    except subprocess.TimeoutExpired as e:
        print("=" * 60)
        print(f"❌ TIMEOUT: Erlang setup timed out after {e.timeout} seconds")
        print(f"Command: {' '.join(e.cmd)}")
        print("This may indicate slow CI environment - Erlang LS may still work but with reduced functionality")

        # Try to get partial output if available
        if hasattr(e, "stdout") and e.stdout:
            print("Partial stdout before timeout:")
            print("-" * 40)
            print(e.stdout)
            print("-" * 40)
        if hasattr(e, "stderr") and e.stderr:
            print("Partial stderr before timeout:")
            print("-" * 40)
            print(e.stderr)
            print("-" * 40)
        print("=" * 60)

    except FileNotFoundError:
        print("❌ ERROR: 'rebar3' command not found - Erlang test repository may not be compiled")
        print("Please ensure rebar3 is installed and available in PATH")
    except Exception as e:
        print(f"❌ ERROR: Failed to prepare Erlang test repository: {e}")


@pytest.fixture(scope="session", autouse=True)
def setup_erlang_test_environment():
    """Automatically prepare Erlang test environment for all Erlang tests.

    This fixture runs once per test session and automatically:
    1. Installs dependencies via 'rebar3 deps'
    2. Compiles the Erlang test repository via 'rebar3 compile'

    It uses autouse=True so it runs automatically without needing to be explicitly
    requested by tests. This ensures Erlang LS has a fully prepared project to work with.

    Uses generous timeouts (3-5 minutes) to accommodate slow CI environments.
    All output is logged for transparency and debugging.
    """
    # Get the test repo path relative to this conftest.py file
    test_repo_path = Path(__file__).parent.parent.parent / "resources" / "repos" / "erlang" / "test_repo"
    ensure_erlang_test_repo_compiled(str(test_repo_path))
    return str(test_repo_path)


@pytest.fixture(scope="session")
def erlang_test_repo_path(setup_erlang_test_environment):
    """Get the path to the prepared Erlang test repository.

    This fixture depends on setup_erlang_test_environment to ensure dependencies
    are installed and compilation has completed before returning the path.
    """
    return setup_erlang_test_environment

```
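
Both conftest modules above share the same build-guard shape: skip the build when artifacts already exist, otherwise run the build tool with a generous timeout and `check=False` so a non-zero exit does not abort the test session. A minimal sketch of that pattern, with a trivial stand-in command instead of `mix`/`rebar3`:

```python
# Sketch of the build-guard pattern used by the Elixir and Erlang conftests.
import os
import subprocess
import sys

def ensure_built(repo_path: str, build_dir: str = "_build") -> bool:
    if os.path.exists(os.path.join(repo_path, build_dir)):
        return True  # already compiled, nothing to do
    # stand-in for "mix compile" / "rebar3 compile"
    result = subprocess.run(
        [sys.executable, "-c", "print('compiling...')"],
        cwd=repo_path,
        capture_output=True,
        text=True,
        timeout=300,  # generous timeout for slow CI environments
        check=False,  # never raise on non-zero exit; warnings are often non-fatal
    )
    return result.returncode == 0

print(ensure_built("."))
```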

--------------------------------------------------------------------------------
/.serena/memories/serena_core_concepts_and_architecture.md:
--------------------------------------------------------------------------------

```markdown
# Serena Core Concepts and Architecture

## High-Level Architecture

Serena is built around a dual-layer architecture:

1. **SerenaAgent** - The main orchestrator that manages projects, tools, and user interactions
2. **SolidLanguageServer** - A unified wrapper around Language Server Protocol (LSP) implementations

## Core Components

### 1. SerenaAgent (`src/serena/agent.py`)

The central coordinator that:
- Manages active projects and their configurations
- Coordinates between different tools and contexts
- Handles language server lifecycle
- Manages memory persistence
- Provides MCP (Model Context Protocol) server interface

Key responsibilities:
- **Project Management** - Activating, switching between projects
- **Tool Registry** - Loading and managing available tools based on context/mode
- **Language Server Integration** - Starting/stopping language servers per project
- **Memory Management** - Persistent storage of project knowledge
- **Task Execution** - Coordinating complex multi-step operations

### 2. SolidLanguageServer (`src/solidlsp/ls.py`)

A unified abstraction over multiple language servers that provides:
- **Language-agnostic interface** for symbol operations
- **Caching layer** for performance optimization
- **Error handling and recovery** for unreliable language servers
- **Uniform API** regardless of underlying LSP implementation

Core capabilities:
- Symbol discovery and navigation
- Code completion and hover information
- Find references and definitions
- Document and workspace symbol search
- File watching and change notifications

### 3. Tool System (`src/serena/tools/`)

Modular tool architecture with several categories:

#### File Tools (`file_tools.py`)
- File system operations (read, write, list directories)
- Text search and pattern matching
- Regex-based replacements

#### Symbol Tools (`symbol_tools.py`)  
- Language-aware symbol finding and navigation
- Symbol body replacement and insertion
- Reference finding across codebase

#### Memory Tools (`memory_tools.py`)
- Project knowledge persistence
- Memory retrieval and management
- Onboarding information storage

#### Configuration Tools (`config_tools.py`)
- Project activation and switching
- Mode and context management
- Tool inclusion/exclusion

### 4. Configuration System (`src/serena/config/`)

Multi-layered configuration supporting:
- **Contexts** - Define available tools and their behavior
- **Modes** - Specify operational patterns (interactive, editing, etc.)
- **Projects** - Per-project settings and language server configs
- **Tool Sets** - Grouped tool collections for different use cases

## Language Server Integration

### Language Support Model

Each supported language has:
1. **Language Server Implementation** (`src/solidlsp/language_servers/`)
2. **Runtime Dependencies** - Managed downloads of language servers
3. **Test Repository** (`test/resources/repos/<language>/`)
4. **Test Suite** (`test/solidlsp/<language>/`)

### Language Server Lifecycle

1. **Discovery** - Find language servers or download them automatically
2. **Initialization** - Start server process and perform LSP handshake
3. **Project Setup** - Open workspace and configure language-specific settings
4. **Operation** - Handle requests/responses with caching and error recovery
5. **Shutdown** - Clean shutdown of server processes

### Supported Languages

Current language support includes:
- **C#** - Microsoft.CodeAnalysis.LanguageServer (.NET 9)
- **Python** - Pyright or Jedi
- **TypeScript/JavaScript** - TypeScript Language Server
- **Rust** - rust-analyzer
- **Go** - gopls
- **Java** - Eclipse JDT Language Server
- **Kotlin** - Kotlin Language Server
- **PHP** - Intelephense
- **Ruby** - Solargraph
- **Clojure** - clojure-lsp
- **Elixir** - ElixirLS
- **Dart** - Dart Language Server
- **C/C++** - clangd
- **Terraform** - terraform-ls

## Memory and Knowledge Management

### Memory System
- **Markdown-based storage** in `.serena/memories/` directory
- **Contextual retrieval** - memories loaded based on relevance
- **Project-specific** knowledge persistence
- **Onboarding support** - guided setup for new projects

### Knowledge Categories
- **Project Structure** - Directory layouts, build systems
- **Architecture Patterns** - How the codebase is organized
- **Development Workflows** - Testing, building, deployment
- **Domain Knowledge** - Business logic and requirements

## MCP Server Interface

Serena exposes its functionality through Model Context Protocol:
- **Tool Discovery** - AI agents can enumerate available tools
- **Context-Aware Operations** - Tools behave based on active project/mode
- **Stateful Sessions** - Maintains project state across interactions
- **Error Handling** - Graceful degradation when tools fail

## Error Handling and Resilience

### Language Server Reliability
- **Timeout Management** - Configurable timeouts for LSP requests
- **Process Recovery** - Automatic restart of crashed language servers
- **Fallback Behavior** - Graceful degradation when LSP unavailable
- **Caching Strategy** - Reduces impact of server failures

### Project Activation Safety
- **Validation** - Verify project structure before activation
- **Error Isolation** - Project failures don't affect other projects
- **Recovery Mechanisms** - Automatic cleanup and retry logic

## Performance Considerations

### Caching Strategy
- **Symbol Cache** - In-memory caching of expensive symbol operations
- **File System Cache** - Reduced disk I/O for repeated operations
- **Language Server Cache** - Persistent cache across sessions

### Resource Management
- **Language Server Pooling** - Reuse servers across projects when possible
- **Memory Management** - Automatic cleanup of unused resources
- **Background Operations** - Async operations don't block user interactions

## Extension Points

### Adding New Languages
1. Implement language server class in `src/solidlsp/language_servers/`
2. Add runtime dependencies configuration
3. Create test repository and test suite
4. Update language enumeration and configuration

### Adding New Tools
1. Inherit from `Tool` base class in `tools_base.py`
2. Implement required methods and parameter validation
3. Register tool in appropriate tool registry
4. Add to context/mode configurations as needed

### Custom Contexts and Modes
- Define new contexts in YAML configuration files
- Specify tool sets and operational patterns
- Configure for specific development workflows
```
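
The "Adding New Tools" steps in the memory file above can be sketched as follows. Note that `Tool` here is a minimal stand-in for the real base class in `tools_base.py`, and `CountLinesTool` is a hypothetical example, not a tool that exists in the repo:

```python
# Illustrative only: a minimal stand-in for serena's Tool base class
# (the real one lives in src/serena/tools/tools_base.py).
class Tool:
    def get_name_from_cls(self) -> str:
        return type(self).__name__

class CountLinesTool(Tool):
    """Hypothetical tool: count the lines in a string of file content."""

    def apply(self, content: str) -> str:
        return str(len(content.splitlines()))

tool = CountLinesTool()
print(tool.get_name_from_cls(), tool.apply("a\nb\nc"))  # → CountLinesTool 3
```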