This is page 1 of 2. Use http://codebase.md/cyberchitta/llm-context.py?page={x} to view the full context.
# Directory Structure
```
├── .gitignore
├── .llm-context
│   ├── .gitignore
│   ├── config.yaml
│   ├── lc-project-notes.md
│   ├── rules
│   │   ├── flt-no-excerpters.md
│   │   ├── flt-repo-base.md
│   │   ├── lc
│   │   │   ├── exc-base.md
│   │   │   ├── flt-base.md
│   │   │   ├── flt-no-files.md
│   │   │   ├── flt-no-full.md
│   │   │   ├── flt-no-outline.md
│   │   │   ├── ins-developer.md
│   │   │   ├── ins-rule-framework.md
│   │   │   ├── ins-rule-intro.md
│   │   │   ├── prm-developer.md
│   │   │   ├── prm-rule-create.md
│   │   │   ├── sty-code.md
│   │   │   ├── sty-javascript.md
│   │   │   ├── sty-jupyter.md
│   │   │   └── sty-python.md
│   │   ├── prm-code.md
│   │   ├── prm-rules.md
│   │   ├── prm-templates.md
│   │   └── tmp-prm-docs-update.md
│   └── templates
│       └── lc
│           ├── context.j2
│           ├── definitions.j2
│           ├── end-prompt.j2
│           ├── excerpts.j2
│           ├── excluded.j2
│           ├── files.j2
│           ├── missing-excerpted.j2
│           ├── missing-files.j2
│           ├── outlines.j2
│           ├── overview.j2
│           └── prompt.j2
├── CHANGELOG.md
├── cliff.toml
├── docs
│   └── user-guide.md
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── llm_context
│       ├── __init__.py
│       ├── cli.py
│       ├── cmd_pipeline.py
│       ├── commands.py
│       ├── context_generator.py
│       ├── context_spec.py
│       ├── exceptions.py
│       ├── excerpters
│       │   ├── base.py
│       │   ├── code_outliner.py
│       │   ├── language_mapping.py
│       │   ├── parser.py
│       │   ├── service.py
│       │   ├── sfc.py
│       │   ├── tagger.py
│       │   └── ts-qry
│       │       ├── c-tags.scm
│       │       ├── cpp-tags.scm
│       │       ├── csharp-tags.scm
│       │       ├── elisp-tags.scm
│       │       ├── elixir-tags.scm
│       │       ├── elm-tags.scm
│       │       ├── go-tags.scm
│       │       ├── java-tags.scm
│       │       ├── javascript-tags.scm
│       │       ├── php-tags.scm
│       │       ├── python-tags.scm
│       │       ├── README.md
│       │       ├── ruby-tags.scm
│       │       ├── rust-tags.scm
│       │       ├── svelte-injections.scm
│       │       └── typescript-tags.scm
│       ├── exec_env.py
│       ├── file_selector.py
│       ├── lc_resources
│       │   ├── dotgitignore
│       │   ├── rules
│       │   │   └── lc
│       │   │       ├── exc-base.md
│       │   │       ├── flt-base.md
│       │   │       ├── flt-no-files.md
│       │   │       ├── flt-no-full.md
│       │   │       ├── flt-no-outline.md
│       │   │       ├── ins-developer.md
│       │   │       ├── ins-rule-framework.md
│       │   │       ├── ins-rule-intro.md
│       │   │       ├── prm-developer.md
│       │   │       ├── prm-rule-create.md
│       │   │       ├── sty-code.md
│       │   │       ├── sty-javascript.md
│       │   │       ├── sty-jupyter.md
│       │   │       └── sty-python.md
│       │   └── templates
│       │       └── lc
│       │           ├── context.j2
│       │           ├── definitions.j2
│       │           ├── end-prompt.j2
│       │           ├── excerpts.j2
│       │           ├── excluded.j2
│       │           ├── files.j2
│       │           ├── missing-files.j2
│       │           ├── outlines.j2
│       │           ├── overview.j2
│       │           └── prompt.j2
│       ├── mcp.py
│       ├── overviews.py
│       ├── project_setup.py
│       ├── rule_parser.py
│       ├── rule.py
│       ├── state.py
│       └── utils.py
├── tests
│   ├── test_body_languages.py
│   ├── test_excerpt_languages.py
│   ├── test_include_filters.py
│   ├── test_logging.py
│   ├── test_nested_gitignores.py
│   ├── test_outline_languages.py
│   ├── test_outliner.py
│   ├── test_parser.py
│   ├── test_path_converter.py
│   └── test_pathspec_ignorer.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.llm-context/.gitignore:
--------------------------------------------------------------------------------
```
curr_ctx.yaml
lc-state.yaml
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Python
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
env/
venv/
ENV/
env.bak/
venv.bak/
.python-version
.pytest_cache/
.mypy_cache/
.ruff_cache/
# Distribution / packaging
dist/
build/
*.egg-info/
*.egg
# Editors
.vscode/
.idea/
*.swp
*~
# macOS
.DS_Store
# Windows
Thumbs.db
# Jupyter Notebook
.ipynb_checkpoints
# Virtual environment
.venv
.env
.DS_Store
```
--------------------------------------------------------------------------------
/src/llm_context/excerpters/ts-qry/README.md:
--------------------------------------------------------------------------------
```markdown
# Credits
Many of these query files were originally based on the tags.scm files from https://github.com/paul-gauthier/aider - licensed under the Apache 2.0 license.
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# LLM Context
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
[PyPI](https://pypi.org/project/llm-context/)
[Downloads](https://pepy.tech/project/llm-context)
**Reduce friction when providing context to LLMs.** Share relevant project files instantly through smart selection and rule-based filtering.
## The Problem
Getting project context into LLM chats is tedious:
- Manually copying/pasting files takes forever
- Hard to identify which files are relevant
- Including too much hits context limits, too little misses important details
- AI requests for additional files require manual fetching
- Repeating this process for every conversation
## The Solution
```bash
lc-select # Smart file selection
lc-context # Instant formatted context
# Paste and work - AI can access additional files seamlessly
```
**Result**: From "I need to share my project" to productive AI collaboration in seconds.
> **Note**: This project was developed in collaboration with several Claude Sonnets (3.5, 3.6, 3.7 and 4.0), as well as Groks (3 and 4), using LLM Context itself to share code during development. All code in the repository is heavily human-curated (by me 😇, @restlessronin).
## Installation
```bash
uv tool install "llm-context>=0.5.0"
```
## Quick Start
### Basic Usage
```bash
# One-time setup
cd your-project
lc-init
# Daily usage
lc-select
lc-context
```
### MCP Integration (Recommended)
```jsonc
{
  "mcpServers": {
    "llm-context": {
      "command": "uvx",
      "args": ["--from", "llm-context", "lc-mcp"]
    }
  }
}
```
With MCP, AI can access additional files directly during conversations.
### Project Customization
```bash
# Create project-specific filters
cat > .llm-context/rules/flt-repo-base.md << 'EOF'
---
compose:
  filters: [lc/flt-base]
gitignores:
  full-files: ["*.md", "/tests", "/node_modules"]
---
EOF
# Customize main development rule
cat > .llm-context/rules/prm-code.md << 'EOF'
---
instructions: [lc/ins-developer, lc/sty-python]
compose:
  filters: [flt-repo-base]
  excerpters: [lc/exc-base]
---
Additional project-specific guidelines and context.
EOF
```
## Core Commands
| Command              | Purpose                                    |
| -------------------- | ------------------------------------------ |
| `lc-init`            | Initialize project configuration           |
| `lc-select`          | Select files based on current rule         |
| `lc-context`         | Generate and copy context                  |
| `lc-context -nt`     | Generate context for non-MCP environments  |
| `lc-set-rule <name>` | Switch between rules                       |
| `lc-missing`         | Handle file and context requests (non-MCP) |
## Rule System
Rules use a systematic five-category structure:
- **Prompt Rules (`prm-`)**: Generate project contexts (e.g., `lc/prm-developer`, `lc/prm-rule-create`)
- **Filter Rules (`flt-`)**: Control file inclusion (e.g., `lc/flt-base`, `lc/flt-no-files`)
- **Instruction Rules (`ins-`)**: Provide guidelines (e.g., `lc/ins-developer`, `lc/ins-rule-framework`)
- **Style Rules (`sty-`)**: Enforce coding standards (e.g., `lc/sty-python`, `lc/sty-code`)
- **Excerpt Rules (`exc-`)**: Configure extractions for context reduction (e.g., `lc/exc-base`)
### Example Rule
```yaml
---
description: "Debug API authentication issues"
compose:
  filters: [lc/flt-no-files]
  excerpters: [lc/exc-base]
also-include:
  full-files: ["/src/auth/**", "/tests/auth/**"]
---
Focus on authentication system and related tests.
```
## Workflow Patterns
### Daily Development
```bash
lc-set-rule lc/prm-developer
lc-select
lc-context
# AI can review changes, access additional files as needed
```
### Focused Tasks
```bash
# Let AI help create minimal context
lc-set-rule lc/prm-rule-create
lc-context -nt
# Work with AI to create task-specific rule using tmp-prm- prefix
```
### MCP Benefits
- **Code review**: AI examines your changes for completeness/correctness
- **Additional files**: AI accesses initially excluded files when needed
- **Change tracking**: See what's been modified during conversations
- **Zero friction**: No manual file operations during development discussions
## Key Features
- **Smart File Selection**: Rules automatically include/exclude appropriate files
- **Instant Context Generation**: Formatted context copied to clipboard in seconds
- **MCP Integration**: AI can access additional files without manual intervention
- **Systematic Rule Organization**: Five-category system for clear rule composition
- **AI-Assisted Rule Creation**: Let AI help create minimal context for specific tasks
- **Code Excerpting**: Extracts significant content to reduce context size while preserving structure
## Learn More
- [User Guide](docs/user-guide.md) - Complete documentation
- [Design Philosophy](https://www.cyberchitta.cc/articles/llm-ctx-why.html)
- [Real-world Examples](https://www.cyberchitta.cc/articles/full-context-magic.html)
## License
Apache License, Version 2.0. See [LICENSE](LICENSE) for details.
```
--------------------------------------------------------------------------------
/src/llm_context/__init__.py:
--------------------------------------------------------------------------------
```python
```
--------------------------------------------------------------------------------
/.llm-context/rules/flt-no-excerpters.md:
--------------------------------------------------------------------------------
```markdown
---
name: flt-no-excerpters
description: remove tree-sitter-related code
gitignores:
  full-files:
    - excerpters/
---
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/flt-no-outline.md:
--------------------------------------------------------------------------------
```markdown
---
description: Excludes all files from outline selection using gitignore patterns. Use to focus context on full file content without structural summaries.
gitignores:
  excerpted-files: ["**/*"]
---
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/flt-no-outline.md:
--------------------------------------------------------------------------------
```markdown
---
description: Excludes all files from outline selection using gitignore patterns. Use to focus context on full file content without structural summaries.
gitignores:
  excerpted-files: ["**/*"]
---
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/flt-no-full.md:
--------------------------------------------------------------------------------
```markdown
---
description: Excludes all files from full content selection using gitignore patterns. Use to restrict context to code outlines or metadata, minimizing context size.
gitignores:
  full-files: ["**/*"]
---
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/flt-no-full.md:
--------------------------------------------------------------------------------
```markdown
---
description: Excludes all files from full content selection using gitignore patterns. Use to restrict context to code outlines or metadata, minimizing context size.
gitignores:
  full-files: ["**/*"]
---
```
--------------------------------------------------------------------------------
/.llm-context/rules/prm-code.md:
--------------------------------------------------------------------------------
```markdown
---
name: prm-code
description: default coding rule for this repo.
instructions: [lc/ins-developer, lc/sty-code, lc/sty-python]
compose:
  filters: [flt-repo-base, flt-no-excerpters]
  excerpters: [lc/exc-base]
---
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/flt-no-files.md:
--------------------------------------------------------------------------------
```markdown
---
description: Excludes all files from both full and outline selections. Use for minimal project contexts that include only metadata or notes, ideal for high-level planning.
compose:
  filters: [lc/flt-no-full, lc/flt-no-outline]
---
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/flt-no-files.md:
--------------------------------------------------------------------------------
```markdown
---
description: Excludes all files from both full and outline selections. Use for minimal project contexts that include only metadata or notes, ideal for high-level planning.
compose:
  filters: [lc/flt-no-full, lc/flt-no-outline]
---
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/prm-developer.md:
--------------------------------------------------------------------------------
```markdown
---
description: Configures a base prompt for developer workflows, composing standard file filters to include essential code files (e.g., .py, .js, .ts).
instructions: [lc/ins-developer]
compose:
  filters: [lc/flt-base]
  excerpters: [lc/exc-base]
---
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/prm-developer.md:
--------------------------------------------------------------------------------
```markdown
---
description: Configures a base prompt for developer workflows, composing standard file filters to include essential code files (e.g., .py, .js, .ts).
instructions: [lc/ins-developer]
compose:
  filters: [lc/flt-base]
  excerpters: [lc/exc-base]
---
```
--------------------------------------------------------------------------------
/.llm-context/rules/flt-repo-base.md:
--------------------------------------------------------------------------------
```markdown
---
name: flt-repo-base
description: additional repo-specific filters.
compose:
  filters: [lc/flt-base]
gitignores:
  full-files:
    - "*.md"
    - /tests
    - ts-qry/
    - lc_resources/
  excerpted-files:
    - "*.md"
    - /tests
    - ts-qry/
    - lc_resources/
---
```
--------------------------------------------------------------------------------
/.llm-context/rules/prm-rules.md:
--------------------------------------------------------------------------------
```markdown
---
name: prm-rules
description: work with the system rule files
instructions: [lc/ins-developer, lc/sty-code, lc/sty-python]
compose:
  filters: [flt-repo-base, flt-no-excerpters]
  excerpters: [lc/exc-base]
also-include:
  full-files: [/src/llm_context/lc_resources/rules/lc/**]
---
```
--------------------------------------------------------------------------------
/src/llm_context/exceptions.py:
--------------------------------------------------------------------------------
```python
class LLMContextError(Exception):
    def __init__(self, message: str, error_type: str):
        self.message = message
        self.error_type = error_type
        super().__init__(self.message)
class RuleResolutionError(LLMContextError):
    def __init__(self, message: str):
        super().__init__(message, "RULE_RESOLUTION_ERROR")
```
--------------------------------------------------------------------------------
/tests/test_logging.py:
--------------------------------------------------------------------------------
```python
import logging
from llm_context.exec_env import MessageCollector
def test_message_collector():
    messages = []
    collector = MessageCollector(messages)
    logger = logging.getLogger("test")
    logger.addHandler(collector)
    logger.setLevel(logging.INFO)
    test_msg = "Test message"
    logger.info(test_msg)
    assert messages == [test_msg]
```
--------------------------------------------------------------------------------
/.llm-context/rules/prm-templates.md:
--------------------------------------------------------------------------------
```markdown
---
description: Context for working on Jinja templates and template system
overview: full
instructions: [lc/ins-developer, lc/sty-code, lc/sty-python]
compose:
  filters: [lc/flt-base]
  excerpters: [lc/exc-base]
limit-to:
  full-files:
    - "**/*.j2"
  excerpted-files:
    - "/src/llm_context/context_generator.py"
    - "/src/llm_context/project_setup.py"
    - "/src/llm_context/rule.py"
---
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/prm-rule-create.md:
--------------------------------------------------------------------------------
```markdown
---
description: Generates a complete project context with instructions for creating focused rules, including new chat prefixes and common guidelines. Includes all rule files in full content for reference. Use for efficient rule creation tasks.
instructions: ["lc/ins-rule-intro", "lc/ins-rule-framework"]
overview: full
compose:
  filters: [lc/flt-base]
  excerpters: [lc/exc-base]
also-include:
  full-files: [/.llm-context/rules/**]
---
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/prm-rule-create.md:
--------------------------------------------------------------------------------
```markdown
---
description: Generates a complete project context with instructions for creating focused rules, including new chat prefixes and common guidelines. Includes all rule files in full content for reference. Use for efficient rule creation tasks.
instructions: ["lc/ins-rule-intro", "lc/ins-rule-framework"]
overview: full
compose:
  filters: [lc/flt-base]
  excerpters: [lc/exc-base]
also-include:
  full-files: [/.llm-context/rules/**]
---
```
--------------------------------------------------------------------------------
/.llm-context/config.yaml:
--------------------------------------------------------------------------------
```yaml
__info__: 'This project uses llm-context. For more information, visit: https://github.com/cyberchitta/llm-context.py
  or https://pypi.org/project/llm-context/'
templates:
  context: lc/context.j2
  definitions: lc/definitions.j2
  end-prompt: lc/end-prompt.j2
  excerpts: lc/excerpts.j2
  excluded: lc/excluded.j2
  files: lc/files.j2
  missing-files: lc/missing-files.j2
  outlines: lc/outlines.j2
  overview: lc/overview.j2
  prompt: lc/prompt.j2
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/exc-base.md:
--------------------------------------------------------------------------------
```markdown
---
description: Base excerpt mode mappings and default configurations
excerpt-modes:
  "*.py": code-outliner
  "*.js": code-outliner  
  "*.ts": code-outliner
  "*.jsx": code-outliner
  "*.tsx": code-outliner
  "*.java": code-outliner
  "*.cpp": code-outliner
  "*.c": code-outliner
  "*.cs": code-outliner
  "*.go": code-outliner
  "*.rs": code-outliner
  "*.rb": code-outliner
  "*.php": code-outliner
  "*.ex": code-outliner
  "*.elm": code-outliner
  "*.svelte": sfc
excerpt-config:
  sfc:
    with-style: false
    with-template: false
---
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/exc-base.md:
--------------------------------------------------------------------------------
```markdown
---
description: Base excerpt mode mappings and default configurations
excerpt-modes:
  "*.py": code-outliner
  "*.js": code-outliner  
  "*.ts": code-outliner
  "*.jsx": code-outliner
  "*.tsx": code-outliner
  "*.java": code-outliner
  "*.cpp": code-outliner
  "*.c": code-outliner
  "*.cs": code-outliner
  "*.go": code-outliner
  "*.rs": code-outliner
  "*.rb": code-outliner
  "*.php": code-outliner
  "*.ex": code-outliner
  "*.elm": code-outliner
  "*.svelte": sfc
excerpt-config:
  sfc:
    with-style: false
    with-template: false
---
```
--------------------------------------------------------------------------------
/src/llm_context/excerpters/base.py:
--------------------------------------------------------------------------------
```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any
from llm_context.excerpters.parser import Source
@dataclass(frozen=True)
class Excerpt:
    rel_path: str
    content: str
    metadata: dict[str, Any]
@dataclass(frozen=True)
class Excerpts:
    excerpts: list[Excerpt]
    metadata: dict[str, Any]
@dataclass(frozen=True)
class Excluded:
    sections: dict[str, str]  # section_name -> content
    metadata: dict[str, Any]
class Excerpter(ABC):
    @abstractmethod
    def excerpt(self, sources: list[Source]) -> Excerpts:
        pass
    @abstractmethod
    def excluded(self, sources: list[Source]) -> list[Excluded]:
        pass
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/ins-rule-intro.md:
--------------------------------------------------------------------------------
```markdown
---
description: Introduces the project focus creation guide for new chat sessions, emphasizing minimal file inclusion and multi-project coordination. Use to initiate rule creation in conversational workflows with LLMs.
---
# Project Focus Creation Guide
You have been provided with complete project context to help create focused, task-specific rules that include only the minimum necessary files for efficient LLM conversations.
## Your Mission
Analyze the provided project structure and help the user create a focused rule that includes only the essential files needed for their specific task, dramatically reducing context size while maintaining effectiveness.
## Multi-Project Contexts
When working with multiple projects, you'll need to create separate rules for each project. Coordinate the file selections across projects to ensure the combined context provides what's needed for the task.
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/ins-rule-intro.md:
--------------------------------------------------------------------------------
```markdown
---
description: Introduces the project focus creation guide for new chat sessions, emphasizing minimal file inclusion and multi-project coordination. Use to initiate rule creation in conversational workflows with LLMs.
---
# Project Focus Creation Guide
You have been provided with complete project context to help create focused, task-specific rules that include only the minimum necessary files for efficient LLM conversations.
## Your Mission
Analyze the provided project structure and help the user create a focused rule that includes only the essential files needed for their specific task, dramatically reducing context size while maintaining effectiveness.
## Multi-Project Contexts
When working with multiple projects, you'll need to create separate rules for each project. Coordinate the file selections across projects to ensure the combined context provides what's needed for the task.
```
--------------------------------------------------------------------------------
/.llm-context/lc-project-notes.md:
--------------------------------------------------------------------------------
```markdown
# LLM-Context Development Guide
## Build/Test/Lint Commands
- Build: `uv build`
- Run Tests: `uv run pytest tests/`
- Run Single Test: `uv run pytest tests/test_file.py::test_name -v`
- Code Linting: `ruff check .`
- Type Checking: `mypy src/`
## Code Style Guidelines
- Line Length: 100 characters max
- Python Version: 3.10+, targeting 3.13
- Import Order: standard library, third-party, local (using isort)
- Type Hints: Required, with `warn_return_any = true`
- Error Handling: Use custom exceptions from `exceptions.py`
- Naming: Snake case for functions/variables, PascalCase for classes
- Documentation: Docstrings for public functions and classes
- Code Structure: Small, focused modules with clear responsibilities
- Config Format: YAML (converted from TOML in v0.2.9)
## Repository Structure
- `src/llm_context/`: Main package code
- `tests/`: Test files (prefix with `test_`)
- `.llm-context/`: Configuration directory
- `docs/`: Documentation files
```
--------------------------------------------------------------------------------
/.llm-context/rules/tmp-prm-docs-update.md:
--------------------------------------------------------------------------------
```markdown
---
description: Context for updating README and user guide with latest architectural changes
overview: full
compose:
  filters: [lc/flt-base]
  excerpters: [lc/exc-base]
gitignores:
  full-files: ["src/**", "tests/**"]
  excerpted-files: ["src/**", "tests/**"]
also-include:
  full-files:
    - "/README.md"
    - "/docs/user-guide.md"
    - "/CHANGELOG.md"
    - "/pyproject.toml"
  excerpted-files:
    - "/src/llm_context/cli.py"
    - "/src/llm_context/mcp.py"
    - "/.llm-context/rules/lc/*.md"
---
## Documentation Update Context
This rule provides context for updating project documentation to reflect:
1. **Unified excerpting system** (replacing outlining terminology)
2. **Consolidated file selection** with `lc-select` command
3. **Unified context retrieval** with `lc-missing` tool
4. **New SFC excerpter** for Svelte/Vue files
5. **Updated CLI commands** and MCP tools
6. **New rule composition** with excerpters
Focus on user-facing documentation that explains the current system architecture and commands.
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/sty-jupyter.md:
--------------------------------------------------------------------------------
```markdown
---
description: Specifies style guidelines for Jupyter notebooks (.ipynb), focusing on cell structure, documentation, type annotations, AI-assisted development, and output management. Use for Jupyter-based projects to ensure clear, executable notebooks.
---
## Jupyter Notebook Guidelines
### Cell Structure
- One logical concept per cell (single function, data transformation, or analysis step)
- Execute cells independently when possible - avoid hidden dependencies
- Use meaningful cell execution order that tells a clear story
### Documentation Pattern
- Use markdown cells for descriptions, not code comments
- Code cells should contain zero comments - let expressive code speak for itself
- Focus markdown on _why_ and _context_, not _what_ and _how_
### Type Annotations
- Use `jaxtyping` and similar libraries for concrete, descriptive type signatures
- Specify array shapes, data types, and constraints explicitly
- Examples:
  ```python
  from jaxtyping import Float, Int, Array
  def process_features(
      data: Float[Array, "batch height width channels"],
      labels: Int[Array, "batch"]
  ) -> Float[Array, "batch features"]:
  ```
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/sty-jupyter.md:
--------------------------------------------------------------------------------
```markdown
---
description: Specifies style guidelines for Jupyter notebooks (.ipynb), focusing on cell structure, documentation, type annotations, AI-assisted development, and output management. Use for Jupyter-based projects to ensure clear, executable notebooks.
---
## Jupyter Notebook Guidelines
### Cell Structure
- One logical concept per cell (single function, data transformation, or analysis step)
- Execute cells independently when possible - avoid hidden dependencies
- Use meaningful cell execution order that tells a clear story
### Documentation Pattern
- Use markdown cells for descriptions, not code comments
- Code cells should contain zero comments - let expressive code speak for itself
- Focus markdown on _why_ and _context_, not _what_ and _how_
### Type Annotations
- Use `jaxtyping` and similar libraries for concrete, descriptive type signatures
- Specify array shapes, data types, and constraints explicitly
- Examples:
  ```python
  from jaxtyping import Float, Int, Array
  def process_features(
      data: Float[Array, "batch height width channels"],
      labels: Int[Array, "batch"]
  ) -> Float[Array, "batch features"]:
  ```
```
--------------------------------------------------------------------------------
/src/llm_context/excerpters/language_mapping.py:
--------------------------------------------------------------------------------
```python
from dataclasses import dataclass
from importlib import resources
from typing import Optional
def to_language(filename: str) -> Optional[str]:
    ext_to_lang = {
        "c": "c",
        "cc": "cpp",
        "cs": "csharp",
        "cpp": "cpp",
        "el": "elisp",
        "ex": "elixir",
        "elm": "elm",
        "go": "go",
        "java": "java",
        "js": "javascript",
        "mjs": "javascript",
        "php": "php",
        "py": "python",
        "rb": "ruby",
        "rs": "rust",
        "svelte": "svelte",
        "ts": "typescript",
        "vue": "vue",
    }
    extension = filename.split(".")[-1]
    return ext_to_lang.get(extension)
_tag_languages = [
    "c",
    "cpp",
    "csharp",
    "elisp",
    "elixir",
    "elm",
    "go",
    "java",
    "javascript",
    "php",
    "python",
    "ruby",
    "rust",
    "typescript",
]
@dataclass(frozen=True)
class LangQuery:
    def get_tag_query(self, language: str) -> str:
        assert language in _tag_languages
        if language == "typescript":
            return self._read_tag_query("javascript") + self._read_tag_query("typescript")
        return self._read_tag_query(language)
    def _read_tag_query(self, language: str) -> str:
        filename = f"{language}-tags.scm"
        return resources.files("llm_context.excerpters.ts-qry").joinpath(filename).read_text()
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/sty-python.md:
--------------------------------------------------------------------------------
```markdown
---
description: Provides Python-specific style guidelines, including Pythonic patterns, type system usage, class design, import organization, and idioms. Use for Python projects to ensure consistent, readable, and maintainable code.
---
## Python-Specific Guidelines
### Pythonic Patterns
- Use list/dict comprehensions over traditional loops
- Leverage tuple unpacking and multiple assignment
- Use conditional expressions for simple conditional logic
- Prefer single-pass operations: `sum(x for x in items if condition)` over separate filter+sum
### Type System
- Use comprehensive type hints throughout
- Import types from `typing` module as needed
- Use specific types: `list[str]` not `list`, `dict[str, int]` not `dict`
### Class Design
- Use `@dataclass(frozen=True)` as the default for all classes
- Keep `__init__` methods trivial - delegate complex construction to `@staticmethod create()` methods
- Design for immutability to enable functional composition
- Use `@property` for computed attributes
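- Example (a minimal sketch of this pattern; `Config` and its fields are illustrative):
  ```python
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Config:
      root: str
      name: str

      @staticmethod
      def create(root: str) -> "Config":
          # complex construction lives here, keeping __init__ trivial
          return Config(root, root.rstrip("/").rsplit("/", 1)[-1])

      @property
      def display(self) -> str:
          return f"{self.name} ({self.root})"
  ```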
### Import Organization
- Always place imports at module top
- Never use function-level imports except for documented lazy-loading scenarios
- Import order: standard library, third-party, local modules
- Follow PEP 8 naming conventions (snake_case for functions/variables, PascalCase for classes)
### Python Idioms
- Use `isinstance()` for type checking
- Leverage `enumerate()` and `zip()` for iteration
- Use context managers (`with` statements) for resource management
- Prefer `pathlib.Path` over string path manipulation
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/sty-python.md:
--------------------------------------------------------------------------------
```markdown
---
description: Provides Python-specific style guidelines, including Pythonic patterns, type system usage, class design, import organization, and idioms. Use for Python projects to ensure consistent, readable, and maintainable code.
---
## Python-Specific Guidelines
### Pythonic Patterns
- Use list/dict comprehensions over traditional loops
- Leverage tuple unpacking and multiple assignment
- Use conditional expressions for simple conditional logic
- Prefer single-pass operations: `sum(x for x in items if condition)` over separate filter+sum
### Type System
- Use comprehensive type hints throughout
- Import types from `typing` module as needed
- Use specific types: `list[str]` not `list`, `dict[str, int]` not `dict`
### Class Design
- Use `@dataclass(frozen=True)` as the default for all classes
- Keep `__init__` methods trivial - delegate complex construction to `@staticmethod create()` methods
- Design for immutability to enable functional composition
- Use `@property` for computed attributes
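- Example (a minimal sketch of this pattern; `Config` and its fields are illustrative):
  ```python
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Config:
      root: str
      name: str

      @staticmethod
      def create(root: str) -> "Config":
          # complex construction lives here, keeping __init__ trivial
          return Config(root, root.rstrip("/").rsplit("/", 1)[-1])

      @property
      def display(self) -> str:
          return f"{self.name} ({self.root})"
  ```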
### Import Organization
- Always place imports at module top
- Never use function-level imports except for documented lazy-loading scenarios
- Import order: standard library, third-party, local modules
- Follow PEP 8 naming conventions (snake_case for functions/variables, PascalCase for classes)
### Python Idioms
- Use `isinstance()` for type checking
- Leverage `enumerate()` and `zip()` for iteration
- Use context managers (`with` statements) for resource management
- Prefer `pathlib.Path` over string path manipulation
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/sty-javascript.md:
--------------------------------------------------------------------------------
```markdown
---
description: Details JavaScript-specific style guidelines, covering modern features, module systems, object design, asynchronous code, naming conventions, and documentation. Use for JavaScript projects to ensure consistent code style.
---
## JavaScript-Specific Guidelines
### Modern JavaScript Features
- Use array methods (`map`, `filter`, `reduce`) over traditional loops
- Leverage arrow functions for concise expressions
- Use destructuring assignment for objects and arrays
- Prefer template literals over string concatenation
- Use spread syntax (`...`) for array/object operations
### Module System
- Prefer named exports over default exports (better tree-shaking and refactoring)
- Use consistent import/export patterns
- Structure modules with clear, focused responsibilities
### Object Design
- Use `Object.freeze()` to enforce immutability
- Keep constructors simple - use static factory methods for complex creation
- Use class syntax for object-oriented patterns
- Prefer composition through mixins or utility functions
### Asynchronous Code
- Use `async/await` over Promise chains for better readability
- Handle errors with proper try/catch blocks
- Error messages must include: what failed, why it failed, and suggested action
### Naming Conventions
- Use kebab-case for file names
- Use PascalCase for classes and constructors
- Use camelCase for functions, variables, and methods
- Use UPPER_SNAKE_CASE for constants
### Documentation
- Use JSDoc comments for public APIs and complex business logic
- Document parameter and return types with JSDoc tags
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/sty-javascript.md:
--------------------------------------------------------------------------------
```markdown
---
description: Details JavaScript-specific style guidelines, covering modern features, module systems, object design, asynchronous code, naming conventions, and documentation. Use for JavaScript projects to ensure consistent code style.
---
## JavaScript-Specific Guidelines
### Modern JavaScript Features
- Use array methods (`map`, `filter`, `reduce`) over traditional loops
- Leverage arrow functions for concise expressions
- Use destructuring assignment for objects and arrays
- Prefer template literals over string concatenation
- Use spread syntax (`...`) for array/object operations
### Module System
- Prefer named exports over default exports (better tree-shaking and refactoring)
- Use consistent import/export patterns
- Structure modules with clear, focused responsibilities
### Object Design
- Use `Object.freeze()` to enforce immutability
- Keep constructors simple - use static factory methods for complex creation
- Use class syntax for object-oriented patterns
- Prefer composition through mixins or utility functions
### Asynchronous Code
- Use `async/await` over Promise chains for better readability
- Handle errors with proper try/catch blocks
- Error messages must include: what failed, why it failed, and suggested action
### Naming Conventions
- Use kebab-case for file names
- Use PascalCase for classes and constructors
- Use camelCase for functions, variables, and methods
- Use UPPER_SNAKE_CASE for constants
### Documentation
- Use JSDoc comments for public APIs and complex business logic
- Document parameter and return types with JSDoc tags
```
--------------------------------------------------------------------------------
/cliff.toml:
--------------------------------------------------------------------------------
```toml
# Configuration file for git-cliff (https://github.com/orhun/git-cliff)
# Customize this file based on your project's needs
[changelog]
# changelog header
header = """
# Changelog
All notable changes to this project will be documented in this file.
"""
# template for the changelog body
body = """
{% if version %}\
## [{{ version | trim_start_matches(pat="v") }}] - {{ timestamp | date(format="%Y-%m-%d") }}
{% else %}\
## [unreleased]
{% endif %}\
{% for group, commits in commits | group_by(attribute="group") %}
### {{ group | upper_first }}
{%- for commit in commits %}
- {{ commit.message | upper_first }}
{%- endfor %}
{% endfor %}\n
"""
# remove the leading and trailing whitespaces from the template
trim = true
# changelog footer
footer = """
<!-- generated by git-cliff -->
"""
[git]
# parse the commits based on https://www.conventionalcommits.org
conventional_commits = true
# filter out the commits that are not conventional
filter_unconventional = true
# regex for parsing and grouping commits
commit_parsers = [
  { message = "^feat", group = "Features" },
  { message = "^fix", group = "Bug Fixes" },
  { message = "^doc", group = "Documentation" },
  { message = "^perf", group = "Performance" },
  { message = "^refactor", group = "Refactor" },
  { message = "^style", group = "Styling" },
  { message = "^test", group = "Testing" },
  { message = "^chore\\(release\\): prepare for", skip = true },
  { message = "^chore", group = "Miscellaneous Tasks" },
  { body = ".*security", group = "Security" },
]
# sort the tags topologically
topo_order = true
# sort the commits inside sections by oldest/newest order
sort_commits = "newest"
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/ins-developer.md:
--------------------------------------------------------------------------------
```markdown
---
description: Defines the guidelines for coding tasks. It is typically the beginning of the prompt.
---
## Persona
Senior developer with 40 years of experience.
## Guidelines
1. Assume questions and code snippets relate to this project unless stated otherwise
2. Follow project's structure, standards and stack
3. Provide step-by-step guidance for changes
4. Explain rationale when asked
5. Be direct and concise
6. Think step by step
7. Use conventional commit format with co-author attribution
8. Follow project-specific instructions
## Response Structure
1. Direct answer/solution
2. Give very brief explanation of approach (only if needed)
3. Minimal code snippets during discussion phase (do not generate full files)
## Code Modification Guidelines
- **Do not generate complete code implementations until the user explicitly agrees to the approach**
- Discuss the approach before providing complete implementation. Be brief, no need to explain the obvious.
- Consider the existing project structure when suggesting new features
- For significant changes, propose a step-by-step implementation plan before writing extensive code
## Commit Message Format
When providing commit messages, use only a single-line conventional commit title with yourself as co-author unless additional detail is specifically requested:
```
<conventional commit title>
Co-authored-by: <Your actual AI model name and version> <model-identifier@llm-context>
```
Example format: Claude 4.5 Sonnet <claude-4.5-sonnet@llm-context>
(Note: Use your actual model name and identifier, not this example. However, the domain part identifies the tool, in this case 'llm-context'.)
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/ins-developer.md:
--------------------------------------------------------------------------------
```markdown
---
description: Defines the guidelines for coding tasks. It is typically the beginning of the prompt.
---
## Persona
Senior developer with 40 years of experience.
## Guidelines
1. Assume questions and code snippets relate to this project unless stated otherwise
2. Follow project's structure, standards and stack
3. Provide step-by-step guidance for changes
4. Explain rationale when asked
5. Be direct and concise
6. Think step by step
7. Use conventional commit format with co-author attribution
8. Follow project-specific instructions
## Response Structure
1. Direct answer/solution
2. Give very brief explanation of approach (only if needed)
3. Minimal code snippets during discussion phase (do not generate full files)
## Code Modification Guidelines
- **Do not generate complete code implementations until the user explicitly agrees to the approach**
- Discuss the approach before providing complete implementation. Be brief, no need to explain the obvious.
- Consider the existing project structure when suggesting new features
- For significant changes, propose a step-by-step implementation plan before writing extensive code
## Commit Message Format
When providing commit messages, use only a single-line conventional commit title with yourself as co-author unless additional detail is specifically requested:
```
<conventional commit title>
Co-authored-by: <Your actual AI model name and version> <model-identifier@llm-context>
```
Example format: Claude 4.5 Sonnet <claude-4.5-sonnet@llm-context>
(Note: Use your actual model name and identifier, not this example. However, the domain part identifies the tool, in this case 'llm-context'.)
```
--------------------------------------------------------------------------------
/src/llm_context/context_spec.py:
--------------------------------------------------------------------------------
```python
from dataclasses import dataclass
from pathlib import Path
from llm_context.exceptions import LLMContextError
from llm_context.project_setup import ProjectSetup
from llm_context.rule import Rule, RuleResolver, ToolConstants
from llm_context.state import StateStore
from llm_context.utils import ProjectLayout, Yaml
@dataclass(frozen=True)
class ContextSpec:
    project_layout: ProjectLayout
    templates: dict[str, str]
    rule: Rule
    state: ToolConstants
    @staticmethod
    def create(project_root: Path, rule_name: str, state: ToolConstants) -> "ContextSpec":
        ContextSpec.ensure_gitignore_exists(project_root)
        project_layout = ProjectLayout(project_root)
        ProjectSetup.create(project_layout).initialize()
        raw_config = Yaml.load(project_layout.config_path)
        resolver = RuleResolver.create(state, project_layout)
        rule = resolver.get_rule(rule_name)
        return ContextSpec(project_layout, raw_config["templates"], rule, state)
    @staticmethod
    def ensure_gitignore_exists(root_path: Path) -> None:
        if not (root_path / ".gitignore").exists():
            raise LLMContextError(
                "A .gitignore file is essential for this tool to function correctly. Please create one before proceeding.",
                "GITIGNORE_NOT_FOUND",
            )
    def has_rule(self, rule_name: str):
        resolver = RuleResolver.create(self.state, self.project_layout)
        return resolver.has_rule(rule_name)
    @property
    def state_store(self):
        return StateStore(self.project_layout.state_store_path)
    @property
    def project_root_path(self):
        return self.project_layout.root_path
    @property
    def project_root(self):
        return str(self.project_root_path)
```
--------------------------------------------------------------------------------
/src/llm_context/excerpters/service.py:
--------------------------------------------------------------------------------
```python
from dataclasses import dataclass
from typing import Any, Optional, Type
from llm_context.excerpters.base import Excerpter, Excerpts
from llm_context.excerpters.code_outliner import CodeOutliner
from llm_context.excerpters.parser import Source
from llm_context.excerpters.sfc import Sfc
from llm_context.rule import Rule  # Import for type
@dataclass(frozen=True)
class ExcerpterRegistry:
    excerpters: dict[str, Type[Excerpter]]
    @staticmethod
    def create() -> "ExcerpterRegistry":
        return ExcerpterRegistry(
            {
                "code-outliner": CodeOutliner,
                "sfc": Sfc,
            }
        )
    def get_excerpter(self, excerpter_name: str, config: dict[str, Any]) -> Optional[Excerpter]:
        excerpter_class = self.excerpters.get(excerpter_name)
        return excerpter_class(config) if excerpter_class else None  # type: ignore[call-arg]
    def excerpt(self, sources: list[Source], rule: Rule, tagger: Any) -> list[Excerpts]:
        if not rule.excerpt_modes:
            raise ValueError(
                f"Rule {rule.name} has no excerpt-modes configured. Add excerpt-modes or compose 'lc/exc-base'."
            )
        sources_by_mode: dict[str, list[Source]] = {}
        for source in sources:
            excerpt_mode = rule.get_excerpt_mode(source.rel_path)
            if excerpt_mode:
                if excerpt_mode not in sources_by_mode:
                    sources_by_mode[excerpt_mode] = []
                sources_by_mode[excerpt_mode].append(source)
        all_excerpts: list[Excerpts] = []
        for excerpt_mode, mode_sources in sources_by_mode.items():
            excerpt_config = rule.get_excerpt_config(excerpt_mode)
            excerpt_config["tagger"] = tagger
            excerpter = self.get_excerpter(excerpt_mode, excerpt_config)
            if excerpter:
                excerpts = excerpter.excerpt(mode_sources)
                all_excerpts.extend([excerpts])
        return all_excerpts
    def empty(self) -> list[Excerpts]:
        return [Excerpts([], {"sample_definitions": []})]
```
--------------------------------------------------------------------------------
/tests/test_parser.py:
--------------------------------------------------------------------------------
```python
import pytest
from llm_context.excerpters.parser import AST, ASTFactory, Source
from llm_context.excerpters.tagger import ASTBasedTagger, Definition, FileTags, Position, Tag
@pytest.fixture
def sample_ast():
    code = """
class TestClass:
    def test_method(self):
        pass
def test_function():
    obj = TestClass()
    obj.test_method()
    
test_function()
"""
    source = Source(rel_path="test.py", content=code)
    ast_factory = ASTFactory.create()
    return ast_factory.create_from_code(source)
@pytest.fixture
def sample_defref():
    code = """
class TestClass:
    def test_method(self):
        pass
def test_function():
    obj = TestClass()
    obj.test_method()
    
test_function()
"""
    source = Source(rel_path="test.py", content=code)
    ast_factory = ASTFactory.create()
    workspace_path = "/fake/workspace/path"
    tagger = ASTBasedTagger.create(workspace_path, ast_factory)
    return FileTags.create(tagger, source)
def test_ast_creation(sample_ast):
    assert sample_ast is not None
    assert sample_ast.language_name == "python"
def test_ast_captures(sample_ast):
    captures = sample_ast.tag_matches()
    assert len(captures) > 0
def test_defref_creation(sample_defref):
    assert sample_defref is not None
    assert sample_defref.rel_path == "test.py"
def test_defref_contents(sample_defref):
    defs = sample_defref.definitions
    assert len(defs) >= 2
    class_def = next((d for d in defs if d.name and d.name.text == "TestClass"), None)
    func_def = next((d for d in defs if d.name and d.name.text == "test_function"), None)
    assert class_def is not None, "TestClass definition not found"
    assert func_def is not None, "test_function definition not found"
    assert class_def.name is not None
    assert class_def.name.text == "TestClass"
    if func_def.name:
        assert func_def.name.text == "test_function"
def test_defref_positions(sample_defref):
    defs = sample_defref.definitions
    for defn in defs:
        assert defn.begin.ln >= 0
        assert defn.begin.col >= 0
        assert defn.end.ln >= defn.begin.ln
        assert defn.start >= 0
        assert defn.finish > defn.start
```
--------------------------------------------------------------------------------
/tests/test_path_converter.py:
--------------------------------------------------------------------------------
```python
import unittest
from pathlib import Path
from llm_context.utils import PathConverter
class TestPathConverter(unittest.TestCase):
    def setUp(self):
        self.converter = PathConverter.create(Path("/home/user/project"))
    def test_init(self):
        self.assertEqual(self.converter.root, Path("/home/user/project"))
        self.assertEqual(self.converter.root.name, "project")
    def test_validate_valid_paths(self):
        valid_paths = ["/project/src/main.py", "/project/tests/test_main.py", "/project/README.md"]
        self.assertTrue(self.converter.validate(valid_paths))
    def test_validate_invalid_paths(self):
        invalid_paths = [
            "project/src/main.py",  # Missing leading slash
            "/projects/tests/test_main.py",  # Incorrect root name
            "/project",  # No path after root name
            "/otherproject/README.md",  # Wrong project name
        ]
        self.assertFalse(self.converter.validate(invalid_paths))
    def test_validate_mixed_paths(self):
        mixed_paths = [
            "/project/src/main.py",
            "/otherproject/README.md",
            "/project/tests/test_main.py",
        ]
        self.assertFalse(self.converter.validate(mixed_paths))
    def test_to_absolute_conversion(self):
        relative_paths = [
            "/project/src/main.py",
            "/project/tests/test_main.py",
            "/project/README.md",
        ]
        expected_absolute_paths = [
            "/home/user/project/src/main.py",
            "/home/user/project/tests/test_main.py",
            "/home/user/project/README.md",
        ]
        self.assertEqual(self.converter.to_absolute(relative_paths), expected_absolute_paths)
    def test_to_relative_conversion(self):
        absolute_paths = [
            "/home/user/project/src/main.py",
            "/home/user/project/tests/test_main.py",
            "/home/user/project/README.md",
        ]
        expected_relative_paths = [
            "/project/src/main.py",
            "/project/tests/test_main.py",
            "/project/README.md",
        ]
        self.assertEqual(self.converter.to_relative(absolute_paths), expected_relative_paths)
    def test_to_absolute_empty_list(self):
        self.assertEqual(self.converter.to_absolute([]), [])
    def test_validate_empty_list(self):
        self.assertTrue(self.converter.validate([]))
if __name__ == "__main__":
    unittest.main()
```
--------------------------------------------------------------------------------
/src/llm_context/excerpters/tagger.py:
--------------------------------------------------------------------------------
```python
from dataclasses import dataclass
from typing import Any, NamedTuple, Optional, Protocol
from llm_context.excerpters.parser import ASTFactory, Source, to_definition
class Position(NamedTuple):
    ln: int
    col: int
@dataclass(frozen=True)
class Tag:
    text: str
    begin: Position
    end: Position
    start: int
    finish: int
    @staticmethod
    def create(node: dict[str, Any]) -> Optional["Tag"]:
        return (
            Tag(
                node["text"],
                Position(node["start_point"][0], node["start_point"][1]),
                Position(node["end_point"][0], node["end_point"][1]),
                node["start_byte"],
                node["end_byte"],
            )
            if node
            else None
        )
@dataclass(frozen=True)
class Definition:
    rel_path: str
    name: Tag | None
    text: str
    begin: Position
    end: Position
    start: int
    finish: int
    @staticmethod
    def create(rel_path: str, node: dict[str, Any]) -> "Definition":
        return Definition(
            rel_path,
            Tag.create(node["name"]),
            node["text"],
            Position(node["start_point"][0], node["start_point"][1]),
            Position(node["end_point"][0], node["end_point"][1]),
            node["start_byte"],
            node["end_byte"],
        )
class TagExtractor(Protocol):
    workspace_path: str
    def extract_definitions(self, source: Source) -> list[Definition]: ...
@dataclass(frozen=True)
class ASTBasedTagger(TagExtractor):
    workspace_path: str
    ast_factory: ASTFactory
    @staticmethod
    def create(workspace_path: str, ast_factory: ASTFactory) -> "ASTBasedTagger":
        return ASTBasedTagger(workspace_path, ast_factory)
    def extract_definitions(self, source: Source) -> list[Definition]:
        ast = self.ast_factory.create_from_code(source)
        return [
            Definition.create(ast.rel_path, defn)
            for defn in map(to_definition, ast.tag_matches())
            if defn
        ]
@dataclass(frozen=True)
class FileTags:
    rel_path: str
    definitions: list[Definition]
    @staticmethod
    def create(extractor: TagExtractor, source: Source) -> "FileTags":
        definitions = extractor.extract_definitions(source)
        return FileTags(source.rel_path, definitions)
    @staticmethod
    def create_each(extractor: TagExtractor, sources: list[Source]) -> list["FileTags"]:
        return [FileTags.create(extractor, source) for source in sources]
def find_definition(definitions, name):
    return [d.text for d in definitions if d.name and d.name.text == name]
```
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[project]
name = "llm-context"
version = "0.5.2"
description = "Share code with LLMs via Model Context Protocol or clipboard. Rule-based customization enables easy switching between different tasks (like code review and documentation). Code outlining support is included as a standard feature."
authors = [
  { name = "restlessronin", email = "[email protected]" },
]
readme = "README.md"
requires-python = ">=3.11"
license = "Apache-2.0"
keywords = ["llm", "ai", "context", "code", "clipboard", "chat"]
classifiers = [
  "Development Status :: 4 - Beta",
  "Environment :: Console",
  "Intended Audience :: Developers",
  "Intended Audience :: Information Technology",
  "Intended Audience :: Science/Research",
  "Topic :: Software Development :: Code Generators",
  "Topic :: Utilities",
  "Topic :: Communications :: Chat",
  "Topic :: Scientific/Engineering :: Artificial Intelligence",
]
dependencies = [
  "jinja2>=3.1.6, <4.0",
  "mcp>=1.15.0",
  "packaging>=24.2, <25.0",
  "pathspec>=0.12.1, <0.13.0",
  "pyperclip>=1.11.0, <2.0.0",
  "pyyaml>=6.0.3",
  "tree-sitter>=0.25.2",
  "tree-sitter-language-pack>=0.9.0",
]
[project.urls]
Repository = "https://github.com/cyberchitta/llm-context.py"
"User Guide" = "https://github.com/cyberchitta/llm-context.py/blob/main/docs/user-guide.md"
Changelog = "https://github.com/cyberchitta/llm-context.py/blob/main/CHANGELOG.md"
[project.scripts]
lc-context = "llm_context.cli:context"
lc-changed = "llm_context.cli:changed_files"
lc-init = "llm_context.cli:init_project"
lc-rule-instructions = "llm_context.cli:rule_instructions"
lc-mcp = "llm_context.mcp:run_server"
lc-missing = "llm_context.cli:missing"
lc-outlines = "llm_context.cli:outlines"
lc-prompt = "llm_context.cli:prompt"
lc-select = "llm_context.cli:select"
lc-set-rule = "llm_context.cli:set_rule"
lc-version = "llm_context.cli:version"
[tool.pytest.ini_options]
testpaths = ["tests"]
pythonpath = ["src"]
filterwarnings = ["ignore::FutureWarning"]
[tool.mypy]
python_version = "3.11"
warn_return_any = true
warn_unused_configs = true
ignore_missing_imports = true
[tool.ruff]
line-length = 100
target-version = "py313"
[tool.ruff.lint]
select = ["E", "F", "I"]
ignore = ["E203", "E266", "E501", "F403", "F401"]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[dependency-groups]
dev = [
  "git-cliff>=2.6.1, <3.0",
  "isort>=6.0.1, <7.0",
  "mypy>=1.18.2, <2.0",
  "pytest>=8.3.5, <9.0",
  "types-pyyaml>=6.0.12.20241230",
  "ruff>=0.13.2, <1.0",
  "taplo>=0.9.3, <1.0",
]
[tool.hatch.build]
include = ["src/**"]
[tool.hatch.build.targets.wheel]
sources = ["src"]
```
--------------------------------------------------------------------------------
/src/llm_context/cmd_pipeline.py:
--------------------------------------------------------------------------------
```python
import os
import sys
import traceback
from dataclasses import dataclass
from functools import wraps
from logging import ERROR, INFO
from pathlib import Path
from typing import Callable, Optional
import pyperclip  # type: ignore
from llm_context.exceptions import LLMContextError
from llm_context.exec_env import ExecutionEnvironment
from llm_context.utils import _format_size, log
@dataclass(frozen=True)
class ExecutionResult:
    content: Optional[str]
    env: ExecutionEnvironment
def with_env(func: Callable[..., ExecutionResult]) -> Callable[..., ExecutionResult]:
    @wraps(func)
    def wrapper(*args, **kwargs) -> ExecutionResult:
        env = ExecutionEnvironment.create(Path.cwd())
        with env.activate():
            return func(*args, env=env, **kwargs)
    return wrapper
def with_init_env(func: Callable[..., ExecutionResult]) -> Callable[..., ExecutionResult]:
    @wraps(func)
    def wrapper(*args, **kwargs) -> ExecutionResult:
        env = ExecutionEnvironment.create_init(Path.cwd())
        with env.activate():
            return func(*args, env=env, **kwargs)
    return wrapper
def with_clipboard(func: Callable[..., ExecutionResult]) -> Callable[..., ExecutionResult]:
    @wraps(func)
    def wrapper(*args, **kwargs) -> ExecutionResult:
        result = func(*args, **kwargs)
        if result.content:
            pyperclip.copy(result.content)
            size_bytes = len(result.content.encode("utf-8"))
            result.env.logger.info(f"Copied {_format_size(size_bytes)} to clipboard")
        return result
    return wrapper
def with_print(func: Callable[..., ExecutionResult]) -> Callable[..., ExecutionResult]:
    @wraps(func)
    def wrapper(*args, **kwargs) -> ExecutionResult:
        result = func(*args, **kwargs)
        for msg in result.env.runtime.messages:
            print(msg)
        return result
    return wrapper
def with_error(func: Callable[..., ExecutionResult]) -> Callable[..., None]:
    @wraps(func)
    def wrapper(*args, **kwargs) -> None:
        try:
            func(*args, **kwargs)
        except LLMContextError as e:
            log(ERROR, f"Error: {e.message}")
        except Exception as e:
            log(ERROR, f"An unexpected error occurred: {str(e)}")
            traceback.print_exc()
    return wrapper
def create_clipboard_cmd(func: Callable[..., ExecutionResult]) -> Callable[..., None]:
    return with_error(with_print(with_clipboard(with_env(func))))
def create_command(func: Callable[..., ExecutionResult]) -> Callable[..., None]:
    return with_error(with_print(with_env(func)))
def create_init_command(func: Callable[..., ExecutionResult]) -> Callable[..., None]:
    return with_error(with_print(with_init_env(func)))
```
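The functions above compose each CLI command from small decorators: `with_env` creates and activates an `ExecutionEnvironment`, `with_clipboard` copies any returned content, `with_print` flushes queued runtime messages, and `with_error` turns exceptions into logged errors. A hedged sketch of how an additional command could be assembled from this pipeline (the command itself is invented for illustration):

```python
from llm_context.cmd_pipeline import ExecutionResult, create_clipboard_cmd
from llm_context.exec_env import ExecutionEnvironment


# Hypothetical command, not part of the repository: create_clipboard_cmd wraps it
# as with_error(with_print(with_clipboard(with_env(func)))).
@create_clipboard_cmd
def echo_rule(env: ExecutionEnvironment) -> ExecutionResult:
    # Returning non-empty content makes with_clipboard copy it and log its size.
    return ExecutionResult(f"Active rule: {env.state.file_selection.rule_name}", env)
```

A matching console script entry in `pyproject.toml` would expose it as an `lc-*` command, following the pattern of the existing `[project.scripts]` section.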
--------------------------------------------------------------------------------
/tests/test_outliner.py:
--------------------------------------------------------------------------------
```python
import pytest
from llm_context.excerpters.code_outliner import CodeOutliner
from llm_context.excerpters.parser import ASTFactory, Source
from llm_context.excerpters.tagger import ASTBasedTagger
@pytest.fixture
def tagger():
    workspace_path = "/fake/workspace/path"
    ast_factory = ASTFactory.create()
    return ASTBasedTagger.create(workspace_path, ast_factory)
@pytest.fixture
def sample_source():
    code = """
class TestClass:
    def test_method(self):
        pass
def test_function():
    pass
"""
    return Source(rel_path="test.py", content=code)
def test_code_outliner_integration(sample_source, tagger):
    """Test the complete code outlining functionality."""
    excerpter = CodeOutliner({"tagger": tagger})
    result = excerpter.excerpt([sample_source])
    assert len(result.excerpts) == 1
    assert result.excerpts[0].rel_path == "test.py"
    assert "class TestClass" in result.excerpts[0].content
    assert result.excerpts[0].metadata["processor_type"] == "code-outliner"
    assert "sample_definitions" in result.metadata
    assert isinstance(result.metadata["sample_definitions"], list)
def test_code_outliner_empty_sources(tagger):
    """Test handling of empty source list."""
    excerpter = CodeOutliner({"tagger": tagger})
    result = excerpter.excerpt([])
    assert len(result.excerpts) == 0
    assert result.metadata["sample_definitions"] == []
def test_code_outliner_unsupported_language(tagger):
    """Test handling of unsupported file types."""
    source = Source("test.txt", "some text content")
    excerpter = CodeOutliner({"tagger": tagger})
    result = excerpter.excerpt([source])
    assert len(result.excerpts) == 0
    assert result.metadata["sample_definitions"] == []
def test_code_outliner_multiple_sources(tagger):
    """Test processing multiple source files."""
    sources = [
        Source("file1.py", "def func1():\n    pass"),
        Source("file2.py", "class Class2:\n    def method2(self):\n        pass"),
        Source("file3.txt", "unsupported file"),
    ]
    excerpter = CodeOutliner({"tagger": tagger})
    result = excerpter.excerpt(sources)
    assert len(result.excerpts) == 2
    paths = {excerpt.rel_path for excerpt in result.excerpts}
    assert paths == {"file1.py", "file2.py"}
    for excerpt in result.excerpts:
        assert excerpt.metadata["processor_type"] == "code-outliner"
def test_code_outliner_no_definitions(tagger):
    """Test handling of code with no extractable definitions."""
    source = Source("empty.py", "# just a comment\nprint('hello')")
    excerpter = CodeOutliner({"tagger": tagger})
    result = excerpter.excerpt([source])
    assert isinstance(result.excerpts, list)
    assert result.metadata["sample_definitions"] == []
```
--------------------------------------------------------------------------------
/src/llm_context/mcp.py:
--------------------------------------------------------------------------------
```python
import ast
from importlib.metadata import version as pkg_ver
from pathlib import Path
from mcp.server.fastmcp import FastMCP
from llm_context import commands
from llm_context.exec_env import ExecutionEnvironment
mcp = FastMCP("llm-context")
@mcp.tool()
def lc_changed(root_path: str, timestamp: float) -> str:
    """Returns list of files modified since given timestamp.
    Args:
        root_path: Root directory path (e.g. '/home/user/projects/myproject')
        timestamp: Unix timestamp to check modifications since
    """
    env = ExecutionEnvironment.create(Path(root_path))
    with env.activate():
        return commands.list_modified_files(env, timestamp)
@mcp.tool()
def lc_outlines(root_path: str) -> str:
    """Returns excerpted content highlighting important sections in all supported files.
    Args:
        root_path: Root directory path
    """
    env = ExecutionEnvironment.create(Path(root_path))
    with env.activate():
        return commands.get_outlines(env)
@mcp.tool()
def lc_rule_instructions(root_path: str) -> str:
    """Provides step-by-step instructions for creating custom rules.
    Args:
        root_path: Root directory path
    """
    env = ExecutionEnvironment.create(Path(root_path))
    with env.activate():
        return commands.get_focus_help(env)
@mcp.tool()
def lc_missing(root_path: str, param_type: str, data: str, timestamp: float) -> str:
    """Unified tool for retrieving missing context (files, implementations, or excluded sections).
    Args:
        root_path: Root directory path (e.g. '/home/user/projects/myproject')
        param_type: Type of data - 'f' for files, 'i' for implementations, 'e' for excluded sections
        data: JSON string containing the data (file paths in /{project-name}/ format or implementation queries)
        timestamp: Context generation timestamp
    """
    env = ExecutionEnvironment.create(Path(root_path))
    with env.activate():
        if param_type == "f":
            file_list = ast.literal_eval(data)
            return commands.get_missing_files(env, file_list, timestamp)
        elif param_type == "i":
            impl_list = ast.literal_eval(data)
            return commands.get_implementations(env, impl_list)
        elif param_type == "e":
            file_list = ast.literal_eval(data)
            return commands.get_excluded(env, file_list, timestamp)
        else:
            raise ValueError(
                f"Invalid parameter type: {param_type}. Use 'f' for files, 'i' for implementations, or 'e' for excluded sections."
            )
def run_server():
    mcp.run(transport="stdio")
```
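`lc_missing` above multiplexes three retrieval modes behind one `data` string that the server parses with `ast.literal_eval`. A minimal sketch of the payload shapes, assuming an illustrative project named `myproject` (paths, names, and the pair ordering for implementation queries are assumptions, not taken from the repository):

```python
import ast

# param_type "f" / "e" take a list of /{project-name}/ paths; "i" takes
# implementation queries, assumed here to be (path, definition-name) pairs.
files_payload = '["/myproject/src/llm_context/cli.py", "/myproject/README.md"]'
impls_payload = "[('/myproject/src/llm_context/utils.py', 'PathConverter')]"

# The server parses `data` with ast.literal_eval before dispatching:
assert ast.literal_eval(files_payload)[0].startswith("/myproject/")
assert ast.literal_eval(impls_payload) == [
    ("/myproject/src/llm_context/utils.py", "PathConverter")
]
```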
--------------------------------------------------------------------------------
/.llm-context/rules/lc/sty-code.md:
--------------------------------------------------------------------------------
```markdown
---
description: Outlines universal code style principles for modern programming languages, emphasizing functional patterns, clarity, immutability, and robust architecture. Use as a foundation for language-agnostic coding.
---
## Universal Code Style Principles
### Functional Programming Approach
- Prefer functional over imperative patterns
- Favor pure functions and immutable data structures
- Design for method chaining through immutable transformations
- Prefer conditional expressions over conditional statements when possible
### Code Clarity
- Write self-documenting code through expressive naming
- Good names should make comments superfluous
- Compose complex operations through small, focused functions
### Object Design
- Keep constructors/initializers simple with minimal logic
- Use static factory methods (typically `create()`) for complex object construction
- Design methods to return new instances rather than mutating state
- Prefer immutable data structures and frozen/sealed objects
### Error Handling
- Validate inputs at application boundaries, not within internal functions
- **Natural Failure Over Validation**: Don't add explicit checks for conditions that will naturally cause failures
- Let language built-in error mechanisms work (TypeError, ReferenceError, etc.)
- Only validate at true application boundaries (user input, external APIs)
- Internal functions should assume valid inputs and fail fast
- Trust that calling code has met preconditions - fail fast if not
- Avoid defensive programming within core logic
- Create clear contracts between functions
### Architecture
- Favor composition over inheritance
- Avoid static dependencies - use dependency injection for testability
- Maintain clear separation between pure logic and side effects
**Goal: Write beautiful code that is readable, maintainable, and robust.**
## Code Quality Enforcement
**CRITICAL: Follow all style guidelines rigorously in every code response.**
Before writing any code:
1. **Check functional patterns** - Are functions pure? Do they return new data instead of mutating?
2. **Review naming** - Are names concise but expressive? Avoid verbose parameter names.
3. **Verify immutability** - Are data structures immutable? Can operations be chained?
4. **Simplify logic** - Can this be written more elegantly with comprehensions, functional patterns?
5. **Type hints** - Are all parameters and returns properly typed?
**Red flags that indicate style violations:**
- Functions that mutate input parameters
- Verbose parameter names like `coverage_threshold` vs `threshold`
- Imperative loops instead of functional patterns
- Missing type hints or vague types like `Any`
- Complex nested conditionals instead of guard clauses
**When in doubt, prioritize elegance and functional patterns over apparent convenience.**
```
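The principles above mirror the conventions used throughout this codebase (frozen dataclasses, static `create()` factories, methods that return new instances, e.g. `FileSelection.with_timestamp`). A minimal illustrative sketch, with invented names, of what the rule asks for:

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Selection:
    # Illustrative example of the style rule, not a class from the repository.
    rule: str
    files: tuple[str, ...]

    @staticmethod
    def create(rule: str) -> "Selection":
        # Static factory keeps __init__ trivial.
        return Selection(rule, ())

    def with_file(self, path: str) -> "Selection":
        # Return a new instance instead of mutating state, enabling chaining.
        return replace(self, files=(*self.files, path))


selection = Selection.create("code").with_file("src/main.py").with_file("README.md")
assert selection.files == ("src/main.py", "README.md")
```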
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/sty-code.md:
--------------------------------------------------------------------------------
```markdown
---
description: Outlines universal code style principles for modern programming languages, emphasizing functional patterns, clarity, immutability, and robust architecture. Use as a foundation for language-agnostic coding.
---
## Universal Code Style Principles
### Functional Programming Approach
- Prefer functional over imperative patterns
- Favor pure functions and immutable data structures
- Design for method chaining through immutable transformations
- Prefer conditional expressions over conditional statements when possible
### Code Clarity
- Write self-documenting code through expressive naming
- Good names should make comments superfluous
- Compose complex operations through small, focused functions
### Object Design
- Keep constructors/initializers simple with minimal logic
- Use static factory methods (typically `create()`) for complex object construction
- Design methods to return new instances rather than mutating state
- Prefer immutable data structures and frozen/sealed objects
### Error Handling
- Validate inputs at application boundaries, not within internal functions
- **Natural Failure Over Validation**: Don't add explicit checks for conditions that will naturally cause failures
- Let language built-in error mechanisms work (TypeError, ReferenceError, etc.)
- Only validate at true application boundaries (user input, external APIs)
- Internal functions should assume valid inputs and fail fast
- Trust that calling code has met preconditions - fail fast if not
- Avoid defensive programming within core logic
- Create clear contracts between functions
### Architecture
- Favor composition over inheritance
- Avoid static dependencies - use dependency injection for testability
- Maintain clear separation between pure logic and side effects
**Goal: Write beautiful code that is readable, maintainable, and robust.**
## Code Quality Enforcement
**CRITICAL: Follow all style guidelines rigorously in every code response.**
Before writing any code:
1. **Check functional patterns** - Are functions pure? Do they return new data instead of mutating?
2. **Review naming** - Are names concise but expressive? Avoid verbose parameter names.
3. **Verify immutability** - Are data structures immutable? Can operations be chained?
4. **Simplify logic** - Can this be written more elegantly with comprehensions, functional patterns?
5. **Type hints** - Are all parameters and returns properly typed?
**Red flags that indicate style violations:**
- Functions that mutate input parameters
- Verbose parameter names like `coverage_threshold` vs `threshold`
- Imperative loops instead of functional patterns
- Missing type hints or vague types like `Any`
- Complex nested conditionals instead of guard clauses
**When in doubt, prioritize elegance and functional patterns over apparent convenience.**
```
--------------------------------------------------------------------------------
/.llm-context/rules/lc/flt-base.md:
--------------------------------------------------------------------------------
```markdown
---
description: Establishes base gitignore patterns to exclude non-code files (e.g., binaries, archives, logs) from overview, full, and outline selections. Use as a foundation for project-specific file filtering in context generation.
gitignores:
  overview-files:
    - .git
    - "*.7z"
    - "*.app"
    - "*.avi"
    - "*.bmp"
    - "*.bz2"
    - "*.cab"
    - "*.deb"
    - "*.dll"
    - "*.dmg"
    - "*.dylib"
    - "*.ear"
    - "*.eot"
    - "*.exe"
    - "*.flac"
    - "*.gif"
    - "*.gz"
    - "*.icns"
    - "*.ico"
    - "*.iso"
    - "*.jar"
    - "*.jpeg"
    - "*.jpg"
    - "*.lz"
    - "*.lzma"
    - "*.map"
    - "*.mkv"
    - "*.mov"
    - "*.mp3"
    - "*.mp4"
    - "*.msi"
    - "*.otf"
    - "*.pdf"
    - "*.pkg"
    - "*.png"
    - "*.rar"
    - "*.rpm"
    - "*.so"
    - "*.svg"
    - "*.tar"
    - "*.tar.bz2"
    - "*.tar.gz"
    - "*.tar.xz"
    - "*.tbz2"
    - "*.tgz"
    - "*.tif"
    - "*.ttf"
    - "*.txz"
    - "*.war"
    - "*.wav"
    - "*.webp"
    - "*.wmv"
    - "*.woff"
    - "*.woff2"
    - "*.xz"
    - "*.Z"
    - "*.zip"
  full-files:
    - .git
    - .dockerignore
    - .env
    - .gitignore
    - .llm-context/
    - CHANGELOG.md
    - Dockerfile
    - docker-compose.yml
    - elm-stuff
    - go.sum
    - LICENSE
    - package-lock.json
    - pnpm-lock.yaml
    - README.md
    - yarn.lock
    - "*.7z"
    - "*.app"
    - "*.bz2"
    - "*.cab"
    - "*.deb"
    - "*.dll"
    - "*.dmg"
    - "*.dylib"
    - "*.ear"
    - "*.eot"
    - "*.exe"
    - "*.gif"
    - "*.gz"
    - "*.icns"
    - "*.ico"
    - "*.iso"
    - "*.jar"
    - "*.jpeg"
    - "*.jpg"
    - "*.lock"
    - "*.log"
    - "*.lz"
    - "*.lzma"
    - "*.map"
    - "*.msi"
    - "*.pkg"
    - "*.png"
    - "*.rar"
    - "*.rpm"
    - "*.so"
    - "*.svg"
    - "*.tar"
    - "*.tar.bz2"
    - "*.tar.gz"
    - "*.tar.xz"
    - "*.tbz2"
    - "*.tgz"
    - "*.tif"
    - "*.tmp"
    - "*.ttf"
    - "*.txz"
    - "*.war"
    - "*.webp"
    - "*.woff"
    - "*.woff2"
    - "*.xz"
    - "*.Z"
    - "*.zip"
  excerpted-files:
    - .git
    - .dockerignore
    - .env
    - .gitignore
    - .llm-context/
    - CHANGELOG.md
    - Dockerfile
    - docker-compose.yml
    - elm-stuff
    - go.sum
    - LICENSE
    - package-lock.json
    - pnpm-lock.yaml
    - README.md
    - yarn.lock
    - "*.7z"
    - "*.app"
    - "*.bz2"
    - "*.cab"
    - "*.deb"
    - "*.dll"
    - "*.dmg"
    - "*.dylib"
    - "*.ear"
    - "*.eot"
    - "*.exe"
    - "*.gif"
    - "*.gz"
    - "*.icns"
    - "*.ico"
    - "*.iso"
    - "*.jar"
    - "*.jpeg"
    - "*.jpg"
    - "*.lock"
    - "*.log"
    - "*.lz"
    - "*.lzma"
    - "*.map"
    - "*.msi"
    - "*.pkg"
    - "*.png"
    - "*.rar"
    - "*.rpm"
    - "*.so"
    - "*.svg"
    - "*.tar"
    - "*.tar.bz2"
    - "*.tar.gz"
    - "*.tar.xz"
    - "*.tbz2"
    - "*.tgz"
    - "*.tif"
    - "*.tmp"
    - "*.ttf"
    - "*.txz"
    - "*.war"
    - "*.webp"
    - "*.woff"
    - "*.woff2"
    - "*.xz"
    - "*.Z"
    - "*.zip"
---
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/flt-base.md:
--------------------------------------------------------------------------------
```markdown
---
description: Establishes base gitignore patterns to exclude non-code files (e.g., binaries, archives, logs) from overview, full, and outline selections. Use as a foundation for project-specific file filtering in context generation.
gitignores:
  overview-files:
    - .git
    - "*.7z"
    - "*.app"
    - "*.avi"
    - "*.bmp"
    - "*.bz2"
    - "*.cab"
    - "*.deb"
    - "*.dll"
    - "*.dmg"
    - "*.dylib"
    - "*.ear"
    - "*.eot"
    - "*.exe"
    - "*.flac"
    - "*.gif"
    - "*.gz"
    - "*.icns"
    - "*.ico"
    - "*.iso"
    - "*.jar"
    - "*.jpeg"
    - "*.jpg"
    - "*.lz"
    - "*.lzma"
    - "*.map"
    - "*.mkv"
    - "*.mov"
    - "*.mp3"
    - "*.mp4"
    - "*.msi"
    - "*.otf"
    - "*.pdf"
    - "*.pkg"
    - "*.png"
    - "*.rar"
    - "*.rpm"
    - "*.so"
    - "*.svg"
    - "*.tar"
    - "*.tar.bz2"
    - "*.tar.gz"
    - "*.tar.xz"
    - "*.tbz2"
    - "*.tgz"
    - "*.tif"
    - "*.ttf"
    - "*.txz"
    - "*.war"
    - "*.wav"
    - "*.webp"
    - "*.wmv"
    - "*.woff"
    - "*.woff2"
    - "*.xz"
    - "*.Z"
    - "*.zip"
  full-files:
    - .git
    - .dockerignore
    - .env
    - .gitignore
    - .llm-context/
    - CHANGELOG.md
    - Dockerfile
    - docker-compose.yml
    - elm-stuff
    - go.sum
    - LICENSE
    - package-lock.json
    - pnpm-lock.yaml
    - README.md
    - yarn.lock
    - "*.7z"
    - "*.app"
    - "*.bz2"
    - "*.cab"
    - "*.deb"
    - "*.dll"
    - "*.dmg"
    - "*.dylib"
    - "*.ear"
    - "*.eot"
    - "*.exe"
    - "*.gif"
    - "*.gz"
    - "*.icns"
    - "*.ico"
    - "*.iso"
    - "*.jar"
    - "*.jpeg"
    - "*.jpg"
    - "*.lock"
    - "*.log"
    - "*.lz"
    - "*.lzma"
    - "*.map"
    - "*.msi"
    - "*.pkg"
    - "*.png"
    - "*.rar"
    - "*.rpm"
    - "*.so"
    - "*.svg"
    - "*.tar"
    - "*.tar.bz2"
    - "*.tar.gz"
    - "*.tar.xz"
    - "*.tbz2"
    - "*.tgz"
    - "*.tif"
    - "*.tmp"
    - "*.ttf"
    - "*.txz"
    - "*.war"
    - "*.webp"
    - "*.woff"
    - "*.woff2"
    - "*.xz"
    - "*.Z"
    - "*.zip"
  excerpted-files:
    - .git
    - .dockerignore
    - .env
    - .gitignore
    - .llm-context/
    - CHANGELOG.md
    - Dockerfile
    - docker-compose.yml
    - elm-stuff
    - go.sum
    - LICENSE
    - package-lock.json
    - pnpm-lock.yaml
    - README.md
    - yarn.lock
    - "*.7z"
    - "*.app"
    - "*.bz2"
    - "*.cab"
    - "*.deb"
    - "*.dll"
    - "*.dmg"
    - "*.dylib"
    - "*.ear"
    - "*.eot"
    - "*.exe"
    - "*.gif"
    - "*.gz"
    - "*.icns"
    - "*.ico"
    - "*.iso"
    - "*.jar"
    - "*.jpeg"
    - "*.jpg"
    - "*.lock"
    - "*.log"
    - "*.lz"
    - "*.lzma"
    - "*.map"
    - "*.msi"
    - "*.pkg"
    - "*.png"
    - "*.rar"
    - "*.rpm"
    - "*.so"
    - "*.svg"
    - "*.tar"
    - "*.tar.bz2"
    - "*.tar.gz"
    - "*.tar.xz"
    - "*.tbz2"
    - "*.tgz"
    - "*.tif"
    - "*.tmp"
    - "*.ttf"
    - "*.txz"
    - "*.war"
    - "*.webp"
    - "*.woff"
    - "*.woff2"
    - "*.xz"
    - "*.Z"
    - "*.zip"
---
```
--------------------------------------------------------------------------------
/src/llm_context/excerpters/code_outliner.py:
--------------------------------------------------------------------------------
```python
import random
from dataclasses import dataclass
from typing import Any, Optional, cast
from llm_context.excerpters.base import Excerpt, Excerpter, Excerpts, Excluded
from llm_context.excerpters.language_mapping import to_language
from llm_context.excerpters.parser import Source
from llm_context.excerpters.tagger import ASTBasedTagger, Definition, FileTags, Tag
@dataclass(frozen=True)
class CodeOutliner(Excerpter):
    config: dict[str, Any]
    def excerpt(self, sources: list[Source]) -> Excerpts:
        if not sources:
            return self._empty_result()
        excerpts = []
        sample_definitions = []
        for source in sources:
            if self._should_process_source(source):
                excerpt = self._process_single_source(source)
                if excerpt:
                    excerpts.append(excerpt)
        tagger = self.config["tagger"]
        all_definitions = self._extract_all_definitions(sources, tagger)
        sample_definitions = self._generate_sample_definitions(all_definitions)
        return Excerpts(excerpts, {"sample_definitions": sample_definitions})
    def excluded(self, sources: list[Source]) -> list[Excluded]:
        return []
    def _should_process_source(self, source: Source) -> bool:
        return to_language(source.rel_path) is not None
    def _process_single_source(self, source: Source) -> Optional[Excerpt]:
        tagger = self.config["tagger"]
        definitions = tagger.extract_definitions(source)
        if not definitions:
            return None
        formatted_content = self._format_content(source, definitions)
        return Excerpt(source.rel_path, formatted_content, self._create_metadata())
    def _format_content(self, source: Source, definitions: list[Definition]) -> str:
        code_lines = source.content.split("\n")
        lines_of_interest = sorted(
            [tag.name.begin.ln if tag.name else tag.begin.ln for tag in definitions]
        )
        show_lines = sorted(set(lines_of_interest))
        formatted_lines = []
        for i, line in enumerate(code_lines):
            is_line_of_interest = i in lines_of_interest
            should_show_line = i in show_lines
            if should_show_line:
                line_prefix = "█" if is_line_of_interest else "│"
                formatted_lines.append(f"{line_prefix}{line}")
            else:
                if i == 0 or (i - 1) in show_lines:
                    formatted_lines.append("⋮...")
        return "\n".join(formatted_lines)
    def _create_metadata(self) -> dict[str, Any]:
        return {"processor_type": "code-outliner"}
    def _extract_all_definitions(
        self, sources: list[Source], tagger: ASTBasedTagger
    ) -> list[Definition]:
        all_definitions = []
        for source in sources:
            if self._should_process_source(source):
                definitions = tagger.extract_definitions(source)
                all_definitions.extend(definitions)
        return all_definitions
    def _generate_sample_definitions(
        self, all_definitions: list[Definition], max_samples: int = 2
    ) -> list[tuple[str, str]]:
        definitions_with_names = [d for d in all_definitions if d.name and d.name.text]
        if not definitions_with_names:
            return []
        sampled = random.sample(
            definitions_with_names, min(max_samples, len(definitions_with_names))
        )
        return [(d.rel_path, cast(Tag, d.name).text) for d in sampled]
    def _empty_result(self) -> Excerpts:
        return Excerpts([], {"sample_definitions": []})
```
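`_format_content` above prefixes each definition line with `█` and emits a single `⋮...` marker for the gap that immediately follows a shown line. A hedged sketch of the outline this would produce for the sample source in `tests/test_outliner.py`, assuming the Python tag query captures the class and both function definitions:

```python
# Expected shape of the outline (a sketch, not output captured from the tool):
expected_outline = "\n".join(
    [
        "⋮...",
        "█class TestClass:",
        "█    def test_method(self):",
        "⋮...",
        "█def test_function():",
        "⋮...",
    ]
)
```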
--------------------------------------------------------------------------------
/src/llm_context/rule_parser.py:
--------------------------------------------------------------------------------
```python
import re
from dataclasses import dataclass
from logging import ERROR
from pathlib import Path
from typing import Any, Optional
import yaml
from llm_context.exceptions import RuleResolutionError
from llm_context.utils import ProjectLayout, Yaml, log
DEFAULT_CODE_RULE = "lc/prm-developer"
@dataclass(frozen=True)
class RuleParser:
    frontmatter: dict[str, Any]
    content: str
    path: Path
    @property
    def name(self) -> str:
        return self.path.stem
    @staticmethod
    def parse(content: str, path: Path) -> "RuleParser":
        frontmatter, markdown = RuleParser._extract_frontmatter(content)
        return RuleParser(frontmatter, markdown, path)
    @staticmethod
    def _extract_frontmatter(content: str) -> tuple[dict[str, Any], str]:
        frontmatter_pattern = r"^---\s*\n(.*?)\n---\s*\n(.*)"
        match = re.search(frontmatter_pattern, content, re.DOTALL)
        if match:
            try:
                yaml_content = match.group(1)
                markdown_content = match.group(2)
                frontmatter = yaml.safe_load(yaml_content)
                return frontmatter or {}, markdown_content
            except yaml.YAMLError:
                return {}, content
        return {}, content
    def to_rule_config(self) -> dict[str, Any]:
        config = dict(self.frontmatter)
        config["name"] = self.name
        return config
@dataclass(frozen=True)
class RuleLoader:
    rules_dir: Path
    @staticmethod
    def create(project_layout: ProjectLayout) -> "RuleLoader":
        return RuleLoader(project_layout.rules_path)
    def _load_rule_from_path(self, path: Path) -> Optional[RuleParser]:
        if not path.exists():
            return None
        try:
            content = path.read_text()
            return RuleParser.parse(content, path)
        except Exception as e:
            log(ERROR, f"Failed to parse rule {path}: {str(e)}")
            return None
    def load_rule(self, name: str) -> RuleParser:
        path = self.rules_dir / f"{name}.md"
        if not path.exists():
            raise ValueError(
                f"Rule file '{name}.md' not found. Run 'lc-init' to restore default rules."
            )
        try:
            content = path.read_text()
            return RuleParser.parse(content, path)
        except Exception as e:
            raise RuleResolutionError(
                f"Failed to parse rule file '{name}.md': {str(e)}. This may indicate outdated rule syntax. "
                f"Consider updating the rule or switching to '{DEFAULT_CODE_RULE}' with: lc-set-rule {DEFAULT_CODE_RULE}"
            )
    def save_rule(self, name: str, frontmatter: dict[str, Any], content: str) -> Path:
        path = self.rules_dir / f"{name}.md"
        yaml_str = Yaml.dump(self._order_frontmatter(frontmatter))
        full_content = f"---\n{yaml_str}---\n{content}"
        path.write_text(full_content)
        return path
    def _order_frontmatter(self, frontmatter: dict[str, Any]) -> dict[str, Any]:
        field_groups = [
            ["name", "description", "instructions", "overview"],
            ["compose", "gitignores", "limit-to", "also-include"],
            ["implementations", "rules"],
        ]
        ordered_fields = [
            field for group in field_groups for field in group if field in frontmatter
        ]
        remaining_fields = [field for field in frontmatter if field not in ordered_fields]
        return {field: frontmatter[field] for field in ordered_fields + remaining_fields}
@dataclass(frozen=True)
class RuleProvider:
    rule_loader: "RuleLoader"
    @staticmethod
    def create(project_layout: ProjectLayout) -> "RuleProvider":
        return RuleProvider(RuleLoader.create(project_layout))
    def get_rule_content(self, rule_name: str) -> Optional[str]:
        rule = self.rule_loader.load_rule(rule_name)
        return rule.content if rule else None
```
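`RuleParser` above splits a rule file into YAML frontmatter and markdown body, which is the structure of the `.llm-context/rules/**/*.md` files in this repository. A small sketch with invented rule content:

```python
from pathlib import Path

from llm_context.rule_parser import RuleParser

# Illustrative rule content; the file name and fields are invented.
content = """---
description: Example rule
gitignores:
  full-files:
    - "*.log"
---
Body of the rule in markdown.
"""
rule = RuleParser.parse(content, Path("flt-example.md"))
assert rule.name == "flt-example"
assert rule.frontmatter["description"] == "Example rule"
assert rule.content.strip() == "Body of the rule in markdown."
```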
--------------------------------------------------------------------------------
/src/llm_context/commands.py:
--------------------------------------------------------------------------------
```python
from pathlib import Path
from typing import Optional
from llm_context.context_generator import ContextGenerator, ContextSettings
from llm_context.context_spec import ContextSpec
from llm_context.exec_env import ExecutionEnvironment
from llm_context.file_selector import ContextSelector
from llm_context.state import FileSelection
from llm_context.utils import PathConverter, is_newer
def get_prompt(env: ExecutionEnvironment) -> str:
    settings = ContextSettings.create(False, False, False)
    generator = ContextGenerator.create(env.config, env.state.file_selection, settings)
    return generator.prompt()
def select_all_files(env: ExecutionEnvironment) -> FileSelection:
    selector = ContextSelector.create(env.config)
    file_sel_full = selector.select_full_files(env.state.file_selection)
    return selector.select_excerpted_files(file_sel_full)
def get_missing_files(env: ExecutionEnvironment, paths: list[str], timestamp: float) -> str:
    matching_selection = env.state.selections.get_selection_by_timestamp(timestamp)
    if matching_selection is None:
        raise ValueError(
            f"No context found with timestamp {timestamp}. Warn the user that the context is stale."
        )
    settings = ContextSettings.create(False, False, True)
    generator = ContextGenerator.create(env.config, env.state.file_selection, settings, env.tagger)
    return generator.missing_files(paths, matching_selection, timestamp)
def list_modified_files(env: ExecutionEnvironment, timestamp: float) -> str:
    matching_selection = env.state.selections.get_selection_by_timestamp(timestamp)
    if matching_selection is None:
        raise ValueError(
            f"No context found with timestamp {timestamp}. The context may be stale or deleted."
        )
    config = ContextSpec.create(
        env.config.project_root_path, matching_selection.rule_name, env.constants
    )
    selector = ContextSelector.create(config)
    file_sel_full = selector.select_full_files(matching_selection)
    file_sel_excerpted = selector.select_excerpted_files(file_sel_full)
    current_files = set(file_sel_excerpted.files)
    original_files = set(matching_selection.files)
    converter = PathConverter.create(env.config.project_root_path)
    modified = {
        f
        for f in (current_files & original_files)
        if is_newer(converter.to_absolute([f])[0], timestamp)
    }
    added = current_files - original_files
    removed = original_files - current_files
    result = [
        f"{label}:\n" + "\n".join(sorted(files))
        for label, files in [("Added", added), ("Modified", modified), ("Removed", removed)]
        if files
    ]
    return "\n\n".join(result) if result else "No changes"
def get_excluded(env: ExecutionEnvironment, paths: list[str], timestamp: float) -> str:
    matching_selection = env.state.selections.get_selection_by_timestamp(timestamp)
    if matching_selection is None:
        raise ValueError(f"No context found with timestamp {timestamp}...")
    settings = ContextSettings.create(False, False, True)
    generator = ContextGenerator.create(env.config, env.state.file_selection, settings, env.tagger)
    return generator.excluded(paths, matching_selection, timestamp)
def get_implementations(env: ExecutionEnvironment, queries: list[tuple[str, str]]) -> str:
    settings = ContextSettings.create(False, False, True)
    return ContextGenerator.create(
        env.config, env.state.file_selection, settings, env.tagger
    ).definitions(queries)
def get_focus_help(env: ExecutionEnvironment) -> str:
    settings = ContextSettings.create(False, False, True)
    generator = ContextGenerator.create(env.config, env.state.file_selection, settings)
    return generator.focus_help()
def generate_context(env: ExecutionEnvironment, settings: ContextSettings) -> tuple[str, float]:
    generator = ContextGenerator.create(env.config, env.state.file_selection, settings, env.tagger)
    return generator.context()
def get_outlines(env: ExecutionEnvironment) -> str:
    settings = ContextSettings.create(False, False, False)
    selector = ContextSelector.create(env.config)
    file_sel_excerpted = selector.select_excerpted_only(env.state.file_selection)
    return ContextGenerator.create(env.config, file_sel_excerpted, settings, env.tagger).outlines()
```
--------------------------------------------------------------------------------
/src/llm_context/state.py:
--------------------------------------------------------------------------------
```python
from dataclasses import dataclass
from datetime import datetime as dt
from logging import ERROR, WARNING
from pathlib import Path
from typing import Optional
from llm_context.rule_parser import DEFAULT_CODE_RULE
from llm_context.utils import ProjectLayout, Yaml, log
@dataclass(frozen=True)
class FileSelection:
    rule_name: str
    full_files: list[str]
    excerpted_files: list[str]
    timestamp: float
    @staticmethod
    def create_default() -> "FileSelection":
        return FileSelection.create(DEFAULT_CODE_RULE, [], [])
    @staticmethod
    def create(
        rule_name: str, full_files: list[str], excerpted_files: list[str]
    ) -> "FileSelection":
        return FileSelection._create(rule_name, full_files, excerpted_files, dt.now().timestamp())
    @staticmethod
    def _create(
        rule_name: str, full_files: list[str], excerpted_files: list[str], timestamp: float
    ) -> "FileSelection":
        return FileSelection(rule_name, full_files, excerpted_files, timestamp)
    @property
    def files(self) -> list[str]:
        return self.full_files + self.excerpted_files
    def with_timestamp(self, timestamp: float) -> "FileSelection":
        return FileSelection._create(
            self.rule_name, self.full_files, self.excerpted_files, timestamp
        )
@dataclass(frozen=True)
class AllSelections:
    selections: dict[str, FileSelection]
    @staticmethod
    def create_empty() -> "AllSelections":
        return AllSelections({})
    def get_selection(self, rule_name: str) -> FileSelection:
        return self.selections.get(rule_name, FileSelection.create(rule_name, [], []))
    def get_selection_by_timestamp(self, timestamp: float) -> Optional[FileSelection]:
        return next(
            (
                selection
                for selection in self.selections.values()
                if selection.timestamp == timestamp
            ),
            None,
        )
    def with_selection(self, selection: FileSelection) -> "AllSelections":
        new_selections = dict(self.selections)
        new_selections[selection.rule_name] = selection
        return AllSelections(new_selections)
@dataclass(frozen=True)
class StateStore:
    storage_path: Path
    @staticmethod
    def delete_if_stale_rule(project_layout: ProjectLayout):
        state_path = project_layout.state_store_path
        if not state_path.exists():
            return
        try:
            store = StateStore(state_path)
            selections, current_profile = store.load()
            rule_path = project_layout.get_rule_path(f"{current_profile}.md")
            if not rule_path.exists():
                log(
                    WARNING,
                    f"Rule '{current_profile}' not found. Deleting state file: {state_path}",
                )
                state_path.unlink(missing_ok=True)
        except Exception as e:
            log(ERROR, f"Error checking rule staleness in '{state_path}': {e}")
            log(
                WARNING,
                f"If you're experiencing persistent rule-related errors, you may need to manually delete the state file: {state_path}",
            )
    def load(self) -> tuple[AllSelections, str]:
        try:
            data = Yaml.load(self.storage_path)
            selections = {}
            for rule_name, sel_data in data.get("selections", {}).items():
                selections[rule_name] = FileSelection._create(
                    rule_name,
                    sel_data.get("full-files", []),
                    sel_data.get("excerpted-files", []),
                    sel_data.get("timestamp", dt.now().timestamp()),
                )
            return AllSelections(selections), data.get("current-profile", DEFAULT_CODE_RULE)
        except Exception:
            return AllSelections.create_empty(), DEFAULT_CODE_RULE
    def save(self, store: AllSelections, current_profile: str):
        data = {
            "current-profile": current_profile,
            "selections": {
                rule_name: {
                    "full-files": sel.full_files,
                    "excerpted-files": sel.excerpted_files,
                    "timestamp": sel.timestamp,
                }
                for rule_name, sel in store.selections.items()
            },
        }
        Yaml.save(self.storage_path, data)
```
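`StateStore.save` above writes one entry per rule to the state file (`curr_ctx.yaml` under `.llm-context/`, per `ProjectLayout.state_store_path`). A hedged sketch of the dictionary it serializes; the rule name, files, and timestamp are illustrative:

```python
# Illustrative data only; the keys match those used by StateStore.save/load.
data = {
    "current-profile": "lc/prm-developer",
    "selections": {
        "lc/prm-developer": {
            "full-files": ["/myproject/src/llm_context/cli.py"],
            "excerpted-files": ["/myproject/src/llm_context/utils.py"],
            "timestamp": 1718000000.0,
        }
    },
}
```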
--------------------------------------------------------------------------------
/src/llm_context/excerpters/parser.py:
--------------------------------------------------------------------------------
```python
import warnings
from dataclasses import dataclass
from typing import Any, NamedTuple, cast
from tree_sitter import Language, Node, Parser, Query, QueryCursor, Tree  # type: ignore
from llm_context.excerpters.language_mapping import LangQuery, to_language
warnings.filterwarnings("ignore", category=FutureWarning, module="tree_sitter")
class Source(NamedTuple):
    rel_path: str
    content: str
@dataclass(frozen=True)
class ParserFactory:
    parser_cache: dict[str, tuple[Language, Parser]]
    @staticmethod
    def create() -> "ParserFactory":
        return ParserFactory({})
    def _create_tuple(self, language_name: str) -> tuple[Language, Parser]:
        from tree_sitter_language_pack import SupportedLanguage, get_language, get_parser
        language = get_language(cast(SupportedLanguage, language_name))
        parser = get_parser(cast(SupportedLanguage, language_name))
        return (language, parser)
    def get_tuple(self, language_name: str) -> tuple[Language, Parser]:
        if language_name not in self.parser_cache:
            self.parser_cache[language_name] = self._create_tuple(language_name)
        return self.parser_cache[language_name]
    def get_parser(self, language_name: str) -> Parser:
        return self.get_tuple(language_name)[1]
    def get_language(self, language_name: str) -> Language:
        return self.get_tuple(language_name)[0]
@dataclass(frozen=True)
class LangQueryFactory:
    tag_query_cache: dict[str, str]
    @staticmethod
    def create() -> "LangQueryFactory":
        return LangQueryFactory({})
    def get_tag_query(self, language: str) -> str:
        if language not in self.tag_query_cache:
            self.tag_query_cache[language] = LangQuery().get_tag_query(language)
        return self.tag_query_cache[language]
@dataclass(frozen=True)
class ASTFactory:
    parser_factory: ParserFactory
    lang_qry_factory: LangQueryFactory
    @staticmethod
    def create():
        return ASTFactory(ParserFactory.create(), LangQueryFactory.create())
    def create_from_code(self, source: Source) -> "AST":
        language_name = to_language(source.rel_path)
        assert language_name, f"Unsupported language: {source.rel_path}"
        language = self.parser_factory.get_language(language_name)
        parser = self.parser_factory.get_parser(language_name)
        tree = parser.parse(bytes(source.content, "utf-8"))
        return AST(language_name, language, parser, tree, self.lang_qry_factory, source.rel_path)
@dataclass(frozen=True)
class AST:
    language_name: str
    language: Language
    parser: Parser
    tree: Tree
    lang_qry_factory: LangQueryFactory
    rel_path: str
    def match(self, query_scm: str) -> list[tuple[int, dict[str, list[Node]]]]:
        query = Query(self.language, query_scm)
        cursor = QueryCursor(query)
        return cursor.matches(self.tree.root_node)
    def tag_matches(self) -> list[tuple[int, dict[str, list[Node]]]]:
        return self.match(self._get_tag_query())
    def _get_tag_query(self) -> str:
        return self.lang_qry_factory.get_tag_query(self.language_name)
@dataclass(frozen=True)
class ASTNode:
    node: Node
    @staticmethod
    def create(node: Node | None):
        return ASTNode(node) if node else None
    def to_definition(self, name: "ASTNode") -> dict[str, Any]:
        return {"type": self.node.type, "name": name.to_text(), **self.to_text()}
    def to_text(self) -> dict[str, Any]:
        text = self.node.text.decode("utf8") if self.node.text else ""
        return {"text": text, **self.to_pos_info()} if self.node else {}
    def to_pos_info(self) -> dict[str, Any]:
        return {
            "start_point": self.node.start_point,
            "end_point": self.node.end_point,
            "start_byte": self.node.start_byte,
            "end_byte": self.node.end_byte,
        }
def to_definition(match: tuple[int, dict[str, list[Any]]]) -> dict[str, Any]:
    _, captures = match
    def_capture = next((name for name in captures if name.startswith("definition.")), None)
    if not def_capture:
        return {}
    name_nodes: list[Node] = captures.get("name", [])
    name_node = ASTNode.create(name_nodes[0] if name_nodes else None)
    def_nodes: list[Node] = captures[def_capture]
    def_node = ASTNode.create(def_nodes[0] if def_nodes else None)
    return cast(dict[str, Any], def_node.to_definition(name_node)) if def_node and name_node else {}
```
--------------------------------------------------------------------------------
/tests/test_nested_gitignores.py:
--------------------------------------------------------------------------------
```python
import os
import tempfile
from pathlib import Path
import pytest
from llm_context.file_selector import GitIgnorer
class TestNestedGitignores:
    @pytest.fixture
    def temp_project(self):
        with tempfile.TemporaryDirectory() as tmp_dir:
            project_root = Path(tmp_dir)
            (project_root / "src").mkdir()
            (project_root / "src" / "utils").mkdir()
            (project_root / "tests").mkdir()
            (project_root / "docs").mkdir()
            (project_root / "README.md").touch()
            (project_root / "src" / "main.py").touch()
            (project_root / "src" / "utils" / "helper.py").touch()
            (project_root / "src" / "utils" / "temp.log").touch()
            (project_root / "tests" / "test_main.py").touch()
            (project_root / "tests" / "coverage.xml").touch()
            (project_root / "docs" / "guide.md").touch()
            (project_root / "build.log").touch()
            (project_root / ".gitignore").write_text("*.log\nbuild/\n__pycache__/\n")
            (project_root / "src" / "utils" / ".gitignore").write_text("*.tmp\n*.cache\n")
            (project_root / "tests" / ".gitignore").write_text("coverage.xml\n*.pyc\n")
            yield project_root
    def test_collects_all_gitignores(self, temp_project):
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        assert len(ignorer.ignorer_data) == 3
    def test_hierarchical_sorting(self, temp_project):
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        paths = [path for path, _ in ignorer.ignorer_data]
        assert "/src/utils" in paths
        assert "/tests" in paths
        assert "/" in paths
        src_utils_idx = paths.index("/src/utils")
        root_idx = paths.index("/")
        assert src_utils_idx < root_idx
    def test_root_gitignore_ignores_logs(self, temp_project):
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        assert ignorer.ignore("/build.log")
        assert ignorer.ignore("/src/utils/temp.log")
    def test_nested_gitignore_adds_patterns(self, temp_project):
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        (temp_project / "src" / "utils" / "cache.tmp").touch()
        assert ignorer.ignore("/src/utils/cache.tmp")
        (temp_project / "src" / "utils" / "data.cache").touch()
        assert ignorer.ignore("/src/utils/data.cache")
        assert ignorer.ignore("/tests/coverage.xml")
    def test_patterns_only_apply_to_subdirectories(self, temp_project):
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        (temp_project / "src" / "coverage.xml").touch()
        assert not ignorer.ignore("/src/coverage.xml")
        (temp_project / "tests" / "temp.tmp").touch()
        assert not ignorer.ignore("/tests/temp.tmp")
    def test_empty_gitignore_handling(self, temp_project):
        (temp_project / "docs" / ".gitignore").write_text("\n\n  \n")
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        assert not ignorer.ignore("/docs/guide.md")
    def test_gitignore_with_comments(self, temp_project):
        """Test .gitignore files with comments and blank lines"""
        gitignore_content = """
# This is a comment
*.backup
# Another comment
temp/
        """
        (temp_project / "docs" / ".gitignore").write_text(gitignore_content)
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        (temp_project / "docs" / "old.backup").touch()
        assert ignorer.ignore("/docs/old.backup")
    def test_relative_path_calculation(self, temp_project):
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        (temp_project / "src" / "utils" / "deep").mkdir()
        (temp_project / "src" / "utils" / "deep" / "nested.tmp").touch()
        assert ignorer.ignore("/src/utils/deep/nested.tmp")
    def test_extra_root_patterns_with_nested_gitignores(self, temp_project):
        extra_patterns = ["*.secret", "config.local"]
        ignorer = GitIgnorer.from_git_root(str(temp_project), extra_patterns)
        (temp_project / "api.secret").touch()
        assert ignorer.ignore("/api.secret")
        (temp_project / "src" / "db.secret").touch()
        assert ignorer.ignore("/src/db.secret")
        (temp_project / "src" / "utils" / "cache.tmp").touch()
        assert ignorer.ignore("/src/utils/cache.tmp")
    def test_directory_patterns(self, temp_project):
        ignorer = GitIgnorer.from_git_root(str(temp_project))
        (temp_project / "build").mkdir()
        (temp_project / "build" / "output.txt").touch()
        assert ignorer.ignore("/build/output.txt")
```
--------------------------------------------------------------------------------
/src/llm_context/cli.py:
--------------------------------------------------------------------------------
```python
import argparse
import ast
from importlib.metadata import version as pkg_ver
from logging import INFO
from pathlib import Path
from llm_context import commands
from llm_context.cmd_pipeline import (
    ExecutionResult,
    create_clipboard_cmd,
    create_command,
    create_init_command,
)
from llm_context.context_generator import ContextSettings
from llm_context.exec_env import ExecutionEnvironment
from llm_context.utils import log
def rule_feedback(env: ExecutionEnvironment):
    log(INFO, f"Active rule: {env.state.file_selection.rule_name}")
def _set_rule(rule: str, env: ExecutionEnvironment) -> ExecutionResult:
    if not env.config.has_rule(rule):
        raise ValueError(f"Rule '{rule}' does not exist.")
    nxt_env = env.with_rule(rule)
    nxt_env.state.store()
    log(INFO, f"Active rule set to '{rule}'.")
    return ExecutionResult(None, nxt_env)
@create_init_command
def init_project(env: ExecutionEnvironment):
    log(INFO, f"LLM Context initialized for project: {env.config.project_root}")
    log(
        INFO,
        "See the user guide for setup and customization: https://github.com/cyberchitta/llm-context.py/blob/main/docs/user-guide.md",
    )
    return ExecutionResult(None, env)
@create_command
def set_rule(env: ExecutionEnvironment) -> ExecutionResult:
    parser = argparse.ArgumentParser(description="Set active rule for LLM context")
    parser.add_argument(
        "rule",
        type=str,
        help="Rule to set as active",
    )
    args = parser.parse_args()
    res = _set_rule(args.rule, env)
    res.env.state.store()
    return res
@create_init_command
def version(*, env: ExecutionEnvironment) -> ExecutionResult:
    log(INFO, f"llm-context version {pkg_ver('llm-context')}")
    return ExecutionResult(None, env)
@create_command
def select(env: ExecutionEnvironment) -> ExecutionResult:
    rule_feedback(env)
    file_selection = commands.select_all_files(env)
    nxt_env = env.with_state(env.state.with_selection(file_selection))
    nxt_env.state.store()
    log(
        INFO,
        f"Selected {len(file_selection.full_files)} full files and {len(file_selection.excerpted_files)} excerpted files.",
    )
    return ExecutionResult(None, nxt_env)
@create_clipboard_cmd
def rule_instructions(env: ExecutionEnvironment) -> ExecutionResult:
    content = commands.get_focus_help(env)
    return ExecutionResult(content, env)
@create_clipboard_cmd
def prompt(env: ExecutionEnvironment) -> ExecutionResult:
    rule_feedback(env)
    content = commands.get_prompt(env)
    return ExecutionResult(content, env)
@create_clipboard_cmd
def context(env: ExecutionEnvironment) -> ExecutionResult:
    rule_feedback(env)
    parser = argparse.ArgumentParser(description="Generate context for LLM")
    parser.add_argument("-p", action="store_true", help="Include prompt in context")
    parser.add_argument("-nt", action="store_true", help="Assume no MCP/tools")
    parser.add_argument("-u", action="store_true", help="Include user notes in context")
    parser.add_argument("-f", type=str, help="Write context to file")
    args, _ = parser.parse_known_args()
    settings = ContextSettings.create(args.p, args.u, not args.nt)
    content, context_timestamp = commands.generate_context(env, settings)
    updated_selection = env.state.file_selection.with_timestamp(context_timestamp)
    nxt_env = env.with_state(env.state.with_selection(updated_selection))
    nxt_env.state.store()
    if args.f:
        Path(args.f).write_text(content)
        log(INFO, f"Wrote context to {args.f}")
    return ExecutionResult(content, env)
@create_clipboard_cmd
def outlines(env: ExecutionEnvironment) -> ExecutionResult:
    rule_feedback(env)
    content = commands.get_outlines(env)
    return ExecutionResult(content, env)
@create_clipboard_cmd
def changed_files(env: ExecutionEnvironment) -> ExecutionResult:
    timestamp = env.state.file_selection.timestamp
    return ExecutionResult(commands.list_modified_files(env, timestamp), env)
@create_clipboard_cmd
def missing(env: ExecutionEnvironment) -> ExecutionResult:
    parser = argparse.ArgumentParser()
    parser.add_argument("-f", type=str)
    parser.add_argument("-i", type=str)
    parser.add_argument("-e", type=str)
    parser.add_argument("-t", type=float, required=True)
    args = parser.parse_args()
    param_count = sum(1 for param in [args.f, args.i, args.e] if param)
    if param_count != 1:
        parser.error("Must specify exactly one of -f, -i, or -e")
    if args.f:
        file_list = ast.literal_eval(args.f)
        content = commands.get_missing_files(env, file_list, args.t)
    elif args.i:
        impl_list = ast.literal_eval(args.i)
        content = commands.get_implementations(env, impl_list)
    elif args.e:
        file_list = ast.literal_eval(args.e)
        content = commands.get_excluded(env, file_list, args.t)
    return ExecutionResult(content, env)
```
--------------------------------------------------------------------------------
/src/llm_context/utils.py:
--------------------------------------------------------------------------------
```python
import sys
from dataclasses import dataclass
from datetime import datetime as dt
from logging import CRITICAL, DEBUG, ERROR, INFO, WARNING, getLogger
from pathlib import Path
from typing import Any, Optional, Union, cast
import yaml
if sys.platform.startswith("win"):
    _original_write_text = Path.write_text
    _original_read_text = Path.read_text
    def _write_text_utf8(self, data, encoding="utf-8", **kwargs):
        return _original_write_text(self, data, encoding=encoding, **kwargs)
    def _read_text_utf8(self, encoding="utf-8", **kwargs):
        return _original_read_text(self, encoding=encoding, **kwargs)
    Path.write_text = _write_text_utf8
    Path.read_text = _read_text_utf8
class _NoAliasDumper(yaml.SafeDumper):
    def ignore_aliases(self, data: Any) -> bool:
        return True
@dataclass(frozen=True)
class Yaml:
    @staticmethod
    def dump(data: dict[str, Any]) -> str:
        return yaml.dump(
            data, Dumper=_NoAliasDumper, default_flow_style=None, sort_keys=False, width=100
        )
    @staticmethod
    def load(file_path: Path) -> dict[str, Any]:
        encoding = "utf-8" if sys.platform.startswith("win") else None
        with open(file_path, "r", encoding=encoding) as f:
            return cast(dict[str, Any], yaml.safe_load(f))
    @staticmethod
    def save(file_path: Path, data: dict[str, Any]):
        encoding = "utf-8" if sys.platform.startswith("win") else None
        with open(file_path, "w", encoding=encoding) as f:
            yaml.dump(data, f, Dumper=_NoAliasDumper, default_flow_style=False)
@dataclass(frozen=True)
class ProjectLayout:
    root_path: Path
    @property
    def project_config_path(self) -> Path:
        return self.root_path / ".llm-context"
    @property
    def project_notes_path(self) -> Path:
        return self.project_config_path / "lc-project-notes.md"
    @property
    def user_notes_path(self) -> Path:
        return Path.home() / ".llm-context" / "lc-user-notes.md"
    @property
    def config_path(self) -> Path:
        return self.project_config_path / "config.yaml"
    @property
    def state_path(self) -> Path:
        return self.project_config_path / "lc-state.yaml"
    @property
    def state_store_path(self) -> Path:
        return self.project_config_path / "curr_ctx.yaml"
    @property
    def templates_path(self) -> Path:
        return self.project_config_path / "templates"
    def get_template_path(self, template_name: str) -> Path:
        return self.templates_path / template_name
    @property
    def rules_path(self) -> Path:
        return self.project_config_path / "rules"
    def get_rule_path(self, rule_name: str) -> Path:
        return self.rules_path / rule_name
def _format_size(size_bytes):
    for unit in ["B", "KB", "MB", "GB"]:
        if size_bytes < 1024.0:
            return f"{size_bytes:.1f} {unit}"
        size_bytes /= 1024.0
    return f"{size_bytes:.1f} TB"
def format_age(timestamp: float) -> str:
    delta = dt.now().timestamp() - timestamp
    if delta < 3600:
        return f"{int(delta / 60)}m ago"
    if delta < 86400:
        return f"{int(delta / 3600)}h ago"
    return f"{int(delta / 86400)}d ago"
def size_feedback(content: str) -> None:
    if content is None:
        log(WARNING, "No content to copy")
    else:
        bytes_copied = len(content.encode("utf-8"))
        log(INFO, f"Copied {_format_size(bytes_copied)} to clipboard")
def safe_read_file(path: str) -> Optional[str]:
    file_path = Path(path)
    if not file_path.exists():
        log(ERROR, f"File not found: {file_path}")
        return None
    if not file_path.is_file():
        log(ERROR, f"Not a file: {file_path}")
        return None
    try:
        return file_path.read_text()
    except PermissionError:
        log(ERROR, f"Permission denied: {file_path}")
    except Exception as e:
        log(ERROR, f"Error reading file {file_path}: {str(e)}")
    return None
@dataclass(frozen=True)
class PathConverter:
    root: Path
    @staticmethod
    def create(root: Path) -> "PathConverter":
        return PathConverter(root)
    def validate(self, paths: Union[str, list[str]]) -> bool:
        if isinstance(paths, str):
            return paths.startswith(f"/{self.root.name}/")
        return all(path.startswith(f"/{self.root.name}/") for path in paths)
    def to_absolute(self, relative_paths: list[str]) -> list[str]:
        return [self._convert_single_path(path) for path in relative_paths]
    def to_relative(self, absolute_paths: list[str]) -> list[str]:
        return [self._make_relative(path) for path in absolute_paths]
    def _convert_single_path(self, path: str) -> str:
        return str(self.root / Path(path[len(self.root.name) + 2 :]))
    def _make_relative(self, path: str) -> str:
        return f"/{self.root.name}/{Path(path).relative_to(self.root)}"
def log(level: int, msg: str) -> None:
    from llm_context.exec_env import ExecutionEnvironment
    logger = (
        ExecutionEnvironment.current().logger
        if ExecutionEnvironment.has_current()
        else getLogger("llm-context-fallback")
    )
    if level == ERROR:
        logger.error(msg)
    elif level == WARNING:
        logger.warning(msg)
    elif level == INFO:
        logger.info(msg)
    elif level == DEBUG:
        logger.debug(msg)
    elif level == CRITICAL:
        logger.critical(msg)
def is_newer(abs_path: str, timestamp: float) -> bool:
    return Path(abs_path).exists() and Path(abs_path).stat().st_mtime > timestamp
```
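Two helpers in `utils.py` are used throughout the codebase: `Yaml` for alias-free, stably ordered YAML serialization, and `PathConverter` for translating between absolute paths and the `/<root-name>/...` pseudo-paths that appear in generated contexts. A minimal usage sketch, assuming a hypothetical project at `/home/user/myproject`:

```python
from pathlib import Path

from llm_context.utils import PathConverter, Yaml

converter = PathConverter.create(Path("/home/user/myproject"))
rel = converter.to_relative(["/home/user/myproject/src/app.py"])
# rel == ["/myproject/src/app.py"]; validate() checks the "/<root-name>/" prefix
assert converter.validate(rel)
assert converter.to_absolute(rel) == ["/home/user/myproject/src/app.py"]

# Yaml.dump never emits YAML anchors/aliases and preserves insertion order.
print(Yaml.dump({"templates": {"context": "lc/context.j2"}}))
```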
--------------------------------------------------------------------------------
/src/llm_context/exec_env.py:
--------------------------------------------------------------------------------
```python
import logging
from contextlib import contextmanager
from contextvars import ContextVar
from dataclasses import dataclass
from pathlib import Path
from typing import Any, ClassVar, Optional
from llm_context.context_spec import ContextSpec
from llm_context.excerpters.parser import ASTFactory
from llm_context.excerpters.tagger import ASTBasedTagger
from llm_context.file_selector import ContextSelector
from llm_context.rule import DEFAULT_CODE_RULE, ToolConstants
from llm_context.state import AllSelections, FileSelection, StateStore
from llm_context.utils import ProjectLayout
class MessageCollector(logging.Handler):
    messages: list[str]
    def __init__(self, messages: list[str]):
        super().__init__()
        self.messages = messages
    def emit(self, record):
        msg = self.format(record)
        self.messages.append(msg)
@dataclass(frozen=True)
class RuntimeContext:
    _logger: logging.Logger
    _collector: MessageCollector
    @staticmethod
    def create() -> "RuntimeContext":
        logger = logging.getLogger("llm-context")
        logger.setLevel(logging.INFO)
        messages: list[str] = []
        collector = MessageCollector(messages)
        logger.addHandler(collector)
        return RuntimeContext(logger, collector)
    @property
    def logger(self) -> logging.Logger:
        return self._logger
    @property
    def messages(self) -> list[str]:
        return self._collector.messages
@dataclass(frozen=True)
class ExecutionState:
    project_layout: ProjectLayout
    selections: AllSelections
    rule_name: str
    @staticmethod
    def load(project_layout: ProjectLayout) -> "ExecutionState":
        store = StateStore(project_layout.state_store_path)
        selections, current_profile = store.load()
        return ExecutionState(project_layout, selections, current_profile)
    @staticmethod
    def create(
        project_layout: ProjectLayout, selections: AllSelections, rule_name: str
    ) -> "ExecutionState":
        return ExecutionState(project_layout, selections, rule_name)
    @property
    def file_selection(self) -> FileSelection:
        return self.selections.get_selection(self.rule_name)
    def store(self):
        StateStore(self.project_layout.state_store_path).save(self.selections, self.rule_name)
    def with_selection(self, file_selection: FileSelection) -> "ExecutionState":
        new_selections = self.selections.with_selection(file_selection)
        return ExecutionState(self.project_layout, new_selections, self.rule_name)
    def with_rule(self, rule_name: str) -> "ExecutionState":
        return ExecutionState(self.project_layout, self.selections, rule_name)
@dataclass(frozen=True)
class ExecutionEnvironment:
    _current: ClassVar[ContextVar[Optional["ExecutionEnvironment"]]] = ContextVar(
        "current_env", default=None
    )
    config: ContextSpec
    runtime: RuntimeContext
    state: ExecutionState
    constants: ToolConstants
    tagger: Optional[Any]
    @staticmethod
    def create_init(project_root: Path) -> "ExecutionEnvironment":
        runtime = RuntimeContext.create()
        project_layout = ProjectLayout(project_root)
        constants = (
            ToolConstants.load(project_layout.state_path)
            if project_layout.state_path.exists()
            else ToolConstants.create_null()
        )
        config = ContextSpec.create(project_root, DEFAULT_CODE_RULE, constants)
        empty_selections = AllSelections.create_empty()
        state = ExecutionState.create(project_layout, empty_selections, DEFAULT_CODE_RULE)
        tagger = ExecutionEnvironment._tagger(project_root)
        return ExecutionEnvironment(config, runtime, state, constants, tagger)
    @staticmethod
    def create(project_root: Path) -> "ExecutionEnvironment":
        runtime = RuntimeContext.create()
        project_layout = ProjectLayout(project_root)
        state = ExecutionState.load(project_layout)
        constants = ToolConstants.load(project_layout.state_path)
        config = ContextSpec.create(project_root, state.file_selection.rule_name, constants)
        tagger = ExecutionEnvironment._tagger(project_root)
        return ExecutionEnvironment(config, runtime, state, constants, tagger)
    @staticmethod
    def _tagger(project_root: Path):
        return ASTBasedTagger.create(str(project_root), ASTFactory.create())
    def with_state(self, new_state: ExecutionState) -> "ExecutionEnvironment":
        return ExecutionEnvironment(
            self.config, self.runtime, new_state, self.constants, self.tagger
        )
    def with_rule(self, rule_name: str) -> "ExecutionEnvironment":
        if rule_name == self.state.file_selection.rule_name:
            return self
        config = ContextSpec.create(self.config.project_root_path, rule_name, self.constants)
        if not config.rule.excerpt_modes:
            raise ValueError(
                f"Rule '{rule_name}' has no excerpt-modes configured. Add excerpt-modes or compose 'lc/exc-base'."
            )
        empty_selection = FileSelection.create(rule_name, [], [])
        selector = ContextSelector.create(config)
        file_selection = selector.select_full_files(empty_selection)
        outline_selection = selector.select_excerpted_files(file_selection)
        new_state = self.state.with_selection(outline_selection).with_rule(rule_name)
        return ExecutionEnvironment(config, self.runtime, new_state, self.constants, self.tagger)
    @property
    def logger(self) -> logging.Logger:
        return self.runtime.logger
    @staticmethod
    def current() -> "ExecutionEnvironment":
        env = ExecutionEnvironment._current.get()
        if env is None:
            raise RuntimeError("No active execution environment")
        return env
    @staticmethod
    def has_current() -> bool:
        return ExecutionEnvironment._current.get() is not None
    @contextmanager
    def activate(self):
        token = self._current.set(self)
        try:
            yield self
        finally:
            self._current.reset(token)
```
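`ExecutionEnvironment` is the per-command container that the rest of the code reaches through `ExecutionEnvironment.current()`; `activate()` installs it in a `ContextVar` for the duration of a `with` block, which is also what routes `utils.log()` output into `runtime.messages`. A minimal sketch, assuming a hypothetical project root that has already been initialized (so the state and config files exist for `create()` to load):

```python
from pathlib import Path

from llm_context.exec_env import ExecutionEnvironment

env = ExecutionEnvironment.create(Path("/home/user/myproject"))
with env.activate():
    # Inside the block, current() resolves to this environment.
    assert ExecutionEnvironment.current() is env
    print(env.state.file_selection.rule_name)
    print(env.runtime.messages)  # log messages collected by MessageCollector
```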
--------------------------------------------------------------------------------
/tests/test_pathspec_ignorer.py:
--------------------------------------------------------------------------------
```python
import unittest
from llm_context.file_selector import PathspecIgnorer
class TestPathspecIgnorerBasicFunctionality(unittest.TestCase):
    def test_basename_ignore(self):
        ignorer = PathspecIgnorer.create(["*.txt", "temp/"])
        self.assertTrue(ignorer.ignore("/path/to/file.txt"))
        self.assertFalse(ignorer.ignore("/path/to/file.py"))
        self.assertTrue(ignorer.ignore("/path/to/temp/"))
    def test_path_ignore(self):
        ignorer = PathspecIgnorer.create(["/root/*.log", "**/temp/"])
        self.assertTrue(ignorer.ignore("/root/file.log"))
        self.assertFalse(ignorer.ignore("/other/file.log"))
        self.assertTrue(ignorer.ignore("/path/to/temp/file"))
class TestGitignoreSemantics(unittest.TestCase):
    def test_text_files_with_exception(self):
        patterns = ["*.txt", "!dir1/*.txt"]
        ignorer = PathspecIgnorer.create(patterns)
        self.assertTrue(ignorer.ignore("/project/file1.txt"))
        self.assertFalse(ignorer.ignore("/project/file2.py"))
        #        self.assertFalse(ignorer.ignore("/project/dir1/file3.txt"))
        self.assertTrue(ignorer.ignore("/project/dir2/file4.txt"))
        self.assertFalse(ignorer.ignore("/project/dir1"))
        self.assertFalse(ignorer.ignore("/project/dir2"))
    def test_directory_and_python_files(self):
        patterns = ["dir1/", "*.py"]
        ignorer = PathspecIgnorer.create(patterns)
        self.assertFalse(ignorer.ignore("/project/file1.txt"))
        self.assertTrue(ignorer.ignore("/project/file2.py"))
        self.assertTrue(ignorer.ignore("/project/dir1/file3.txt"))
        self.assertTrue(ignorer.ignore("/project/dir1/"))
        self.assertTrue(ignorer.ignore("/project/dir2/file4.py"))
        self.assertFalse(ignorer.ignore("/project/dir2"))
    def test_node_modules_dist_and_logs(self):
        patterns = ["**/node_modules/", "dist/", "*.log"]
        ignorer = PathspecIgnorer.create(patterns)
        self.assertFalse(ignorer.ignore("/project/file1.txt"))
        self.assertTrue(ignorer.ignore("/project/dir1/node_modules/file.js"))
        self.assertTrue(ignorer.ignore("/project/dir1/node_modules/"))
        self.assertTrue(ignorer.ignore("/project/dir2/dist/file.js"))
        self.assertTrue(ignorer.ignore("/project/dir2/dist/"))
        self.assertTrue(ignorer.ignore("/project/dir3/file.log"))
        self.assertTrue(ignorer.ignore("/project/dir4/subdir/deep/file.log"))
class TestPathspecIgnorer(unittest.TestCase):
    def test_simple_filename(self):
        ignorer = PathspecIgnorer.create(["*.txt"])
        self.assertTrue(ignorer.ignore("file.txt"))
        self.assertFalse(ignorer.ignore("file.py"))
    def test_directory_pattern(self):
        ignorer = PathspecIgnorer.create(["dir/"])
        self.assertTrue(ignorer.ignore("dir/"))
    def test_negation(self):
        ignorer = PathspecIgnorer.create(["*.txt", "!important.txt"])
        self.assertTrue(ignorer.ignore("file.txt"))
        self.assertFalse(ignorer.ignore("important.txt"))
    def test_root_directory_pattern(self):
        ignorer = PathspecIgnorer.create(["/root"])
        self.assertTrue(ignorer.ignore("root"))
        self.assertFalse(ignorer.ignore("subdir/root"))
    def test_nested_directory_pattern(self):
        ignorer = PathspecIgnorer.create(["**/logs"])
        self.assertTrue(ignorer.ignore("logs"))
        self.assertTrue(ignorer.ignore("dir/logs"))
        self.assertTrue(ignorer.ignore("dir/subdir/logs"))
        self.assertFalse(ignorer.ignore("dir/logs-file.txt"))
    def test_complex_pattern(self):
        ignorer = PathspecIgnorer.create(["*.py[cod]"])
        self.assertTrue(ignorer.ignore("file.pyc"))
        self.assertTrue(ignorer.ignore("file.pyo"))
        self.assertTrue(ignorer.ignore("file.pyd"))
        self.assertFalse(ignorer.ignore("file.py"))
    def test_multiple_directory_pattern(self):
        ignorer = PathspecIgnorer.create(["/priv/static"])
        self.assertFalse(ignorer.ignore("priv"))
        self.assertTrue(ignorer.ignore("priv/static"))
        self.assertFalse(ignorer.ignore("other/priv/static"))
    def test_multiple_directory_pattern_with_wildcard(self):
        ignorer = PathspecIgnorer.create(["docs/**/secret"])
        self.assertFalse(ignorer.ignore("docs"))
        self.assertTrue(ignorer.ignore("docs/secret"))
        self.assertTrue(ignorer.ignore("docs/project/secret"))
        self.assertTrue(ignorer.ignore("docs/project/subproject/secret"))
        self.assertFalse(ignorer.ignore("secret"))
        self.assertFalse(ignorer.ignore("other/docs/secret"))
    def test_single_asterisk(self):
        ignorer = PathspecIgnorer.create(["*.log"])
        self.assertTrue(ignorer.ignore("file.log"))
        self.assertTrue(ignorer.ignore("folder/file.log"))
        self.assertFalse(ignorer.ignore("file.txt"))
    def test_double_asterisk(self):
        ignorer = PathspecIgnorer.create(["**/node_modules"])
        self.assertTrue(ignorer.ignore("node_modules"))
        self.assertTrue(ignorer.ignore("folder/node_modules"))
        self.assertTrue(ignorer.ignore("folder/subfolder/node_modules"))
    def test_question_mark(self):
        ignorer = PathspecIgnorer.create(["file?.txt"])
        self.assertTrue(ignorer.ignore("file1.txt"))
        self.assertTrue(ignorer.ignore("fileA.txt"))
        self.assertFalse(ignorer.ignore("file10.txt"))
    def test_character_class(self):
        ignorer = PathspecIgnorer.create(["file[0-9].txt"])
        self.assertTrue(ignorer.ignore("file0.txt"))
        self.assertTrue(ignorer.ignore("file5.txt"))
        self.assertFalse(ignorer.ignore("fileA.txt"))
    def test_negation_with_wildcard(self):
        ignorer = PathspecIgnorer.create(["*.log", "!important.log"])
        self.assertTrue(ignorer.ignore("debug.log"))
        self.assertFalse(ignorer.ignore("important.log"))
        self.assertFalse(ignorer.ignore("logs/important.log"))
    def test_trailing_spaces(self):
        ignorer = PathspecIgnorer.create(["*.log ", "!important.log "])
        self.assertTrue(ignorer.ignore("file.log"))
        self.assertFalse(ignorer.ignore("important.log"))
    def test_directory_vs_file(self):
        ignorer = PathspecIgnorer.create(["logs"])
        self.assertTrue(ignorer.ignore("logs"))
        self.assertTrue(ignorer.ignore("logs/"))
        self.assertFalse(ignorer.ignore("logs.txt"))
if __name__ == "__main__":
    unittest.main()
```
--------------------------------------------------------------------------------
/src/llm_context/excerpters/sfc.py:
--------------------------------------------------------------------------------
```python
from dataclasses import dataclass
from importlib import resources
from typing import Any, Optional, cast
from tree_sitter import Query, QueryCursor
from llm_context.excerpters.base import Excerpt, Excerpter, Excerpts, Excluded
from llm_context.excerpters.language_mapping import to_language
from llm_context.excerpters.parser import ASTFactory, Source
@dataclass(frozen=True)
class SfcSection:
    section_type: str  # "script", "style", "template"
    start_line: int
    end_line: int
    content: str
    attributes: dict[str, str]  # e.g., {"lang": "typescript"}
@dataclass(frozen=True)
class Sfc(Excerpter):
    config: dict[str, Any]
    def excerpt(self, sources: list[Source]) -> Excerpts:
        if not sources:
            return Excerpts([], {"sample_definitions": []})
        results = []
        for source in sources:
            language = to_language(source.rel_path)
            if language not in ["svelte", "vue"]:
                continue
            sections = self._parse_sfc_sections(source, language)
            excerpted_content = self._create_excerpt_content(source, sections)
            if excerpted_content:
                result = Excerpt(
                    source.rel_path,
                    excerpted_content,
                    {
                        "processor_type": "sfc-excerpter",
                        "sections_included": self._get_included_section_types(),
                        "language": language,
                    },
                )
                results.append(result)
        return Excerpts(results, {"sample_definitions": []})
    def excluded(self, sources: list[Source]) -> list[Excluded]:
        excluded_results = []
        for source in sources:
            language = to_language(source.rel_path)
            if language not in ["svelte", "vue"]:
                continue
            sections = self._parse_sfc_sections(source, language)
            excluded_sections = {}
            for section in sections:
                if not self._should_include_section(section.section_type):
                    excluded_sections[section.section_type] = section.content
            if excluded_sections:
                excluded_results.append(
                    Excluded(excluded_sections, {"language": language, "file": source.rel_path})
                )
        return excluded_results
    def _parse_sfc_sections(self, source: Source, language: str) -> list[SfcSection]:
        ast_factory = ASTFactory.create()
        ast = ast_factory.create_from_code(source)
        queries = self._get_sfc_queries(language)
        query = Query(ast.language, queries)
        cursor = QueryCursor(query)
        matches = cursor.matches(ast.tree.root_node)
        sections = []
        seen = set()
        for match_id, captures in matches:
            for capture_name, nodes in captures.items():
                if capture_name == "injection.content":
                    node_list = nodes if isinstance(nodes, list) else [nodes]
                    for node in node_list:
                        parent_node = node.parent
                        if parent_node:
                            section_type = self._get_section_type_from_node(parent_node)
                            if section_type:
                                start_byte = parent_node.start_byte
                                end_byte = parent_node.end_byte
                                key = (section_type, start_byte, end_byte)
                                if key not in seen:
                                    seen.add(key)
                                    sections.append(
                                        SfcSection(
                                            section_type=section_type,
                                            start_line=parent_node.start_point[0],
                                            end_line=parent_node.end_point[0],
                                            content=source.content[start_byte:end_byte],
                                            attributes={},
                                        )
                                    )
        return sorted(sections, key=lambda s: s.start_line)
    def _get_section_type_from_node(self, node) -> Optional[str]:
        if hasattr(node, "type"):
            if node.type == "script_element":
                return "script"
            elif node.type == "style_element":
                return "style"
            elif node.type == "template_element":
                return "template"
        return None
    def _get_sfc_queries(self, language: str) -> str:
        query_file = f"{language}-injections.scm"
        return resources.files("llm_context.excerpters.ts-qry").joinpath(query_file).read_text()
    def _create_excerpt_content(self, source: Source, sections: list[SfcSection]) -> str:
        lines = source.content.split("\n")
        result_lines = []
        last_included_line = -1
        for section in sections:
            if last_included_line >= 0 and section.start_line > last_included_line + 1:
                if self.config.get("with-template", False):
                    gap_lines = lines[last_included_line + 1 : section.start_line]
                    result_lines.extend(gap_lines)
                else:
                    result_lines.append("⋮...")
            if self._should_include_section(section.section_type):
                section_lines = lines[section.start_line : section.end_line + 1]
                result_lines.extend(section_lines)
                last_included_line = section.end_line
            else:
                if section.start_line > last_included_line + 1:
                    result_lines.append(lines[section.start_line])
                result_lines.append("⋮...")
                if section.end_line < len(lines) - 1:
                    result_lines.append(lines[section.end_line])
                last_included_line = section.end_line
        if last_included_line < len(lines) - 1:
            if self.config.get("with-template", False):
                result_lines.extend(lines[last_included_line + 1 :])
            else:
                result_lines.append("⋮...")
        return "\n".join(result_lines)
    def _should_include_section(self, section_type: str) -> bool:
        if section_type == "script":
            return True
        elif section_type == "style":
            return cast(bool, self.config.get("with-style", False))
        elif section_type == "template":
            return cast(bool, self.config.get("with-template", False))
        return False
    def _get_included_section_types(self) -> list[str]:
        included = ["script"]
        if self.config.get("with-style", False):
            included.append("style")
        if self.config.get("with-template", False):
            included.append("template")
        return included
```
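The `Sfc` excerpter is driven directly with `Source` objects: `excerpt()` keeps `<script>` sections and collapses the rest to `⋮...` markers (subject to `with-style` / `with-template`), while `excluded()` reports the dropped sections. A minimal sketch with a hypothetical component; the test file later in this dump exercises the same API more thoroughly:

```python
from llm_context.excerpters.parser import Source
from llm_context.excerpters.sfc import Sfc

component = Source(
    "Counter.svelte",
    "<script>let count = 0;</script>\n<style>.c { color: red; }</style>\n<p>{count}</p>",
)
excerpter = Sfc({"with-style": False, "with-template": False})
result = excerpter.excerpt([component])
print(result.excerpts[0].content)       # script kept; style collapsed to "⋮..."
print(excerpter.excluded([component]))  # dropped section contents, keyed by section type
```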
--------------------------------------------------------------------------------
/src/llm_context/overviews.py:
--------------------------------------------------------------------------------
```python
import os
import random
from dataclasses import dataclass
from pathlib import Path
from llm_context.file_selector import FileSelector
from llm_context.utils import PathConverter, _format_size, format_age
STATUS_DESCRIPTIONS = {
    "✓": "Full content",
    "O": "Outlined content",
    "E": "Excerpted content",
    "✗": "Excluded",
}
@dataclass(frozen=True)
class OverviewHelper:
    root_dir: str
    full_files: set[str]
    excerpted_files: set[str]
    outlined_files: set[str]
    def get_status(self, path: str) -> str:
        if self.full_files and path in self.full_files:
            return "✓"
        if self.outlined_files and path in self.outlined_files:
            return "O"
        if self.excerpted_files and path in self.excerpted_files:
            return "E"
        return "✗"
    def get_used_statuses(self, abs_paths: list[str]) -> list[str]:
        used = {self.get_status(path) for path in abs_paths}
        return [status for status in ["✓", "O", "E", "✗"] if status in used]
    def format_legend_header(self, abs_paths: list[str]) -> str:
        used_statuses = self.get_used_statuses(abs_paths)
        legends = [f"{status}={STATUS_DESCRIPTIONS[status]}" for status in used_statuses]
        return f"Status: {', '.join(legends)}\nFormat: status path bytes (size) age\n\n"
    def get_file_info(self, abs_path: str) -> tuple[str, str]:
        return (
            self.get_status(abs_path),
            f"/{Path(self.root_dir).name}/{Path(abs_path).relative_to(self.root_dir)} "
            f"{os.path.getsize(abs_path)}"
            f"({_format_size(os.path.getsize(abs_path))})"
            f"{format_age(os.path.getmtime(abs_path))}",
        )
    def sample_excluded_files(self, abs_paths: list[str]) -> list[str]:
        excluded_files = [path for path in abs_paths if self.get_status(path) == "✗"]
        converter = PathConverter.create(Path(self.root_dir))
        sample_excluded = (
            random.sample(excluded_files, min(2, len(excluded_files))) if excluded_files else []
        )
        return converter.to_relative(sample_excluded)
@dataclass(frozen=True)
class FullOverview:
    helper: OverviewHelper
    @staticmethod
    def create(
        root_dir: str, full_files: set[str], excerpted_files: set[str], outlined_files: set[str]
    ) -> "FullOverview":
        helper = OverviewHelper(root_dir, full_files, excerpted_files, outlined_files)
        return FullOverview(helper)
    def generate(self, abs_paths: list[str]) -> tuple[str, list[str]]:
        if not abs_paths:
            return "No files found", []
        entries = [self.helper.get_file_info(path) for path in sorted(abs_paths)]
        header = self.helper.format_legend_header(abs_paths)
        rows = [f"{status} {entry}" for status, entry in entries]
        overview_string = header + "\n".join(rows)
        sample_excluded_files = self.helper.sample_excluded_files(abs_paths)
        return overview_string, sample_excluded_files
@dataclass(frozen=True)
class FocusedOverview:
    helper: OverviewHelper
    @staticmethod
    def create(
        root_dir: str, full_files: set[str], excerpted_files: set[str], outlined_files: set[str]
    ) -> "FocusedOverview":
        helper = OverviewHelper(root_dir, full_files, excerpted_files, outlined_files)
        return FocusedOverview(helper)
    def _group_files_by_immediate_parent(self, abs_paths: list[str]) -> dict[str, list[str]]:
        folders: dict[str, list[str]] = {}
        for abs_path in abs_paths:
            parent_path = str(Path(abs_path).parent)
            if parent_path not in folders:
                folders[parent_path] = []
            folders[parent_path].append(abs_path)
        return folders
    def _folder_has_included_files(self, files_in_folder: list[str]) -> bool:
        return any(self.helper.get_status(f) in ["✓", "O", "E"] for f in files_in_folder)
    def _format_folder_with_file_details(self, folder_path: str, files_in_folder: list[str]) -> str:
        root_name = Path(self.helper.root_dir).name
        folder_relative = Path(folder_path).relative_to(self.helper.root_dir)
        folder_display = (
            f"/{root_name}/{folder_relative}/" if str(folder_relative) != "." else f"/{root_name}/"
        )
        lines = [f"{folder_display} ({len(files_in_folder)} files)"]
        for file_path in sorted(files_in_folder):
            status = self.helper.get_status(file_path)
            filename = Path(file_path).name
            file_size = os.path.getsize(file_path)
            file_age = format_age(os.path.getmtime(file_path))
            indented_line = f"  {status} {filename} {_format_size(file_size)} {file_age}"
            lines.append(indented_line)
        return "\n".join(lines)
    def _format_folder_summary(self, folder_path: str, files_in_folder: list[str]) -> str:
        root_name = Path(self.helper.root_dir).name
        folder_relative = Path(folder_path).relative_to(self.helper.root_dir)
        folder_display = (
            f"/{root_name}/{folder_relative}/" if str(folder_relative) != "." else f"/{root_name}/"
        )
        total_size = sum(os.path.getsize(f) for f in files_in_folder)
        return f"{folder_display} ({len(files_in_folder)} files, {_format_size(total_size)})"
    def generate(self, abs_paths: list[str]) -> tuple[str, list[str]]:
        if not abs_paths:
            return "No files found", []
        folders = self._group_files_by_immediate_parent(abs_paths)
        header = self.helper.format_legend_header(abs_paths)
        sections = []
        for folder_path in sorted(folders.keys()):
            files_in_folder = folders[folder_path]
            if self._folder_has_included_files(files_in_folder):
                sections.append(self._format_folder_with_file_details(folder_path, files_in_folder))
            else:
                sections.append(self._format_folder_summary(folder_path, files_in_folder))
        overview_string = header + "\n".join(sections)
        sample_excluded_files = self.helper.sample_excluded_files(abs_paths)
        return overview_string, sample_excluded_files
def get_full_overview(
    project_root: Path,
    full_files: list[str],
    excerpted_files: list[str],
    outlined_files: list[str],
    overview_ignores: list[str] = [],
) -> tuple[str, list[str]]:
    overview_ignorer = FileSelector.create_ignorer(project_root, overview_ignores)
    abs_paths = overview_ignorer.get_files()
    overview = FullOverview.create(
        str(project_root), set(full_files), set(excerpted_files), set(outlined_files)
    )
    return overview.generate(abs_paths)
def get_focused_overview(
    project_root: Path,
    full_files: list[str],
    excerpted_files: list[str],
    outlined_files: list[str],
    overview_ignores: list[str] = [],
) -> tuple[str, list[str]]:
    overview_ignorer = FileSelector.create_ignorer(project_root, overview_ignores)
    abs_paths = overview_ignorer.get_files()
    overview = FocusedOverview.create(
        str(project_root), set(full_files), set(excerpted_files), set(outlined_files)
    )
    return overview.generate(abs_paths)
```
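`get_full_overview` / `get_focused_overview` scan the project tree themselves (via `FileSelector.create_ignorer`) and only need the already-selected file lists to label each entry. A minimal sketch, assuming a hypothetical project root that actually exists on disk:

```python
from pathlib import Path

from llm_context.overviews import get_focused_overview

root = Path("/home/user/myproject")  # hypothetical; must exist for the ignorer scan
overview_text, sample_excluded = get_focused_overview(
    root,
    full_files=[str(root / "src/app.py")],
    excerpted_files=[],
    outlined_files=[str(root / "src/utils.py")],
)
print(overview_text)    # legend header plus per-folder detail or summary lines
print(sample_excluded)  # up to two excluded files as "/<root-name>/..." paths
```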
--------------------------------------------------------------------------------
/tests/test_excerpt_languages.py:
--------------------------------------------------------------------------------
```python
import pytest
from llm_context.excerpters.parser import Source
from llm_context.excerpters.sfc import Sfc
# Test cases: (test_name, extension, code, config, expected_output)
TEST_CASES = [
    (
        "svelte_script_only",
        "svelte",
        """<script>
  let name = 'world';
  
  function handleClick() {
    name = 'Svelte';
  }
  
  export let title;
</script>
<style>
  h1 {
    color: #ff3e00;
    font-size: 2rem;
  }
  
  .button {
    padding: 0.5rem 1rem;
    border: none;
  }
</style>
<h1>Hello {name}!</h1>
<button class="button" on:click={handleClick}>
  Click me
</button>
{#if title}
  <h2>{title}</h2>
{/if}""",
        {"with-style": False, "with-template": False},
        """<script>
  let name = 'world';
  
  function handleClick() {
    name = 'Svelte';
  }
  
  export let title;
</script>
⋮...
<style>
⋮...
</style>
⋮...""",
    ),
    (
        "svelte_with_style",
        "svelte",
        """<script>
  let theme = 'dark';
  let count = 0;
</script>
<style>
  .container {
    background: var(--bg-color);
    padding: 1rem;
  }
  
  .dark {
    --bg-color: #333;
    color: white;
  }
  
  .counter {
    font-size: 1.2rem;
    margin: 1rem 0;
  }
</style>
<div class="container {theme}">
  <p class="counter">Count: {count}</p>
  <button on:click={() => count++}>+</button>
</div>""",
        {"with-style": True, "with-template": False},
        """<script>
  let theme = 'dark';
  let count = 0;
</script>
⋮...
<style>
  .container {
    background: var(--bg-color);
    padding: 1rem;
  }
  
  .dark {
    --bg-color: #333;
    color: white;
  }
  
  .counter {
    font-size: 1.2rem;
    margin: 1rem 0;
  }
</style>
⋮...""",
    ),
    (
        "svelte_with_template",
        "svelte",
        """<script>
  let items = ['apple', 'banana', 'cherry'];
  let showList = true;
</script>
<style>
  .list-item { 
    margin: 0.5rem; 
    padding: 0.25rem;
  }
  
  .hidden { display: none; }
</style>
{#if showList}
  <ul>
    {#each items as item, index}
      <li class="list-item">{index + 1}: {item}</li>
    {/each}
  </ul>
{:else}
  <p>List is hidden</p>
{/if}
<button on:click={() => showList = !showList}>
  Toggle list
</button>""",
        {"with-style": False, "with-template": True},
        """<script>
  let items = ['apple', 'banana', 'cherry'];
  let showList = true;
</script>
<style>
⋮...
</style>
{#if showList}
  <ul>
    {#each items as item, index}
      <li class="list-item">{index + 1}: {item}</li>
    {/each}
  </ul>
{:else}
  <p>List is hidden</p>
{/if}
<button on:click={() => showList = !showList}>
  Toggle list
</button>""",
    ),
    (
        "svelte_typescript",
        "svelte",
        """<script lang="ts">
  interface User {
    name: string;
    age: number;
  }
  
  let user: User = {
    name: 'Alice',
    age: 30
  };
  
  function greet(user: User): string {
    return `Hello, ${user.name}!`;
  }
</script>
<style>
  .greeting { 
    color: blue; 
    font-weight: bold;
  }
</style>
<div class="greeting">
  {greet(user)}
  <p>Age: {user.age}</p>
</div>""",
        {"with-style": False, "with-template": False},
        """<script lang="ts">
  interface User {
    name: string;
    age: number;
  }
  
  let user: User = {
    name: 'Alice',
    age: 30
  };
  
  function greet(user: User): string {
    return `Hello, ${user.name}!`;
  }
</script>
⋮...
<style>
⋮...
</style>
⋮...""",
    ),
    (
        "svelte_all_sections",
        "svelte",
        """<script>
  export let title = 'Component';
  let count = 0;
  
  $: doubled = count * 2;
</script>
<style>
  .title { 
    font-size: 2rem; 
    color: #333;
  }
  
  .count { 
    color: #666; 
  }
</style>
<div>
  <h1 class="title">{title}</h1>
  <p class="count">Count: {count}</p>
  <p>Doubled: {doubled}</p>
  
  {#if count > 5}
    <p>High count!</p>
  {/if}
  
  <button on:click={() => count++}>Increment</button>
</div>""",
        {"with-style": True, "with-template": True},
        """<script>
  export let title = 'Component';
  let count = 0;
  
  $: doubled = count * 2;
</script>
<style>
  .title { 
    font-size: 2rem; 
    color: #333;
  }
  
  .count { 
    color: #666; 
  }
</style>
<div>
  <h1 class="title">{title}</h1>
  <p class="count">Count: {count}</p>
  <p>Doubled: {doubled}</p>
  
  {#if count > 5}
    <p>High count!</p>
  {/if}
  
  <button on:click={() => count++}>Increment</button>
</div>""",
    ),
    (
        "svelte_module_script",
        "svelte",
        """<script context="module">
  export function preload(page) {
    return { 
      props: { 
        data: page.query.data || 'default' 
      } 
    };
  }
  
  let moduleCounter = 0;
</script>
<script>
  export let data;
  
  let instanceCount = 0;
  
  function processData(raw) {
    return raw.toUpperCase();
  }
</script>
<style>
  .content { 
    padding: 1rem; 
  }
</style>
<div class="content">
  <p>Data: {processData(data)}</p>
  <p>Instance: {instanceCount}</p>
</div>""",
        {"with-style": False, "with-template": False},
        """<script context="module">
  export function preload(page) {
    return { 
      props: { 
        data: page.query.data || 'default' 
      } 
    };
  }
  
  let moduleCounter = 0;
</script>
⋮...
<script>
  export let data;
  
  let instanceCount = 0;
  
  function processData(raw) {
    return raw.toUpperCase();
  }
</script>
⋮...
<style>
⋮...
</style>
⋮...""",
    ),
]
@pytest.mark.parametrize("test_name,extension,code,config,expected_output", TEST_CASES)
def test_svelte_excerpting(test_name, extension, code, config, expected_output):
    source = Source(f"test_file.{extension}", code)
    excerpter = Sfc(config)
    result = excerpter.excerpt([source])
    assert len(result.excerpts) == 1
    assert result.excerpts[0].rel_path == f"test_file.{extension}"
    actual_output = result.excerpts[0].content.strip()
    assert actual_output == expected_output, (
        f"Mismatch in {test_name}:\nExpected:\n{expected_output}\n\nActual:\n{actual_output}"
    )
def test_svelte_empty_file():
    """Test handling of empty Svelte files."""
    source = Source("empty.svelte", "")
    excerpter = Sfc({"with-style": False, "with-template": False})
    result = excerpter.excerpt([source])
    # Based on the actual behavior, empty files produce "⋮..." content
    assert len(result.excerpts) == 1
    assert result.excerpts[0].content.strip() == "⋮..."
def test_svelte_non_svelte_file():
    """Test that non-Svelte files are ignored."""
    source = Source("test.js", "console.log('hello');")
    excerpter = Sfc({"with-style": False, "with-template": False})
    result = excerpter.excerpt([source])
    # Should not process non-Svelte files
    assert len(result.excerpts) == 0
def test_multiple_svelte_files():
    """Test processing multiple Svelte files."""
    sources = [
        Source("App.svelte", """<script>let name = 'App';</script><div>{name}</div>"""),
        Source("Button.svelte", """<script>export let label;</script><button>{label}</button>"""),
    ]
    excerpter = Sfc({"with-style": False, "with-template": False})
    result = excerpter.excerpt(sources)
    assert len(result.excerpts) == 2
    paths = {excerpt.rel_path for excerpt in result.excerpts}
    assert paths == {"App.svelte", "Button.svelte"}
```
--------------------------------------------------------------------------------
/src/llm_context/project_setup.py:
--------------------------------------------------------------------------------
```python
import shutil
from dataclasses import dataclass
from importlib import resources
from logging import INFO
from pathlib import Path
from typing import Any
from llm_context import lc_resources
from llm_context.lc_resources import rules, templates
from llm_context.rule import ProjectLayout, ToolConstants
from llm_context.state import StateStore
from llm_context.utils import Yaml, log
PROJECT_INFO: str = (
    "This project uses llm-context. For more information, visit: "
    "https://github.com/cyberchitta/llm-context.py or "
    "https://pypi.org/project/llm-context/"
)
SYSTEM_RULES = [
    "lc/exc-base.md",
    "lc/flt-base.md",
    "lc/flt-no-files.md",
    "lc/flt-no-full.md",
    "lc/flt-no-outline.md",
    "lc/ins-developer.md",
    "lc/ins-rule-framework.md",
    "lc/ins-rule-intro.md",
    "lc/prm-developer.md",
    "lc/prm-rule-create.md",
    "lc/sty-code.md",
    "lc/sty-javascript.md",
    "lc/sty-jupyter.md",
    "lc/sty-python.md",
]
@dataclass(frozen=True)
class Config:
    templates: dict[str, str]
    __info__: str = PROJECT_INFO
    @staticmethod
    def create_default() -> "Config":
        return Config(
            templates={
                "context": "lc/context.j2",
                "definitions": "lc/definitions.j2",
                "end-prompt": "lc/end-prompt.j2",
                "excerpts": "lc/excerpts.j2",
                "excluded": "lc/excluded.j2",
                "files": "lc/files.j2",
                "missing-files": "lc/missing-files.j2",
                "outlines": "lc/outlines.j2",
                "overview": "lc/overview.j2",
                "prompt": "lc/prompt.j2",
            },
        )
    def to_dict(self) -> dict[str, Any]:
        return {
            "__info__": self.__info__,
            "templates": self.templates,
        }
@dataclass(frozen=True)
class ProjectSetup:
    project_layout: ProjectLayout
    constants: ToolConstants
    @staticmethod
    def create(project_layout: ProjectLayout) -> "ProjectSetup":
        project_layout.templates_path.mkdir(parents=True, exist_ok=True)
        project_layout.rules_path.mkdir(parents=True, exist_ok=True)
        StateStore.delete_if_stale_rule(project_layout)
        start_state = (
            ToolConstants.create_null()
            if not project_layout.state_path.exists()
            else ToolConstants.from_dict(Yaml.load(project_layout.state_path))
        )
        return ProjectSetup(project_layout, start_state)
    def initialize(self):
        self._create_or_update_config_file()
        self._create_curr_ctx_file()
        self._update_templates_if_needed()
        self.create_state_file()
        self._create_or_update_ancillary_files()
        self._create_project_notes_file()
        self._create_user_notes_file()
        self._setup_default_rules()
    def _create_or_update_ancillary_files(self):
        if not self.project_layout.config_path.exists() or self.constants.needs_update:
            self._copy_resource(
                "dotgitignore", self.project_layout.project_config_path / ".gitignore"
            )
    def _create_or_update_config_file(self):
        if not self.project_layout.config_path.exists():
            self._create_config_file()
        elif self.constants.needs_update:
            self._update_config_file()
            self._clean_old_resources()
    def _create_curr_ctx_file(self):
        if not self.project_layout.state_store_path.exists():
            Yaml.save(self.project_layout.state_store_path, {"selections": {}})
    def _update_templates_if_needed(self):
        if self.constants.needs_update:
            config = Yaml.load(self.project_layout.config_path)
            for _, template_name in config["templates"].items():
                template_path = self.project_layout.get_template_path(template_name)
                self._copy_template(template_name, template_path)
            self._copy_excerpter_templates()
    def create_state_file(self):
        Yaml.save(self.project_layout.state_path, ToolConstants.create_new().to_dict())
    def _create_project_notes_file(self):
        notes_path = self.project_layout.project_notes_path
        if not notes_path.exists():
            notes_path.write_text(
                "## Project Notes\n\n"
                "Add project-specific notes, documentation and guidelines here.\n"
                "This file is stored in the project repository.\n"
            )
    def _create_user_notes_file(self):
        notes_path = self.project_layout.user_notes_path
        if not notes_path.exists():
            notes_path.parent.mkdir(parents=True, exist_ok=True)
            notes_path.write_text(
                "## User Notes\n\n"
                "Add Any personal notes or reminders about this or other projects here.\n"
                "This file is private and stored in your user config directory.\n"
            )
    def _update_config_file(self):
        new_config = Config.create_default().to_dict()
        Yaml.save(self.project_layout.config_path, new_config)
    def _create_config_file(self):
        Yaml.save(self.project_layout.config_path, Config.create_default().to_dict())
    def _copy_resource(self, resource_name: str, dest_path: Path):
        template_content = resources.files(lc_resources).joinpath(resource_name).read_text()
        dest_path.write_text(template_content)
        log(INFO, f"Updated resource {resource_name} to {dest_path}")
    def _copy_template(self, template_name: str, dest_path: Path):
        template_content = resources.files(templates).joinpath(template_name).read_text()
        dest_path.parent.mkdir(parents=True, exist_ok=True)
        dest_path.write_text(template_content)
        log(INFO, f"Updated template {template_name} to {dest_path}")
    def _copy_rule(self, rule_file: str, dest_path: Path):
        rules_path = resources.files(rules)
        rule_path = rules_path / rule_file
        rule_content = rule_path.read_text()
        dest_path.write_text(rule_content)
        log(INFO, f"Updated rule {rule_file} to {dest_path}")
    def _setup_default_rules(self):
        lc_rules_path = self.project_layout.rules_path / "lc"
        if not self.constants.needs_update and lc_rules_path.exists():
            return
        if lc_rules_path.exists():
            shutil.rmtree(lc_rules_path)
            log(INFO, "Refreshing system rules")
        lc_rules_path.mkdir(parents=True, exist_ok=True)
        for rule in SYSTEM_RULES:
            rule_path = self.project_layout.get_rule_path(rule)
            rule_path.parent.mkdir(parents=True, exist_ok=True)
            self._copy_rule(rule, rule_path)
    def _copy_excerpter_templates(self):
        excerpters_source = resources.files(templates).joinpath("lc").joinpath("excerpters")
        excerpters_dest = self.project_layout.templates_path / "lc" / "excerpters"
        if excerpters_source.is_dir():
            excerpters_dest.mkdir(parents=True, exist_ok=True)
            for template_file in excerpters_source.iterdir():
                if template_file.is_file() and template_file.name.endswith(".j2"):
                    dest_file = excerpters_dest / template_file.name
                    dest_file.write_text(template_file.read_text())
                    log(INFO, f"Updated excerpter template {template_file.name} to {dest_file}")
    def _clean_old_resources(self):
        templates_path = self.project_layout.templates_path
        if templates_path.exists():
            for template_file in templates_path.rglob("lc-*.j2"):
                template_file.unlink()
                log(INFO, f"Removed old template {template_file}")
        rules_path = self.project_layout.rules_path
        if rules_path.exists():
            for rule_file in rules_path.rglob("lc-*.md"):
                rule_file.unlink()
                log(INFO, f"Removed old rule {rule_file}")
```
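`ProjectSetup` is what project initialization runs: it creates `.llm-context/`, writes `config.yaml`, `curr_ctx.yaml`, and `lc-state.yaml`, copies the bundled templates and `lc/` system rules, and refreshes them when `ToolConstants` reports an update is needed. A minimal sketch of running the setup directly, assuming a hypothetical project root:

```python
from pathlib import Path

from llm_context.project_setup import ProjectSetup
from llm_context.utils import ProjectLayout

layout = ProjectLayout(Path("/home/user/myproject"))  # hypothetical root
setup = ProjectSetup.create(layout)  # ensures rules/ and templates/ dirs exist
setup.initialize()  # writes config, state, and notes files; copies system rules
```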
--------------------------------------------------------------------------------
/.llm-context/rules/lc/ins-rule-framework.md:
--------------------------------------------------------------------------------
```markdown
---
description: Provides a decision framework, semantics, and best practices for creating task-focused rules, including file selection patterns and composition guidelines. Use as core guidance for building custom rules for context generation.
---
## Decision Framework
Create task-focused rules by selecting the minimal set of files needed for your objective, using the following rule categories:
- **Prompt Rules (`prm-`)**: Generate project contexts (e.g., `lc/prm-developer` for code files, `lc/prm-rule-create` for rule creation tasks).
- **Filter Rules (`flt-`)**: Control file inclusion/exclusion (e.g., `lc/flt-base` for standard exclusions, `lc/flt-no-files` for minimal contexts).
- **Excerpting Rules (`exc-`)**: Configure code outlining and structure extraction (e.g., `lc/exc-base` for standard code outlining).
- **Instruction Rules (`ins-`)**: Provide guidance (e.g., `lc/ins-developer` for developer guidelines, `lc/ins-rule-intro` for chat-based rule creation).
- **Style Rules (`sty-`)**: Enforce coding standards (e.g., `lc/sty-python` for Python-specific style, `lc/sty-code` for universal principles).
### Quick Decision Guide
- **Need detailed code implementations?** → Use `lc/prm-developer` for full content or specific `also-include` patterns.
- **Need only code structure/outlines?** → Use `lc/flt-no-full` with `also-include` for excerpted files.
- **Need coding style guidelines?** → Include `lc/sty-code`, `lc/sty-python`, etc., for relevant languages.
- **Need minimal context (metadata/notes)?** → Use `lc/flt-no-files`.
- **Need precise file control over a small set?** → Use `lc/flt-no-files` with explicit `also-include` patterns.
- **Need rule creation guidance?** → Compose with `lc/ins-rule-intro` or this rule (`lc/ins-rule-framework`).
## Code Outlining & Excerpting System
The excerpting system provides structured views of your code through different modes:
### Code Outlining (Primary Mode)
**Files Supported**: `.c`, `.cc`, `.cpp`, `.cs`, `.el`, `.ex`, `.elm`, `.go`, `.java`, `.js`, `.mjs`, `.php`, `.py`, `.rb`, `.rs`, `.ts`
Code outlining extracts function/class definitions and key structural elements, showing:
- Function signatures with `█` markers for definitions
- Class declarations and methods
- Important structural code with `│` continuation markers
- Condensed view with `⋮...` for omitted sections
### SFC Excerpting (Single File Components)
**Files Supported**: `.svelte`, `.vue`
Extracts script sections from Single File Components while preserving structure:
- Always includes `<script>` sections
- Optionally includes `<style>` sections (configurable)
- Optionally includes template logic (configurable)
- Uses `⋮...` markers for excluded sections
### Advanced Excerpting Configuration
Control excerpting behavior through `excerpt-config`:
```yaml
excerpt-config:
  sfc:
    with-style: false # Exclude CSS/styling sections
    with-template: true # Include template markup
```
### Required Composition
**All rules must compose `lc/exc-base`** to enable code outlining functionality. The excerpting system requires excerpt-modes configuration - without it, selected files cannot be processed for structural views.
- Always include `compose: {excerpters: [lc/exc-base]}` in your rules
- Advanced users can customize `excerpt-modes` patterns if needed
- Rules without excerpt configuration will fail when processing excerpted files
## Rule System Semantics
### File Selection
- **`also-include: {full-files: [...], excerpted-files: [...]}`**: Specify files for full content or excerpts using root-relative paths (excluding project name).
  - Example: `["/nbs/03_clustering.md", "/src/**/*.py"]` to include specific files or patterns.
- **`implementations: [[file, definition], ...]`**: Extract specific function/class implementations.
  - Example: `["/src/utils/helpers.js", "validateToken"]` to retrieve a specific function.
### Filtering (gitignore-style patterns)
- **`gitignores: {full-files: [...], excerpted-files: [...], overview-files: [...]}`**: Exclude files using patterns.
  - Use `lc/flt-base` for standard exclusions (e.g., binaries, logs).
  - Use `lc/flt-no-full` or `lc/flt-no-outline` to exclude all full or excerpted files.
- **`limit-to: {full-files: [...], excerpted-files: [...], overview-files: [...]}`**: Restrict selections to specific patterns.
  - **Important**: When composing rules, only the first `limit-to` clause for each key is used. Subsequent clauses are ignored with a warning.
  - Example: `["src/api/**"]` to limit to API-related files.
**Path Format**: All patterns must be relative to project root, starting with `/` but excluding the project name:
- ✅ `"/src/components/**"` (correct relative path)
- ❌ `"/myproject/src/components/**"` (includes project name)
- ✅ `"/.llm-context/rules/**"` (correct for rule files)
**Important**: `limit-to` and `also-include` must match file paths, not directories:
- ✅ `"src/**"` (matches all files in src)
- ❌ `"src/"` (directory pattern, won't match files)
### Composition
- **`compose: {filters: [...], excerpters: [...]}`**: Combine rules for modular context generation.
  - **`filters`**: Merge `gitignores`, `limit-to`, and `also-include` from other `flt-` rules.
  - **`excerpters`**: Merge `excerpt-modes` and `excerpt-config` from `exc-` rules.
  - Example: Compose `lc/flt-base` + `lc/exc-base` for standard code outlining.
### Overview Modes
- **`overview: "full"`**: Default. Shows complete directory tree with all files (✓ full, E excerpted, ✗ excluded).
- **`overview: "focused"`**: Groups directories, showing details only for those with included files. Use for large repositories (1000+ files).
## Example Advanced Rule
```yaml
---
description: Focused context for debugging API-related code with SFC support
overview: full
compose:
  filters: [lc/flt-base]
  excerpters: [lc/exc-base]
gitignores:
  full-files: ["**/test/**", "**/*.test.*"]
limit-to:
  excerpted-files: ["/src/api/**", "/src/components/**"]
also-include:
  full-files: ["/src/api/auth.js"]
excerpt-modes:
  "**/*.svelte": "sfc"
excerpt-config:
  sfc:
    with-style: false
    with-template: true
implementations:
  - ["/src/utils/helpers.js", "validateToken"]
---
```
This rule:
- Uses standard exclusions and code outlining
- Excludes test files from full content
- Limits excerpts to API and component files
- Configures SFC excerpting for Svelte files
- Includes specific auth file and function implementation
## Implementation
Create a new user rule in `.llm-context/rules/`:
```bash
cat > .llm-context/rules/tmp-prm-task-name.md << 'EOF'
---
description: Brief description of the task focus
overview: full
compose:
  filters: [lc/flt-no-files]
  excerpters: [lc/exc-base]
also-include:
  full-files:
    - "/path/to/file1.ext"
    - "/path/to/file2.ext"
  excerpted-files:
    - "/path/to/outline1.ext"
---
## Task-Specific Context
Add optional task-specific instructions here.
EOF
lc-set-rule tmp-prm-task-name
lc-select
lc-context
```
## Best Practices
- **Start Minimal**: Use `lc/flt-no-files` with explicit `also-include` for precise control, or compose with `lc/flt-base` for broader patterns.
- **Use Descriptive Names**: Prefix temporary rules with `tmp-prm-` (e.g., `tmp-prm-api-debug`).
- **Leverage Categories**:
  - Use `prm-` rules for task-specific contexts
  - Use `flt-` rules for file control
  - Use `exc-` rules for excerpting configuration
  - Include `ins-` rules for developer guidelines
  - Reference `sty-` rules for style enforcement
- **Configure Excerpting**: Use `lc/exc-base` for standard code outlining, customize `excerpt-modes` for specific file types.
- **Document Choices**: Explain why files are included in the rule's content section.
- **Iterate**: Refine rules based on task needs.
- **Prefer Full Overview**: Use `overview: "full"` unless repository is very large (1000+ files).
- **Aim for Efficiency**: Target 10-50% of full project context size for optimal performance.
**Goal**: Create focused, reusable rules that minimize context while maximizing task effectiveness through intelligent code outlining and excerpting.
```
--------------------------------------------------------------------------------
/src/llm_context/lc_resources/rules/lc/ins-rule-framework.md:
--------------------------------------------------------------------------------
```markdown
---
description: Provides a decision framework, semantics, and best practices for creating task-focused rules, including file selection patterns and composition guidelines. Use as core guidance for building custom rules for context generation.
---
## Decision Framework
Create task-focused rules by selecting the minimal set of files needed for your objective, using the following rule categories:
- **Prompt Rules (`prm-`)**: Generate project contexts (e.g., `lc/prm-developer` for code files, `lc/prm-rule-create` for rule creation tasks).
- **Filter Rules (`flt-`)**: Control file inclusion/exclusion (e.g., `lc/flt-base` for standard exclusions, `lc/flt-no-files` for minimal contexts).
- **Excerpting Rules (`exc-`)**: Configure code outlining and structure extraction (e.g., `lc/exc-base` for standard code outlining).
- **Instruction Rules (`ins-`)**: Provide guidance (e.g., `lc/ins-developer` for developer guidelines, `lc/ins-rule-intro` for chat-based rule creation).
- **Style Rules (`sty-`)**: Enforce coding standards (e.g., `lc/sty-python` for Python-specific style, `lc/sty-code` for universal principles).
### Quick Decision Guide
- **Need detailed code implementations?** → Use `lc/prm-developer` for full content or specific `also-include` patterns.
- **Need only code structure/outlines?** → Use `lc/flt-no-full` with `also-include` for excerpted files.
- **Need coding style guidelines?** → Include `lc/sty-code`, `lc/sty-python`, etc., for relevant languages.
- **Need minimal context (metadata/notes)?** → Use `lc/flt-no-files`.
- **Need precise file control over a small set?** → Use `lc/flt-no-files` with explicit `also-include` patterns.
- **Need rule creation guidance?** → Compose with `lc/ins-rule-intro` or this rule (`lc/ins-rule-framework`).
## Code Outlining & Excerpting System
The excerpting system provides structured views of your code through different modes:
### Code Outlining (Primary Mode)
**Files Supported**: `.c`, `.cc`, `.cpp`, `.cs`, `.el`, `.ex`, `.elm`, `.go`, `.java`, `.js`, `.mjs`, `.php`, `.py`, `.rb`, `.rs`, `.ts`
Code outlining extracts function/class definitions and key structural elements, showing:
- Function signatures with `█` markers for definitions
- Class declarations and methods
- Important structural code with `│` continuation markers
- Condensed view with `⋮...` for omitted sections
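For instance, a small Python module containing one function and one class might be outlined roughly like this (illustrative output):
```
⋮...
█def factorial(n: int) -> int:
⋮...
█class MathOperations:
⋮...
█    def square(x: float) -> float:
⋮...
```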
### SFC Excerpting (Single File Components)
**Files Supported**: `.svelte`, `.vue`
Extracts script sections from Single File Components while preserving structure:
- Always includes `<script>` sections
- Optionally includes `<style>` sections (configurable)
- Optionally includes template logic (configurable)
- Uses `⋮...` markers for excluded sections
### Advanced Excerpting Configuration
Control excerpting behavior through `excerpt-config`:
```yaml
excerpt-config:
  sfc:
    with-style: false # Exclude CSS/styling sections
    with-template: true # Include template markup
```
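As a rough, assumed illustration (not actual tool output), a Svelte component excerpted with the settings above would keep its `<script>` section and template markup, with the excluded `<style>` section replaced by a `⋮...` marker:
```
<script>
  export let value;
</script>
<p>{value}</p>
⋮...
```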
### Required Composition
**All rules must compose `lc/exc-base`** to enable code outlining functionality. The excerpting system requires an `excerpt-modes` configuration; without it, selected files cannot be processed for structural views.
- Always include `compose: {excerpters: [lc/exc-base]}` in your rules (a minimal sketch follows this list)
- Advanced users can customize `excerpt-modes` patterns if needed
- Rules without excerpt configuration will fail when processing excerpted files
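A minimal frontmatter block that satisfies this requirement (the file path is a placeholder) might be:
```yaml
---
description: Minimal rule with code outlining enabled
compose:
  excerpters: [lc/exc-base]
also-include:
  excerpted-files: ["/src/example.py"]
---
```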
## Rule System Semantics
### File Selection
- **`also-include: {full-files: [...], excerpted-files: [...]}`**: Specify files for full content or excerpts using root-relative paths (excluding project name).
  - Example: `["/nbs/03_clustering.md", "/src/**/*.py"]` to include specific files or patterns.
- **`implementations: [[file, definition], ...]`**: Extract specific function/class implementations.
  - Example: `["/src/utils/helpers.js", "validateToken"]` to retrieve a specific function (combined with `also-include` in the sketch below).
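Taken together, and reusing the illustrative paths above, these fields might appear in a rule as:
```yaml
also-include:
  full-files: ["/nbs/03_clustering.md"]
  excerpted-files: ["/src/**/*.py"]
implementations:
  - ["/src/utils/helpers.js", "validateToken"]
```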
### Filtering (gitignore-style patterns)
- **`gitignores: {full-files: [...], excerpted-files: [...], overview-files: [...]}`**: Exclude files using patterns.
  - Use `lc/flt-base` for standard exclusions (e.g., binaries, logs).
  - Use `lc/flt-no-full` or `lc/flt-no-outline` to exclude all full or excerpted files.
- **`limit-to: {full-files: [...], excerpted-files: [...], overview-files: [...]}`**: Restrict selections to specific patterns.
  - **Important**: When composing rules, only the first `limit-to` clause for each key is used. Subsequent clauses are ignored with a warning.
  - Example: `["src/api/**"]` to limit to API-related files.
**Path Format**: All patterns must be relative to the project root, starting with `/` but excluding the project name:
- ✅ `"/src/components/**"` (correct relative path)
- ❌ `"/myproject/src/components/**"` (includes project name)
- ✅ `"/.llm-context/rules/**"` (correct for rule files)
**Important**: `limit-to` and `also-include` must match file paths, not directories (a sketch follows these examples):
- ✅ `"/src/**"` (matches all files in src)
- ❌ `"/src/"` (directory pattern, won't match files)
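For example, a hypothetical filter-style rule combining these fields with root-relative, file-matching patterns could look like:
```yaml
---
description: Illustrative filter rule for API files
gitignores:
  full-files: ["**/*.log"]
limit-to:
  excerpted-files: ["/src/api/**"]
also-include:
  full-files: ["/src/api/auth.js"]
---
```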
### Composition
- **`compose: {filters: [...], excerpters: [...]}`**: Combine rules for modular context generation.
  - **`filters`**: Merge `gitignores`, `limit-to`, and `also-include` from other `flt-` rules.
  - **`excerpters`**: Merge `excerpt-modes` and `excerpt-config` from `exc-` rules.
  - Example: Compose `lc/flt-base` + `lc/exc-base` for standard code outlining.
### Overview Modes
- **`overview: "full"`**: Default. Shows complete directory tree with all files (✓ full, E excerpted, ✗ excluded).
- **`overview: "focused"`**: Groups directories, showing details only for those with included files. Use for large repositories (1000+ files).
## Example Advanced Rule
```yaml
---
description: Focused context for debugging API-related code with SFC support
overview: full
compose:
  filters: [lc/flt-base]
  excerpters: [lc/exc-base]
gitignores:
  full-files: ["**/test/**", "**/*.test.*"]
limit-to:
  excerpted-files: ["/src/api/**", "/src/components/**"]
also-include:
  full-files: ["/src/api/auth.js"]
excerpt-modes:
  "**/*.svelte": "sfc"
excerpt-config:
  sfc:
    with-style: false
    with-template: true
implementations:
  - ["/src/utils/helpers.js", "validateToken"]
---
```
This rule:
- Uses standard exclusions and code outlining
- Excludes test files from full content
- Limits excerpts to API and component files
- Configures SFC excerpting for Svelte files
- Includes specific auth file and function implementation
## Implementation
Create a new user rule in `.llm-context/rules/`:
```bash
cat > .llm-context/rules/tmp-prm-task-name.md << 'EOF'
---
description: Brief description of the task focus
overview: full
compose:
  filters: [lc/flt-no-files]
  excerpters: [lc/exc-base]
also-include:
  full-files:
    - "/path/to/file1.ext"
    - "/path/to/file2.ext"
  excerpted-files:
    - "/path/to/outline1.ext"
---
## Task-Specific Context
Add optional task-specific instructions here.
EOF
lc-set-rule tmp-prm-task-name
lc-select
lc-context
```
## Best Practices
- **Start Minimal**: Use `lc/flt-no-files` with explicit `also-include` for precise control, or compose with `lc/flt-base` for broader patterns.
- **Use Descriptive Names**: Prefix temporary rules with `tmp-prm-` (e.g., `tmp-prm-api-debug`).
- **Leverage Categories**:
  - Use `prm-` rules for task-specific contexts
  - Use `flt-` rules for file control
  - Use `exc-` rules for excerpting configuration
  - Include `ins-` rules for developer guidelines
  - Reference `sty-` rules for style enforcement
- **Configure Excerpting**: Use `lc/exc-base` for standard code outlining; customize `excerpt-modes` for specific file types.
- **Document Choices**: Explain why files are included in the rule's content section.
- **Iterate**: Refine rules based on task needs.
- **Prefer Full Overview**: Use `overview: "full"` unless the repository is very large (1000+ files).
- **Aim for Efficiency**: Target 10-50% of the full project context size for optimal performance.
**Goal**: Create focused, reusable rules that minimize context while maximizing task effectiveness through intelligent code outlining and excerpting.
```
--------------------------------------------------------------------------------
/src/llm_context/file_selector.py:
--------------------------------------------------------------------------------
```python
import os
from dataclasses import dataclass
from logging import ERROR, WARNING
from pathlib import Path
from typing import Optional
from pathspec import GitIgnoreSpec  # type: ignore
from llm_context.context_spec import ContextSpec
from llm_context.rule import IGNORE_NOTHING, INCLUDE_ALL, Rule
from llm_context.state import FileSelection
from llm_context.utils import PathConverter, log, safe_read_file
@dataclass(frozen=True)
class PathspecIgnorer:
    pathspec: GitIgnoreSpec
    @staticmethod
    def create(ignore_patterns: list[str]) -> "PathspecIgnorer":
        pathspec = GitIgnoreSpec.from_lines(ignore_patterns)
        return PathspecIgnorer(pathspec)
    def ignore(self, path: str) -> bool:
        assert path not in ("/", ""), "Root directory cannot be an input for ignore method"
        return self.pathspec.match_file(path)
@dataclass(frozen=True)
class GitIgnorer:
    ignorer_data: list[tuple[str, PathspecIgnorer]]
    @staticmethod
    def from_git_root(root_dir: str, xtra_root_patterns: list[str] = []) -> "GitIgnorer":
        ignorer_data = []
        if xtra_root_patterns:
            ignorer_data.append(("/", PathspecIgnorer.create(xtra_root_patterns)))
        gitignores = GitIgnorer._collect_gitignores(root_dir)
        for relative_path, patterns in gitignores:
            ignorer_data.append((relative_path, PathspecIgnorer.create(patterns)))
        start_idx = 1 if xtra_root_patterns else 0
        if len(ignorer_data) > start_idx:
            prefix_data = ignorer_data[:start_idx]
            gitignore_data = ignorer_data[start_idx:]
            gitignore_data.sort(key=lambda x: (-x[0].count("/"), x[0]))
            ignorer_data = prefix_data + gitignore_data
        return GitIgnorer(ignorer_data)
    @staticmethod
    def _collect_gitignores(top: str) -> list[tuple[str, list[str]]]:
        gitignores = []
        for root, _, files in os.walk(top):
            if ".gitignore" in files:
                content = safe_read_file(os.path.join(root, ".gitignore"))
                if content:
                    patterns = content.splitlines()
                    relpath = os.path.relpath(root, top)
                    fixpath = "/" if relpath == "." else f"/{relpath}"
                    gitignores.append((fixpath, patterns))
        return gitignores
    def ignore(self, path: str) -> bool:
        assert path not in ("/", ""), "Root directory cannot be an input for ignore method"
        for prefix, ignorer in self.ignorer_data:
            if path.startswith(prefix):
                if prefix == "/":
                    test_path = path[1:]
                else:
                    test_path = path[len(prefix) :].lstrip("/")
                if test_path and ignorer.ignore(test_path):
                    return True
        return False
@dataclass(frozen=True)
class IncludeFilter:
    pathspec: GitIgnoreSpec
    @staticmethod
    def create(include_patterns: list[str]) -> "IncludeFilter":
        pathspec = GitIgnoreSpec.from_lines(include_patterns)
        return IncludeFilter(pathspec)
    def include(self, path: str) -> bool:
        assert path not in ("/", ""), "Root directory cannot be an input for include method"
        return self.pathspec.match_file(path)
@dataclass(frozen=True)
class FileSelector:
    root_path: str
    ignorer: GitIgnorer
    converter: PathConverter
    limit_filter: IncludeFilter
    also_include_filter: IncludeFilter
    since: Optional[float]
    @staticmethod
    def create_universal(root_path: Path) -> "FileSelector":
        return FileSelector.create_ignorer(root_path, IGNORE_NOTHING)
    @staticmethod
    def create_ignorer(root_path: Path, pathspecs: list[str]) -> "FileSelector":
        return FileSelector.create(root_path, pathspecs, INCLUDE_ALL, [])
    @staticmethod
    def create(
        root_path: Path,
        ignore_pathspecs: list[str],
        limit_to_pathspecs: list[str],
        also_include_pathspecs: list[str],
        since: Optional[float] = None,
    ) -> "FileSelector":
        ignorer = GitIgnorer.from_git_root(str(root_path), ignore_pathspecs)
        converter = PathConverter.create(root_path)
        limit_filter = IncludeFilter.create(limit_to_pathspecs)
        also_include_filter = IncludeFilter.create(also_include_pathspecs)
        return FileSelector(
            str(root_path), ignorer, converter, limit_filter, also_include_filter, since
        )
    def filter_files(self, files: list[str]) -> list[str]:
        return [f for f in files if f in set(self.get_files())]
    def get_files(self) -> list[str]:
        files = list(set(self.traverse(self.root_path) + self.also_traverse(self.root_path)))
        return [f for f in files if Path(f).stat().st_mtime > self.since] if self.since else files
    def get_relative_files(self) -> list[str]:
        return sorted(self.converter.to_relative(self.get_files()))
    def traverse(self, current_dir: str) -> list[str]:
        entries = os.listdir(current_dir)
        relative_current_dir = os.path.relpath(current_dir, self.root_path)
        dirs = [
            e_path
            for e in entries
            if (e_path := os.path.join(current_dir, e))
            and os.path.isdir(e_path)
            and (not self.ignorer.ignore(self._relative_path(relative_current_dir, e)))
        ]
        files = [
            e_path
            for e in entries
            if (e_path := os.path.join(current_dir, e))
            and not os.path.isdir(e_path)
            and self._should_include_file(self._relative_path(relative_current_dir, e))
        ]
        subdir_files = [file for d in dirs for file in self.traverse(d)]
        return files + subdir_files
    def _should_include_file(self, path: str) -> bool:
        assert path not in ("/", ""), "Root directory cannot be an input for filtering"
        if self.ignorer.ignore(path):
            return False
        return self.limit_filter.include(path)
    def also_traverse(self, current_dir: str) -> list[str]:
        if not self.also_include_filter.pathspec.patterns:
            return []
        entries = os.listdir(current_dir)
        relative_current_dir = os.path.relpath(current_dir, self.root_path)
        dirs = [
            e_path
            for e in entries
            if (e_path := os.path.join(current_dir, e)) and os.path.isdir(e_path)
        ]
        files = [
            e_path
            for e in entries
            if (e_path := os.path.join(current_dir, e))
            and not os.path.isdir(e_path)
            and self.also_include_filter.include(self._relative_path(relative_current_dir, e))
        ]
        subdir_files = [file for d in dirs for file in self.also_traverse(d)]
        return files + subdir_files
    def _relative_path(self, dir: str, filename: str) -> str:
        return f"/{os.path.normpath(os.path.join(dir, filename))}"
@dataclass(frozen=True)
class ContextSelector:
    full_selector: FileSelector
    excerpted_selector: FileSelector
    rule: Rule
    @staticmethod
    def create(spec: ContextSpec, since: Optional[float] = None) -> "ContextSelector":
        root_path = spec.project_root_path
        rule = spec.rule
        full_ignore_pathspecs = rule.get_ignore_patterns("full")
        excerpted_ignore_pathspecs = rule.get_ignore_patterns("excerpted")
        full_limit_to_pathspecs = rule.get_limit_to_patterns("full")
        excerpted_limit_to_pathspecs = rule.get_limit_to_patterns("excerpted")
        full_also_include_pathspecs = rule.get_also_include_patterns("full")
        excerpted_also_include_pathspecs = rule.get_also_include_patterns("excerpted")
        full_selector = FileSelector.create(
            root_path,
            full_ignore_pathspecs,
            full_limit_to_pathspecs,
            full_also_include_pathspecs,
            since,
        )
        excerpted_selector = FileSelector.create(
            root_path,
            excerpted_ignore_pathspecs,
            excerpted_limit_to_pathspecs,
            excerpted_also_include_pathspecs,
            since,
        )
        return ContextSelector(full_selector, excerpted_selector, rule)
    def select_full_files(self, file_selection: FileSelection) -> "FileSelection":
        full_files = self.full_selector.get_relative_files()
        excerpted_files = file_selection.excerpted_files
        updated_excerpted_files = [f for f in excerpted_files if f not in set(full_files)]
        if len(excerpted_files) != len(updated_excerpted_files):
            log(
                WARNING,
                "Some files previously in excerpted selection have been moved to full selection.",
            )
        return FileSelection._create(
            file_selection.rule_name, full_files, updated_excerpted_files, file_selection.timestamp
        )
    def select_excerpted_files(self, file_selection: FileSelection) -> "FileSelection":
        full_files = file_selection.full_files
        if not full_files:
            log(
                WARNING,
                "No full files have been selected. Consider running full file selection first.",
            )
        all_excerpted_files = self.excerpted_selector.get_relative_files()
        excerpted_files = [f for f in all_excerpted_files if f not in set(full_files)]
        return FileSelection._create(
            file_selection.rule_name, full_files, excerpted_files, file_selection.timestamp
        )
    def select_excerpted_only(self, file_selection: FileSelection) -> "FileSelection":
        all_excerpted_files = self.excerpted_selector.get_relative_files()
        supported_excerpted = [f for f in all_excerpted_files if self.rule.get_excerpt_mode(f)]
        return FileSelection._create(
            file_selection.rule_name, [], supported_excerpted, file_selection.timestamp
        )
```
--------------------------------------------------------------------------------
/tests/test_outline_languages.py:
--------------------------------------------------------------------------------
```python
import pytest
from llm_context.excerpters.code_outliner import CodeOutliner
from llm_context.excerpters.parser import ASTFactory, Source
from llm_context.excerpters.tagger import ASTBasedTagger
TEST_CASES = [
    (
        "python",
        "py",
        """
def factorial(n: int) -> int:
    if n == 0 or n == 1:
        return 1
    return n * factorial(n - 1)
class MathOperations:
    @staticmethod
    def square(x: float) -> float:
        return x * x
    def __init__(self, value: float):
        self.value = value
    def cube(self) -> float:
        return self.value ** 3
if __name__ == "__main__":
    math_op = MathOperations(3)
    print(f"Factorial of 5: {factorial(5)}")
    print(f"Square of 4: {MathOperations.square(4)}")
    print(f"Cube of 3: {math_op.cube()}")
""",
        """⋮...
█def factorial(n: int) -> int:
⋮...
█class MathOperations:
⋮...
█    def square(x: float) -> float:
⋮...
█    def __init__(self, value: float):
⋮...
█    def cube(self) -> float:
⋮...
""".strip(),
    ),
    (
        "javascript",
        "js",
        """
function factorial(n) {
    if (n === 0 || n === 1) return 1;
    return n * factorial(n - 1);
}
class MathOperations {
    static square(x) {
        return x * x;
    }
    constructor(value) {
        this.value = value;
    }
    cube() {
        return Math.pow(this.value, 3);
    }
}
const mathOp = new MathOperations(3);
console.log(`Factorial of 5: ${factorial(5)}`);
console.log(`Square of 4: ${MathOperations.square(4)}`);
console.log(`Cube of 3: ${mathOp.cube()}`);
""",
        """⋮...
█function factorial(n) {
⋮...
█class MathOperations {
█    static square(x) {
⋮...
█    cube() {
⋮...
""".strip(),
    ),
    (
        "typescript",
        "ts",
        """
function factorial(n: number): number {
    if (n === 0 || n === 1) return 1;
    return n * factorial(n - 1);
}
interface IMathOperations {
    cube(): number;
}
class MathOperations implements IMathOperations {
    static square(x: number): number {
        return x * x;
    }
    constructor(private value: number) {}
    cube(): number {
        return Math.pow(this.value, 3);
    }
}
const mathOp: IMathOperations = new MathOperations(3);
console.log(`Factorial of 5: ${factorial(5)}`);
console.log(`Square of 4: ${MathOperations.square(4)}`);
console.log(`Cube of 3: ${mathOp.cube()}`);
""",
        """⋮...
█function factorial(n: number): number {
⋮...
█interface IMathOperations {
█    cube(): number;
⋮...
█class MathOperations implements IMathOperations {
█    static square(x: number): number {
⋮...
█    constructor(private value: number) {}
⋮...
█    cube(): number {
⋮...
""".strip(),
    ),
    (
        "java",
        "java",
        """
public class MathOperations {
    public static int factorial(int n) {
        if (n == 0 || n == 1) return 1;
        return n * factorial(n - 1);
    }
    public static double square(double x) {
        return x * x;
    }
    private final double value;
    public MathOperations(double value) {
        this.value = value;
    }
    public double cube() {
        return Math.pow(this.value, 3);
    }
    public static void main(String[] args) {
        MathOperations mathOp = new MathOperations(3);
        System.out.println("Factorial of 5: " + factorial(5));
        System.out.println("Square of 4: " + square(4));
        System.out.println("Cube of 3: " + mathOp.cube());
    }
}
""",
        """⋮...
█public class MathOperations {
█    public static int factorial(int n) {
⋮...
█    public static double square(double x) {
⋮...
█    public double cube() {
⋮...
█    public static void main(String[] args) {
⋮...
""".strip(),
    ),
    (
        "c",
        "c",
        """
#include <stdio.h>
#include <math.h>
int factorial(int n) {
    if (n == 0 || n == 1) return 1;
    return n * factorial(n - 1);
}
double square(double x) {
    return x * x;
}
typedef struct {
    double value;
} MathOperations;
MathOperations create_math_operations(double value) {
    MathOperations mo = {value};
    return mo;
}
double cube(MathOperations* mo) {
    return pow(mo->value, 3);
}
int main() {
    MathOperations mo = create_math_operations(3);
    printf("Factorial of 5: %d\\n", factorial(5));
    printf("Square of 4: %f\\n", square(4));
    printf("Cube of 3: %f\\n", cube(&mo));
    return 0;
}
""",
        """⋮...
█int factorial(int n) {
⋮...
█double square(double x) {
⋮...
█} MathOperations;
⋮...
█MathOperations create_math_operations(double value) {
⋮...
█double cube(MathOperations* mo) {
⋮...
█int main() {
⋮...
""".strip(),
    ),
    (
        "cpp",
        "cpp",
        """
#include <iostream>
#include <cmath>
int factorial(int n) {
    if (n == 0 || n == 1) return 1;
    return n * factorial(n - 1);
}
class MathOperations {
public:
    static double square(double x) {
        return x * x;
    }
    MathOperations(double value) : value(value) {}
    double cube() const {
        return std::pow(value, 3);
    }
private:
    double value;
};
int main() {
    MathOperations mo(3);
    std::cout << "Factorial of 5: " << factorial(5) << std::endl;
    std::cout << "Square of 4: " << MathOperations::square(4) << std::endl;
    std::cout << "Cube of 3: " << mo.cube() << std::endl;
    return 0;
}
""",
        """⋮...
█int factorial(int n) {
⋮...
█class MathOperations {
⋮...
█    static double square(double x) {
⋮...
█    MathOperations(double value) : value(value) {}
⋮...
█    double cube() const {
⋮...
█int main() {
⋮...
""".strip(),
    ),
    (
        "csharp",
        "cs",
        """
using System;
public class MathOperations
{
    public static int Factorial(int n)
    {
        if (n == 0 || n == 1) return 1;
        return n * Factorial(n - 1);
    }
    public static double Square(double x) => x * x;
    private readonly double _value;
    public MathOperations(double value)
    {
        _value = value;
    }
    public double Cube() => Math.Pow(_value, 3);
    public static void Main(string[] args)
    {
        var mathOp = new MathOperations(3);
        Console.WriteLine($"Factorial of 5: {Factorial(5)}");
        Console.WriteLine($"Square of 4: {Square(4)}");
        Console.WriteLine($"Cube of 3: {mathOp.Cube()}");
    }
}
""",
        """⋮...
█public class MathOperations
⋮...
█    public static int Factorial(int n)
⋮...
█    public static double Square(double x) => x * x;
⋮...
█    public double Cube() => Math.Pow(_value, 3);
⋮...
█    public static void Main(string[] args)
⋮...
""".strip(),
    ),
    (
        "ruby",
        "rb",
        """
def factorial(n)
  return 1 if n == 0 || n == 1
  n * factorial(n - 1)
end
class MathOperations
  def self.square(x)
    x * x
  end
  def initialize(value)
    @value = value
  end
  def cube
    @value ** 3
  end
end
math_op = MathOperations.new(3)
puts "Factorial of 5: #{factorial(5)}"
puts "Square of 4: #{MathOperations.square(4)}"
puts "Cube of 3: #{math_op.cube}"
""",
        """⋮...
█def factorial(n)
⋮...
█class MathOperations
█  def self.square(x)
⋮...
█  def initialize(value)
⋮...
█  def cube
⋮...
""".strip(),
    ),
    (
        "go",
        "go",
        """
package main
import (
    "fmt"
    "math"
)
func factorial(n int) int {
    if n == 0 || n == 1 {
        return 1
    }
    return n * factorial(n-1)
}
type MathOperations struct {
    value float64
}
func (mo MathOperations) Cube() float64 {
    return math.Pow(mo.value, 3)
}
func Square(x float64) float64 {
    return x * x
}
func main() {
    mathOp := MathOperations{value: 3}
    fmt.Printf("Factorial of 5: %d\\n", factorial(5))
    fmt.Printf("Square of 4: %.2f\\n", Square(4))
    fmt.Printf("Cube of 3: %.2f\\n", mathOp.Cube())
}
""",
        """⋮...
█func factorial(n int) int {
⋮...
█type MathOperations struct {
⋮...
█func (mo MathOperations) Cube() float64 {
⋮...
█func Square(x float64) float64 {
⋮...
█func main() {
⋮...
""".strip(),
    ),
    (
        "rust",
        "rs",
        """
struct MathOperations {
    value: f64,
}
impl MathOperations {
    fn new(value: f64) -> Self {
        MathOperations { value }
    }
    fn cube(&self) -> f64 {
        self.value.powi(3)
    }
}
fn factorial(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        _ => n * factorial(n - 1),
    }
}
fn square(x: f64) -> f64 {
    x * x
}
fn main() {
    let math_op = MathOperations::new(3.0);
    println!("Factorial of 5: {}", factorial(5));
    println!("Square of 4: {}", square(4.0));
    println!("Cube of 3: {}", math_op.cube());
}
""",
        """⋮...
█struct MathOperations {
⋮...
█    fn new(value: f64) -> Self {
⋮...
█    fn cube(&self) -> f64 {
⋮...
█fn factorial(n: u64) -> u64 {
⋮...
█fn square(x: f64) -> f64 {
⋮...
█fn main() {
⋮...
""".strip(),
    ),
    (
        "php",
        "php",
        """
<?php
function factorial($n) {
    if ($n == 0 || $n == 1) return 1;
    return $n * factorial($n - 1);
}
class MathOperations {
    private $value;
    public function __construct($value) {
        $this->value = $value;
    }
    public static function square($x) {
        return $x * $x;
    }
    public function cube() {
        return pow($this->value, 3);
    }
}
$mathOp = new MathOperations(3);
echo "Factorial of 5: " . factorial(5) . "\\n";
echo "Square of 4: " . MathOperations::square(4) . "\\n";
echo "Cube of 3: " . $mathOp->cube() . "\\n";
""",
        """⋮...
█function factorial($n) {
⋮...
█class MathOperations {
█    private $value;
⋮...
█    public function __construct($value) {
⋮...
█    public static function square($x) {
⋮...
█    public function cube() {
⋮...
""".strip(),
    ),
    (
        "elm",
        "elm",
        """
module MathOperations exposing (factorial, square, cube)
factorial : Int -> Int
factorial n =
    if n <= 1 then
        1
    else
        n * factorial (n - 1)
square : Float -> Float
square x =
    x * x
type MathOp =
    MathOp Float
cube : MathOp -> Float
cube (MathOp value) =
    value ^ 3
main =
    let
        mathOp =
            MathOp 3
    in
    [ "Factorial of 5: " ++ String.fromInt (factorial 5)
    , "Square of 4: " ++ String.fromFloat (square 4)
    , "Cube of 3: " ++ String.fromFloat (cube mathOp)
    ]
        |> String.join "\\n"
        |> Debug.log "Results"
""",
        """⋮...
█module MathOperations exposing (factorial, square, cube)
⋮...
█factorial n =
⋮...
█square x =
⋮...
█type MathOp =
█    MathOp Float
⋮...
█cube (MathOp value) =
⋮...
█main =
⋮...
█        mathOp =
⋮...
""".strip(),
    ),
    (
        "elixir",
        "ex",
        """
defmodule MathOperations do
  def factorial(0), do: 1
  def factorial(1), do: 1
  def factorial(n) when n > 1, do: n * factorial(n - 1)
  def square(x), do: x * x
  defstruct [:value]
  def new(value), do: %__MODULE__{value: value}
  def cube(%__MODULE__{value: value}), do: :math.pow(value, 3)
end
math_op = MathOperations.new(3)
IO.puts "Factorial of 5: #{MathOperations.factorial(5)}"
IO.puts "Square of 4: #{MathOperations.square(4)}"
IO.puts "Cube of 3: #{MathOperations.cube(math_op)}"
""",
        """⋮...
█defmodule MathOperations do
█  def factorial(0), do: 1
█  def factorial(1), do: 1
█  def factorial(n) when n > 1, do: n * factorial(n - 1)
⋮...
█  def square(x), do: x * x
⋮...
█  def new(value), do: %__MODULE__{value: value}
⋮...
█  def cube(%__MODULE__{value: value}), do: :math.pow(value, 3)
⋮...
""".strip(),
    ),
    (
        "elisp",
        "el",
        """
(defun factorial (n)
  (if (<= n 1)
      1
    (* n (factorial (- n 1)))))
(defun square (x)
  (* x x))
(defstruct math-operations
  value)
(defun create-math-operations (value)
  (make-math-operations :value value))
(defun cube (math-op)
  (expt (math-operations-value math-op) 3))
(let ((math-op (create-math-operations 3)))
  (message "Factorial of 5: %d" (factorial 5))
  (message "Square of 4: %f" (square 4))
  (message "Cube of 3: %f" (cube math-op)))
""",
        """⋮...
█(defun factorial (n)
⋮...
█(defun square (x)
⋮...
█(defun create-math-operations (value)
⋮...
█(defun cube (math-op)
⋮...
""".strip(),
    ),
]
@pytest.fixture
def tagger():
    return ASTBasedTagger.create("", ASTFactory.create())
@pytest.mark.parametrize("language,extension,code,expected_highlights", TEST_CASES)
def test_outline_generation(language, extension, code, expected_highlights, tagger):
    source = Source(f"test_file.{extension}", code)
    excerpter = CodeOutliner({"tagger": tagger})
    result = excerpter.excerpt([source])
    assert len(result.excerpts) == 1
    assert result.excerpts[0].rel_path == f"test_file.{extension}"
    actual_highlights = result.excerpts[0].content.strip()
    assert actual_highlights == expected_highlights, (
        f"Mismatch in {language} highlights:\nExpected:\n{expected_highlights}\n\nActual:\n{actual_highlights}"
    )
```
--------------------------------------------------------------------------------
/src/llm_context/rule.py:
--------------------------------------------------------------------------------
```python
from dataclasses import dataclass
from logging import WARNING
from pathlib import Path
from typing import Any, Optional, cast
from packaging import version
from llm_context.exceptions import RuleResolutionError
from llm_context.rule_parser import DEFAULT_CODE_RULE, RuleLoader, RuleParser
from llm_context.utils import ProjectLayout, Yaml, log, safe_read_file
CURRENT_CONFIG_VERSION = version.parse("5.2")
IGNORE_NOTHING = [".git"]
INCLUDE_ALL = ["**/*"]
DEFAULT_OVERVIEW_MODE = "full"
@dataclass(frozen=True)
class RuleComposition:
    filters: list[str]
    excerpters: list[str]
    @staticmethod
    def from_config(config: dict[str, Any]) -> "RuleComposition":
        return RuleComposition(
            config.get("filters", []),
            config.get("excerpters", []),
        )
@dataclass(frozen=True)
class Rule:
    name: str
    description: str
    overview: str
    instructions: str
    compose: RuleComposition
    gitignores: dict[str, list[str]]
    limit_to: dict[str, list[str]]
    also_include: dict[str, list[str]]
    implementations: list[tuple[str, str]]  # (file_path, definition_name)
    excerpt_modes: dict[str, str]
    excerpt_config: dict[str, dict[str, Any]]
    @staticmethod
    def from_config(config: dict[str, Any]) -> "Rule":
        return Rule.create(
            config.get("name", ""),
            config.get("description", ""),
            config.get("overview", DEFAULT_OVERVIEW_MODE),
            config.get("instructions", ""),
            RuleComposition.from_config(config.get("compose", {})),
            config.get("gitignores", {}),
            config.get("limit-to", {}),
            config.get("also-include", {}),
            [tuple(impl) for impl in config.get("implementations", [])],
            config.get("excerpt-modes", {}),
            config.get("excerpt-config", {}),
        )
    @staticmethod
    def create(
        name,
        description,
        overview,
        instructions,
        compose,
        gitignores,
        limit_to,
        also_include,
        implementations,
        excerpt_modes,
        excerpt_config,
    ) -> "Rule":
        return Rule(
            name,
            description,
            overview,
            instructions,
            compose,
            gitignores,
            limit_to,
            also_include,
            implementations,
            excerpt_modes,
            excerpt_config,
        )
    def get_excerpt_mode(self, rel_path: str) -> Optional[str]:
        import fnmatch
        for pattern, mode in self.excerpt_modes.items():
            if fnmatch.fnmatch(rel_path, pattern):
                return mode
        return None
    def get_excerpt_config(self, excerpter_name: str) -> dict[str, Any]:
        return self.excerpt_config.get(excerpter_name, {})
    def get_ignore_patterns(self, context_type: str) -> list[str]:
        return self.gitignores.get(f"{context_type}-files", IGNORE_NOTHING)
    def get_limit_to_patterns(self, context_type: str) -> list[str]:
        return self.limit_to.get(f"{context_type}-files", INCLUDE_ALL)
    def get_also_include_patterns(self, context_type: str) -> list[str]:
        return self.also_include.get(f"{context_type}-files", [])
    def get_instructions(self) -> Optional[str]:
        return self.instructions if self.instructions else None
    def get_project_notes(self, project_layout: ProjectLayout) -> Optional[str]:
        return safe_read_file(str(project_layout.project_notes_path))
    def get_user_notes(self, project_layout: ProjectLayout) -> Optional[str]:
        return safe_read_file(str(project_layout.user_notes_path))
    def to_dict(self) -> dict[str, Any]:
        return {
            "name": self.name,
            "description": self.description,
            "overview": self.overview,
            "instructions": self.instructions,
            "compose": {
                "filters": self.compose.filters,
                "excerpters": self.compose.excerpters,
            }
            if any([self.compose.filters, self.compose.excerpters])
            else {},
            **({"gitignores": self.gitignores} if self.gitignores else {}),
            **({"limit-to": self.limit_to} if self.limit_to else {}),
            **({"also-include": self.also_include} if self.also_include else {}),
            **({"implementations": self.implementations} if self.implementations else {}),
            **({"excerpt-modes": self.excerpt_modes} if self.excerpt_modes else {}),
            **({"excerpt-config": self.excerpt_config} if self.excerpt_config else {}),
        }
@dataclass(frozen=True)
class ToolConstants:
    __warning__: str
    config_version: str
    @staticmethod
    def load(path: Path) -> "ToolConstants":
        try:
            return ToolConstants(**Yaml.load(path))
        except Exception:
            return ToolConstants.create_null()
    @staticmethod
    def from_dict(data: dict[str, Any]) -> "ToolConstants":
        return ToolConstants.create(data.get("config_version", "0"))
    @staticmethod
    def create_new() -> "ToolConstants":
        return ToolConstants.create_default(str(CURRENT_CONFIG_VERSION))
    @staticmethod
    def create_null() -> "ToolConstants":
        return ToolConstants.create_default("0")
    @staticmethod
    def create_default(version: str) -> "ToolConstants":
        return ToolConstants.create(version)
    @staticmethod
    def create(config_version: str) -> "ToolConstants":
        return ToolConstants("This file is managed by llm-context. DO NOT EDIT.", config_version)
    @property
    def needs_update(self) -> bool:
        return cast(bool, version.parse(self.config_version) < CURRENT_CONFIG_VERSION)
    def to_dict(self) -> dict[str, Any]:
        return {"__warning__": self.__warning__, "config_version": self.config_version}
@dataclass(frozen=True)
class RuleResolver:
    system_state: ToolConstants
    rule_loader: RuleLoader
    _composition_stack: frozenset[str] = frozenset()
    @staticmethod
    def create(system_state: ToolConstants, project_layout: ProjectLayout) -> "RuleResolver":
        rule_loader = RuleLoader.create(project_layout)
        return RuleResolver(system_state, rule_loader)
    def has_rule(self, rule_name: str) -> bool:
        try:
            self.rule_loader.load_rule(rule_name)
            return True
        except Exception:
            return False
    def get_rule(self, rule_name: str) -> Rule:
        if rule_name in self._composition_stack:
            raise ValueError(
                f"Circular composition detected: {' -> '.join(self._composition_stack)} -> {rule_name}"
            )
        try:
            rule = self.rule_loader.load_rule(rule_name)
            composed_config = self._compose_rule_config(rule, rule_name)
            return Rule.from_config(composed_config)
        except RuleResolutionError:
            raise
        except Exception as e:
            raise RuleResolutionError(
                f"Failed to resolve rule '{rule_name}': {str(e)}. "
                f"This may indicate outdated rule syntax or missing dependencies. "
                f"Consider updating the rule or switching to '{DEFAULT_CODE_RULE}' with: lc-set-rule {DEFAULT_CODE_RULE}"
            )
    def _compose_rule_config(self, rule: RuleParser, rule_name: str) -> dict[str, Any]:
        new_resolver = RuleResolver(
            self.system_state, self.rule_loader, self._composition_stack | {rule_name}
        )
        resolved_instructions = ""
        if "instructions" in rule.frontmatter:
            if rule.content.strip():
                log(
                    WARNING,
                    f"Rule '{rule_name}' has both 'instructions' field and markdown content. The markdown content will be ignored.",
                )
            instruction_contents = []
            for instruction_rule_name in rule.frontmatter["instructions"]:
                instruction_rule_parser = new_resolver.rule_loader.load_rule(instruction_rule_name)
                if instruction_rule_parser.content.strip():
                    instruction_contents.append(instruction_rule_parser.content)
            resolved_instructions = "\n\n".join(instruction_contents)
        else:
            resolved_instructions = rule.content
        if not rule.frontmatter.get("compose"):
            config = rule.to_rule_config()
            config["instructions"] = resolved_instructions
            return config
        composed_config = {
            "name": rule.name,
            "description": rule.frontmatter.get("description", ""),
            "overview": rule.frontmatter.get("overview", DEFAULT_OVERVIEW_MODE),
            "instructions": resolved_instructions,
            "gitignores": {},
            "limit-to": {},
            "also-include": {},
            "implementations": [],
            "excerpt-modes": {},
            "excerpt-config": {},
        }
        compose_config = rule.frontmatter.get("compose", {})
        for filter_rule_name in compose_config.get("filters", []):
            composed_filter_rule = new_resolver.get_rule(filter_rule_name)
            filter_config = composed_filter_rule.to_dict()
            self._merge_gitignores(composed_config, filter_config)
            self._merge_limit_to(composed_config, filter_config)
            self._merge_also_include(composed_config, filter_config)
        for excerpter_rule_name in compose_config.get("excerpters", []):
            composed_excerpter_rule = new_resolver.get_rule(excerpter_rule_name)
            excerpter_config = composed_excerpter_rule.to_dict()
            self._merge_excerpt_modes(composed_config, excerpter_config)
            self._merge_excerpt_config(composed_config, excerpter_config)
        for field in [
            "gitignores",
            "limit-to",
            "also-include",
            "implementations",
            "excerpt-modes",
            "excerpt-config",
        ]:
            if field in rule.frontmatter:
                if field == "gitignores":
                    self._merge_gitignores(composed_config, rule.frontmatter)
                elif field == "limit-to":
                    self._merge_limit_to(composed_config, rule.frontmatter)
                elif field == "also-include":
                    self._merge_also_include(composed_config, rule.frontmatter)
                elif field == "excerpt-modes":
                    self._merge_excerpt_modes(composed_config, rule.frontmatter)
                elif field == "excerpt-config":
                    self._merge_excerpt_config(composed_config, rule.frontmatter)
                else:
                    composed_config[field].extend(rule.frontmatter[field])
        return composed_config
    def _merge_gitignores(self, target: dict, source: dict):
        source_gitignores = source.get("gitignores", {})
        for key, patterns in source_gitignores.items():
            if key not in target["gitignores"]:
                target["gitignores"][key] = []
            existing = set(target["gitignores"][key])
            target["gitignores"][key].extend([p for p in patterns if p not in existing])
    def _merge_limit_to(self, target: dict, source: dict):
        source_includes = source.get("limit-to", {})
        for key, patterns in source_includes.items():
            if key in target["limit-to"] and target["limit-to"][key]:
                log(
                    WARNING,
                    f"Multiple 'limit-to' clauses for '{key}' detected. "
                    f"Keeping patterns: {target['limit-to'][key]}. "
                    f"Dropping patterns: {patterns}.",
                )
                continue
            target["limit-to"][key] = list(patterns)
    def _merge_also_include(self, target: dict, source: dict):
        source_includes = source.get("also-include", {})
        for key, patterns in source_includes.items():
            if key not in target["also-include"]:
                target["also-include"][key] = []
            existing = set(target["also-include"][key])
            target["also-include"][key].extend([p for p in patterns if p not in existing])
    def _merge_excerpt_modes(self, target: dict, source: dict):
        source_modes = source.get("excerpt-modes", {})
        for pattern, mode in source_modes.items():
            if pattern not in target["excerpt-modes"]:
                target["excerpt-modes"][pattern] = mode
    def _merge_excerpt_config(self, target: dict, source: dict):
        source_configs = source.get("excerpt-config", {})
        for processor_name, processor_config in source_configs.items():
            if processor_name not in target["excerpt-config"]:
                target["excerpt-config"][processor_name] = {}
            existing = target["excerpt-config"][processor_name]
            existing.update(processor_config)
```