# Directory Structure
```
├── .DS_Store
├── .gitignore
├── .python-version
├── DESIGN_PLAN.md
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── logseq_mcp
│       ├── __init__.py
│       ├── client
│       │   ├── __init__.py
│       │   └── logseq_client.py
│       ├── mcp.py
│       ├── tools
│       │   ├── __init__.py
│       │   ├── blocks.py
│       │   └── pages.py
│       └── utils
│           ├── __init__.py
│           └── logging.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------
```
3.11
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Ruff stuff:
.ruff_cache/
# PyPI configuration file
.pypirc
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Logseq MCP Tools
This project provides a set of Model Context Protocol (MCP) tools that enable AI agents to interact with your local Logseq instance.
## Installation
1. Ensure you have Python 3.11+ installed
2. Clone this repository
3. Install dependencies:
```bash
pip install -e .
```
## Setup
1. Make sure the Logseq API is enabled:
- In Logseq, go to Settings > Advanced > Developer mode > Enable Developer mode
- Then, go to Plugins > Turn on Logseq Developer Plugin
- Also set an API token in the Advanced settings
- Restart Logseq
2. Configure the MCP server in your Cursor MCP configuration file (typically at `~/.cursor/mcp.json`):
```json
{
"mcpServers": {
"logseq": {
"command": "/opt/homebrew/bin/uvx",
"args": ["logseq-mcp"],
"env": {
"LOGSEQ_API_URL": "http://localhost:12315",
"LOGSEQ_TOKEN": "your-token-here"
}
}
}
}
```
OR
3. Configure Claude Code to use the MCP server with:
```
claude mcp add
```
- Select scope
- Select Stdio
- `LOGSEQ_API_URL=http://localhost:12315 LOGSEQ_TOKEN=your-token-here /opt/homebrew/bin/uvx logseq-mcp`
## Using with Cursor and Claude
### Adding to Cursor's MCP Tools
1. Configure the MCP server as shown above in the Setup section
2. Open Cursor and go to the MCP panel (sidebar)
3. The Logseq tool should appear in your list of available tools
### Using with Claude
When using Claude in Cursor, you'll need to inform it that you have Logseq tools available with a prompt similar to:
"You have access to Logseq tools that can help you interact with my Logseq graph. You can use functions like logseq.get_all_pages(), logseq.get_page(name), logseq.create_page(name), etc."
## Available Tools
All tools are available under the `logseq` namespace:
### Pages
- `logseq.get_all_pages`: Get a list of all pages in the Logseq graph
- `logseq.get_page`: Get a specific page by name
- `logseq.create_page`: Create a new page
- `logseq.delete_page`: Delete a page and all its blocks
### Blocks
- `logseq.get_page_blocks`: Get all blocks from a specific page
- `logseq.get_block`: Get a specific block by ID
- `logseq.create_block`: Create a new block on a page
- `logseq.insert_block`: Insert a block as a child of another block
- `logseq.update_block`: Update an existing block
- `logseq.move_block`: Move a block to a different location
- `logseq.remove_block`: Remove a block and all its children
- `logseq.search_blocks`: Search for blocks matching a query
## Working with Logseq
### Journal Pages
Journal pages in Logseq have a specific format and attributes:
1. Use the format "mmm dth, yyyy" (e.g., "Apr 4th, 2025") when creating or accessing journal pages
2. Journal pages are automatically formatted by Logseq with proper dating
3. Journal pages have special attributes that are automatically set by Logseq:
- `journal?`: true - Indicates this is a journal page
- `journalDay`: YYYYMMDD - The date in numeric format (e.g., 20250404 for April 4, 2025)
4. Example: `await logseq.create_page("Apr 4th, 2025")`
**Important:** You do not need to manually set the `journal?` or `journalDay` attributes. Simply creating a page with the proper date format (e.g., "Apr 4th, 2025") will automatically configure it as a journal page with the appropriate attributes.
### Block Structure and Formatting
Blocks in Logseq have some important characteristics to understand:
1. **Automatic Bullets**: All blocks are automatically rendered as bullet points in the Logseq UI
2. **Page Links**: Create links using double brackets: `[[Page Name]]`
3. **Hierarchical Blocks**:
- Block structure data contains hierarchical information:
- `parent`: The parent block's ID
- `level`: The indentation level (1 for top-level, 2+ for indented blocks)
- `left`: The block to the left (typically the parent for indented blocks)
4. **Block Content**: When creating blocks, you can include text formatting:
- Basic Markdown is supported (bold, italic, etc.)
- Bullet points within a block may have limited support
- Multi-line content is supported but may be subject to Logseq's parsing rules
5. **Journal Blocks**: Blocks created in journal pages inherit special attributes:
- `journal?`: true
- `journalDay`: YYYYMMDD - Same as the journal page
**Note:** Like journal pages, these block attributes are automatically handled by Logseq. You don't need to manually set the `journal?` or `journalDay` attributes when creating blocks on journal pages.
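For illustration, a sketch of building a small hierarchy with these tools (the page name and contents are examples, and the `uuid` field is assumed to be present on the returned block, as it typically is for Logseq block entities):
```python
parent = await logseq.create_block("Meeting Notes", "Agenda for [[Project Plan]]")
await logseq.insert_block(parent["uuid"], "Review milestone 1")      # child block
await logseq.insert_block(parent["uuid"], "Assign follow-up owners") # second child
```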
### Example Usage for Common Tasks
**Working with the Cursor agent:**
When you have Logseq MCP tools configured in Cursor, you can give the agent prompts like:
- "Create a new page called 'Meeting Notes' with bullet points for today's agenda"
- "Add today's tasks to my journal page with a 'Tasks' section"
- "Update today's journal entry with [[Project Plan]], set its child element to 'Completed milestone 1'"
- "Search my graph for blocks about 'python projects' and organize them on a new page"
The agent will use the appropriate Logseq tools to carry out these operations on your graph.
```
--------------------------------------------------------------------------------
/src/logseq_mcp/utils/__init__.py:
--------------------------------------------------------------------------------
```python
from .logging import log
__all__ = ["log"]
```
--------------------------------------------------------------------------------
/src/logseq_mcp/client/__init__.py:
--------------------------------------------------------------------------------
```python
from .logseq_client import LogseqAPIClient
__all__ = ["LogseqAPIClient"]
```
--------------------------------------------------------------------------------
/src/logseq_mcp/mcp.py:
--------------------------------------------------------------------------------
```python
from mcp.server.fastmcp import FastMCP
# Create a FastMCP instance that will be used in the tools modules
mcp = FastMCP("logseq-mcp")
```
--------------------------------------------------------------------------------
/src/logseq_mcp/utils/logging.py:
--------------------------------------------------------------------------------
```python
import sys
from datetime import datetime
def log(message: str) -> None:
"""Log a message to stderr with a timestamp."""
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
print(f"[{timestamp}] {message}", file=sys.stderr)
```
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "logseq-mcp"
version = "0.2.0"
description = "MCP server for Logseq integration"
requires-python = ">=3.11"
dependencies = [
"mcp[cli]>=1.2.0",
"websocket-client>=1.4.0",
"requests>=2.28.1"
]
[project.urls]
"Repository" = "https://github.com/apw124/logseq-mcp.git"
[project.scripts]
logseq-mcp = "logseq_mcp:main"
[tool.hatch.build.targets.wheel]
packages = ["src/logseq_mcp"]
```
--------------------------------------------------------------------------------
/src/logseq_mcp/tools/__init__.py:
--------------------------------------------------------------------------------
```python
from .pages import get_all_pages, get_page, create_page, delete_page, get_page_linked_references
from .blocks import get_page_blocks, get_block, create_block, update_block, remove_block, insert_block, move_block, search_blocks
__all__ = [
"get_all_pages",
"get_page",
"create_page",
"delete_page",
"get_page_blocks",
"get_block",
"create_block",
"update_block",
"remove_block",
"insert_block",
"move_block",
"search_blocks",
"get_page_linked_references",
]
```
--------------------------------------------------------------------------------
/src/logseq_mcp/__init__.py:
--------------------------------------------------------------------------------
```python
from .mcp import mcp
from .utils.logging import log
from .tools import (
get_all_pages,
get_page,
create_page,
get_page_blocks,
get_block,
create_block,
update_block,
search_blocks,
get_page_linked_references,
)
__all__ = ["get_all_pages", "get_page", "create_page", "get_page_blocks", "get_block", "create_block", "update_block", "search_blocks", "get_page_linked_references"]
def main():
"""Main function to run the Logseq MCP server"""
log("Starting Logseq MCP server...")
mcp.run(transport="stdio")
```
--------------------------------------------------------------------------------
/src/logseq_mcp/tools/pages.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, List, Optional
from ..client.logseq_client import LogseqAPIClient
from ..mcp import mcp
# Initialize client with configuration
logseq_client = LogseqAPIClient()
@mcp.tool()
def get_all_pages() -> List[Dict]:
"""
Gets all pages from the Logseq graph.
Journal pages can be identified by the "journal?" attribute set to true and
will include a "journalDay" attribute in the format YYYYMMDD.
Returns:
List of all pages in the Logseq graph.
"""
return logseq_client.get_all_pages()
@mcp.tool()
def get_page(name: str) -> Optional[Dict]:
"""
Gets a specific page from the Logseq graph by name.
For journal pages, use the format "mmm dth, yyyy" (e.g., "Apr 4th, 2025").
Journal pages have specific attributes:
- "journal?": true - Indicates this is a journal page
- "journalDay": YYYYMMDD - The date in numeric format
Args:
name: The name of the page to retrieve.
Returns:
Information about the requested page, or None if not found.
"""
return logseq_client.get_page(name)
@mcp.tool()
def create_page(name: str, properties: Optional[Dict] = None) -> Dict:
"""
Creates a new page in the Logseq graph.
For journal pages, use the format "mmm dth, yyyy" (e.g., "Apr 4th, 2025").
Logseq automatically sets "journal?": true and "journalDay": YYYYMMDD.
Args:
name: The name of the new page.
properties: Optional properties to set on the new page.
Returns:
Information about the created page.
"""
return logseq_client.create_page(name, properties)
@mcp.tool()
def delete_page(name: str) -> Dict:
"""
Deletes a page from the Logseq graph.
⚠️ This removes the page and all its blocks. Cannot be undone.
Args:
name: The name of the page to delete.
Returns:
Result of the deletion operation.
"""
return logseq_client.delete_page(name)
@mcp.tool()
def get_page_linked_references(page_name: str) -> List[Dict]:
"""
Gets all linked references to a specific page.
Returns blocks containing [[Page Name]] links to the specified page.
Args:
page_name: The name of the page to find references to.
Returns:
List of blocks that reference the specified page.
"""
return logseq_client.get_page_linked_references(page_name)
```
--------------------------------------------------------------------------------
/src/logseq_mcp/tools/blocks.py:
--------------------------------------------------------------------------------
```python
from typing import Dict, List, Optional
from ..client.logseq_client import LogseqAPIClient
from ..mcp import mcp
# Initialize client with configuration
logseq_client = LogseqAPIClient()
@mcp.tool()
def get_page_blocks(page_name: str) -> List[Dict]:
"""
Gets all blocks from a specific page in the Logseq graph.
For journal pages, use the format "mmm dth, yyyy" (e.g., "Apr 4th, 2025").
Returned blocks contain hierarchical structure information:
- parent: The parent block's ID
- level: The indentation level (1 for top-level, 2+ for indented)
- left: The block to the left (typically the parent for indented blocks)
Args:
page_name: The name of the page to retrieve blocks from.
Returns:
List of blocks from the specified page.
"""
return logseq_client.get_page_blocks(page_name)
@mcp.tool()
def get_block(block_id: str) -> Optional[Dict]:
"""
Gets a specific block from the Logseq graph by its ID.
The returned block contains hierarchical structure information:
- parent: The parent block's ID
- level: The indentation level
- left: The block to the left
Args:
block_id: The ID of the block to retrieve.
Returns:
Information about the requested block, or None if not found.
"""
return logseq_client.get_block(block_id)
@mcp.tool()
def create_block(page_name: str, content: str, properties: Optional[Dict] = None) -> Dict:
"""
Creates a new block on a page in the Logseq graph.
Note: Blocks are automatically formatted as bullet points in Logseq UI.
Use [[Page Name]] to create links to other pages.
Args:
page_name: The name of the page to create the block on.
content: The content of the new block.
properties: Optional properties to set on the new block.
Returns:
Information about the created block.
"""
return logseq_client.create_block(page_name, content, properties)
@mcp.tool()
def insert_block(parent_block_id: str, content: str, properties: Optional[Dict] = None, before: bool = False) -> Dict:
"""
Inserts a new block as a child of the specified parent block.
Creates hierarchical content by adding children to existing blocks.
The new block is inserted at the beginning (before=True) or end (before=False)
of the parent's children.
Args:
parent_block_id: The ID of the parent block to insert under.
content: The content of the new block.
properties: Optional properties to set on the new block.
before: Whether to insert at the beginning of children (default: False).
Returns:
Information about the created block.
"""
return logseq_client.insert_block(parent_block_id, content, properties, before)
@mcp.tool()
def update_block(block_id: str, content: str, properties: Optional[Dict] = None) -> Dict:
"""
Updates an existing block in the Logseq graph.
Use [[Page Name]] to create links to other pages.
Args:
block_id: The ID of the block to update.
content: The new content for the block.
properties: Optional properties to update on the block.
Returns:
Information about the updated block.
"""
return logseq_client.update_block(block_id, content, properties)
@mcp.tool()
def move_block(block_id: str, target_block_id: str, as_child: bool = False) -> Dict:
"""
Moves a block to a new location in the graph.
Moves a block and all its children to a different location.
- as_child=True: Block becomes a child of the target
- as_child=False: Block becomes a sibling after the target
Args:
block_id: The ID of the block to move.
target_block_id: The ID of the target block to move to.
as_child: Whether to make the block a child of the target (default: False).
Returns:
Result of the move operation.
"""
return logseq_client.move_block(block_id, target_block_id, as_child)
@mcp.tool()
def remove_block(block_id: str) -> Dict:
"""
Removes a block from the Logseq graph.
⚠️ Permanently removes the block and all its children. Cannot be undone.
Args:
block_id: The ID of the block to remove.
Returns:
Result of the removal operation.
"""
return logseq_client.remove_block(block_id)
@mcp.tool()
def search_blocks(query: str) -> List[Dict]:
"""
Searches for blocks matching a query in the Logseq graph.
Query examples:
- page:"Page Name" - blocks on a specific page
- "search term" - blocks containing the term
- [[Page Name]] - references to a specific page
Args:
query: The search query.
Returns:
List of blocks matching the search query.
"""
return logseq_client.search_blocks(query)
```
--------------------------------------------------------------------------------
/src/logseq_mcp/client/logseq_client.py:
--------------------------------------------------------------------------------
```python
import requests
import os
from typing import Dict, List, Optional, Any
class LogseqAPIClient:
"""Client for interacting with the Logseq API"""
def __init__(self, api_url: Optional[str] = None, token: Optional[str] = None) -> None:
"""
Initialize the Logseq API client
Args:
api_url: URL of the Logseq API (defaults to the LOGSEQ_API_URL environment variable, or http://localhost:12315)
token: API token for authentication (defaults to the LOGSEQ_TOKEN environment variable)
"""
self.api_url = api_url or os.getenv("LOGSEQ_API_URL", "http://localhost:12315")
self.token = token or os.getenv("LOGSEQ_TOKEN")
def _get_headers(self) -> Dict[str, str]:
"""Get headers for API requests"""
headers = {
"Content-Type": "application/json"
}
if self.token:
headers["Authorization"] = f"Bearer {self.token}"
return headers
def call_api(self, method: str, args: Optional[List] = None) -> Any:
"""
Call the Logseq API using the proper format
Args:
method: API method to call (e.g., "logseq.Editor.getCurrentBlock")
args: Arguments for the method
Returns:
API response (could be a dict, list, or other JSON-serializable data)
"""
url = f"{self.api_url}/api"
headers = self._get_headers()
data = {
"method": method,
"args": args or []
}
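# e.g. data == {"method": "logseq.Editor.getPage", "args": ["Apr 4th, 2025"]}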
try:
response = requests.post(url, headers=headers, json=data)
if response.status_code == 401:
return {
"success": False,
"error": f"401 Unauthorized: Please provide a valid token in LOGSEQ_API_TOKEN environment variable"
}
response.raise_for_status()
# Parse the JSON response. Some Logseq endpoints return the result directly,
# while others wrap it in a "result" field; the helper methods below handle
# both shapes, so the parsed JSON is returned as-is.
return response.json()
except requests.exceptions.RequestException as e:
print(f"API request error: {e}")
return {"success": False, "error": str(e)}
# Convenience wrappers around call_api for specific Logseq API methods
def get_current_graph(self) -> Dict:
"""Get information about the current graph"""
return self.call_api("logseq.App.getCurrentGraph")
def get_all_pages(self) -> List[Dict]:
"""Get all pages in the graph"""
response = self.call_api("logseq.Editor.getAllPages")
if isinstance(response, list):
return response
return response.get("result", []) if isinstance(response, dict) else []
def get_page(self, page_name: str) -> Optional[Dict]:
"""Get a page by name"""
response = self.call_api("logseq.Editor.getPage", [page_name])
if response is None:
return None
return response.get("result") if isinstance(response, dict) else response
def get_page_blocks(self, page_name: str) -> List[Dict]:
"""Get all blocks for a page"""
response = self.call_api("logseq.Editor.getPageBlocksTree", [page_name])
if isinstance(response, list):
return response
return response.get("result", []) if isinstance(response, dict) else []
def search_blocks(self, query: str) -> List[Dict]:
"""Search for blocks matching a query"""
response = self.call_api("logseq.Editor.search", [query])
if isinstance(response, list):
return response
return response.get("result", []) if isinstance(response, dict) else []
def create_page(self, page_name: str, properties: Optional[Dict] = None) -> Dict:
"""Create a new page"""
params = [page_name]
if properties:
params.append(properties)
response = self.call_api("logseq.Editor.createPage", params)
if isinstance(response, dict) and "result" in response:
return response.get("result")
return response
def create_block(self, page_name: str, content: str, properties: Optional[Dict] = None) -> Dict:
"""Create a new block on a page"""
params = [page_name, content]
if properties:
params.append(properties)
response = self.call_api("logseq.Editor.appendBlockInPage", params)
if isinstance(response, dict) and "result" in response:
return response.get("result")
return response
def update_block(self, block_id: str, content: str, properties: Optional[Dict] = None) -> Dict:
"""Update an existing block"""
params = [block_id, content]
if properties:
params.append(properties)
response = self.call_api("logseq.Editor.updateBlock", params)
if isinstance(response, dict) and "result" in response:
return response.get("result")
return response
def get_block(self, block_id: str) -> Optional[Dict]:
"""Get a block by ID"""
response = self.call_api("logseq.Editor.getBlock", [block_id])
if response is None:
return None
return response.get("result") if isinstance(response, dict) else response
def get_block_properties(self, block_id: str) -> Dict:
"""Get properties of a block"""
response = self.call_api("logseq.Editor.getBlockProperties", [block_id])
if isinstance(response, dict) and "result" in response:
return response.get("result", {})
return response if isinstance(response, dict) else {}
def get_page_linked_references(self, page_name: str) -> List[Dict]:
"""Get linked references to a page"""
response = self.call_api("logseq.Editor.getPageLinkedReferences", [page_name])
if isinstance(response, list):
return response
return response.get("result", []) if isinstance(response, dict) else []
def delete_page(self, page_name: str) -> Dict:
"""Delete a page from the graph"""
response = self.call_api("logseq.Editor.deletePage", [page_name])
if isinstance(response, dict) and "result" in response:
return response.get("result")
return response
def remove_block(self, block_id: str) -> Dict:
"""Remove a block and its children from the graph"""
response = self.call_api("logseq.Editor.removeBlock", [block_id])
if isinstance(response, dict) and "result" in response:
return response.get("result")
return response
def insert_block(self, parent_block_id: str, content: str, properties: Optional[Dict] = None, before: bool = False) -> Dict:
"""Insert a new block as a child of the specified parent block"""
params = [parent_block_id, content]
if properties:
params.append(properties)
# Choose the appropriate API method based on the 'before' parameter
method = "logseq.Editor.insertBlock"
if before:
method = "logseq.Editor.prependBlock"
response = self.call_api(method, params)
if isinstance(response, dict) and "result" in response:
return response.get("result")
return response
def move_block(self, block_id: str, target_block_id: str, as_child: bool = False) -> Dict:
"""Move a block to a new location in the graph"""
# Determine the appropriate API method based on the as_child parameter
method = "logseq.Editor.moveBlock"
# The API expects a structured argument for the move operation
move_params = {
"srcUUID": block_id,
"targetUUID": target_block_id,
"isChild": as_child
}
response = self.call_api(method, [move_params])
if isinstance(response, dict) and "result" in response:
return response.get("result")
return response
```
--------------------------------------------------------------------------------
/DESIGN_PLAN.md:
--------------------------------------------------------------------------------
```markdown
# Logseq MCP Server Enhancement Plan
## Overview
This phased design plan addresses the identified improvements for the Logseq MCP server, focusing on documentation quality, resource implementation, prompts, and performance enhancements.
## Key Technical Findings (Web Search Results)
### Logseq API Specifications
- **Official Documentation**: https://logseq.github.io/plugins/ and https://plugins-doc.logseq.com/
- **Available Methods**: IEditorProxy interface includes block manipulation, page operations, UUID generation
- **Local Operation**: Logseq API is entirely local - no external rate limits
- **File-Based**: Operates on local .md files, enabling file system metadata access
- **Limitations**:
- No native batch operations or transaction support
- Performance degrades with ~10,000 interconnected pages
- Memory issues with large datasets
- **Block Content Restrictions**: Each block can only contain a single paragraph or list type - no mixing of multiple unordered lists or headings within one block
### FastMCP Framework
- **Resource Pattern**: `@mcp.resource("protocol://path/{param}")` decorator
- **Prompt Pattern**: `@mcp.prompt()` for reusable templates
- **Key Features**: Automatic schema generation, async/sync support, built-in Image handling
- **Execution**: `fastmcp run server.py` or direct Python execution
### Security & Performance Advantages
- **Local Operation**: No external API rate limits since Logseq runs locally
- **File System Access**: Can leverage OS file metadata for timestamps and modification tracking
- **MCP Best Practices**: In-memory session management, JSON-RPC error codes, multiple transport layers
- **Performance**: While Logseq lacks batch API support, local operation eliminates network latency
## Phase 1: Documentation Optimization
**Priority:** High
**Timeline:** 1-2 days
### Goals
- Remove duplicate docstrings in tool implementations
- Optimize docstring content for clarity and conciseness
- Ensure consistency across all tool documentation
### Tasks
1. **Fix duplicate docstrings**
- Remove secondary docstrings in pages.py (lines 19, 40, 63, 82, 101)
- Remove secondary docstrings in blocks.py (lines 29, 52, 76, 106, 130, 156, 178, 202)
2. **Optimize docstring content**
- Consolidate redundant information between main description and parameter descriptions
- Add return type hints to function signatures
- Ensure examples are concise but clear
3. **Add type hints**
- Update all function signatures with proper return type annotations
- Consider using TypedDict for complex return structures
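For illustration, a possible TypedDict sketch for page results (field names are illustrative, not a confirmed Logseq payload):
```python
from typing import Optional, TypedDict

class PageInfo(TypedDict, total=False):
    # Illustrative fields; the actual keys depend on the Logseq API response.
    name: str
    uuid: str
    journal: bool        # corresponds to Logseq's "journal?" attribute
    journalDay: int      # YYYYMMDD

def get_page(name: str) -> Optional[PageInfo]:
    ...
```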
## Phase 2: MCP Resources Implementation
**Priority:** High
**Timeline:** 3-4 days
### Goals
- Provide contextual information about the Logseq graph
- Enable better AI assistant understanding of graph structure
- Reduce need for repetitive API calls
### Resources to Implement
1. **graph_info** - Current graph metadata
```python
@mcp.resource("logseq://graph/info")
async def get_graph_info():
"""Returns current graph name, stats, and configuration"""
# Use caching to reduce API calls
return cache.get_or_fetch("graph_info",
lambda: logseq_client.get_current_graph())
```
2. **recent_pages** - Recently modified pages
```python
@mcp.resource("logseq://pages/recent")
async def get_recent_pages(limit: int = 20):
"""Returns recently modified pages with timestamps from file metadata"""
# Get all pages from Logseq
pages = await logseq_client.get_all_pages()
# For each page, get file path and OS modification time
pages_with_timestamps = []
for page in pages:
file_path = get_page_file_path(page['name']) # Helper to map page to .md file
if os.path.exists(file_path):
mtime = os.path.getmtime(file_path)
pages_with_timestamps.append({
**page,
'modified_time': mtime,
'modified_date': datetime.fromtimestamp(mtime).isoformat()
})
# Sort by modification time and return most recent
return sorted(pages_with_timestamps,
key=lambda x: x['modified_time'],
reverse=True)[:limit]
```
3. **journal_entries** - Recent journal entries
```python
@mcp.resource("logseq://journal/recent")
async def get_recent_journals(days: int = 7):
"""Returns journal entries from the last N days"""
```
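One possible body for this resource, sketched as a synchronous helper and assuming journal pages follow the "mmm dth, yyyy" naming convention used elsewhere in this project (helper names are hypothetical):
```python
from datetime import datetime, timedelta

def _journal_page_name(day: datetime) -> str:
    """Format a date the way Logseq names journal pages, e.g. "Apr 4th, 2025"."""
    n = day.day
    suffix = "th" if 11 <= n % 100 <= 13 else {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{day.strftime('%b')} {n}{suffix}, {day.year}"

def _recent_journals(days: int = 7) -> list[dict]:
    """Look up journal pages for the last N days via the existing client."""
    today = datetime.now()
    names = [_journal_page_name(today - timedelta(days=offset)) for offset in range(days)]
    return [page for page in (logseq_client.get_page(name) for name in names) if page]
```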
4. **page_templates** - Common page templates
```python
@mcp.resource("logseq://templates/list")
async def get_templates():
"""Returns available page/block templates"""
```
5. **graph_structure** - Graph hierarchy overview
```python
@mcp.resource("logseq://graph/structure")
async def get_graph_structure():
"""Returns namespace hierarchy and page relationships"""
```
### Implementation Considerations
- **Caching Strategy**: Implement ResourceCache class with configurable TTL (default 300s)
- **Resource URIs**: Follow MCP pattern with protocol://path format
- **Async Operations**: Use async/await for all resource handlers
- **Error Handling**: Return appropriate JSON-RPC error codes
- **File System Integration**:
- Map Logseq pages to their corresponding .md files
- Use `os.path.getmtime()` for modification timestamps
- Consider watching file system for real-time updates
- **Graph Location**: Need to determine Logseq graph directory from API or configuration
- **Content Formatting**:
- Ensure resources return content that respects Logseq's block limitations
- Split complex content into multiple blocks when necessary
- Avoid mixing lists, headings, or paragraphs within a single block
## Phase 3: MCP Prompts Implementation
**Priority:** Medium
**Timeline:** 2-3 days
### Goals
- Guide users through common Logseq workflows
- Provide structured input collection for complex operations
- Improve user experience with templated actions
### Prompts to Implement
1. **daily_journal** - Create daily journal entry
```python
@mcp.prompt()
async def daily_journal_prompt():
return """Create a daily journal entry with sections for:
Daily goals (create as separate block)
Tasks (create as separate block)
Notes (create as separate block)
Reflection (create as separate block)
Note: Each section must be a separate block due to Logseq limitations."""
```
2. **create_project** - Project page creation
```python
@mcp.prompt()
async def create_project_prompt():
return """Create a new project page with:
- Project name: {name}
- Description: {description}
- Goals: {goals}
- Timeline: {timeline}"""
```
3. **search_assistant** - Advanced search query builder
```python
@mcp.prompt()
async def search_query_prompt():
return """Build a search query:
- Search in: [All pages/Specific page/Date range]
- Search for: {query}
- Include: [Tags/Properties/References]"""
```
4. **bulk_update** - Bulk operations guide
```python
@mcp.prompt()
async def bulk_operations_prompt():
return """Perform bulk operations:
- Operation: [Tag addition/Property update/Move blocks]
- Target: {pages/blocks}
- Changes: {changes}"""
```
## Phase 4: Composite Operations & Smart Tools
**Priority:** Medium
**Timeline:** 3-4 days
### Goals
- Provide high-value composite operations that combine multiple actions
- Add tools that leverage file system access for unique capabilities
- Improve workflow efficiency with smart helpers
### New Tools to Implement
1. **create_page_with_template**
```python
@mcp.tool()
async def create_page_with_template(
page_name: str,
template_name: str,
variables: Dict[str, str] = None
) -> Dict:
"""Create a new page and populate it with a template"""
# Create page
page = await create_page(page_name)
# Get template content
template = await get_template(template_name)
# Replace variables and create blocks
# IMPORTANT: Split content if it contains multiple lists or mixed content types
for block in template['blocks']:
content = replace_variables(block['content'], variables)
# Check if content needs to be split into multiple blocks
if needs_splitting(content):
for sub_content in split_content(content):
await create_block(page_name, sub_content)
else:
await create_block(page_name, content)
return page
```
2. **clone_page_structure**
```python
@mcp.tool()
async def clone_page_structure(
source_page: str,
target_page: str,
include_properties: bool = True
) -> Dict:
"""Clone a page with all its blocks and structure"""
# Get source page blocks
blocks = await get_page_blocks(source_page)
# Create target page
page = await create_page(target_page)
# Recreate block hierarchy
for block in blocks:
await create_block(target_page, block['content'],
block.get('properties') if include_properties else None)
return {"page": page, "blocks_cloned": len(blocks)}
```
3. **find_and_replace_global**
```python
@mcp.tool()
async def find_and_replace_global(
search_pattern: str,
replace_text: str,
page_filter: str = None,
dry_run: bool = True
) -> Dict:
"""Find and replace text across multiple pages"""
# Search for matching blocks
matches = await search_blocks(search_pattern)
if page_filter:
matches = [m for m in matches if page_filter in m['page']]
if dry_run:
return {"matches": len(matches), "preview": matches[:5]}
# Perform replacements
updated = []
for match in matches:
new_content = match['content'].replace(search_pattern, replace_text)
result = await update_block(match['id'], new_content)
updated.append(result)
return {"updated": len(updated), "blocks": updated}
```
4. **analyze_graph_statistics**
```python
@mcp.tool()
async def analyze_graph_statistics() -> Dict:
"""Analyze graph statistics using file system data"""
pages = await get_all_pages()
# Get file system stats
total_size = 0
oldest_page = None
newest_page = None
for page in pages:
file_path = get_page_file_path(page['name'])
if file_path:
metadata = get_file_metadata(file_path)
total_size += metadata['size']
# Track oldest/newest
if not oldest_page or metadata['created_time'] < oldest_page['time']:
oldest_page = {'page': page['name'], 'time': metadata['created_time']}
if not newest_page or metadata['modified_time'] > newest_page['time']:
newest_page = {'page': page['name'], 'time': metadata['modified_time']}
return {
"total_pages": len(pages),
"total_size_mb": round(total_size / 1024 / 1024, 2),
"oldest_page": oldest_page,
"newest_page": newest_page,
"journal_pages": len([p for p in pages if p.get('journal?')]),
"regular_pages": len([p for p in pages if not p.get('journal?')])
}
```
### Implementation Benefits
- **Higher Value**: These operations save significant time vs individual calls
- **Leverage Local Access**: Use file system metadata for unique insights
- **Smart Workflows**: Template-based creation, cloning, and analysis
- **Safe Operations**: Dry-run capability for destructive operations
## Phase 5: Advanced Features
**Priority:** Low
**Timeline:** 4-5 days
### Goals
- Add sophisticated querying capabilities
- Implement navigation helpers
- Provide advanced filtering options
### Features to Implement
1. **Advanced Query Builder**
- Support for complex queries with AND/OR/NOT operators
- Date range filtering
- Property-based filtering
- Regex support
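   As a rough sketch of what such a query spec could look like (the dataclass and its field names are illustrative, not an existing API):
```python
from dataclasses import dataclass, field
from typing import List, Optional
import re

@dataclass
class BlockQuery:
    """Illustrative query spec; fields are hypothetical, not part of any existing API."""
    all_terms: List[str] = field(default_factory=list)    # AND
    any_terms: List[str] = field(default_factory=list)    # OR
    none_terms: List[str] = field(default_factory=list)   # NOT
    journal_day_from: Optional[int] = None                # YYYYMMDD
    journal_day_to: Optional[int] = None
    content_regex: Optional[str] = None

def matches(block: dict, q: BlockQuery) -> bool:
    """Apply the query to a single block dict returned by the Logseq API."""
    content = block.get("content", "")
    day = block.get("journalDay")
    if any(t not in content for t in q.all_terms):
        return False
    if q.any_terms and not any(t in content for t in q.any_terms):
        return False
    if any(t in content for t in q.none_terms):
        return False
    if q.journal_day_from and (day is None or day < q.journal_day_from):
        return False
    if q.journal_day_to and (day is None or day > q.journal_day_to):
        return False
    if q.content_regex and not re.search(q.content_regex, content):
        return False
    return True
```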
2. **Graph Navigation Helpers**
```python
@mcp.tool()
def navigate_to_parent(block_id: str):
"""Navigate to parent block/page"""
@mcp.tool()
def get_siblings(block_id: str):
"""Get sibling blocks"""
@mcp.tool()
def get_descendants(block_id: str, max_depth: int = None):
"""Get all descendant blocks"""
```
3. **Smart Filters**
```python
@mcp.tool()
def filter_blocks(filters: Dict):
"""Filter blocks by multiple criteria"""
@mcp.tool()
def get_blocks_by_property(property_name: str, value: Any):
"""Get blocks with specific property values"""
```
4. **Export/Import Utilities**
```python
@mcp.tool()
def export_page_tree(page_name: str, format: str = "markdown"):
"""Export page and all blocks to specified format"""
@mcp.tool()
def import_content(content: str, format: str, target_page: str):
"""Import content into Logseq"""
```
## Implementation Guidelines
### Code Organization
- Create new modules for resources (`resources.py`) and prompts (`prompts.py`)
- Keep composite operations in a separate `composite.py` module
- Add `utils/filesystem.py` for file system operations
- Maintain backward compatibility with existing tools
### Configuration
```python
# config.py
class Config:
LOGSEQ_API_URL = os.getenv("LOGSEQ_API_URL", "http://localhost:12315")
LOGSEQ_TOKEN = os.getenv("LOGSEQ_TOKEN")
LOGSEQ_GRAPH_PATH = os.getenv("LOGSEQ_GRAPH_PATH") # Path to graph directory
CACHE_TTL = int(os.getenv("CACHE_TTL", "300"))
MAX_BATCH_SIZE = int(os.getenv("MAX_BATCH_SIZE", "50"))
REQUEST_TIMEOUT = int(os.getenv("REQUEST_TIMEOUT", "30"))
```
### File System Helpers
```python
# utils/filesystem.py
import os
from datetime import datetime
from pathlib import Path
from typing import Optional
def get_page_file_path(page_name: str, graph_path: str) -> Optional[Path]:
"""Map a Logseq page name to its .md file path"""
# Handle special characters and namespaces
safe_name = page_name.replace("/", "___") # Logseq namespace separator
# Check pages directory
page_path = Path(graph_path) / "pages" / f"{safe_name}.md"
if page_path.exists():
return page_path
# Check journals directory for journal pages
journal_path = Path(graph_path) / "journals" / f"{safe_name}.md"
if journal_path.exists():
return journal_path
return None
def get_file_metadata(file_path: Path) -> dict:
"""Get file system metadata for a page file"""
stat = file_path.stat()
return {
'size': stat.st_size,
'modified_time': stat.st_mtime,
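# Note: st_ctime is metadata-change time on Linux/macOS, not creation time (it is creation time on Windows).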
'created_time': stat.st_ctime,
'modified_date': datetime.fromtimestamp(stat.st_mtime).isoformat(),
'created_date': datetime.fromtimestamp(stat.st_ctime).isoformat()
}
```
### Content Formatting Helpers
```python
# utils/content_formatter.py
import re
from typing import List
def needs_splitting(content: str) -> bool:
"""
Check if content needs to be split into multiple blocks.
Returns True if content contains:
- Multiple unordered lists
- Multiple ordered lists
- Mixed content types (headings + lists, multiple paragraphs with lists, etc.)
"""
lines = content.strip().split('\n')
has_heading = any(line.strip().startswith('#') for line in lines)
has_unordered_list = any(line.strip().startswith(('- ', '* ', '+ ')) for line in lines)
has_ordered_list = any(re.match(r'^\d+\.', line.strip()) for line in lines)
# Count different content types
content_types = sum([has_heading, has_unordered_list, has_ordered_list])
# Check for multiple lists
list_groups = []
current_group = []
for line in lines:
if line.strip().startswith(('- ', '* ', '+ ', '1.', '2.', '3.')):
current_group.append(line)
else:
if current_group:
list_groups.append(current_group)
current_group = []
if current_group:
list_groups.append(current_group)
return content_types > 1 or len(list_groups) > 1
def split_content(content: str) -> List[str]:
"""
Split content into multiple blocks that Logseq can properly display.
Rules:
- Each heading becomes its own block
- Each list (ordered or unordered) becomes its own block
- Each paragraph becomes its own block
"""
lines = content.strip().split('\n')
blocks = []
current_block = []
current_type = None
for line in lines:
line_stripped = line.strip()
# Determine line type
if line_stripped.startswith('#'):
line_type = 'heading'
elif line_stripped.startswith(('- ', '* ', '+ ')):
line_type = 'unordered_list'
elif re.match(r'^\d+\.', line_stripped):
line_type = 'ordered_list'
elif line_stripped:
line_type = 'paragraph'
else:
line_type = 'empty'
# Handle type changes
if line_type != 'empty':
if current_type and line_type != current_type:
# Save current block and start new one
if current_block:
blocks.append('\n'.join(current_block))
current_block = [line]
current_type = line_type
else:
current_block.append(line)
if not current_type:
current_type = line_type
# Don't forget the last block
if current_block:
blocks.append('\n'.join(current_block))
return blocks
def format_for_logseq(content: str) -> List[str]:
"""
Format content for Logseq, splitting into multiple blocks if necessary.
Returns a list of content strings, each suitable for a single Logseq block.
"""
if needs_splitting(content):
return split_content(content)
return [content]
```
### Caching Implementation
```python
# utils/cache.py
from datetime import datetime, timedelta
from typing import Any, Callable, Optional
class ResourceCache:
def __init__(self, ttl_seconds: int = 300):
self._cache = {}
self._ttl = timedelta(seconds=ttl_seconds)
def get_or_fetch(self, key: str, fetcher: Callable[[], Any]) -> Any:
if key in self._cache:
data, timestamp = self._cache[key]
if datetime.now() - timestamp < self._ttl:
return data
data = fetcher()
self._cache[key] = (data, datetime.now())
return data
def invalidate(self, key: Optional[str] = None):
if key:
self._cache.pop(key, None)
else:
self._cache.clear()
```
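Example wiring, assuming the `Config` class sketched above:
```python
cache = ResourceCache(ttl_seconds=Config.CACHE_TTL)
pages = cache.get_or_fetch("all_pages", logseq_client.get_all_pages)
cache.invalidate("all_pages")  # drop the cached entry after a mutating operation
```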
### Testing Strategy
#### Unit Tests
```python
# tests/test_resources.py
import pytest
from unittest.mock import Mock, patch
@pytest.fixture
def mock_logseq_client():
return Mock()
async def test_graph_info_caching(mock_logseq_client):
# Test that repeated calls use cache
pass
async def test_resource_error_handling(mock_logseq_client):
# Test JSON-RPC error responses
pass
async def test_file_metadata_extraction():
# Test file system metadata helpers
pass
```
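For example, the caching test could be made concrete like this (using the ResourceCache sketched above; no Logseq instance required):
```python
def test_get_or_fetch_uses_cache():
    cache = ResourceCache(ttl_seconds=300)
    calls = []
    def fetcher():
        calls.append(1)
        return {"name": "demo-graph"}
    # First call fetches; the second is served from the cache within the TTL.
    assert cache.get_or_fetch("graph_info", fetcher) == {"name": "demo-graph"}
    assert cache.get_or_fetch("graph_info", fetcher) == {"name": "demo-graph"}
    assert len(calls) == 1
```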
#### Integration Tests
```python
# tests/test_integration.py
async def test_composite_operations():
# Test create_page_with_template
# Test clone_page_structure
pass
async def test_find_and_replace_dry_run():
# Test dry run safety
pass
async def test_large_dataset_performance():
# Test with datasets approaching Logseq limits
pass
```
#### Performance Benchmarks
```python
# tests/benchmarks.py
import time
async def benchmark_composite_vs_individual():
# Compare composite operations vs individual calls
start = time.time()
# ... operations
elapsed = time.time() - start
assert elapsed < threshold
async def benchmark_file_metadata_access():
# Test file system access performance
pass
```
### Error Handling Strategy
```python
# utils/errors.py
import asyncio
class LogseqAPIError(Exception):
"""Base exception for Logseq API errors"""
pass
class DatasetTooLargeError(LogseqAPIError):
"""Raised when dataset exceeds Logseq capabilities"""
pass
class PageFileNotFoundError(LogseqAPIError):
"""Raised when page file cannot be found on disk"""
pass
class GraphPathNotConfiguredError(LogseqAPIError):
"""Raised when LOGSEQ_GRAPH_PATH is not set"""
pass
# Simple retry for local operations
async def retry_local_operation(func, max_retries=3):
for attempt in range(max_retries):
try:
return await func()
except PageFileNotFoundError:
if attempt == max_retries - 1:
raise
await asyncio.sleep(0.1) # Brief pause for file system
```
### Documentation Updates
- Update README with new features
- Create examples directory with usage scenarios
- Add configuration guide for resources
- **Add Logseq Block Content Guidelines**:
- Document that each block can only contain one content type
- Provide examples of how to split complex content into multiple blocks
- Include utility functions for content splitting
## Success Metrics
- Reduced API calls for common operations (target: 50% reduction using caching and file metadata)
- Improved docstring clarity (measured by user feedback)
- Successful implementation of all core resources and prompts
- Performance improvement for bulk operations (target: 3x faster due to local operation)
- File system integration providing real-time modification tracking
## Risk Mitigation
- Maintain backward compatibility throughout all phases
- Implement feature flags for new functionality
- Provide migration guide for existing users
- Extensive testing before each phase release
- Handle Logseq performance limits gracefully
- Implement proper error recovery for sequential batch operations
- Ensure cross-platform file path handling (Windows/macOS/Linux)
- Handle graph path discovery if not explicitly configured
```