# Directory Structure

```
├── .dockerignore
├── .gitignore
├── .python-version
├── Dockerfile
├── README.md
├── requirements.txt
├── search.py
├── server.py
└── smithery.yaml
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
1 | 3.10
2 | 
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | venv/
2 | .claude/
3 | 
```

--------------------------------------------------------------------------------
/.dockerignore:
--------------------------------------------------------------------------------

```
 1 | __pycache__
 2 | *.pyc
 3 | *.pyo
 4 | *.pyd
 5 | .git
 6 | .gitignore
 7 | README.md
 8 | .env
 9 | .venv
10 | venv/
11 | .pytest_cache
12 | .coverage
13 | htmlcov/
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # 📚 Semantic Scholar MCP Server
  2 | 
  3 | > **A comprehensive Model Context Protocol (MCP) server for seamless integration with Semantic Scholar's academic database**
  4 | 
  5 | [![smithery badge](https://smithery.ai/badge/@alperenkocyigit/semantic-scholar-graph-api)](https://smithery.ai/server/@alperenkocyigit/semantic-scholar-graph-api)
  6 | ![Python](https://img.shields.io/badge/python-3.10+-blue.svg)
  7 | ![License](https://img.shields.io/badge/license-MIT-green.svg)
  8 | 
  9 | **Maintainer:** [@alperenkocyigit](https://github.com/alperenkocyigit)
 10 | 
 11 | This powerful MCP server bridges the gap between AI assistants and academic research by providing direct access to Semantic Scholar's comprehensive database. Whether you're conducting literature reviews, exploring citation networks, or seeking academic insights, this server offers a streamlined interface to millions of research papers.
 12 | 
 13 | ## 🌟 What Can You Do?
 14 | 
 15 | ### 🔍 **Advanced Paper Discovery**
 16 | - **Smart Search**: Find papers using natural language queries
 17 | - **Bulk Operations**: Process multiple papers simultaneously
 18 | - **Autocomplete**: Get intelligent title suggestions as you type
 19 | - **Precise Matching**: Find exact papers using title-based search
 20 | 
 21 | ### 🎯 **AI-Powered Recommendations**
 22 | - **Smart Paper Recommendations**: Get personalized paper suggestions based on your interests
 23 | - **Multi-Example Learning**: Use multiple positive and negative examples to fine-tune recommendations
 24 | - **Single Paper Similarity**: Find papers similar to a specific research work
 25 | - **Relevance Scoring**: AI-powered relevance scores for better paper discovery
 26 | 
 27 | ### 👥 **Author Research**
 28 | - **Author Profiles**: Comprehensive author information and metrics
 29 | - **Bulk Author Data**: Fetch multiple author profiles at once
 30 | - **Author Search**: Discover researchers by name or affiliation
 31 | 
 32 | ### 📊 **Citation Analysis**
 33 | - **Citation Networks**: Explore forward and backward citations
 34 | - **Reference Mapping**: Understand paper relationships
 35 | - **Impact Metrics**: Access citation counts and paper influence
 36 | 
 37 | ### 💡 **Content Discovery**
 38 | - **Text Snippets**: Search within paper content
 39 | - **Contextual Results**: Find relevant passages and quotes
 40 | - **Full-Text Access**: When available through Semantic Scholar
 41 | 
 42 | ---
 43 | 
 44 | ## 🛠️ Quick Setup
 45 | 
 46 | ### System Requirements
 47 | - **Python**: 3.10 or higher
 48 | - **Dependencies**: `requests`, `mcp`, `beautifulsoup4`, `pydantic`, `uvicorn`, `httpx`, `anyio`
 49 | - **Network**: Stable internet connection for API access
 50 | 
 51 | ### 🆕 **NEW: MCP Streamable HTTP Transport**
 52 | This server now implements the **MCP Streamable HTTP** transport protocol, providing:
 53 | - **Higher Concurrency**: Handle significantly more simultaneous requests than a stdio transport
 54 | - **Lower Latency**: Direct HTTP communication for faster response times  
 55 | - **Better Resource Efficiency**: More efficient resource utilization
 56 | - **Future-Proofing**: HTTP is the recommended transport in MCP specifications
 57 | 
 58 | The server uses FastMCP for seamless MCP protocol compliance and optimal performance.
 59 | 
 60 | ## 🚀 Installation Options
 61 | 
 62 | ### ⚡ One-Click Install with Smithery
 63 | 
 64 | **For Claude Desktop:**
 65 | ```bash
 66 | npx -y @smithery/cli@latest install @alperenkocyigit/semantic-scholar-graph-api --client claude --config "{}"
 67 | ```
 68 | 
 69 | **For Cursor IDE:**
 70 | Navigate to `Settings → Cursor Settings → MCP → Add new server` and paste:
 71 | ```bash
 72 | npx -y @smithery/cli@latest run @alperenkocyigit/semantic-scholar-graph-api --client cursor --config "{}"
 73 | ```
 74 | 
 75 | **For Windsurf:**
 76 | ```bash
 77 | npx -y @smithery/cli@latest install @alperenkocyigit/semantic-scholar-graph-api --client windsurf --config "{}"
 78 | ```
 79 | 
 80 | **For Cline:**
 81 | ```bash
 82 | npx -y @smithery/cli@latest install @alperenkocyigit/semantic-scholar-graph-api --client cline --config "{}"
 83 | ```
 84 | 
 85 | ### 🔧 Manual Installation
 86 | 
 87 | 1. **Clone the repository:**
 88 |    ```bash
 89 |    git clone https://github.com/alperenkocyigit/semantic-scholar-graph-api.git
 90 |    cd semantic-scholar-graph-api
 91 |    ```
 92 | 
 93 | 2. **Install dependencies:**
 94 |    ```bash
 95 |    pip install -r requirements.txt
 96 |    ```
 97 | 
 98 | 3. **Run the MCP Streamable HTTP server:**
 99 |    ```bash
100 |    python server.py
101 |    ```
102 | 
103 | ---
104 | 
105 | ## 🔧 Configuration Guide
106 | 
107 | ### Local Setups
108 | 
109 | #### Claude Desktop Setup
110 | 
111 | **macOS/Linux Configuration:**
112 | Add to your `claude_desktop_config.json`:
113 | ```json
114 | {
115 |   "mcpServers": {
116 |     "semanticscholar": {
117 |       "command": "python",
118 |       "args": ["/path/to/your/semantic_scholar_server.py"]
119 |     }
120 |   }
121 | }
122 | ```
123 | 
124 | **Windows Configuration:**
125 | ```json
126 | {
127 |   "mcpServers": {
128 |     "semanticscholar": {
129 |       "command": "C:\\Users\\YOUR_USERNAME\\miniconda3\\envs\\mcp_server\\python.exe",
130 |       "args": ["D:\\path\\to\\your\\semantic_scholar_server.py"],
131 |       "env": {},
132 |       "disabled": false,
133 |       "autoApprove": []
134 |     }
135 |   }
136 | }
137 | ```
138 | 
139 | #### Cline Integration
140 | ```json
141 | {
142 |   "mcpServers": {
143 |     "semanticscholar": {
144 |       "command": "bash",
145 |       "args": [
146 |         "-c",
147 |         "source /path/to/your/.venv/bin/activate && python /path/to/your/semantic_scholar_server.py"
148 |       ],
149 |       "env": {},
150 |       "disabled": false,
151 |       "autoApprove": []
152 |     }
153 |   }
154 | }
155 | ```
156 | 
157 | ### Remote Setups
158 | 
159 | #### Auto Configuration
160 | ```bash
161 | npx -y @smithery/cli@latest install @alperenkocyigit/semantic-scholar-graph-api --client <valid-client-name> --key <your-smithery-api-key>
162 | ```
163 | **Valid client names:** `claude`, `cursor`, `vscode`, `boltai`
164 | 
165 | #### JSON Configuration
166 | **macOS/Linux Configuration:**
167 | ```json
168 | {
169 |   "mcpServers": {
170 |     "semantic-scholar-graph-api": {
171 |       "command": "npx",
172 |       "args": [
173 |         "-y",
174 |         "@smithery/cli@latest",
175 |         "run",
176 |         "@alperenkocyigit/semantic-scholar-graph-api",
177 |         "--key",
178 |         "your-smithery-api-key"
179 |       ]
180 |     }
181 |   }
182 | }
183 | ```
184 | **Windows Configuration:**
185 | ```json
186 | {
187 |   "mcpServers": {
188 |     "semantic-scholar-graph-api": {
189 |       "command": "cmd",
190 |       "args": [
191 |         "/c",
192 |         "npx",
193 |         "-y",
194 |         "@smithery/cli@latest",
195 |         "run",
196 |         "@alperenkocyigit/semantic-scholar-graph-api",
197 |         "--key",
198 |         "your-smithery-api-key"
199 |       ]
200 |     }
201 |   }
202 | }
203 | ```
204 | **WSL Configuration:**
205 | ```json
206 | {
207 |   "mcpServers": {
208 |     "semantic-scholar-graph-api": {
209 |       "command": "wsl",
210 |       "args": [
211 |         "npx",
212 |         "-y",
213 |         "@smithery/cli@latest",
214 |         "run",
215 |         "@alperenkocyigit/semantic-scholar-graph-api",
216 |         "--key",
217 |         "your-smithery-api-key"
218 |       ]
219 |     }
220 |   }
221 | }
222 | ```
223 | 
224 | ---
225 | 
226 | ## 🎯 Available Tools
227 | 
228 | | Tool | Description | Use Case |
229 | |------|-------------|----------|
230 | | `search_semantic_scholar_papers` | Search papers by query | Literature discovery |
231 | | `search_semantic_scholar_authors` | Find authors by name | Researcher identification |
232 | | `get_semantic_scholar_paper_details` | Get comprehensive paper info | Detailed analysis |
233 | | `get_semantic_scholar_author_details` | Get author profiles | Author research |
234 | | `get_semantic_scholar_citations_and_references` | Fetch citation network | Impact analysis |
235 | | `get_semantic_scholar_paper_match` | Find exact paper matches | Precise searching |
236 | | `get_semantic_scholar_paper_autocomplete` | Get title suggestions | Smart completion |
237 | | `get_semantic_scholar_papers_batch` | Bulk paper retrieval | Batch processing |
238 | | `get_semantic_scholar_authors_batch` | Bulk author data | Mass analysis |
239 | | `search_semantic_scholar_snippets` | Search text content | Content discovery |
240 | | `get_semantic_scholar_paper_recommendations_from_lists` | Get recommendations from positive/negative examples | AI-powered discovery |
241 | | `get_semantic_scholar_paper_recommendations` | Get recommendations from single paper | Similar paper finding |
242 | 
243 | ---
244 | 
245 | ## 💡 Usage Examples
246 | 
247 | ### Basic Paper Search
248 | ```python
249 | # Search for papers on machine learning
250 | results = await search_semantic_scholar_papers("machine learning", num_results=5)
251 | ```
252 | 
253 | ### Author Research
254 | ```python
255 | # Find authors working on natural language processing
256 | authors = await search_semantic_scholar_authors("natural language processing")
257 | ```
258 | 
259 | ### Citation Analysis
260 | ```python
261 | # Get citation network for a specific paper
262 | citations = await get_semantic_scholar_citations_and_references("paper_id_here")
263 | ```
264 | 
265 | ### 🆕 AI-Powered Paper Recommendations
266 | 
267 | #### Multi-Example Recommendations
268 | ```python
269 | # Get recommendations based on multiple positive and negative examples
270 | positive_papers = ["paper_id_1", "paper_id_2", "paper_id_3"]
271 | negative_papers = ["bad_paper_id_1", "bad_paper_id_2"]
272 | recommendations = await get_semantic_scholar_paper_recommendations_from_lists(
273 |     positive_paper_ids=positive_papers,
274 |     negative_paper_ids=negative_papers,
275 |     limit=20
276 | )
277 | ```
278 | 
279 | #### Single Paper Similarity
280 | ```python
281 | # Find papers similar to a specific research work
282 | similar_papers = await get_semantic_scholar_paper_recommendations(
283 |     paper_id="target_paper_id",
284 |     limit=15
285 | )
286 | ```
287 | 
288 | #### Content Discovery
289 | ```python
290 | # Search for specific text content within papers
291 | snippets = await search_semantic_scholar_snippets(
292 |     query="neural network optimization",
293 |     limit=10
294 | )
295 | ```
296 | ---
297 | 
298 | ## 📂 Project Architecture
299 | 
300 | ```
301 | semantic-scholar-graph-api/
302 | ├── 📄 README.md                    # Project documentation
303 | ├── 📋 requirements.txt             # Python dependencies
304 | ├── 🔍 search.py                    # Core API interaction module
305 | ├── 🖥️ server.py                    # MCP server implementation
306 | └── 🗂️ __pycache__/                 # Compiled Python files
307 | ```
308 | 
309 | ### Core Components
310 | 
311 | - **`search.py`**: Handles all interactions with the Semantic Scholar API, including rate limiting, error handling, and data processing
312 | - **`server.py`**: Implements the MCP server protocol and exposes tools for AI assistant integration
313 | 
314 | ---
315 | 
316 | ## 🤝 Contributing
317 | 
318 | We welcome contributions from the community! Here's how you can help:
319 | 
320 | ### Ways to Contribute
321 | - 🐛 **Bug Reports**: Found an issue? Let us know!
322 | - 💡 **Feature Requests**: Have ideas for improvements?
323 | - 🔧 **Code Contributions**: Submit pull requests
324 | - 📖 **Documentation**: Help improve our docs
325 | 
326 | ### Development Setup
327 | 1. Fork the repository
328 | 2. Create a feature branch: `git checkout -b feature/amazing-feature`
329 | 3. Make your changes and test thoroughly
330 | 4. Commit your changes: `git commit -m 'Add amazing feature'`
331 | 5. Push to the branch: `git push origin feature/amazing-feature`
332 | 6. Open a Pull Request
333 | 
334 | ---
335 | 
336 | ## 📄 License
337 | 
338 | This project is licensed under the **MIT License** - see the [LICENSE](LICENSE) file for details.
339 | 
340 | ---
341 | 
342 | ## 🙏 Acknowledgments
343 | 
344 | - **Semantic Scholar Team** for providing the excellent API
345 | - **Model Context Protocol** community for the framework
346 | - **Contributors** who help improve this project
347 | 
348 | ---
349 | 
350 | ## 📞 Support
351 | 
352 | - **Issues**: [GitHub Issues](https://github.com/alperenkocyigit/semantic-scholar-graph-api/issues)
353 | - **Discussions**: [GitHub Discussions](https://github.com/alperenkocyigit/semantic-scholar-graph-api/discussions)
354 | - **Maintainer**: [@alperenkocyigit](https://github.com/alperenkocyigit)
355 | 
356 | ---
357 | 
358 | <div align="center">
359 |   <strong>Made with ❤️ for the research community</strong>
360 |   <br>
361 |   <sub>Empowering AI agents with academic knowledge</sub>
362 | </div>
363 | 
```
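
The tools above are easiest to exercise end-to-end by connecting to the running server with the MCP Python SDK over streamable HTTP. A minimal client sketch (not part of the repository): it assumes the server is running locally on its default port 3000 and that FastMCP mounts the transport at its usual `/mcp` path.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    # Endpoint assumptions: host/port from this repo's defaults, /mcp from FastMCP.
    async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "search_semantic_scholar_papers",
                {"query": "graph neural networks", "num_results": 3},
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```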

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
1 | requests
2 | beautifulsoup4
3 | mcp
4 | uvicorn
5 | httpx
6 | pydantic
7 | anyio
8 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | FROM python:3.11-slim
 2 | 
 3 | # Set working directory
 4 | WORKDIR /app
 5 | 
 6 | # Copy requirements and install dependencies
 7 | COPY requirements.txt .
 8 | RUN pip install --no-cache-dir -r requirements.txt
 9 | 
10 | # Copy application code
11 | COPY . .
12 | 
13 | # Expose port
14 | EXPOSE 3000
15 | 
16 | # Set environment variables
17 | ENV HOST=0.0.0.0
18 | ENV PORT=3000
19 | 
20 | # Health check for MCP server (the slim image ships without curl, so probe via Python)
21 | HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
22 |   CMD python -c "import socket; socket.create_connection(('localhost', 3000), timeout=5)" || exit 1
23 | 
24 | # Run the HTTP server
25 | CMD ["python", "server.py"]
```
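
The `HEALTHCHECK` above only verifies that the port is accepting connections. The same probe can be run from the host against a running container; a sketch, assuming the image was started with `docker run -p 3000:3000 ...`:

```python
import socket

# Host-side version of the container health probe (assumes the server's port
# is published on localhost:3000).
try:
    with socket.create_connection(("localhost", 3000), timeout=5):
        print("server is accepting connections")
except OSError as exc:
    raise SystemExit(f"health check failed: {exc}")
```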

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | name: semantic-scholar-graph-api
 2 | description: "A comprehensive Model Context Protocol (MCP) server for accessing Semantic Scholar's academic database"
 3 | version: "1.0.0"
 4 | transport: streamable-http
 5 | runtime: python
 6 | entrypoint: server.py
 7 | port: 3000
 8 | 
 9 | capabilities:
10 |   tools: true
11 |   resources: false
12 |   prompts: false
13 | 
14 | environment:
15 |   HOST: "0.0.0.0"
16 |   PORT: "3000"
17 | 
18 | dependencies:
19 |   - requests
20 |   - beautifulsoup4
21 |   - mcp
22 |   - uvicorn
23 |   - httpx
24 |   - pydantic
25 |   - anyio
26 | 
27 | python:
28 |   version: "3.11"
29 |   requirements: requirements.txt
30 | 
31 | health_check:
32 |   path: "/"
33 |   timeout: 30
34 |   interval: 30
35 | 
36 | resources:
37 |   memory: 256
38 |   cpu: 0.5
39 | 
40 | tags:
41 |   - academic
42 |   - research
43 |   - papers
44 |   - citations
45 |   - semantic-scholar
46 |   - ai
47 |   - machine-learning
48 | 
```

--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Semantic Scholar MCP Server - Streamable HTTP Transport
  4 | A Model Context Protocol (MCP) server for accessing Semantic Scholar's academic database.
  5 | Implements the MCP Streamable HTTP transport protocol.
  6 | """
  7 | 
  8 | import asyncio
  9 | import logging
 10 | import os
 11 | from typing import Any, Dict, List, Optional
 12 | 
 13 | from mcp.server import FastMCP
 14 | from pydantic import BaseModel, Field
 15 | 
 16 | from search import (
 17 |     search_papers, get_paper_details, get_author_details, get_citations_and_references,
 18 |     search_authors, search_paper_match, get_paper_autocomplete, get_papers_batch,
 19 |     get_authors_batch, search_snippets, get_paper_recommendations_from_lists,
 20 |     get_paper_recommendations
 21 | )
 22 | 
 23 | # Set up logging
 24 | logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
 25 | logger = logging.getLogger(__name__)
 26 | 
 27 | # Initialize the FastMCP server
 28 | app = FastMCP("Semantic Scholar MCP Server")
 29 | 
 30 | # Tool implementations
 31 | @app.tool()
 32 | async def search_semantic_scholar_papers(
 33 |     query: str,
 34 |     num_results: int = 10
 35 | ) -> List[Dict[str, Any]]:
 36 |     """
 37 |     Search for papers on Semantic Scholar using a query string.
 38 |     
 39 |     Args:
 40 |         query: Search query for papers
 41 |         num_results: Number of results to return (max 100)
 42 |     
 43 |     Returns:
 44 |         List of paper objects with details like title, authors, year, abstract, etc.
 45 |     """
 46 |     logger.info(f"Searching for papers with query: {query}, num_results: {num_results}")
 47 |     try:
 48 |         results = await asyncio.to_thread(search_papers, query, num_results)
 49 |         return results
 50 |     except Exception as e:
 51 |         logger.error(f"Error searching papers: {e}")
 52 |         raise Exception(f"An error occurred while searching: {str(e)}")
 53 | 
 54 | @app.tool()
 55 | async def get_semantic_scholar_paper_details(
 56 |     paper_id: str
 57 | ) -> Dict[str, Any]:
 58 |     """
 59 |     Get details of a specific paper on Semantic Scholar.
 60 |     
 61 |     Args:
 62 |         paper_id: Paper ID (e.g., Semantic Scholar paper ID or DOI)
 63 |     
 64 |     Returns:
 65 |         Paper object with comprehensive details
 66 |     """
 67 |     logger.info(f"Fetching paper details for paper ID: {paper_id}")
 68 |     try:
 69 |         paper = await asyncio.to_thread(get_paper_details, paper_id)
 70 |         return paper
 71 |     except Exception as e:
 72 |         logger.error(f"Error fetching paper details: {e}")
 73 |         raise Exception(f"An error occurred while fetching paper details: {str(e)}")
 74 | 
 75 | @app.tool()
 76 | async def get_semantic_scholar_author_details(
 77 |     author_id: str
 78 | ) -> Dict[str, Any]:
 79 |     """
 80 |     Get details of a specific author on Semantic Scholar.
 81 |     
 82 |     Args:
 83 |         author_id: Author ID (Semantic Scholar author ID)
 84 |     
 85 |     Returns:
 86 |         Author object with comprehensive details including publications, h-index, etc.
 87 |     """
 88 |     logger.info(f"Fetching author details for author ID: {author_id}")
 89 |     try:
 90 |         author = await asyncio.to_thread(get_author_details, author_id)
 91 |         return author
 92 |     except Exception as e:
 93 |         logger.error(f"Error fetching author details: {e}")
 94 |         raise Exception(f"An error occurred while fetching author details: {str(e)}")
 95 | 
 96 | @app.tool()
 97 | async def get_semantic_scholar_citations_and_references(
 98 |     paper_id: str
 99 | ) -> Dict[str, Any]:
100 |     """
101 |     Get citations and references for a specific paper on Semantic Scholar.
102 |     
103 |     Args:
104 |         paper_id: Paper ID to get citations and references for
105 |     
106 |     Returns:
107 |         Object containing citations and references lists
108 |     """
109 |     logger.info(f"Fetching citations and references for paper ID: {paper_id}")
110 |     try:
111 |         citations_refs = await asyncio.to_thread(get_citations_and_references, paper_id)
112 |         return citations_refs
113 |     except Exception as e:
114 |         logger.error(f"Error fetching citations and references: {e}")
115 |         raise Exception(f"An error occurred while fetching citations and references: {str(e)}")
116 | 
117 | @app.tool()
118 | async def search_semantic_scholar_authors(
119 |     query: str,
120 |     limit: int = 10
121 | ) -> List[Dict[str, Any]]:
122 |     """
123 |     Search for authors on Semantic Scholar using a query string.
124 |     
125 |     Args:
126 |         query: Search query for authors
127 |         limit: Maximum number of authors to return
128 |     
129 |     Returns:
130 |         List of author objects with details
131 |     """
132 |     logger.info(f"Searching for authors with query: {query}, limit: {limit}")
133 |     try:
134 |         results = await asyncio.to_thread(search_authors, query, limit)
135 |         return results
136 |     except Exception as e:
137 |         logger.error(f"Error searching authors: {e}")
138 |         raise Exception(f"An error occurred while searching authors: {str(e)}")
139 | 
140 | @app.tool()
141 | async def get_semantic_scholar_paper_match(
142 |     query: str
143 | ) -> Dict[str, Any]:
144 |     """
145 |     Find the best matching paper on Semantic Scholar using title-based search.
146 |     
147 |     Args:
148 |         query: Paper title or description to match
149 |     
150 |     Returns:
151 |         Best matching paper object
152 |     """
153 |     logger.info(f"Finding paper match for query: {query}")
154 |     try:
155 |         result = await asyncio.to_thread(search_paper_match, query)
156 |         return result
157 |     except Exception as e:
158 |         logger.error(f"Error finding paper match: {e}")
159 |         raise Exception(f"An error occurred while finding paper match: {str(e)}")
160 | 
161 | @app.tool()
162 | async def get_semantic_scholar_paper_autocomplete(
163 |     query: str
164 | ) -> List[Dict[str, Any]]:
165 |     """
166 |     Get paper title autocompletion suggestions for a partial query.
167 |     
168 |     Args:
169 |         query: Partial paper title for autocomplete suggestions
170 |     
171 |     Returns:
172 |         List of suggestion objects with id, title, and authorsYear
173 |     """
174 |     logger.info(f"Getting paper autocomplete for query: {query}")
175 |     try:
176 |         results = await asyncio.to_thread(get_paper_autocomplete, query)
177 |         return results
178 |     except Exception as e:
179 |         logger.error(f"Error getting autocomplete suggestions: {e}")
180 |         raise Exception(f"An error occurred while getting autocomplete suggestions: {str(e)}")
181 | 
182 | @app.tool()
183 | async def get_semantic_scholar_papers_batch(
184 |     paper_ids: List[str]
185 | ) -> List[Dict[str, Any]]:
186 |     """
187 |     Get details for multiple papers at once using batch API.
188 |     
189 |     Args:
190 |         paper_ids: List of paper IDs to fetch
191 |     
192 |     Returns:
193 |         List of paper objects
194 |     """
195 |     logger.info(f"Fetching batch paper details for {len(paper_ids)} papers")
196 |     try:
197 |         results = await asyncio.to_thread(get_papers_batch, paper_ids)
198 |         return results
199 |     except Exception as e:
200 |         logger.error(f"Error fetching batch paper details: {e}")
201 |         raise Exception(f"An error occurred while fetching batch paper details: {str(e)}")
202 | 
203 | @app.tool()
204 | async def get_semantic_scholar_authors_batch(
205 |     author_ids: List[str]
206 | ) -> List[Dict[str, Any]]:
207 |     """
208 |     Get details for multiple authors at once using batch API.
209 |     
210 |     Args:
211 |         author_ids: List of author IDs to fetch
212 |     
213 |     Returns:
214 |         List of author objects
215 |     """
216 |     logger.info(f"Fetching batch author details for {len(author_ids)} authors")
217 |     try:
218 |         results = await asyncio.to_thread(get_authors_batch, author_ids)
219 |         return results
220 |     except Exception as e:
221 |         logger.error(f"Error fetching batch author details: {e}")
222 |         raise Exception(f"An error occurred while fetching batch author details: {str(e)}")
223 | 
224 | @app.tool()
225 | async def search_semantic_scholar_snippets(
226 |     query: str,
227 |     limit: int = 10
228 | ) -> List[Dict[str, Any]]:
229 |     """
230 |     Search for text snippets from papers that match the query.
231 |     
232 |     Args:
233 |         query: Search query for text snippets within papers
234 |         limit: Maximum number of snippets to return
235 |     
236 |     Returns:
237 |         List of snippet objects with context and source paper information
238 |     """
239 |     logger.info(f"Searching for text snippets with query: {query}, limit: {limit}")
240 |     try:
241 |         results = await asyncio.to_thread(search_snippets, query, limit)
242 |         return results
243 |     except Exception as e:
244 |         logger.error(f"Error searching snippets: {e}")
245 |         raise Exception(f"An error occurred while searching snippets: {str(e)}")
246 | 
247 | @app.tool()
248 | async def get_semantic_scholar_paper_recommendations_from_lists(
249 |     positive_paper_ids: List[str],
250 |     negative_paper_ids: Optional[List[str]] = None,
251 |     limit: int = 10
252 | ) -> List[Dict[str, Any]]:
253 |     """
254 |     Get recommended papers based on lists of positive and negative example papers.
255 |     
256 |     Args:
257 |         positive_paper_ids: List of positive example paper IDs
258 |         negative_paper_ids: List of negative example paper IDs
259 |         limit: Maximum number of recommendations to return
260 |     
261 |     Returns:
262 |         List of recommended paper objects with relevance scores
263 |     """
264 |     logger.info(f"Getting paper recommendations from lists: {len(positive_paper_ids)} positive, {len(negative_paper_ids or [])} negative, limit: {limit}")
265 |     try:
266 |         results = await asyncio.to_thread(get_paper_recommendations_from_lists, positive_paper_ids, negative_paper_ids or [], limit)
267 |         return results
268 |     except Exception as e:
269 |         logger.error(f"Error getting paper recommendations from lists: {e}")
270 |         raise Exception(f"An error occurred while getting paper recommendations from lists: {str(e)}")
271 | 
272 | @app.tool()
273 | async def get_semantic_scholar_paper_recommendations(
274 |     paper_id: str,
275 |     limit: int = 10
276 | ) -> List[Dict[str, Any]]:
277 |     """
278 |     Get recommended papers for a single positive example paper.
279 |     
280 |     Args:
281 |         paper_id: Paper ID to get recommendations for
282 |         limit: Maximum number of recommendations to return
283 |     
284 |     Returns:
285 |         List of recommended paper objects with relevance scores
286 |     """
287 |     logger.info(f"Getting paper recommendations for single paper: {paper_id}, limit: {limit}")
288 |     try:
289 |         results = await asyncio.to_thread(get_paper_recommendations, paper_id, limit)
290 |         return results
291 |     except Exception as e:
292 |         logger.error(f"Error getting paper recommendations for single paper: {e}")
293 |         raise Exception(f"An error occurred while getting paper recommendations for single paper: {str(e)}")
294 | 
295 | if __name__ == "__main__":
296 |     # Get configuration from environment variables
297 |     port = int(os.getenv('PORT', 3000))
298 |     host = os.getenv('HOST', '0.0.0.0')
299 | 
300 |     logger.info(f"Starting Semantic Scholar MCP HTTP Server on {host}:{port}")
301 | 
302 |     # Apply the env-derived settings (previously computed but never passed on,
303 |     # so FastMCP fell back to its own defaults)
304 |     app.settings.host = host
305 |     app.settings.port = port
306 | 
307 |     # Run the FastMCP server with streamable HTTP transport
308 |     app.run(transport="streamable-http")
309 | 
```
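
Since `@app.tool()` registers each coroutine with FastMCP and (in current SDK versions) returns the function unchanged, the tools can also be awaited directly as a quick smoke test without starting the HTTP server. A sketch, assuming network access to api.semanticscholar.org:

```python
import asyncio

from server import search_semantic_scholar_papers


async def smoke_test() -> None:
    # Calls the tool function directly; asyncio.to_thread inside it keeps the
    # blocking requests call off the event loop, just as when served over HTTP.
    papers = await search_semantic_scholar_papers("reinforcement learning", num_results=2)
    for paper in papers:
        print(paper["year"], paper["title"])


if __name__ == "__main__":
    asyncio.run(smoke_test())
```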

--------------------------------------------------------------------------------
/search.py:
--------------------------------------------------------------------------------

```python
  1 | import requests
  2 | import time
  3 | import logging
  4 | from typing import List, Dict, Any, Optional
  5 | 
  6 | # Base URL for the Semantic Scholar API
  7 | BASE_URL = "https://api.semanticscholar.org/graph/v1"
  8 | BASE_RECOMMENDATION_URL = "https://api.semanticscholar.org/recommendations/v1"
  9 | 
 10 | # Configure logging
 11 | logging.basicConfig(level=logging.INFO)
 12 | logger = logging.getLogger(__name__)
 13 | 
 14 | def make_request_with_retry(url: str, params: Optional[Dict] = None, json_data: Optional[Dict] = None, 
 15 |                            method: str = "GET", max_retries: int = 5, base_delay: float = 1.0) -> Dict[str, Any]:
 16 |     """
 17 |     Make HTTP request with retry logic for 429 rate limit errors.
 18 |     
 19 |     Args:
 20 |         url: The URL to make the request to
 21 |         params: Query parameters for GET requests
 22 |         json_data: JSON data for POST requests
 23 |         method: HTTP method (GET or POST)
 24 |         max_retries: Maximum number of retry attempts
 25 |         base_delay: Base delay in seconds, will be exponentially increased
 26 |     
 27 |     Returns:
 28 |         JSON response as dictionary
 29 |     
 30 |     Raises:
 31 |         Exception: If all retries are exhausted or other errors occur
 32 |     """
 33 |     
 34 |     for attempt in range(max_retries + 1):
 35 |         try:
 36 |             if method.upper() == "GET":
 37 |                 response = requests.get(url, params=params, timeout=30)
 38 |             elif method.upper() == "POST":
 39 |                 response = requests.post(url, params=params, json=json_data, timeout=30)
 40 |             else:
 41 |                 raise ValueError(f"Unsupported HTTP method: {method}")
 42 |             
 43 |             # Check if request was successful
 44 |             if response.status_code == 200:
 45 |                 return response.json()
 46 |             
 47 |             # Handle rate limiting (429 Too Many Requests)
 48 |             elif response.status_code == 429:
 49 |                 if attempt < max_retries:
 50 |                     # Exponential backoff
 51 |                     delay = base_delay * (2 ** attempt)
 52 |                     logger.warning(f"Rate limit hit (429). Retrying in {delay} seconds... (attempt {attempt + 1}/{max_retries + 1})")
 53 |                     time.sleep(delay)
 54 |                     continue
 55 |                 else:
 56 |                     raise Exception(f"Rate limit exceeded. Max retries ({max_retries}) exhausted.")
 57 |             
 58 |             # Handle other HTTP errors
 59 |             else:
 60 |                 response.raise_for_status()
 61 |                 
 62 |         except requests.exceptions.Timeout:
 63 |             if attempt < max_retries:
 64 |                 delay = base_delay * (2 ** attempt)
 65 |                 logger.warning(f"Request timeout. Retrying in {delay} seconds... (attempt {attempt + 1}/{max_retries + 1})")
 66 |                 time.sleep(delay)
 67 |                 continue
 68 |             else:
 69 |                 raise Exception("Request timeout. Max retries exhausted.")
 70 |         
 71 |         except requests.exceptions.RequestException as e:
 72 |             if attempt < max_retries:
 73 |                 delay = base_delay * (2 ** attempt)
 74 |                 logger.warning(f"Request failed: {e}. Retrying in {delay} seconds... (attempt {attempt + 1}/{max_retries + 1})")
 75 |                 time.sleep(delay)
 76 |                 continue
 77 |             else:
 78 |                 raise Exception(f"Request failed after {max_retries} retries: {e}")
 79 |     
 80 |     raise Exception("Unexpected error in request retry logic")
 81 | 
 82 | def search_papers(query: str, limit: int = 10) -> List[Dict[str, Any]]:
 83 |     """Search for papers using a query string."""
 84 |     url = f"{BASE_URL}/paper/search"
 85 |     params = {
 86 |         "query": query,
 87 |         "limit": min(limit, 100),  # API limit is 100
 88 |         "fields": "paperId,title,abstract,year,authors,url,venue,publicationTypes,citationCount,tldr"
 89 |     }
 90 |     
 91 |     try:
 92 |         response_data = make_request_with_retry(url, params=params)
 93 |         papers = response_data.get("data", [])
 94 |         
 95 |         return [
 96 |             {
 97 |                 "paperId": paper.get("paperId"),
 98 |                 "title": paper.get("title"),
 99 |                 "abstract": paper.get("abstract"),
100 |                 "year": paper.get("year"),
101 |                 "authors": [{"name": author.get("name"), "authorId": author.get("authorId")} 
102 |                            for author in paper.get("authors", [])],
103 |                 "url": paper.get("url"),
104 |                 "venue": paper.get("venue"),
105 |                 "publicationTypes": paper.get("publicationTypes"),
106 |                 "citationCount": paper.get("citationCount"),
107 |                 "tldr": {
108 |                     "model": paper.get("tldr", {}).get("model", ""),
109 |                     "text": paper.get("tldr", {}).get("text", "")
110 |                 } if paper.get("tldr") else None
111 |             } for paper in papers
112 |         ]
113 |     except Exception as e:
114 |         logger.error(f"Error searching papers: {e}")
115 |         return []
116 | 
117 | def get_paper_details(paper_id: str) -> Dict[str, Any]:
118 |     """Get details of a specific paper."""
119 |     url = f"{BASE_URL}/paper/{paper_id}"
120 |     params = {
121 |         "fields": "paperId,title,abstract,year,authors,url,venue,publicationTypes,citationCount,referenceCount,influentialCitationCount,fieldsOfStudy,publicationDate,tldr"
122 |     }
123 |     
124 |     try:
125 |         response_data = make_request_with_retry(url, params=params)
126 |         return {
127 |             "paperId": response_data.get("paperId"),
128 |             "title": response_data.get("title"),
129 |             "abstract": response_data.get("abstract"),
130 |             "year": response_data.get("year"),
131 |             "authors": [{"name": author.get("name"), "authorId": author.get("authorId")} 
132 |                        for author in response_data.get("authors", [])],
133 |             "url": response_data.get("url"),
134 |             "venue": response_data.get("venue"),
135 |             "publicationTypes": response_data.get("publicationTypes"),
136 |             "citationCount": response_data.get("citationCount"),
137 |             "referenceCount": response_data.get("referenceCount"),
138 |             "influentialCitationCount": response_data.get("influentialCitationCount"),
139 |             "fieldsOfStudy": response_data.get("fieldsOfStudy"),
140 |             "publicationDate": response_data.get("publicationDate"),
141 |             "tldr": {
142 |                     "model": response_data.get("tldr", {}).get("model", ""),
143 |                     "text": response_data.get("tldr", {}).get("text", "")
144 |                 } if response_data.get("tldr") else None
145 |         }
146 |     except Exception as e:
147 |         logger.error(f"Error getting paper details for {paper_id}: {e}")
148 |         return {"error": f"Failed to get paper details: {e}"}
149 | 
150 | def get_author_details(author_id: str) -> Dict[str, Any]:
151 |     """Get details of a specific author."""
152 |     url = f"{BASE_URL}/author/{author_id}"
153 |     params = {
154 |         "fields": "authorId,name,url,affiliations,paperCount,citationCount,hIndex"
155 |     }
156 |     
157 |     try:
158 |         response_data = make_request_with_retry(url, params=params)
159 |         return {
160 |             "authorId": response_data.get("authorId"),
161 |             "name": response_data.get("name"),
162 |             "url": response_data.get("url"),
163 |             "affiliations": response_data.get("affiliations"),
164 |             "paperCount": response_data.get("paperCount"),
165 |             "citationCount": response_data.get("citationCount"),
166 |             "hIndex": response_data.get("hIndex")
167 |         }
168 |     except Exception as e:
169 |         logger.error(f"Error getting author details for {author_id}: {e}")
170 |         return {"error": f"Failed to get author details: {e}"}
171 | 
172 | def get_paper_citations(paper_id: str, limit: int = 10) -> List[Dict[str, Any]]:
173 |     """Get citations for a specific paper."""
174 |     url = f"{BASE_URL}/paper/{paper_id}/citations"
175 |     params = {
176 |         "limit": min(limit, 100),  # API limit is 100
177 |         "fields": "contexts,isInfluential,title,authors,year,venue"
178 |     }
179 |     
180 |     try:
181 |         response_data = make_request_with_retry(url, params=params)
182 |         citations = response_data.get("data", [])
183 |         
184 |         return [
185 |             {
186 |                 "contexts": citation.get("contexts", []),
187 |                 "isInfluential": citation.get("isInfluential"),
188 |                 "citingPaper": {
189 |                     "paperId": citation.get("citingPaper", {}).get("paperId"),
190 |                     "title": citation.get("citingPaper", {}).get("title"),
191 |                     "authors": [{"name": author.get("name"), "authorId": author.get("authorId")} 
192 |                                for author in citation.get("citingPaper", {}).get("authors", [])],
193 |                     "year": citation.get("citingPaper", {}).get("year"),
194 |                     "venue": citation.get("citingPaper", {}).get("venue")
195 |                 }
196 |             } for citation in citations
197 |         ]
198 |     except Exception as e:
199 |         logger.error(f"Error getting citations for {paper_id}: {e}")
200 |         return []
201 | 
202 | def get_paper_references(paper_id: str, limit: int = 10) -> List[Dict[str, Any]]:
203 |     """Get references for a specific paper."""
204 |     url = f"{BASE_URL}/paper/{paper_id}/references"
205 |     params = {
206 |         "limit": min(limit, 100),  # API limit is 100
207 |         "fields": "contexts,isInfluential,title,authors,year,venue"
208 |     }
209 |     
210 |     try:
211 |         response_data = make_request_with_retry(url, params=params)
212 |         references = response_data.get("data", [])
213 |         
214 |         return [
215 |             {
216 |                 "contexts": reference.get("contexts", []),
217 |                 "isInfluential": reference.get("isInfluential"),
218 |                 "citedPaper": {
219 |                     "paperId": reference.get("citedPaper", {}).get("paperId"),
220 |                     "title": reference.get("citedPaper", {}).get("title"),
221 |                     "authors": [{"name": author.get("name"), "authorId": author.get("authorId")} 
222 |                                for author in reference.get("citedPaper", {}).get("authors", [])],
223 |                     "year": reference.get("citedPaper", {}).get("year"),
224 |                     "venue": reference.get("citedPaper", {}).get("venue")
225 |                 }
226 |             } for reference in references
227 |         ]
228 |     except Exception as e:
229 |         logger.error(f"Error getting references for {paper_id}: {e}")
230 |         return []
231 | 
232 | def get_citations_and_references(paper_id: str) -> Dict[str, List[Dict[str, Any]]]:
233 |     """Get citations and references for a paper using paper ID."""
234 |     citations = get_paper_citations(paper_id)
235 |     references = get_paper_references(paper_id)
236 |     
237 |     return {
238 |         "citations": citations,
239 |         "references": references
240 |     }
241 | 
242 | def search_authors(query: str, limit: int = 10) -> List[Dict[str, Any]]:
243 |     """Search for authors using a query string."""
244 |     url = f"{BASE_URL}/author/search"
245 |     params = {
246 |         "query": query,
247 |         "limit": min(limit, 100),  # API limit is 100
248 |         "fields": "authorId,name,url,affiliations,paperCount,citationCount,hIndex"
249 |     }
250 |     
251 |     try:
252 |         response_data = make_request_with_retry(url, params=params)
253 |         authors = response_data.get("data", [])
254 |         
255 |         return [
256 |             {
257 |                 "authorId": author.get("authorId"),
258 |                 "name": author.get("name"),
259 |                 "url": author.get("url"),
260 |                 "affiliations": author.get("affiliations"),
261 |                 "paperCount": author.get("paperCount"),
262 |                 "citationCount": author.get("citationCount"),
263 |                 "hIndex": author.get("hIndex")
264 |             } for author in authors
265 |         ]
266 |     except Exception as e:
267 |         logger.error(f"Error searching authors: {e}")
268 |         return []
269 | 
270 | def search_paper_match(query: str) -> Dict[str, Any]:
271 |     """Find the best matching paper using title-based search."""
272 |     url = f"{BASE_URL}/paper/search/match"
273 |     params = {
274 |         "query": query,
275 |         "fields": "paperId,title,abstract,year,authors,url,venue,publicationTypes,citationCount,tldr"
276 |     }
277 |     
278 |     try:
279 |         response_data = make_request_with_retry(url, params=params)
280 |         if response_data.get("data"):
281 |             paper = response_data["data"][0]  # Returns single best match
282 |             return {
283 |                 "matchScore": paper.get("matchScore"),
284 |                 "paperId": paper.get("paperId"),
285 |                 "title": paper.get("title"),
286 |                 "abstract": paper.get("abstract"),
287 |                 "year": paper.get("year"),
288 |                 "authors": [{"name": author.get("name"), "authorId": author.get("authorId")} 
289 |                            for author in paper.get("authors", [])],
290 |                 "url": paper.get("url"),
291 |                 "venue": paper.get("venue"),
292 |                 "publicationTypes": paper.get("publicationTypes"),
293 |                 "citationCount": paper.get("citationCount"),
294 |                 "tldr": {
295 |                     "model": paper.get("tldr", {}).get("model", ""),
296 |                     "text": paper.get("tldr", {}).get("text", "")
297 |                 } if paper.get("tldr") else None
298 |             }
299 |         else:
300 |             return {"error": "No matching paper found"}
301 |     except Exception as e:
302 |         logger.error(f"Error finding paper match: {e}")
303 |         return {"error": f"Failed to find paper match: {e}"}
304 | 
305 | def get_paper_autocomplete(query: str) -> List[Dict[str, Any]]:
306 |     """Get paper title autocompletion suggestions."""
307 |     url = f"{BASE_URL}/paper/autocomplete"
308 |     params = {
309 |         "query": query[:100]  # API truncates to 100 characters
310 |     }
311 |     
312 |     try:
313 |         response_data = make_request_with_retry(url, params=params)
314 |         matches = response_data.get("matches", [])
315 |         
316 |         return [
317 |             {
318 |                 "id": match.get("id"),
319 |                 "title": match.get("title"),
320 |                 "authorsYear": match.get("authorsYear")
321 |             } for match in matches
322 |         ]
323 |     except Exception as e:
324 |         logger.error(f"Error getting autocomplete: {e}")
325 |         return []
326 | 
327 | def get_papers_batch(paper_ids: List[str]) -> List[Dict[str, Any]]:
328 |     """Get details for multiple papers using batch API."""
329 |     url = f"{BASE_URL}/paper/batch"
330 |     
331 |     # API limit is 500 papers at a time
332 |     if len(paper_ids) > 500:
333 |         paper_ids = paper_ids[:500]
334 |         logger.warning("Paper IDs list truncated to 500 items (API limit)")
335 |     
336 |     params = {
337 |         "fields": "paperId,title,abstract,year,authors,url,venue,publicationTypes,citationCount,referenceCount,influentialCitationCount,fieldsOfStudy,publicationDate,tldr"
338 |     }
339 |     
340 |     json_data = {"ids": paper_ids}
341 |     
342 |     try:
343 |         response_data = make_request_with_retry(url, params=params, json_data=json_data, method="POST")
344 |         if isinstance(response_data, list):
345 |             return [
346 |                 {
347 |                     "paperId": paper.get("paperId"),
348 |                     "title": paper.get("title"),
349 |                     "abstract": paper.get("abstract"),
350 |                     "year": paper.get("year"),
351 |                     "authors": [{"name": author.get("name"), "authorId": author.get("authorId")} 
352 |                                for author in paper.get("authors", [])],
353 |                     "url": paper.get("url"),
354 |                     "venue": paper.get("venue"),
355 |                     "publicationTypes": paper.get("publicationTypes"),
356 |                     "citationCount": paper.get("citationCount"),
357 |                     "referenceCount": paper.get("referenceCount"),
358 |                     "influentialCitationCount": paper.get("influentialCitationCount"),
359 |                     "fieldsOfStudy": paper.get("fieldsOfStudy"),
360 |                     "publicationDate": paper.get("publicationDate"),
361 |                     "tldr": {
362 |                         "model": paper.get("tldr", {}).get("model", ""),
363 |                         "text": paper.get("tldr", {}).get("text", "")
364 |                     } if paper.get("tldr") else None
365 |                 } for paper in response_data if paper  # Filter out None entries
366 |             ]
367 |         else:
368 |             return []
369 |     except Exception as e:
370 |         logger.error(f"Error getting papers batch: {e}")
371 |         return []
372 | 
373 | def get_authors_batch(author_ids: List[str]) -> List[Dict[str, Any]]:
374 |     """Get details for multiple authors using batch API."""
375 |     url = f"{BASE_URL}/author/batch"
376 |     
377 |     # API limit is 1000 authors at a time
378 |     if len(author_ids) > 1000:
379 |         author_ids = author_ids[:1000]
380 |         logger.warning("Author IDs list truncated to 1000 items (API limit)")
381 |     
382 |     params = {
383 |         "fields": "authorId,name,url,affiliations,paperCount,citationCount,hIndex"
384 |     }
385 |     
386 |     json_data = {"ids": author_ids}
387 |     
388 |     try:
389 |         response_data = make_request_with_retry(url, params=params, json_data=json_data, method="POST")
390 |         if isinstance(response_data, list):
391 |             return [
392 |                 {
393 |                     "authorId": author.get("authorId"),
394 |                     "name": author.get("name"),
395 |                     "url": author.get("url"),
396 |                     "affiliations": author.get("affiliations"),
397 |                     "paperCount": author.get("paperCount"),
398 |                     "citationCount": author.get("citationCount"),
399 |                     "hIndex": author.get("hIndex")
400 |                 } for author in response_data if author  # Filter out None entries
401 |             ]
402 |         else:
403 |             return []
404 |     except Exception as e:
405 |         logger.error(f"Error getting authors batch: {e}")
406 |         return []
407 | 
408 | def search_snippets(query: str, limit: int = 10) -> List[Dict[str, Any]]:
409 |     """Search for text snippets from papers."""
410 |     url = f"{BASE_URL}/snippet/search"
411 |     params = {
412 |         "query": query,
413 |         "limit": min(limit, 1000),  # API limit is 1000
414 |         "fields": "snippet.text,snippet.snippetKind,snippet.section,snippet.snippetOffset"
415 |     }
416 |     
417 |     try:
418 |         response_data = make_request_with_retry(url, params=params)
419 |         data = response_data.get("data", [])
420 |         
421 |         return [
422 |             {
423 |                 "score": item.get("score"),
424 |                 "snippet": {
425 |                     "text": item.get("snippet", {}).get("text"),
426 |                     "snippetKind": item.get("snippet", {}).get("snippetKind"),
427 |                     "section": item.get("snippet", {}).get("section"),
428 |                     "snippetOffset": item.get("snippet", {}).get("snippetOffset")
429 |                 },
430 |                 "paper": {
431 |                     "corpusId": item.get("paper", {}).get("corpusId"),
432 |                     "title": item.get("paper", {}).get("title"),
433 |                     "authors": item.get("paper", {}).get("authors", [])
434 |                 }
435 |             } for item in data
436 |         ]
437 |     except Exception as e:
438 |         logger.error(f"Error searching snippets: {e}")
439 |         return []
440 | 
441 | def get_paper_recommendations_from_lists(positive_paper_ids: List[str], negative_paper_ids: Optional[List[str]] = None, limit: int = 10) -> List[Dict[str, Any]]:
442 |     """Get recommended papers based on lists of positive and negative example papers."""
443 |     url = f"{BASE_RECOMMENDATION_URL}/papers"
444 |     
445 |     # Prepare the request payload
446 |     payload = {
447 |         "positivePaperIds": positive_paper_ids
448 |     }
449 |     
450 |     if negative_paper_ids:
451 |         payload["negativePaperIds"] = negative_paper_ids
452 |     
453 |     params = {
454 |         "limit": min(limit, 500),
455 |         "fields": "paperId,corpusId,externalIds,url,title,abstract,venue,publicationVenue,year,referenceCount,citationCount,influentialCitationCount,isOpenAccess,openAccessPdf,fieldsOfStudy,s2FieldsOfStudy,publicationTypes,publicationDate,journal,citationStyles,authors"
456 |     }
457 |     
458 |     try:
459 |         response_data = make_request_with_retry(url, params=params, json_data=payload, method="POST")
460 |         
461 |         # Handle response structure with recommendedPapers wrapper
462 |         papers = response_data.get("recommendedPapers", [])
463 |         
464 |         return [
465 |             {
466 |                 "paperId": paper.get("paperId"),
467 |                 "corpusId": paper.get("corpusId"),
468 |                 "externalIds": paper.get("externalIds"),
469 |                 "url": paper.get("url"),
470 |                 "title": paper.get("title"),
471 |                 "abstract": paper.get("abstract"),
472 |                 "venue": paper.get("venue"),
473 |                 "publicationVenue": paper.get("publicationVenue"),
474 |                 "year": paper.get("year"),
475 |                 "referenceCount": paper.get("referenceCount"),
476 |                 "citationCount": paper.get("citationCount"),
477 |                 "influentialCitationCount": paper.get("influentialCitationCount"),
478 |                 "isOpenAccess": paper.get("isOpenAccess"),
479 |                 "openAccessPdf": paper.get("openAccessPdf"),
480 |                 "fieldsOfStudy": paper.get("fieldsOfStudy"),
481 |                 "s2FieldsOfStudy": paper.get("s2FieldsOfStudy"),
482 |                 "publicationTypes": paper.get("publicationTypes"),
483 |                 "publicationDate": paper.get("publicationDate"),
484 |                 "journal": paper.get("journal"),
485 |                 "citationStyles": paper.get("citationStyles"),
486 |                 "authors": [
487 |                     {
488 |                         "authorId": author.get("authorId"),
489 |                         "name": author.get("name")
490 |                     } for author in paper.get("authors", [])
491 |                 ]
492 |             } for paper in papers
493 |         ]
494 |     except Exception as e:
495 |         logger.error(f"Error getting paper recommendations from lists: {e}")
496 |         return []
497 | 
498 | def get_paper_recommendations(paper_id: str, limit: int = 10) -> List[Dict[str, Any]]:
499 |     """Get recommended papers for a single positive example paper."""
500 |     url = f"{BASE_RECOMMENDATION_URL}/papers/forpaper/{paper_id}"
501 |     
502 |     params = {
503 |         "limit": min(limit, 500),  # API limit is 500
504 |         "fields": "paperId,corpusId,externalIds,url,title,abstract,venue,publicationVenue,year,referenceCount,citationCount,influentialCitationCount,isOpenAccess,openAccessPdf,fieldsOfStudy,s2FieldsOfStudy,publicationTypes,publicationDate,journal,citationStyles,authors"
505 |     }
506 |     
507 |     try:
508 |         response_data = make_request_with_retry(url, params=params)
509 |         
510 |         # Handle response structure with recommendedPapers wrapper
511 |         papers = response_data.get("recommendedPapers", [])
512 |         
513 |         return [
514 |             {
515 |                 "paperId": paper.get("paperId"),
516 |                 "corpusId": paper.get("corpusId"),
517 |                 "externalIds": paper.get("externalIds"),
518 |                 "url": paper.get("url"),
519 |                 "title": paper.get("title"),
520 |                 "abstract": paper.get("abstract"),
521 |                 "venue": paper.get("venue"),
522 |                 "publicationVenue": paper.get("publicationVenue"),
523 |                 "year": paper.get("year"),
524 |                 "referenceCount": paper.get("referenceCount"),
525 |                 "citationCount": paper.get("citationCount"),
526 |                 "influentialCitationCount": paper.get("influentialCitationCount"),
527 |                 "isOpenAccess": paper.get("isOpenAccess"),
528 |                 "openAccessPdf": paper.get("openAccessPdf"),
529 |                 "fieldsOfStudy": paper.get("fieldsOfStudy"),
530 |                 "s2FieldsOfStudy": paper.get("s2FieldsOfStudy"),
531 |                 "publicationTypes": paper.get("publicationTypes"),
532 |                 "publicationDate": paper.get("publicationDate"),
533 |                 "journal": paper.get("journal"),
534 |                 "citationStyles": paper.get("citationStyles"),
535 |                 "authors": [
536 |                     {
537 |                         "authorId": author.get("authorId"),
538 |                         "name": author.get("name")
539 |                     } for author in paper.get("authors", [])
540 |                 ]
541 |             } for paper in papers
542 |         ]
543 |     except Exception as e:
544 |         logger.error(f"Error getting paper recommendations for {paper_id}: {e}")
545 |         return []
546 | 
547 | def main():
548 |     """Test function for the API client."""
549 |     try:
550 |         # Search for papers
551 |         search_results = search_papers("machine learning", limit=2)
552 |         print(f"Search results: {search_results}")
553 | 
554 |         # Get paper details
555 |         if search_results:
556 |             paper_id = search_results[0]['paperId']
557 |             if paper_id:
558 |                 paper_details = get_paper_details(paper_id)
559 |                 print(f"Paper details: {paper_details}")
560 | 
561 |                 # Get citations and references
562 |                 citations_refs = get_citations_and_references(paper_id)
563 |                 print(f"Citations count: {len(citations_refs['citations'])}")
564 |                 print(f"References count: {len(citations_refs['references'])}")
565 | 
566 |         # Get author details
567 |         author_id = "1741101"  # Example author ID
568 |         author_details = get_author_details(author_id)
569 |         print(f"Author details: {author_details}")
570 | 
571 |         # Search for authors
572 |         author_search_results = search_authors("john", limit=2)
573 |         print(f"Author search results: {author_search_results}")
574 | 
575 |         # Find paper match
576 |         if search_results:
577 |             paper_title = search_results[0]['title']
578 |             paper_match = search_paper_match(paper_title)
579 |             print(f"Paper match: {paper_match}")
580 | 
581 |         # Get paper autocomplete
582 |         if search_results:
583 |             paper_query = search_results[0]['title'][:10]  # First 10 characters
584 |             autocomplete_results = get_paper_autocomplete(paper_query)
585 |             print(f"Autocomplete results: {autocomplete_results}")
586 | 
587 |         # Get papers batch
588 |         if search_results:
589 |             paper_ids = [paper['paperId'] for paper in search_results]
590 |             papers_batch = get_papers_batch(paper_ids)
591 |             print(f"Papers batch: {papers_batch}")
592 | 
593 |         # Get authors batch
594 |         if author_search_results:
595 |             author_ids = [author['authorId'] for author in author_search_results]
596 |             authors_batch = get_authors_batch(author_ids)
597 |             print(f"Authors batch: {authors_batch}")
598 | 
599 |         # Search snippets
600 |         if search_results:
601 |             snippet_query = search_results[0]['title']
602 |             snippets = search_snippets(snippet_query, limit=2)
603 |             print(f"Snippets: {snippets}")
604 | 
605 |         # Get paper recommendations from lists
606 |         if len(search_results) > 1:  # the negative example below needs a second result
607 |             positive_paper_ids = [search_results[0]['paperId']]
608 |             negative_paper_ids = [search_results[1]['paperId']]  # Just for testing, may not be relevant
609 |             recommendations = get_paper_recommendations_from_lists(positive_paper_ids, negative_paper_ids, limit=2)
610 |             print(f"Recommendations from lists: {recommendations}")
611 | 
612 |         # Get paper recommendations single
613 |         if search_results:
614 |             paper_id = search_results[0]['paperId']
615 |             single_recommendations = get_paper_recommendations(paper_id, limit=2)
616 |             print(f"Single paper recommendations: {single_recommendations}")
617 | 
618 |     except Exception as e:
619 |         print(f"An error occurred: {e}")
620 | 
621 | if __name__ == "__main__":
622 |     main()
623 | 
```
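
With `base_delay=1.0`, `make_request_with_retry` sleeps 1s, 2s, 4s, 8s, and 16s across its five retries before giving up on a rate-limited request. Unauthenticated clients hit those 429s quickly; Semantic Scholar also accepts an API key in the `x-api-key` request header, which raises the rate limits. This repo sends unauthenticated requests, but a key could be threaded in roughly as follows (a sketch; `S2_API_KEY` is a hypothetical environment variable, not read anywhere in this repo):

```python
import os

# Hypothetical helper, not part of search.py. Semantic Scholar reads the key
# from the "x-api-key" header when one is supplied.
S2_API_KEY = os.getenv("S2_API_KEY")  # assumed variable name


def build_headers() -> dict:
    """Attach the API key when configured; otherwise send no extra headers."""
    return {"x-api-key": S2_API_KEY} if S2_API_KEY else {}


# Inside make_request_with_retry the calls would then become, e.g.:
#     response = requests.get(url, params=params, headers=build_headers(), timeout=30)
```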