# Directory Structure
```
├── .aider.conf.yml
├── .bing.code-workspace
├── .env.example
├── .gitignore
├── CLAUDEME.md
├── README.md
├── SPECIFICATIONS.md
└── UPDATES.md
```
# Files
--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------
```
```
--------------------------------------------------------------------------------
/.bing.code-workspace:
--------------------------------------------------------------------------------
```
{
"folders": [
{
"path": "."
}
],
"settings": {
"git.enabled": false,
"files.exclude": {
"*/": false, // Show all subdirectories
"*/.git": true // Hide .git folders in projects
},
"files.associations": {
"template.*": "plaintext",
"reference.*": "plaintext",
".cheatsheet": "plaintext"
}
}
}
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Ignore all subdirectories (where individual projects live)
*/
# Standard ignores
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
.env
# IDE/Editor files
.idea/
.vscode/
*.swp
*.swo
# Aider cache files
.aider.tags.cache.v3
# Make sure we don't accidentally commit any actual project files
# that might end up in the root temporarily
*.log
*.lock
node_modules/
```
--------------------------------------------------------------------------------
/.aider.conf.yml:
--------------------------------------------------------------------------------
```yaml
##########################################################
# Sample .aider.conf.yml
# This file lists *all* the valid configuration entries.
# Place in your home dir, or at the root of your git repo.
##########################################################
# Note: You can only put OpenAI and Anthropic API keys in the yaml
# config file. Keys for all APIs can be stored in a .env file
# https://aider.chat/docs/config/dotenv.html
##########
# options:
## show this help message and exit
#help: xxx
#############
# Main model:
## Specify the model to use for the main chat
#model: xxx
## Use claude-3-opus-20240229 model for the main chat
#opus: false
## Use claude-3-5-sonnet-20241022 model for the main chat
#sonnet: false
## Use claude-3-5-haiku-20241022 model for the main chat
#haiku: false
## Use gpt-4-0613 model for the main chat
#4: false
## Use gpt-4o model for the main chat
#4o: false
## Use gpt-4o-mini model for the main chat
#mini: false
## Use gpt-4-1106-preview model for the main chat
#4-turbo: false
## Use gpt-3.5-turbo model for the main chat
#35turbo: false
## Use deepseek/deepseek-chat model for the main chat
#deepseek: false
## Use o1-mini model for the main chat
#o1-mini: false
## Use o1-preview model for the main chat
#o1-preview: false
########################
# API Keys and settings:
## Specify the OpenAI API key
#openai-api-key: xxx
## Specify the Anthropic API key
#anthropic-api-key: xxx
## Specify the api base url
#openai-api-base: xxx
## (deprecated, use --set-env OPENAI_API_TYPE=<value>)
#openai-api-type: xxx
## (deprecated, use --set-env OPENAI_API_VERSION=<value>)
#openai-api-version: xxx
## (deprecated, use --set-env OPENAI_API_DEPLOYMENT_ID=<value>)
#openai-api-deployment-id: xxx
## (deprecated, use --set-env OPENAI_ORGANIZATION=<value>)
#openai-organization-id: xxx
## Set an environment variable (to control API settings, can be used multiple times)
#set-env: xxx
## Specify multiple values like this:
#set-env:
# - xxx
# - yyy
# - zzz
## Set an API key for a provider (eg: --api-key provider=<key> sets PROVIDER_API_KEY=<key>)
#api-key: xxx
## Specify multiple values like this:
#api-key:
# - xxx
# - yyy
# - zzz
#################
# Model settings:
## List known models which match the (partial) MODEL name
#list-models: xxx
## Specify a file with aider model settings for unknown models
#model-settings-file: .aider.model.settings.yml
## Specify a file with context window and costs for unknown models
#model-metadata-file: .aider.model.metadata.json
## Add a model alias (can be used multiple times)
#alias: xxx
## Specify multiple values like this:
#alias:
# - xxx
# - yyy
# - zzz
## Verify the SSL cert when connecting to models (default: True)
#verify-ssl: true
## Timeout in seconds for API calls (default: None)
#timeout: xxx
## Specify what edit format the LLM should use (default depends on model)
#edit-format: xxx
## Use architect edit format for the main chat
#architect: false
## Specify the model to use for commit messages and chat history summarization (default depends on --model)
#weak-model: xxx
## Specify the model to use for editor tasks (default depends on --model)
#editor-model: xxx
## Specify the edit format for the editor model (default: depends on editor model)
#editor-edit-format: xxx
## Only work with models that have meta-data available (default: True)
#show-model-warnings: true
## Soft limit on tokens for chat history, after which summarization begins. If unspecified, defaults to the model's max_chat_history_tokens.
#max-chat-history-tokens: xxx
#################
# Cache settings:
## Enable caching of prompts (default: False)
#cache-prompts: false
## Number of times to ping at 5min intervals to keep prompt cache warm (default: 0)
#cache-keepalive-pings: false
###################
# Repomap settings:
## Suggested number of tokens to use for repo map, use 0 to disable
#map-tokens: xxx
## Control how often the repo map is refreshed. Options: auto, always, files, manual (default: auto)
#map-refresh: auto
## Multiplier for map tokens when no files are specified (default: 2)
#map-multiplier-no-files: true
################
# History Files:
## Specify the chat input history file (default: .aider.input.history)
#input-history-file: .aider.input.history
## Specify the chat history file (default: .aider.chat.history.md)
#chat-history-file: .aider.chat.history.md
## Restore the previous chat history messages (default: False)
#restore-chat-history: false
## Log the conversation with the LLM to this file (for example, .aider.llm.history)
#llm-history-file: xxx
##################
# Output settings:
## Use colors suitable for a dark terminal background (default: False)
#dark-mode: false
## Use colors suitable for a light terminal background (default: False)
#light-mode: false
## Enable/disable pretty, colorized output (default: True)
#pretty: true
## Enable/disable streaming responses (default: True)
#stream: true
## Set the color for user input (default: #00cc00)
#user-input-color: #00cc00
## Set the color for tool output (default: None)
#tool-output-color: xxx
## Set the color for tool error messages (default: #FF2222)
#tool-error-color: #FF2222
## Set the color for tool warning messages (default: #FFA500)
#tool-warning-color: #FFA500
## Set the color for assistant output (default: #0088ff)
#assistant-output-color: #0088ff
## Set the color for the completion menu (default: terminal's default text color)
#completion-menu-color: xxx
## Set the background color for the completion menu (default: terminal's default background color)
#completion-menu-bg-color: xxx
## Set the color for the current item in the completion menu (default: terminal's default background color)
#completion-menu-current-color: xxx
## Set the background color for the current item in the completion menu (default: terminal's default text color)
#completion-menu-current-bg-color: xxx
## Set the markdown code theme (default: default, other options include monokai, solarized-dark, solarized-light, or a Pygments builtin style, see https://pygments.org/styles for available themes)
#code-theme: default
## Show diffs when committing changes (default: False)
#show-diffs: false
###############
# Git settings:
## Enable/disable looking for a git repo (default: True)
#git: true
## Enable/disable adding .aider* to .gitignore (default: True)
#gitignore: true
## Specify the aider ignore file (default: .aiderignore in git root)
#aiderignore: .aiderignore
## Only consider files in the current subtree of the git repository
#subtree-only: false
## Enable/disable auto commit of LLM changes (default: True)
#auto-commits: true
## Enable/disable commits when repo is found dirty (default: True)
#dirty-commits: true
## Attribute aider code changes in the git author name (default: True)
#attribute-author: true
## Attribute aider commits in the git committer name (default: True)
#attribute-committer: true
## Prefix commit messages with 'aider: ' if aider authored the changes (default: False)
#attribute-commit-message-author: false
## Prefix all commit messages with 'aider: ' (default: False)
#attribute-commit-message-committer: false
## Commit all pending changes with a suitable commit message, then exit
#commit: false
## Specify a custom prompt for generating commit messages
#commit-prompt: xxx
## Perform a dry run without modifying files (default: False)
#dry-run: false
## Skip the sanity check for the git repository (default: False)
#skip-sanity-check-repo: false
## Enable/disable watching files for ai coding comments (default: False)
#watch-files: false
########################
# Fixing and committing:
## Lint and fix provided files, or dirty files if none provided
#lint: false
## Specify lint commands to run for different languages, eg: "python: flake8 --select=..." (can be used multiple times)
#lint-cmd: xxx
## Specify multiple values like this:
#lint-cmd:
# - xxx
# - yyy
# - zzz
## Enable/disable automatic linting after changes (default: True)
#auto-lint: true
## Specify command to run tests
#test-cmd: xxx
## Enable/disable automatic testing after changes (default: False)
#auto-test: false
## Run tests, fix problems found and then exit
#test: false
############
# Analytics:
## Enable/disable analytics for current session (default: random)
#analytics: xxx
## Specify a file to log analytics events
#analytics-log: xxx
## Permanently disable analytics
#analytics-disable: false
############
# Upgrading:
## Check for updates and return status in the exit code
#just-check-update: false
## Check for new aider versions on launch
#check-update: true
## Show release notes on first run of new version (default: None, ask user)
#show-release-notes: xxx
## Install the latest version from the main branch
#install-main-branch: false
## Upgrade aider to the latest version from PyPI
#upgrade: false
## Show the version number and exit
#version: xxx
########
# Modes:
## Specify a single message to send the LLM, process reply then exit (disables chat mode)
#message: xxx
## Specify a file containing the message to send the LLM, process reply, then exit (disables chat mode)
#message-file: xxx
## Run aider in your browser (default: False)
#gui: false
## Enable automatic copy/paste of chat between aider and web UI (default: False)
#copy-paste: false
## Apply the changes from the given file instead of running the chat (debug)
#apply: xxx
## Apply clipboard contents as edits using the main model's editor format
#apply-clipboard-edits: false
## Do all startup activities then exit before accepting user input (debug)
#exit: false
## Print the repo map and exit (debug)
#show-repo-map: false
## Print the system prompts and exit (debug)
#show-prompts: false
#################
# Voice settings:
## Audio format for voice recording (default: wav). webm and mp3 require ffmpeg
#voice-format: wav
## Specify the language for voice using ISO 639-1 code (default: auto)
#voice-language: en
## Specify the input device name for voice recording
#voice-input-device: xxx
#################
# Other settings:
## specify a file to edit (can be used multiple times)
#file: xxx
## Specify multiple values like this:
#file:
# - xxx
# - yyy
# - zzz
## specify a read-only file (can be used multiple times)
#read: xxx
## Specify multiple values like this:
#read:
# - xxx
# - yyy
# - zzz
## Use VI editing mode in the terminal (default: False)
#vim: false
## Specify the language to use in the chat (default: None, uses system settings)
#chat-language: xxx
## Always say yes to every confirmation
#yes-always: false
## Enable verbose output
#verbose: false
## Load and execute /commands from a file on launch
#load: xxx
## Specify the encoding for input and output (default: utf-8)
#encoding: utf-8
## Line endings to use when writing files (default: platform)
#line-endings: platform
## Specify the config file (default: search for .aider.conf.yml in git root, cwd or home directory)
#config: xxx
## Specify the .env file to load (default: .env in git root)
#env-file: .env
## Enable/disable suggesting shell commands (default: True)
#suggest-shell-commands: true
## Enable/disable fancy input with history and completion (default: True)
#fancy-input: true
## Enable/disable multi-line input mode with Meta-Enter to submit (default: False)
#multiline: false
## Enable/disable detection and offering to add URLs to chat (default: True)
#detect-urls: true
## Specify which editor to use for the /editor command
#editor: xxx
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Bing Searches Analysis Toolkit
A comprehensive system for collecting, processing, and analyzing Bing search data to extract insights and patterns.
## What It Does
This toolkit provides powerful capabilities for:
- Automated Bing search data collection
- Comprehensive search result parsing
- Trend and pattern analysis
- Insight generation from search queries
## Key Features
- **Automated Search Collection**: Systematic gathering of Bing search results
- **Data Parsing**: Advanced extraction of meaningful information
- **Trend Analysis**: Identify search patterns and emerging topics
- **Flexible Output**: Multiple export and analysis formats
- **Configurable Scraping**: Customizable search parameters
## Quick Start
```bash
# Install dependencies
npm install bing-searches-toolkit
# Run initial collection
bingsearch collect --topic "your-search-topic"
# Analyze collected data
bingsearch analyze --input results.json
```
## Requirements
- Node.js 18+
- Bing Search API Key (optional but recommended)
- Proxy support for large-scale searches
## Use Cases
- Market Research
- Trend Forecasting
- Competitive Intelligence
- Content Strategy
- Academic Research
## Configuration
```env
# Optional configuration
BING_API_KEY=your_bing_api_key
PROXY_URL=optional_proxy_endpoint
```
## Documentation
- [Installation Guide](./docs/install.md)
- [Usage Examples](./docs/usage.md)
- [API Reference](./docs/api.md)
## License
MIT License
```
--------------------------------------------------------------------------------
/UPDATES.md:
--------------------------------------------------------------------------------
```markdown
# Development Updates
## Current Status
- Project repository initialized
- Initial documentation gathered
- Project structure established
## Recent Updates
### February 7, 2025
- Created project repository
- Collected initial documentation
- Defined project structure and goals
## Next Steps
### Immediate Tasks
- [ ] Complete initial project setup
- [ ] Flesh out technical specifications
- [ ] Define core functionality
- [ ] Begin initial development phase
### Short Term
- [ ] Develop core collection mechanisms
- [ ] Design initial data processing utilities
- [ ] Create project roadmap
- [ ] Set up development environment
### Future Plans
- [ ] Implement search data collection
- [ ] Build analysis frameworks
- [ ] Create tooling and utilities
## Project Initialization
- Repository created
- Initial documentation compiled
- Project scope being defined
## Notes for Contributors
- Project is in very early stages
- Contributions and ideas welcome
- Focus on establishing core vision
## Version History
### v0.0.1
- Project repository established
- Initial documentation started
- Basic project structure defined
```
--------------------------------------------------------------------------------
/CLAUDEME.md:
--------------------------------------------------------------------------------
```markdown
# CLAUDEME: Bing Searches Toolkit Implementation Guide
Hello fellow data explorers! 🔍
This document provides key insights for implementing and working with the Bing Searches Analysis Toolkit.
## Repository Map
```
bing-searches/
├── src/
│ ├── collectors/ # Search data collection
│ ├── parsers/ # Result parsing
│ ├── analyzers/ # Trend and insight generation
│ └── utils/ # Shared utilities
├── tests/
└── docs/
```
## Key Implementation Details
### Data Collection
- Flexible search parameter configuration
- Multiple collection strategies
- Rate limiting and proxy support
### Data Processing
- Robust parsing of search results
- Metadata extraction
- Normalization of diverse data formats
### Analysis Capabilities
- Trend detection algorithms
- Contextual insight generation
- Exportable analysis formats
## Common Integration Patterns
1. Basic Search Collection
```javascript
// Collect search results
await searcher.collect({
  query: "Technology Trends",
  region: "US",
  limit: 100
});

// Parse and analyze
const insights = await analyzer.process();
```
2. Advanced Filtering
```javascript
// Apply complex filters
const results = await searcher.search({
  keywords: ["AI", "Machine Learning"],
  timeframe: "last-month",
  language: "English"
});
```
## Tips for Fellow Claudes
1. Data Collection
- Respect rate limits
- Use proxy rotation
- Handle network errors gracefully
2. Analysis Strategies
- Normalize data before analysis
- Use contextual weighting
- Cache intermediate results
3. Performance Optimization
- Implement parallel processing
- Use efficient data structures
- Minimize unnecessary API calls
## Debugging Guide
1. Common Issues
- API connection failures
- Parsing inconsistencies
- Rate limit errors
2. Verification Steps
- Check API configurations
- Validate data parsing
- Monitor collection metrics
## Future Improvements
1. Planned Features
- [ ] Multi-engine support
- [ ] Advanced trend prediction
- [ ] Machine learning insights
2. Research Areas
- Semantic search analysis
- Cross-platform trend correlation
- Predictive modeling
Remember: Smart data collection, smarter insights! 🚀
## Project Status
Currently: Core collection and analysis framework stable
Next: Advanced machine learning integration
Happy searching, fellow Claudes! 🤖✨
```
--------------------------------------------------------------------------------
/SPECIFICATIONS.md:
--------------------------------------------------------------------------------
```markdown
# Technical Specifications
## Requirements
Feature functionality must incorporate rate limiting accordingly.
- Free Tier Rate Limits `https://www.microsoft.com/en-us/bing/apis/pricing`
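Since the free tier enforces per-second call limits, a minimal client-side throttle can keep collection under quota. The sketch below is an assumption, not part of the SDK: the 3-calls-per-second figure is a placeholder, so check the current pricing page before relying on any specific number.

```python
import time

class RateLimiter:
    """Minimal throttle: enforces a minimum interval between calls."""

    def __init__(self, calls_per_second: float):
        self.min_interval = 1.0 / calls_per_second
        self._last_call = 0.0

    def wait(self) -> None:
        # Sleep just long enough to honor the minimum interval.
        elapsed = time.monotonic() - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()

# Hypothetical free-tier limit -- verify against the pricing page.
limiter = RateLimiter(calls_per_second=3)
```

Calling `limiter.wait()` immediately before each `client.web.search(...)` request keeps the request rate under the configured ceiling.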
## Python SDK Setup
It would be useful to expose each of these three options as a command in the MCP. The first order of business, however, is figuring out which one can search with the specific search parameters required for the hidden job sites.
1. The following URL and directions in the following steps are for **Bing Web Search**
[https://learn.microsoft.com/en-us/bing/search-apis/bing-web-search/overview]
2. Must identify the difference between the Bing Web Search and the **Bing Custom Search**
[https://learn.microsoft.com/en-us/bing/search-apis/bing-custom-search/overview]
3. The only other of interest is the **Bing News Search**
[https://learn.microsoft.com/en-us/bing/search-apis/bing-news-search/quickstarts/sdk/news-search-client-library-python]
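Each of the three options ships as its own Python client package, so one MCP command per API could eventually dispatch on a small mapping. The package names below follow the Azure `azure-cognitiveservices-search-*` naming convention but should be verified on PyPI before wiring anything up:

```python
# One candidate SDK package per search flavor; the command names are
# hypothetical and the package names should be verified on PyPI.
SEARCH_SDKS = {
    "web": "azure-cognitiveservices-search-websearch",
    "custom": "azure-cognitiveservices-search-customsearch",
    "news": "azure-cognitiveservices-search-newssearch",
}

def sdk_for(command: str) -> str:
    """Map a hypothetical MCP command name to its SDK package."""
    return SEARCH_SDKS[command]
```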
### Create Environment
1. Create a new virtual environment
`python -m venv mytestenv`
2. Activate the virtual environment (Windows)
`mytestenv\Scripts\activate.bat`
(macOS/Linux: `source mytestenv/bin/activate`)
3. Install Bing Search SDK Dependencies
`cd mytestenv`
`python -m pip install azure-cognitiveservices-search-websearch`
### Create Client
4. Create Client in IDE
```python
# Import required modules.
from azure.cognitiveservices.search.websearch import WebSearchClient
from azure.cognitiveservices.search.websearch.models import SafeSearch
from msrest.authentication import CognitiveServicesCredentials

# Replace with your subscription key.
subscription_key = "YOUR_SUBSCRIPTION_KEY"

# Instantiate the client and replace with your endpoint.
client = WebSearchClient(endpoint="YOUR_ENDPOINT", credentials=CognitiveServicesCredentials(subscription_key))

# Make a request. Replace Yosemite if you'd like.
web_data = client.web.search(query="Yosemite")
print("\r\nSearched for Query# \" Yosemite \"")

'''
Web pages
If the search response contains web pages, the first result's name and url
are printed.
'''
if hasattr(web_data.web_pages, 'value'):
    print("\r\nWebpage Results#{}".format(len(web_data.web_pages.value)))
    first_web_page = web_data.web_pages.value[0]
    print("First web page name: {} ".format(first_web_page.name))
    print("First web page URL: {} ".format(first_web_page.url))
else:
    print("Didn't find any web pages...")

'''
Images
If the search response contains images, the first result's name and url
are printed.
'''
if hasattr(web_data.images, 'value'):
    print("\r\nImage Results#{}".format(len(web_data.images.value)))
    first_image = web_data.images.value[0]
    print("First Image name: {} ".format(first_image.name))
    print("First Image URL: {} ".format(first_image.url))
else:
    print("Didn't find any images...")

'''
News
If the search response contains news, the first result's name and url
are printed.
'''
if hasattr(web_data.news, 'value'):
    print("\r\nNews Results#{}".format(len(web_data.news.value)))
    first_news = web_data.news.value[0]
    print("First News name: {} ".format(first_news.name))
    print("First News URL: {} ".format(first_news.url))
else:
    print("Didn't find any news...")

'''
Videos
If the search response contains videos, the first result's name and url
are printed.
'''
if hasattr(web_data.videos, 'value'):
    print("\r\nVideos Results#{}".format(len(web_data.videos.value)))
    first_video = web_data.videos.value[0]
    print("First Videos name: {} ".format(first_video.name))
    print("First Videos URL: {} ".format(first_video.url))
else:
    print("Didn't find any videos...")
```
5. Replace `YOUR_SUBSCRIPTION_KEY` with a valid subscription key.
6. Replace `YOUR_ENDPOINT` with the endpoint URL shown in the Azure portal, and remove the *bing/v7.0* section from the end of the endpoint.
7. Run the program. For example: `python your_program.py`
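The endpoint trimming from step 6 can also be done in code. A tiny helper (the endpoint value below is a made-up example, not your actual portal value) strips everything from `/bing/` onward:

```python
def trim_endpoint(portal_endpoint: str) -> str:
    """Drop the trailing bing/v7.0 path segment from a portal endpoint."""
    return portal_endpoint.split("/bing/")[0]

# Hypothetical portal value, for illustration only:
endpoint = trim_endpoint("https://api.example.com/bing/v7.0")
```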
### Define Functions & Filter Results
This sample uses the [count] and [offset] parameters to limit the number of results returned using the SDK's *search method*. The [name] and [url] for the first result are printed.
Search Method: `https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-search-websearch/azure.cognitiveservices.search.websearch.operations.weboperations?view=azure-python&preserve-view=true`
1. Add this code to your Python project
```python
# Declare the function.
def web_results_with_count_and_offset(subscription_key):
    client = WebSearchClient(endpoint="YOUR_ENDPOINT", credentials=CognitiveServicesCredentials(subscription_key))
    try:
        '''
        Set the query, offset, and count using the SDK's search method. See:
        https://learn.microsoft.com/python/api/azure-cognitiveservices-search-websearch/azure.cognitiveservices.search.websearch.operations.weboperations?view=azure-python.
        '''
        web_data = client.web.search(query="Best restaurants in Seattle", offset=10, count=20)
        print("\r\nSearching for \"Best restaurants in Seattle\"")
        if web_data.web_pages.value:
            '''
            If web pages are available, print the # of responses, and the first and second
            web pages returned.
            '''
            print("Webpage Results#{}".format(len(web_data.web_pages.value)))
            first_web_page = web_data.web_pages.value[0]
            print("First web page name: {} ".format(first_web_page.name))
            print("First web page URL: {} ".format(first_web_page.url))
        else:
            print("Didn't find any web pages...")
    except Exception as err:
        print("Encountered exception. {}".format(err))
```
2. Run the Program
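To walk deeper into the result set than a single `count`/`offset` pair allows, the offsets can be generated page by page. This helper is independent of the SDK and only computes the pairs each call would use:

```python
def page_offsets(total: int, page_size: int):
    """Yield (offset, count) pairs covering `total` results page by page."""
    for offset in range(0, total, page_size):
        yield offset, min(page_size, total - offset)

# Each pair would feed one call such as:
#   client.web.search(query="...", offset=offset, count=count)
pages = list(page_offsets(45, 20))  # [(0, 20), (20, 20), (40, 5)]
```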
### Filter for News & Freshness
This sample uses the [response_filter] and [freshness] parameters to filter search results using the SDK's *search method*. The search results returned are limited to news articles and pages that Bing has discovered within the last 24 hours. The [name] and [url] for the first result are printed.
Search Method: `https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-search-websearch/azure.cognitiveservices.search.websearch.operations.weboperations`
1. Add this code to your Python project
```python
# Declare the function.
def web_search_with_response_filter(subscription_key):
    client = WebSearchClient(endpoint="YOUR_ENDPOINT", credentials=CognitiveServicesCredentials(subscription_key))
    try:
        '''
        Set the query, response_filter, and freshness using the SDK's search method. See:
        https://learn.microsoft.com/python/api/azure-cognitiveservices-search-websearch/azure.cognitiveservices.search.websearch.operations.weboperations?view=azure-python.
        '''
        web_data = client.web.search(query="xbox",
                                     response_filter=["News"],
                                     freshness="Day")
        print("\r\nSearching for \"xbox\" with the response filter set to \"News\" and freshness filter set to \"Day\".")
        '''
        If news articles are available, print the # of responses, and the first and second
        articles returned.
        '''
        if web_data.news.value:
            print("# of news results: {}".format(len(web_data.news.value)))
            first_web_page = web_data.news.value[0]
            print("First article name: {} ".format(first_web_page.name))
            print("First article URL: {} ".format(first_web_page.url))
            print("")
            second_web_page = web_data.news.value[1]
            print("\nSecond article name: {} ".format(second_web_page.name))
            print("Second article URL: {} ".format(second_web_page.url))
        else:
            print("Didn't find any news articles...")
    except Exception as err:
        print("Encountered exception. {}".format(err))

# Call the function.
web_search_with_response_filter(subscription_key)
```
2. Run the Program
### Use safe search, answer count, and the promote filter
This sample uses the [answer_count], [promote], and [safe_search] parameters to filter search results using the SDK's *search method*. The [name] and [url] for the first result are displayed.
Search Method: `https://learn.microsoft.com/en-us/python/api/azure-cognitiveservices-search-websearch/azure.cognitiveservices.search.websearch.operations.weboperations`
1. Add this code to your Python project
```python
# Declare the function.
def web_search_with_answer_count_promote_and_safe_search(subscription_key):
    client = WebSearchClient(endpoint="YOUR_ENDPOINT", credentials=CognitiveServicesCredentials(subscription_key))
    try:
        '''
        Set the query, answer_count, promote, and safe_search parameters using the SDK's search method. See:
        https://learn.microsoft.com/python/api/azure-cognitiveservices-search-websearch/azure.cognitiveservices.search.websearch.operations.weboperations?view=azure-python.
        '''
        web_data = client.web.search(
            query="Niagara Falls",
            answer_count=2,
            promote=["videos"],
            safe_search=SafeSearch.strict  # or directly "Strict"
        )
        print("\r\nSearching for \"Niagara Falls\"")
        '''
        If results are available, print the # of responses, and the first result returned.
        '''
        if web_data.web_pages.value:
            print("Webpage Results#{}".format(len(web_data.web_pages.value)))
            first_web_page = web_data.web_pages.value[0]
            print("First web page name: {} ".format(first_web_page.name))
            print("First web page URL: {} ".format(first_web_page.url))
        else:
            print("Didn't see any web data...")
    except Exception as err:
        print("Encountered exception. {}".format(err))
```
2. Run the Program
### Clean up resources
When you're done with this project, make sure to remove your subscription key from the program's code and to deactivate your virtual environment.
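One way to keep the key out of source in the first place is to read it from the environment. The variable name below is a suggestion for this project, not an established convention:

```python
import os

def load_subscription_key(env_var: str = "BING_SUBSCRIPTION_KEY") -> str:
    """Read the subscription key from the environment instead of source code.

    BING_SUBSCRIPTION_KEY is a hypothetical variable name for this project.
    """
    key = os.environ.get(env_var, "")
    if not key:
        raise RuntimeError(f"Set {env_var} before running the samples.")
    return key
```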
## Learn how to use the Cognitive Services Python SDK with these samples
`https://github.com/Azure-Samples/cognitive-services-python-sdk-samples?tab=readme-ov-file`
```