# API Reference
Source: https://techiral.mintlify.app/api-reference/introduction
Complete OmniMind API Documentation
# Development
Source: https://techiral.mintlify.app/development
Contributing to OmniMind
# Development Guide
This guide will help you set up your development environment and contribute to OmniMind.
## Prerequisites
* Python 3.8 or higher
* Git
* Node.js and npm (for MCP server dependencies)
## Setting Up Development Environment
1. Clone the repository:
```bash
git clone https://github.com/Techiral/OmniMind.git
cd OmniMind
```
2. Install development dependencies:
```bash
pip install -e ".[dev]"
```
3. Install pre-commit hooks:
```bash
pre-commit install
```
## Code Style
OmniMind uses Ruff for code formatting and linting. The project follows these style guidelines:
* Use type hints for all function parameters and return values
* Follow the PEP 8 style guide
* Use docstrings for all public functions and classes
* Keep functions focused and single-purpose
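Following these guidelines, a typical helper would look like this (a hypothetical function, shown only to illustrate the style, not part of the OmniMind API):

```python
import json
from typing import Any, Dict, Optional


def load_config(path: str, encoding: Optional[str] = None) -> Dict[str, Any]:
    """Load a JSON configuration file into a dictionary.

    Args:
        path: Path to the JSON configuration file.
        encoding: Text encoding to use; defaults to UTF-8.

    Returns:
        The parsed configuration.
    """
    # Single purpose: read and parse one file, nothing else
    with open(path, encoding=encoding or "utf-8") as f:
        return json.load(f)
```

Note the type hints on every parameter and the return value, the docstring on the public function, and the narrow, single-purpose scope.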
## Running Tests
The project uses pytest for testing. To run the test suite:
```bash
pytest
```
For more specific test runs:
```bash
# Run tests with coverage
pytest --cov=omnimind
# Run specific test file
pytest tests/test_client.py
# Run tests with verbose output
pytest -v
```
## Documentation
Documentation is written in MDX format and uses Mintlify for rendering. To preview documentation changes:
1. Install Mintlify CLI:
```bash
npm i -g mintlify
```
2. Run the development server:
```bash
mintlify dev
```
## Contributing
1. Create a new branch for your feature:
```bash
git checkout -b feature/your-feature-name
```
2. Make your changes and commit them:
```bash
git add .
git commit -m "Description of your changes"
```
3. Push your changes and create a pull request:
```bash
git push origin feature/your-feature-name
```
## Project Structure
```
OmniMind/
├── omnimind/ # Main package code
├── tests/ # Test files
├── examples/ # Example usage
├── docs/ # Documentation
├── static/ # Static assets
└── pyproject.toml # Project configuration
```
## Adding New MCP Servers
To add support for a new MCP server:
1. Create a new configuration template in the examples directory
2. Add the necessary server-specific code in the `omnimind` package
3. Update documentation with new server information
4. Add tests for the new server functionality
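A minimal template for step 1 might look like this (the server name, package, and environment variable are placeholders for your new server):

```json
{
  "mcpServers": {
    "my_new_server": {
      "command": "npx",
      "args": ["-y", "@example/my-new-mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "your_api_key_here"
      }
    }
  }
}
```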
## Release Process
1. Update the version in `pyproject.toml`
2. Update CHANGELOG.md
3. Create a new release tag
4. Build and publish to PyPI:
```bash
python -m build
python -m twine upload dist/*
```
# Configuration
Source: https://techiral.mintlify.app/essentials/configuration
Configure your OmniMind environment
# Configuration Guide
This guide covers all the configuration options available in OmniMind.
## API Keys
Make sure the API key for your chosen provider is available in the environment. You can either:
1. Create a `.env` file with your keys:
```bash
# OpenAI
OPENAI_API_KEY=your_api_key_here
# Anthropic
ANTHROPIC_API_KEY=your_api_key_here
# Groq
GROQ_API_KEY=your_api_key_here
# Gemini
GEMINI_API_KEY=your_api_key_here
```
and load it in Python using
```python
from dotenv import load_dotenv
load_dotenv()
```
This will make all the keys defined in `.env` available in your Python runtime, provided that you run your code from the directory where the `.env` file is located.
2. Set it in your environment by running the following command in your terminal, e.g., for OpenAI:
```bash
export OPENAI_API_KEY='...'
```
and then import it in your Python code as:
```python
import os
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
```
or any other method you might prefer.
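Whichever method you choose, it helps to fail fast when a key is missing. A small helper along these lines (hypothetical, not part of OmniMind) keeps the error message clear:

```python
import os


def require_api_key(name: str) -> str:
    """Return the value of an environment variable, raising a clear error if unset."""
    value = os.getenv(name, "")
    if not value:
        raise ValueError(f"Please set the {name} environment variable")
    return value
```

Call it once at startup, e.g. `api_key = require_api_key("OPENAI_API_KEY")`, so a missing key fails immediately rather than deep inside a request.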
## MCP Server Configuration
OmniMind supports any MCP server through a flexible configuration system. For curated collections of servers, see [https://github.com/punkpeye/awesome-mcp-servers](https://github.com/punkpeye/awesome-mcp-servers) and [https://github.com/appcypher/awesome-mcp-servers](https://github.com/appcypher/awesome-mcp-servers).
The configuration is defined in a JSON file with the following structure:
```json
{
  "mcpServers": {
    "server_name": {
      "command": "command_to_run",
      "args": ["arg1", "arg2"],
      "env": {
        "ENV_VAR": "value"
      }
    }
  }
}
```
MCP servers can use different connection types (STDIO, HTTP). For details on these connection types and how to configure them, see the [Connection Types](./connection-types) guide.
### Configuration Options
* `server_name`: A unique identifier for your MCP server
* `command`: The command to start the MCP server
* `args`: Array of arguments to pass to the command
* `env`: Environment variables to set for the server
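These options can be sanity-checked before handing the file to OmniMind. The sketch below (a hypothetical helper, not library code) verifies the structure described above:

```python
import json
from typing import List


def list_mcp_servers(path: str) -> List[str]:
    """Validate an MCP config file and return the configured server names."""
    with open(path) as f:
        config = json.load(f)
    servers = config.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        raise ValueError("Config must contain a non-empty 'mcpServers' object")
    for name, spec in servers.items():
        # Every server needs a way to start it (command) or reach it (url)
        if "command" not in spec and "url" not in spec:
            raise ValueError(f"Server '{name}' needs either 'command' or 'url'")
    return list(servers)
```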
### Example Configuration
Here's a basic example of how to configure an MCP server:
```json
{
  "mcpServers": {
    "my_server": {
      "command": "npx",
      "args": ["@my-mcp/server"],
      "env": {
        "PORT": "3000"
      }
    }
  }
}
```
### Multiple Server Configuration
You can configure multiple MCP servers in a single configuration file, allowing you to use different servers for different tasks or to combine their capabilities. For example:
```json
{
  "mcpServers": {
    "airbnb": {
      "command": "npx",
      "args": ["-y", "@openbnb/mcp-server-airbnb", "--ignore-robots-txt"]
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": { "DISPLAY": ":1" }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/pietro/projects/mcp-use/"]
    }
  }
}
```
For a complete example of using multiple servers, see the [multi-server example](https://github.com/pietrozullo/mcp-use/blob/main/examples/multi_server_example.py) in our repository.
## Agent Configuration
When creating an OmniMind agent, you can configure several parameters:
```python
import asyncio
import os

from omnimind import OmniMind


async def advanced_multi_server_usage():
    # Ensure GOOGLE_API_KEY is set
    api_key = os.getenv("GOOGLE_API_KEY")
    if not api_key:
        raise ValueError("Please set the GOOGLE_API_KEY environment variable before running this script")

    # Initialize OmniMind with the multi-server config
    agent = OmniMind(config_path="multi_server_config.json", api_key=api_key)
    try:
        # Connect to all servers
        await agent._connect_servers()
        print("Tools loaded:", [tool.name for tool in agent.tools])

        # Step 1: Fetch web content
        fetch_query = "Fetch the content of 'https://marvel.fandom.com/wiki/Stark_Industries_(Earth-616)' and summarize it"
        fetch_response = await agent.invoke(fetch_query)
        summary = fetch_response["messages"][-1].content  # Extract the summary from the last message
        print("\nStep 1 - Web Summary:", summary)

        # Step 2: Store the summary in memory
        memory_query = f"Store this in memory under key 'stark_summary': {summary}"
        await agent.invoke(memory_query)
        print("Step 2 - Stored summary in memory")

        # Step 3: Recall from memory and save to a file
        recall_query = "Recall the value stored under 'stark_summary' and save it to a file named 'stark_summary.txt' in the workspace directory"
        recall_response = await agent.invoke(recall_query)
        print("Step 3 - Recalled and saved to file:", recall_response["messages"][-1].content)
    except Exception as e:
        print(f"Error during execution: {e}")
    finally:
        await agent.close()


if __name__ == "__main__":
    asyncio.run(advanced_multi_server_usage())
```
### Available Parameters
* `config_path`: Path to the server configuration file.
* `api_key`: Authentication key for accessing services.
* `server_name`: Specifies the target server for tasks.
## Error Handling
OmniMind provides several ways to handle errors:
1. **Connection Errors**: Check your MCP server configuration and ensure the server is running
2. **Authentication Errors**: Verify your API keys are correctly set in the environment
3. **Timeout Errors**: Adjust the `max_steps` parameter if operations are timing out
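For transient connection errors in particular, a retry wrapper is often enough. The sketch below is generic and hypothetical; it only assumes the `agent.invoke` coroutine used elsewhere in this guide:

```python
import asyncio


async def invoke_with_retry(agent, query: str, retries: int = 2):
    """Invoke a query, retrying connection failures with exponential backoff."""
    for attempt in range(retries + 1):
        try:
            return await agent.invoke(query)
        except ConnectionError:
            if attempt == retries:
                raise  # out of retries; surface the error to the caller
            await asyncio.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ...
```

Authentication errors, by contrast, should not be retried: fix the key in the environment and rerun.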
## Best Practices
1. Always use environment variables for sensitive information
2. Keep configuration files in version control (without sensitive data)
3. Use appropriate timeouts for different types of operations
4. Enable verbose logging during development
5. Test configurations in a development environment before production
# Connection Types
Source: https://techiral.mintlify.app/essentials/connection-types
Understanding the different connection types for MCP servers
# Connection Types for MCP Servers
MCP servers can communicate with clients using different connection protocols, each with its own advantages and use cases. This guide explains the two primary connection types supported by OmniMind:
## Standard Input/Output (STDIO)
STDIO connections run the MCP server as a child process and communicate through standard input and output streams.
### Characteristics:
* **Local Operation**: The server runs as a child process on the same machine
* **Simplicity**: Easy to set up with minimal configuration
* **Security**: No network exposure, ideal for sensitive operations
* **Performance**: Low latency for local operations
### Configuration Example:
```json
{
  "mcpServers": {
    "stdio_server": {
      "command": "npx",
      "args": ["@my-mcp/server"],
      "env": {}
    }
  }
}
```
## HTTP Connections
HTTP connections communicate with MCP servers over standard HTTP/HTTPS protocols.
### Characteristics:
* **RESTful Architecture**: Follows familiar HTTP request/response patterns
* **Statelessness**: Each request is independent
* **Compatibility**: Works well with existing web infrastructure
* **Firewall-Friendly**: Uses standard ports that are typically open
### Configuration Example:
```json
{
  "mcpServers": {
    "http_server": {
      "url": "http://localhost:3000",
      "headers": {
        "Authorization": "Bearer ${AUTH_TOKEN}"
      }
    }
  }
}
```
## Choosing the Right Connection Type
The choice of connection type depends on your specific use case:
1. **STDIO**: Best for local development, testing, and enhanced security scenarios where network exposure is a concern
2. **HTTP**: Ideal for stateless operations, simple integrations, and when working with existing HTTP infrastructure
When configuring your OmniMind environment, you can specify the connection type in your configuration file as shown in the examples above.
## Using Connection Types
Connection types are automatically inferred from your configuration file based on the parameters provided:
```python
from omnimind import OmniMind
# The connection type is automatically inferred based on your config file
agent = OmniMind(config_path="my_config.json")
agent.run()
```
For example:
* If your configuration includes `command` and `args`, an STDIO connection will be used
* If your configuration has a `url` starting with `http://` or `https://`, an HTTP connection will be used
This automatic inference simplifies the configuration process and ensures the appropriate connection type is used without requiring explicit specification.
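The inference rules above can be sketched as follows (an illustrative re-implementation, not the library's actual code):

```python
def infer_connection_type(server_config: dict) -> str:
    """Mimic the documented inference: command/args -> STDIO, http(s) url -> HTTP."""
    if "command" in server_config:
        return "stdio"
    url = server_config.get("url", "")
    if url.startswith(("http://", "https://")):
        return "http"
    raise ValueError("Cannot infer a connection type from this server configuration")
```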
For more details on connection configuration, see the [Configuration Guide](./configuration).
# LLM Integration
Source: https://techiral.mintlify.app/essentials/llm-integration
Integrate any LLM with OmniMind through LangChain
# LLM Integration Guide
OmniMind supports integration with **any** Large Language Model (LLM) that is compatible with LangChain. This guide covers how to use different LLM providers with OmniMind and emphasizes the flexibility to use any LangChain-supported model.
## Universal LLM Support
OmniMind leverages LangChain's architecture to support any LLM that implements the LangChain interface. This means you can use virtually any model from any provider, including:
* OpenAI models (GPT-4, GPT-3.5, etc.)
* Anthropic models (Claude)
* Google models (Gemini)
* Mistral models
* Groq models
* Llama models
* Cohere models
* Open source models (via LlamaCpp, HuggingFace, etc.)
* Custom or self-hosted models
* Any other model with a LangChain integration
Read more at [https://python.langchain.com/docs/integrations/chat/](https://python.langchain.com/docs/integrations/chat/)
# Introduction
Source: https://techiral.mintlify.app/introduction
Welcome to OmniMind - The Open Source MCP Client Library
## What is OmniMind?
OmniMind is an open-source library that enables developers to connect any Large Language Model (LLM) to any MCP server, allowing the creation of custom agents with tool access without relying on closed-source or application-specific clients.
## Key Features
* Connect any LLM to any MCP server without vendor lock-in
* Support for any MCP server through a simple configuration system
* Simple JSON-based configuration for MCP server integration
* Compatible with any LangChain-supported LLM provider
* Connect to MCP servers running on specific HTTP ports for web-based integrations
* Agents can dynamically choose the most appropriate MCP server for the task
## Getting Started
* Install OmniMind and set up your environment
* Learn how to configure OmniMind with your MCP server
# Quickstart
Source: https://techiral.mintlify.app/quickstart
Get started with OmniMind in minutes!
# Quickstart Guide
This guide will help you get started with OmniMind quickly. We'll cover installation, basic configuration, and running your first agent.
## Installation
You can install OmniMind using pip:
```bash
pip install omnimind
```
Or install from source:
```bash
git clone https://github.com/Techiral/OmniMind.git
cd OmniMind
pip install -e .
```
## Installing LangChain Providers
OmniMind works with various LLM providers through LangChain. You'll need to install the appropriate LangChain provider package for your chosen LLM. For example:
```bash
# For OpenAI
pip install langchain-openai

# For Anthropic
pip install langchain-anthropic
```
For other providers, check the [LangChain chat models documentation](https://python.langchain.com/docs/integrations/chat/).
> **Important**: Only models with tool calling capabilities can be used with OmniMind. Make sure your chosen model supports function calling or tool use.
## Environment Setup
Set up your environment variables in a `.env` file:
```bash
OPENAI_API_KEY=your_api_key_here
ANTHROPIC_API_KEY=your_api_key_here
```
## Your First Agent
Here's a simple example to get you started:
```python
# Add your own server to OmniMind
from omnimind import OmniMind
agent = OmniMind()
agent.add_server("my_server", command="python", args=["my_server.py"])
agent.run() # Your server’s live!
```
## Configuration Options
You can also add the servers configuration from a config file:
```python
# Initialize OmniMind with the multi-server config
agent = OmniMind(config_path="browser_mcp.json", api_key=api_key)
```
Example configuration file (`browser_mcp.json`):
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"],
      "env": {
        "DISPLAY": ":1"
      }
    }
  }
}
```
## Using Multiple Servers
OmniMind can be configured with multiple MCP servers, allowing your agent to access tools from different sources. This capability enables complex workflows spanning various domains (e.g., web browsing and API interaction).
**Configuration:**
Define multiple servers in your configuration file (`multi_server_config.json`):
```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "memory": {
      "command": "C:\\Program Files\\nodejs\\npx.cmd",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {
        "MEMORY_FILE_PATH": "C:\\Users\\Lenovo\\OneDrive\\Desktop\\final JARVIS\\mcp\\workspace\\memory.json"
      }
    },
    "filesystem": {
      "command": "C:\\Program Files\\nodejs\\npx.cmd",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\Lenovo\\OneDrive\\Desktop",
        "C:\\Users\\Lenovo\\OneDrive\\Desktop\\final JARVIS\\mcp\\workspace"
      ]
    }
  }
}
```
```
**Usage:**
When an `OmniMind` agent is configured for multiple servers, it can access tools from all of the connected servers. For tasks targeting a specific server, however, you may need to specify which server to use explicitly via the `server_name` parameter of the `agent.run()` method. The following snippet shows an agent working across multiple servers:
```python
# Initialize OmniMind with the multi-server config
agent = OmniMind(config_path="multi_server_config.json", api_key=api_key)
try:
    # Connect to all servers
    await agent._connect_servers()
    print("Tools loaded:", [tool.name for tool in agent.tools])

    # Step 1: Fetch web content
    fetch_query = "Fetch the content of 'https://marvel.fandom.com/wiki/Stark_Industries_(Earth-616)' and summarize it"
    fetch_response = await agent.invoke(fetch_query)
    summary = fetch_response["messages"][-1].content  # Extract the summary from the last message
    print("\nStep 1 - Web Summary:", summary)
except Exception as e:
    print(f"Error during execution: {e}")
finally:
    await agent.close()
```
## Available MCP Servers
OmniMind supports any MCP server, allowing you to connect to a wide range of server implementations. For a comprehensive list of available servers, check out the [awesome-mcp-servers](https://github.com/punkpeye/awesome-mcp-servers) repository.
Each server requires its own configuration. Check the [Configuration Guide](/essentials/configuration) for details.
## Next Steps
* Learn about [Configuration Options](/essentials/configuration)
* Explore [Example Use Cases](/examples)
* Check out [Advanced Features](/essentials/advanced)