Quickstart
Get started with OmniMind in minutes!
This guide will help you get started with OmniMind quickly. We’ll cover installation, basic configuration, and running your first agent.
Installation
You can install OmniMind using pip:
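A typical install looks like this (the PyPI package name `omnimind` is assumed from the project name; check the project's README for the exact name):

```shell
pip install omnimind
```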
Or install from source:
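A source install usually follows the standard clone-and-editable-install pattern (the repository URL below is a placeholder; substitute the project's actual repo):

```shell
# Clone the repository (placeholder URL)
git clone https://github.com/<org>/omnimind.git
cd omnimind

# Install in editable mode so local changes take effect immediately
pip install -e .
```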
Installing LangChain Providers
OmniMind works with various LLM providers through LangChain. You’ll need to install the appropriate LangChain provider package for your chosen LLM. For example:
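The LangChain provider packages are published separately; install the one matching your LLM vendor:

```shell
# Pick the provider package for your chosen LLM
pip install langchain-openai      # OpenAI models
pip install langchain-anthropic   # Anthropic models
pip install langchain-groq        # Groq models
```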
Important: Only models with tool calling capabilities can be used with OmniMind. Make sure your chosen model supports function calling or tool use.
Environment Setup
Set up your environment variables in a .env file:
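For example, if you are using an OpenAI model (the variable name depends on your provider; `OPENAI_API_KEY` is what `langchain-openai` reads by default):

```shell
# .env
OPENAI_API_KEY=your-api-key-here
```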
Your First Agent
Here’s a simple example to get you started:
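A minimal sketch of a first agent, assuming the `omnimind` import path and an `MCPClient.from_dict()` constructor (both assumptions; the `MCPClient` and `OmniMind` class names and `agent.run()` come from this guide). The Playwright MCP server is used here purely as an illustrative server:

```python
import asyncio

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from omnimind import MCPClient, OmniMind  # import path is an assumption

async def main():
    load_dotenv()  # read API keys from your .env file

    # Define a single MCP server inline; the schema mirrors the
    # config-file examples below (exact schema may differ)
    config = {
        "mcpServers": {
            "playwright": {
                "command": "npx",
                "args": ["@playwright/mcp@latest"],
            }
        }
    }

    client = MCPClient.from_dict(config)  # constructor name is an assumption
    llm = ChatOpenAI(model="gpt-4o")      # must support tool calling

    agent = OmniMind(llm=llm, client=client)
    result = await agent.run("Find the best restaurant in San Francisco")
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```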
Configuration Options
You can also add the servers configuration from a config file:
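Loading from a file might look like this (`MCPClient.from_config_file` is an assumed constructor name, following the inline-dict pattern above):

```python
from omnimind import MCPClient  # import path is an assumption

# Load server definitions from a JSON file instead of an inline dict
client = MCPClient.from_config_file("browser_mcp.json")
```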
Example configuration file (browser_mcp.json):
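A sketch of the file, assuming the common `mcpServers` schema used by MCP clients and the Playwright MCP server as the browser server:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```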
Using Multiple Servers
The MCPClient can be configured with multiple MCP servers, allowing your agent to access tools from different sources. This enables complex workflows spanning several domains (e.g., web browsing and API interaction).
Configuration:
Define multiple servers in your configuration file (multi_server_config.json):
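A sketch with two servers, again assuming the `mcpServers` schema (the server keys `browser` and `filesystem` are illustrative names you choose yourself):

```json
{
  "mcpServers": {
    "browser": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```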
Usage:
When an OmniMind agent is configured with multiple servers, it can access tools from all of the connected servers. However, for tasks that target a specific server, you may need to state explicitly which one to use. This is done with the server_name parameter of the agent.run() method, as demonstrated in the following code snippet:
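A sketch of server-targeted runs, assuming an agent built against the multi-server config above (the `server_name` values must match the keys under `mcpServers`):

```python
# Route a task to the browser server explicitly
result = await agent.run(
    "Search for the best restaurant in San Francisco",
    server_name="browser",  # matches a key in multi_server_config.json
)

# Route a different task to the filesystem server
result = await agent.run(
    "List the files in the allowed directory",
    server_name="filesystem",
)
```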
Available MCP Servers
OmniMind supports any MCP server, allowing you to connect to a wide range of server implementations. For a comprehensive list of available servers, check out the awesome-mcp-servers repository.
Each server requires its own configuration. Check the Configuration Guide for details.
Next Steps
- Learn about Configuration Options
- Explore Example Use Cases
- Check out Advanced Features