The MemoryGraph Python SDK provides native integrations for popular AI frameworks, giving your agents persistent memory without the MCP overhead.
Installation

```bash
# Core SDK
pip install memorygraphsdk

# With framework integrations
pip install memorygraphsdk[llamaindex]  # LlamaIndex
pip install memorygraphsdk[langchain]   # LangChain
pip install memorygraphsdk[crewai]      # CrewAI
pip install memorygraphsdk[autogen]     # AutoGen
pip install memorygraphsdk[all]         # All frameworks
```

Quick Start
```python
from memorygraphsdk import MemoryGraphClient

# Initialize client
client = MemoryGraphClient(api_key="mg_your_key_here")

# Create a memory
memory = client.create_memory(
    type="solution",
    title="Fixed Redis timeout issue",
    content="Used exponential backoff with max 5 retries.",
    tags=["redis", "timeout", "solution"],
    importance=0.8
)
print(f"Created memory: {memory.id}")

# Search memories
results = client.search_memories(query="redis timeout", limit=10)
for mem in results:
    print(f"- {mem.title}")

# Create relationships (solution_id and problem_id are the ids of two
# previously created memories, e.g. the memory.id returned by create_memory)
client.create_relationship(
    from_memory_id=solution_id,
    to_memory_id=problem_id,
    relationship_type="SOLVES"
)
```

Framework Integrations
| Framework | Integration Class | Description |
|---|---|---|
| LlamaIndex | MemoryGraphChatMemory, MemoryGraphRetriever | Chat memory + RAG retrieval |
| LangChain | MemoryGraphMemory | BaseMemory with session support |
| CrewAI | MemoryGraphCrewMemory | Multi-agent persistent memory |
| AutoGen | MemoryGraphAutoGenHistory | Conversation history |
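All four integration classes follow the same idea: the framework's memory interface is backed by the client's `create_memory` and `search_memories` calls. As a rough illustration of that wiring (the `FakeClient` and `ChatMemoryBridge` classes below are invented for this sketch; they are not the SDK's actual integration code):

```python
class FakeClient:
    """Stub standing in for MemoryGraphClient, storing memories in a list."""
    def __init__(self):
        self.store = []

    def create_memory(self, **fields):
        self.store.append(fields)
        return fields

    def search_memories(self, query, limit=10):
        return [m for m in self.store if query in m["content"]][:limit]


class ChatMemoryBridge:
    """Illustrative bridge: persist chat turns as 'conversation' memories."""
    def __init__(self, client, session_id):
        self.client = client
        self.session_id = session_id

    def save_turn(self, role, text):
        # Each turn becomes a 'conversation' memory tagged with the session.
        self.client.create_memory(
            type="conversation",
            title=f"{self.session_id}:{role}",
            content=text,
            tags=[self.session_id, role],
        )

    def load_context(self, query):
        # Pull matching prior turns back in as context for the next reply.
        return [m["content"] for m in self.client.search_memories(query)]


bridge = ChatMemoryBridge(FakeClient(), session_id="demo")
bridge.save_turn("user", "How do I fix redis timeouts?")
print(bridge.load_context("redis"))  # ['How do I fix redis timeouts?']
```

The real integrations implement each framework's native interface (e.g. LangChain's `BaseMemory`) on top of the same two calls.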
Async Support
Full async/await support for high-performance applications:
```python
import asyncio
from memorygraphsdk import AsyncMemoryGraphClient

async def main():
    async with AsyncMemoryGraphClient(api_key="mg_...") as client:
        memory = await client.create_memory(
            type="solution",
            title="Async solution",
            content="..."
        )
        memories = await client.search_memories(query="async")

asyncio.run(main())
```

Client API Reference
| Method | Description |
|---|---|
| `create_memory(...)` | Create a new memory |
| `get_memory(id)` | Get memory by ID |
| `update_memory(id, ...)` | Update a memory |
| `delete_memory(id)` | Delete a memory |
| `search_memories(...)` | Search for memories |
| `recall_memories(query)` | Natural language recall |
| `create_relationship(...)` | Create relationship between memories |
| `get_related_memories(id)` | Get related memories |
Memory Types
- `task` - Tasks and todos
- `solution` - Solutions to problems
- `problem` - Problems encountered
- `error` - Errors and exceptions
- `fix` - Bug fixes
- `code_pattern` - Code patterns
- `workflow` - Workflows and processes
- `conversation` - Chat messages (used by integrations)
- `general` - General information
Relationship Types
- `SOLVES` - Solution to Problem
- `CAUSES` - Cause to Effect
- `TRIGGERS` - Trigger to Event
- `REQUIRES` - Dependent to Dependency
- `RELATED_TO` - General association
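Typed relationships turn individual memories into a graph you can traverse. As a plain-Python illustration (not the SDK's internal model; the ids and helper below are invented for the sketch), this models a few memories with directed, typed edges and walks `SOLVES` edges to find the solutions attached to a problem:

```python
# (id -> (type, title)) pairs standing in for stored memories
memories = {
    "m1": ("problem", "Redis timeouts under load"),
    "m2": ("solution", "Exponential backoff with max 5 retries"),
    "m3": ("error", "ConnectionError raised by the Redis client"),
}

# Directed edges: (from_id, relationship_type, to_id)
edges = [
    ("m2", "SOLVES", "m1"),  # the solution solves the problem
    ("m1", "CAUSES", "m3"),  # the problem causes the error
]

def related(memory_id, relationship_type):
    """Return ids of memories that point *at* memory_id with this edge type."""
    return [src for src, rel, dst in edges
            if dst == memory_id and rel == relationship_type]

for sid in related("m1", "SOLVES"):
    print(memories[sid][1])  # Exponential backoff with max 5 retries
```

In the SDK, the equivalent query is `get_related_memories(id)` after edges have been created with `create_relationship`.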
Configuration
Environment variables:
```bash
export MEMORYGRAPH_API_KEY="mg_..."
export MEMORYGRAPH_API_URL="https://graph-api.memorygraph.dev"  # optional
```

Or in code:
```python
client = MemoryGraphClient(
    api_key="mg_...",
    api_url="https://graph-api.memorygraph.dev",
    timeout=30.0
)
```

Error Handling
```python
from memorygraphsdk import (
    MemoryGraphClient,
    AuthenticationError,
    RateLimitError,
    NotFoundError
)

client = MemoryGraphClient(api_key="mg_...")

try:
    memory = client.get_memory("invalid-id")
except AuthenticationError:
    print("Invalid API key")
except NotFoundError:
    print("Memory not found")
except RateLimitError:
    print("Rate limit exceeded, please retry")
```

Get API Key
Sign up at app.memorygraph.dev to get your API key. Go to Settings → API Keys and click Generate New Key.
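A `RateLimitError` is usually transient, so retrying with exponential backoff (the same pattern as the Redis fix in the Quick Start) is a reasonable default. A minimal, framework-free sketch — the `with_backoff` helper and the flaky stub call are invented here, not part of the SDK:

```python
import time

class RateLimitError(Exception):
    """Stand-in for memorygraphsdk.RateLimitError in this sketch."""

def with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry `call` on RateLimitError, doubling the delay each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: let the caller handle it
            time.sleep(base_delay * (2 ** attempt))

# Example with a stubbed call that fails twice, then succeeds:
attempts = {"n": 0}
def flaky_search():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError()
    return ["redis timeout memory"]

print(with_backoff(flaky_search, base_delay=0.01))  # ['redis timeout memory']
```

With the real client, `call` would be a closure over e.g. `client.search_memories(...)` and the import would come from `memorygraphsdk`.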
Next Steps
- LlamaIndex Integration - Chat memory + RAG pipelines
- LangChain Integration - Persistent conversation memory
- CrewAI Integration - Multi-agent memory
- AutoGen Integration - Conversation history