
MCP Gets Tasks: A Game-Changer for Long-Running AI Operations

December 5, 2024 | Gregory Dickson | 7 min read

The Model Context Protocol is adding async task support—and it’s going to fundamentally change how AI agents handle complex, time-intensive work.

The Model Context Protocol (MCP) has been revolutionizing how AI agents interact with external tools and data sources since its release. But there’s been a significant limitation holding back more sophisticated use cases: every tool call blocks until completion. No way to check progress. No way to retrieve results later. No way to handle operations that take minutes or hours.

That’s about to change.

The Problem: When Tool Calls Take Too Long

If you’ve built any serious MCP server, you’ve hit this wall. Maybe you’re wrapping a workflow API that processes large datasets. Maybe you’re orchestrating multiple AI agents. Maybe you’re running comprehensive test suites or complex data analysis pipelines.

The current pattern forces an uncomfortable choice:

Option 1: Block and wait - Your agent sits idle for minutes or hours while a single operation completes. If the connection drops, you lose everything and start over.

Option 2: Split into multiple tools - Create start_job, check_status, and get_result tools. Now you’re relying on prompt engineering to make the agent poll correctly. Sometimes it works. Sometimes the agent “forgets” to check back. Sometimes it hallucinates job IDs.

Option 3: Build a polling server - Your MCP server does nothing but poll other services. You’re just moving the problem around.

None of these are good solutions.
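To make the pain of Option 2 concrete, here's roughly the polling dance you're hoping the agent performs. The tool names come from the convention above; the client API and response shapes are illustrative:

// The ideal Option 2 flow—except the agent has to improvise all of this
// from prompt instructions rather than running deterministic code.
const started = await client.callTool({
  name: "start_job",
  arguments: { dataset: "large_file.csv" }
});
const jobId = started.jobId; // the agent must carry this ID correctly

let status;
do {
  await new Promise(resolve => setTimeout(resolve, 5000)); // fixed 5s wait
  status = await client.callTool({
    name: "check_status",
    arguments: { jobId }
  });
} while (status.state === "running");

const result = await client.callTool({
  name: "get_result",
  arguments: { jobId }
});

Every step that's trivial in code becomes a step the model can skip or garble when it's the one driving.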

Enter SEP-1686: Tasks

The MCP core team has accepted SEP-1686, a specification for first-class async task support in the protocol. And it’s elegant.

How It Works

Tasks introduce a three-phase pattern:

// 1. CREATE - Start the operation, get task metadata back immediately
const task = await client.callTool({
  name: "analyze_dataset",
  arguments: { dataset: "large_file.csv" }
}, {
  createTask: true,
  ttl: 3600000 // Keep results for 1 hour
});

// Returns immediately with taskId: "abc-123", status: "working"

// 2. POLL - Check status when you want
const status = await client.getTaskStatus(task.taskId);
// { status: "working", pollInterval: 5000 }

// 3. RETRIEVE - Get the actual result when complete
const result = await client.getTaskResult(task.taskId);
// Returns the actual tool call result

Your host application stays in control. The agent can do other work. You can show progress in your UI. If the connection drops, you can reconnect and fetch results using the task ID.
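That reconnect story deserves emphasis: the task ID is the only state you need. A minimal sketch, using the same illustrative client API as above:

// Persist the task ID as soon as the task is created.
const savedTaskId = task.taskId; // stash in a session store, a DB, etc.

// Later, after a dropped connection and a fresh client session:
const result = await newClient.getTaskResult(savedTaskId);
// The server kept the result alive for the TTL you requested.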

Key Features

Generic Primitive: This isn’t just for tools. Tasks work with any MCP request type—tools, resources, prompts, sampling, you name it. The same pattern, consistently applied across the entire protocol.

Idempotent & Retry-Safe: Client-generated task IDs mean you can safely retry requests without creating duplicate tasks. Perfect for unreliable networks.
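Here's a sketch of what retry-safety looks like in practice. The taskId option below is my guess at how a client SDK might expose this; what the spec actually guarantees is that the client-generated ID deduplicates:

// Generate the task ID client-side, so it stays stable across retries.
const taskId = crypto.randomUUID();

async function startAnalysis() {
  return client.callTool({
    name: "analyze_dataset",
    arguments: { dataset: "large_file.csv" }
  }, { createTask: true, taskId }); // hypothetical option name
}

let task;
try {
  task = await startAnalysis();
} catch (networkError) {
  // Same taskId on the retry, so the server can deduplicate
  // instead of kicking off a second analysis.
  task = await startAnalysis();
}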

Resource Management: Built-in TTL (time-to-live) support means servers can clean up completed tasks automatically. No memory leaks from abandoned operations.
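On the server side, TTL handling can be as simple as a timestamped store and a periodic sweep. A minimal sketch (illustrative bookkeeping, not an SDK API):

// In-memory task store: taskId -> { status, result, expiresAt }
const tasks = new Map();

function completeTask(taskId, result, ttlMs) {
  tasks.set(taskId, {
    status: "completed",
    result,
    expiresAt: Date.now() + ttlMs // honor the client's requested TTL
  });
}

// Sweep expired tasks so abandoned results don't accumulate.
setInterval(() => {
  const now = Date.now();
  for (const [id, entry] of tasks) {
    if (entry.expiresAt <= now) tasks.delete(id);
  }
}, 60_000);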

Graceful Degradation: Servers that don’t support tasks just ignore the metadata and return results normally. No version negotiation needed.
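In client code, that degradation check can be a single branch. Field names follow this post's examples rather than the spec verbatim:

// Request a task, but tolerate servers that don't support them.
const response = await client.callTool({
  name: "analyze_dataset",
  arguments: { dataset: "large_file.csv" }
}, { createTask: true });

let result;
if (response.taskId) {
  // Task-aware server: poll, then fetch the result by ID.
  result = await client.getTaskResult(response.taskId);
} else {
  // Older server: it ran synchronously, and this is already the result.
  result = response;
}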

Bidirectional: Either clients or servers can create tasks. A server can task-ify a sampling request that needs user input, for example.

Real-World Impact

Amazon cited several production use cases driving this specification:

  • Healthcare & Life Sciences: Molecular analysis jobs processing hundreds of thousands of data points over several hours
  • Enterprise Automation: SDLC workflows spanning multiple teams and systems
  • Code Migration: Automated refactoring across large codebases with dependency analysis
  • Test Execution: Comprehensive test suites with thousands of cases
  • Multi-Agent Systems: Agents that need to coordinate without blocking each other

These aren’t edge cases. These are fundamental patterns for production AI applications.

MemoryGraph + Tasks = Powerful Memory Operations

I’m particularly excited about this because of what it means for MemoryGraph, my open-source MCP memory server.

MemoryGraph uses graph-based relationship tracking to give AI agents sophisticated, queryable memory. But some operations are computationally expensive:

Complex Graph Traversals

Finding all solutions related to a problem, following relationship chains, or exploring multi-hop connections across hundreds of memories—these queries can take time, especially as the graph grows.

Batch Memory Operations

Importing large conversation histories, bulk relationship creation, or memory consolidation operations that process hundreds of nodes.

Semantic Search at Scale

Vector similarity searches across large memory sets, especially with complex filtering or multi-term queries.

Memory Curation

Background cleanup operations, relationship strength decay, automated summarization of old memories, or graph optimization.

With task support, MemoryGraph can:

  1. Return immediately for expensive queries, letting agents continue other work
  2. Provide progress updates as complex traversals complete
  3. Cache results so agents can retrieve them multiple times without re-computation
  4. Support background operations without blocking the conversation
  5. Enable proactive polling from host applications to show memory operation status in the UI

Here’s what it might look like:

// Start a complex memory query
const task = await client.callTool({
  name: "memorygraph:recall_memories",
  arguments: {
    query: "authentication solutions",
    maxDepth: 3, // Deep relationship traversal
    includeRelated: true
  }
}, { createTask: true });

// Agent continues with other tasks...

// Host application polls and shows progress
const status = await client.getTaskStatus(task.taskId);
// UI shows: "Searching memories... (traversed 450 nodes)"

// Retrieve when ready
const memories = await client.getTaskResult(task.taskId);

Timeline & Implementation

The specification is already accepted and targeted for the DRAFT-2025-11-25 milestone. The full spec text is available in PR #1732, and SDK updates are in progress.

MemoryGraph will add task support once the official SDKs land. I’m planning to start with:

  1. Semantic search operations
  2. Complex graph traversals with relationship depth > 2
  3. Batch imports for large memory sets
  4. Background memory curation operations

Future Possibilities

The task primitive is designed to be extensible. Future enhancements being discussed include:

  • Push notifications for state changes (no polling needed)
  • Intermediate results (stream partial outputs as they’re available)
  • Nested tasks (hierarchical workflows with parent/child relationships)

These would enable even more sophisticated patterns, like a memory query that spawns subtasks for different relationship types, or real-time streaming of search results as they’re found.
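None of this is specified yet, but a notification-based flow might replace polling with something like the sketch below. Every name here is a guess at a future API, nothing more:

// Speculative: subscribe to task state changes instead of polling.
client.onTaskStatusChange(task.taskId, async (update) => {
  if (update.status === "working") {
    showProgress(update.message); // e.g. "traversed 450 nodes"
  }
  if (update.status === "completed") {
    const memories = await client.getTaskResult(task.taskId);
    render(memories);
  }
});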

Why This Matters

Tasks aren’t just a nice-to-have feature. They’re a fundamental building block that unlocks entire categories of MCP applications that weren’t practically feasible before.

You can now build MCP servers that:

  • Wrap existing workflow APIs cleanly
  • Handle genuinely long-running operations (minutes to hours)
  • Support sophisticated multi-step processes
  • Enable true agent concurrency

And you can do it with a standard, well-defined protocol pattern instead of ad-hoc conventions that every server implements differently.

For MemoryGraph specifically, this means more sophisticated memory operations without blocking agents, better user experience in host applications, and the ability to handle much larger memory graphs efficiently.


Get Involved

The full spec text and the ongoing discussion live in PR #1732 if you want to read along or weigh in, and SDK support is on the way. The future of AI tooling is async. And it's arriving in MCP.


Gregory Dickson is a Senior AI Developer & Solutions Architect specializing in AI/ML development and cloud architecture. He’s the creator of MemoryGraph, an open-source MCP memory server using graph-based relationship tracking.