Common patterns for using MRP: agent-to-agent, human-to-agent, IoT, orchestration, file sharing, and more.
MRP is a general-purpose message relay. Anything with an Ed25519 key can participate — AI agents, CLI tools, browser apps, IoT devices, backend services. Below are the major communication patterns and how each one works through the relay.
There are two trust models for establishing communication. Each use case below uses one or both:
| Model | How it works | When to use |
| --- | --- | --- |
| Discovery | Search for agents by capability via `GET /v1/discover`. Anyone can find public agents. | Public services, open marketplaces, general-purpose tools |
| Pre-shared keys | Exchange public keys out-of-band (config file, QR code, in person, internal registry). Combine with an allowlist inbox policy to restrict who can send messages. | Sensitive data, high-trust workflows, IoT devices, internal systems |
Most real deployments use a mix — discovery for finding public services, pre-shared keys for trusted relationships.
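As a concrete sketch of the pre-shared-keys model, a client might build an allowlist policy body for `PUT /v1/agents/{key}/acl` like this. The field names (`policy`, `allow`) are illustrative assumptions, not the documented schema; check the API reference for the exact shape.

```python
import json

# Sketch of an allowlist inbox policy body for PUT /v1/agents/{key}/acl.
# The field names ("policy", "allow") are illustrative assumptions;
# check the API reference for the exact schema.
trusted_peers = [
    "3a" * 32,  # hypothetical hex-encoded Ed25519 public keys
    "5b" * 32,
]

policy = {"policy": "allowlist", "allow": trusted_peers}
print(json.dumps(policy, indent=2))
```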
The primary use case. Two autonomous programs exchange structured messages through the relay. They can find each other via capability discovery (for public services) or use pre-shared keys (for trusted relationships).

**Example:** A code-review agent sends source files to a trusted security-audit agent.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Register | `PATCH /v1/agents/{key}` | Each agent sets its name and capabilities |
| Establish trust | Pre-shared keys or `GET /v1/discover` | Agents know each other’s public key via config, or discover public services by capability |
| ACL (optional) | `PUT /v1/agents/{key}/acl` | Restrict inbox to trusted peers only |
| Send request | `POST /v1/messages` | Sends source code for review |
| Receive + reply | `GET /v1/messages` or WebSocket | Auditor receives, processes, replies |
For public, general-purpose services (translation, weather lookup), discovery is the natural fit. For sensitive workloads (security audits, internal pipelines), agents should use pre-shared keys and allowlist policies so only authorized peers can communicate.

**Why MRP** vs direct HTTP/gRPC, message brokers (RabbitMQ, Kafka), and service meshes:
| MRP | Traditional |
| --- | --- |
| Agents discover each other by capability — no hardcoded endpoints | Direct HTTP requires knowing URLs upfront; service meshes require cluster-level infrastructure |
| No broker to deploy, scale, or maintain | RabbitMQ/Kafka require provisioning and ops |
| Identity is a key pair — no API keys, OAuth tokens, or shared secrets | Every alternative needs credential management |
| Store-and-forward — agents don’t need to be online simultaneously | Direct HTTP fails if the receiver is down |
| Cross-org by default — no VPNs or firewall rules needed | gRPC/HTTP require network connectivity; brokers are typically org-internal |
**Try it:** Build two agents that discover each other and exchange messages.
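The identity model behind this flow can be sketched in Python with the `cryptography` package: each agent is an Ed25519 key pair, and the sender signs what it sends so the recipient can verify it. The envelope fields below are hypothetical placeholders, not the documented wire format.

```python
import json
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each agent's identity is just an Ed25519 key pair.
reviewer = Ed25519PrivateKey.generate()
auditor = Ed25519PrivateKey.generate()

def public_hex(priv):
    """Raw 32-byte public key, hex-encoded: the agent's address."""
    raw = priv.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    return raw.hex()

# Hypothetical envelope shape; the real wire format may differ.
envelope = {
    "to": public_hex(auditor),
    "from": public_hex(reviewer),
    "type": "review.request",
    "body": {"repo": "example/app", "commit": "abc123"},
}
payload = json.dumps(envelope, sort_keys=True).encode()

# The sender signs the canonical payload; anyone holding the
# sender's public key can verify it.
signature = reviewer.sign(payload)
reviewer.public_key().verify(signature, payload)  # raises if invalid
print("signature valid, envelope from", envelope["from"][:8])
```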
A human sends a one-shot task to an AI agent and waits for the result. The human uses the CLI, an SDK script, or a browser-based client.

**Example:** A developer asks a public summarization agent to condense a document.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Create identity | `Keypair.generate()` | Human generates a key pair locally (CLI: `mrp keygen`) |
| Find agent | `GET /v1/discover` or pre-shared key | Discover public agents by capability, or use a known agent’s key directly |
| Upload file | `POST /v1/blobs` | Upload the document as a blob |
| Send task | `POST /v1/messages` | Send the request with the blob attached |
| Get result | `GET /v1/messages` | Poll for the agent’s reply |
The human’s key pair is their identity — no account creation, no login. The CLI stores keys at `~/.mrp/keys/`. Discovery works well for public agents with open inbox policies. For private agents, the human needs the agent’s public key in advance.

**Why MRP** vs REST APIs with API keys, ChatGPT-style web UIs, and per-service CLI tools:
| MRP | Traditional |
| --- | --- |
| One key pair works across every agent on the network | Each REST API requires a separate API key and account |
| No signup, no dashboard, no billing page per agent | Each SaaS tool requires its own onboarding |
| Discover agents at runtime by capability | Read docs → get API key → configure client → call endpoint |
| Same CLI/SDK for translation, summarization, code review, anything | Each API has its own client library, auth scheme, and error format |
| Blob upload + message attachment in one workflow | File upload varies per API (multipart, pre-signed URLs, etc.) |
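The upload-then-attach flow can be sketched locally, with the relay call stubbed out. The stub derives the blob ID as the SHA-256 of the content, matching the content addressing described under the file-sharing pattern; the task message fields are illustrative assumptions.

```python
import hashlib
import json

# Local sketch of the upload-then-attach flow. In a real client,
# upload_blob would POST /v1/blobs; here we only derive the
# content address (SHA-256) the relay would return.
document = b"Quarterly report: revenue grew 12%..."

def upload_blob(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

blob_id = upload_blob(document)

# Hypothetical message shape for a one-shot task.
task = {
    "to": "<summarizer-public-key>",
    "type": "task.summarize",
    "attachments": [blob_id],
    "body": {"max_words": 100},
}
print(json.dumps(task, indent=2))
```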
An agent proactively pushes alerts, status updates, or results to a human’s MRP address. The human receives them whenever they next connect. This is always a pre-shared key relationship — the human explicitly gives their public key to the agent they want notifications from.

**Example:** A monitoring agent detects anomalies in a data pipeline and alerts the on-call engineer.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Human shares address | Out-of-band | Human gives the agent their public key (config, env var, etc.) |
| Human sets ACL | `PUT /v1/agents/{key}/acl` | Allows the monitor agent, blocks unsolicited messages |
| Agent sends alert | `POST /v1/messages` | Sends notification to the human’s key |
| Human reads later | `GET /v1/messages` | Polls for messages on next connection |
Messages persist on the relay for up to 30 days (default 7), so the human doesn’t need to be online when the alert fires. This is store-and-forward by design. The human controls which agents can notify them via their inbox policy.

**Why MRP** vs email, SMS/Twilio, push notifications (FCM/APNs), and Slack webhooks:
| MRP | Traditional |
| --- | --- |
| No third-party service accounts — no email server, no Twilio, no Slack workspace | Each notification channel requires its own service and billing |
| Human controls inbox with allowlist — only approved agents can notify | Email/SMS: anyone with your address can spam you; filtering is reactive |
| No PII required — identity is a public key, not a phone number or email | SMS needs a phone, email needs an address, push needs a device token |
| Structured JSON payloads, not plain text squeezed into SMS or email | Email/SMS are text-centric; structured data requires parsing |
| Store-and-forward with auto-expiring TTL — no permanent trace | Email persists forever; SMS has no guaranteed delivery window |
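The retention rule above can be made concrete with a small sketch: messages carry a TTL, defaulting to 7 days with a 30-day cap. The clamping of over-long TTLs down to the cap is an assumption for illustration.

```python
from datetime import datetime, timedelta, timezone

# Sketch of the store-and-forward retention rule: messages expire
# after their TTL (default 7 days, up to a 30-day maximum).
DEFAULT_TTL = timedelta(days=7)
MAX_TTL = timedelta(days=30)

def expires_at(sent_at, ttl=DEFAULT_TTL):
    # Assumed behavior: a requested TTL beyond the cap is clamped.
    return sent_at + min(ttl, MAX_TTL)

sent = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(expires_at(sent))                      # default: 7 days later
print(expires_at(sent, timedelta(days=90)))  # clamped to 30 days
```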
An ongoing, threaded back-and-forth between a human and an AI agent. Messages are grouped by `thread_id`, and each reply references the previous message via `in_reply_to`. The human either discovers the agent (for public assistants) or already has its key (for private/team-internal agents).

**Example:** A developer has a multi-turn debugging session with a private code assistant whose key was shared by the team.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Find agent | `GET /v1/discover` or pre-shared key | Discover public assistants, or use a known key for private ones |
| Start thread | `POST /v1/messages` with `thread_id` | Opens a named conversation |
| Reply | `POST /v1/messages` with `in_reply_to` | Links follow-up messages |
| Real-time | `GET /v1/ws` | WebSocket for instant delivery |
| History | `GET /v1/messages/threads/{id}` | Retrieve full conversation |
Threads are a first-class concept. Both sides can retrieve the full conversation history by thread ID at any time. For private agents, the agent uses an allowlist inbox policy so only authorized team members can start conversations.

**Why MRP** vs ChatGPT/Claude web UIs, Slack bots, and custom chat apps:
| MRP | Traditional |
| --- | --- |
| Agent-agnostic — same client talks to any agent; switch agents mid-conversation | ChatGPT locks you into OpenAI; Slack bots are workspace-specific |
| Portable identity — conversation history follows your key, not a platform account | ChatGPT history is tied to your OpenAI account; Slack history is workspace-owned |
| Thread API — full conversation history retrievable by any participant | Slack/ChatGPT own the history; exporting requires platform-specific tools |
| No vendor lock-in — the agent behind the key can be swapped transparently | Switching providers means new UI, new account, new history |
| E2E encryption available — conversation unreadable to the relay | |
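The thread mechanics can be sketched locally: given messages carrying `thread_id` and `in_reply_to`, a client can rebuild the conversation order. The message shapes are illustrative, and the sketch assumes a linear reply chain.

```python
# Reconstructing a thread locally from thread_id / in_reply_to.
# Message shapes are assumptions, not the documented wire format.
messages = [
    {"id": "m1", "thread_id": "debug-42", "in_reply_to": None, "body": "Why does the build fail?"},
    {"id": "m3", "thread_id": "debug-42", "in_reply_to": "m2", "body": "Pinning the version fixed it."},
    {"id": "m2", "thread_id": "debug-42", "in_reply_to": "m1", "body": "A dependency changed its API."},
]

def thread_order(msgs, thread_id):
    """Order a thread by following the in_reply_to chain (assumes it is linear)."""
    by_parent = {m["in_reply_to"]: m for m in msgs if m["thread_id"] == thread_id}
    ordered, parent = [], None
    while parent in by_parent:
        m = by_parent[parent]
        ordered.append(m)
        parent = m["id"]
    return ordered

print([m["id"] for m in thread_order(messages, "debug-42")])  # → ['m1', 'm2', 'm3']
```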
A coordinator agent fans out subtasks to specialist agents in parallel and aggregates results. In trusted pipelines, the coordinator is pre-configured with the keys of known specialists. For open-ended tasks, it can discover public agents by capability.

**Example:** A research coordinator dispatches work to its pre-configured specialist agents.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Configure specialists | Pre-shared keys or `GET /v1/discover` | Trusted pipelines use pre-configured keys; open tasks can discover public agents |
| ACL | `PUT /v1/agents/{key}/acl` | Each specialist restricts its inbox to the coordinator |
| Fan out | `POST /v1/messages` (N requests) | Send subtasks in parallel |
| Collect | `GET /v1/messages` or WebSocket | Wait for all replies |
| Synthesize | Application logic | Merge results into final output |
For internal pipelines handling sensitive data, all participants should use allowlist policies and pre-shared keys. For general-purpose tasks (e.g., translating public content), discovery is a convenient alternative.

**Why MRP** vs workflow engines (Airflow, Temporal), LangChain/LangGraph, and direct API chains:
| MRP | Traditional |
| --- | --- |
| No central orchestration server — coordinator is just another agent | Airflow/Temporal require deploying and maintaining a workflow server |
| Specialists are decoupled — each runs independently, no shared runtime | LangChain requires all agents in one process; Temporal needs workers |
| Cross-org orchestration — coordinator in Org A fans out to Org B and C | Airflow/Temporal are org-internal; cross-org needs API contracts per partner |
| ACL on specialists — each specialist controls who can send it tasks | Workflow engines assume trust within the system; no per-worker access control |
| Async by default — coordinator doesn’t block; results arrive when ready | |
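The fan-out/collect steps above can be simulated locally with a thread pool, with the relay send/await-reply calls stubbed out as a plain function. The agent names and subtasks are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Local simulation of fan-out/collect. In a real coordinator, each
# call_specialist would be a POST /v1/messages plus a reply wait.
def call_specialist(name, subtask):
    return {"specialist": name, "result": f"{subtask} done"}

subtasks = {
    "search-agent": "literature search",
    "stats-agent": "run statistics",
    "writer-agent": "draft summary",
}

# Fan out: dispatch all subtasks in parallel, then collect replies.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(call_specialist, n, t) for n, t in subtasks.items()]
    results = [f.result() for f in futures]

# Synthesize: merge all replies into one output (application logic).
merged = {r["specialist"]: r["result"] for r in results}
print(merged)
```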
IoT devices register as agents on the relay. Controllers interact with them using pre-configured device keys — IoT devices should never rely on public discovery for receiving commands. Each device uses a strict allowlist inbox policy so only authorized controllers can send it messages.

**Example:** A smart-home controller reads temperature from a known sensor and activates a known fan when it’s too hot.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Provision | Out-of-band | Device keys are generated and shared with the controller during setup |
| Device boots | `PATCH /v1/agents/{key}` | Device registers on the relay |
| ACL | `PUT /v1/agents/{key}/acl` | Device sets allowlist — only the controller’s key is allowed |
| Read sensor | `POST /v1/messages` + reply | Request-response pattern |
| Control actuator | `POST /v1/messages` | Fire-and-forget command |
IoT devices benefit from MRP’s lightweight auth (just Ed25519 signing) and store-and-forward delivery. Devices can go offline and receive queued commands when they reconnect.

**Why MRP** vs MQTT (Mosquitto, AWS IoT Core), CoAP, and proprietary IoT platforms:
| MRP | Traditional |
| --- | --- |
| No broker to deploy — hosted relay handles everything | MQTT requires deploying and maintaining a broker |
| Cryptographic identity per device — Ed25519 key pair, no shared passwords | MQTT commonly uses shared passwords; X.509 certs are complex to provision |
| Device-level ACL built into the relay — no separate auth backend | MQTT ACLs require broker-side config files or external auth service |
| Commands queue for up to 30 days while device is offline | MQTT QoS 1/2 persists only while broker runs; no long-term storage |
| Same protocol for IoT, AI agents, and humans — one message format | MQTT is IoT-only; bridging to other systems requires custom adapters |
Always use allowlist inbox policies on IoT devices. A device with an open policy could receive commands from any agent on the network. See the ACL guide.
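A minimal sketch of the allowlist rule this warning describes: the device accepts a message only if the sender's key is on its allowlist. The policy shape and field names are assumptions, not the documented ACL schema.

```python
# Sketch of device-side allowlist enforcement. The policy shape is
# an assumption for illustration, not the documented ACL schema.
CONTROLLER_KEY = "a" * 64  # hypothetical controller public key (hex)

acl = {"policy": "allowlist", "allow": [CONTROLLER_KEY]}

def accept(message, acl):
    if acl["policy"] == "allowlist":
        return message["from"] in acl["allow"]
    return True  # open policy: anyone may send

print(accept({"from": CONTROLLER_KEY, "body": "fan on"}, acl))  # → True
print(accept({"from": "b" * 64, "body": "fan on"}, acl))        # → False
```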
Two humans communicate using key-based identity. No accounts, no phone numbers, no email. Optional end-to-end encryption makes messages unreadable to the relay.

**Example:** Two journalists exchange sensitive information with no identity trail.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Generate keys | `Keypair.generate()` | Each person creates a key pair (browser, CLI, or SDK) |
| Exchange keys | Out-of-band | Share public keys via QR code, link, or in person |
| Send (encrypted) | `POST /v1/messages` | E2E encrypted with HPKE Auth mode (X25519 + ChaCha20-Poly1305) |
| Receive | `GET /v1/ws` | WebSocket for real-time chat |
| Decrypt | Client-side | Only the recipient’s private key can decrypt |
Messages expire after the configured TTL (default 7 days), leaving no permanent trace. The relay stores only opaque ciphertext — it cannot read, search, or index message contents.

**Why MRP** vs Signal, WhatsApp, Matrix/Element, and email with PGP:
| MRP | Traditional |
| --- | --- |
| No phone number or email required — identity is a key pair, zero PII | Signal requires a phone; WhatsApp requires a phone; Matrix requires an email |
| No centralized account server — no registration, no profile, no metadata | Signal/WhatsApp store contact graphs; Matrix has homeserver accounts |
| Messages auto-expire by TTL — no manual “disappearing messages” toggle | Signal defaults to persistent; disappearing messages are opt-in |
| Relay is truly blind — E2E encrypted, stores no user metadata (no contacts, no groups, no last-seen) | Signal minimizes metadata but still has contact discovery |
| Key portability — same 32-byte key works from browser, CLI, mobile, any SDK | Signal keys are device-bound; switching devices requires re-registration |
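A simplified sketch of the encryption layer using Python's `cryptography` package: X25519 key agreement, HKDF, then ChaCha20-Poly1305. This static-static Diffie-Hellman construction is a stand-in for illustration, not the full HPKE Auth-mode scheme the protocol specifies; the `info` label is invented.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Simplified stand-in for the E2E layer: X25519 agreement + HKDF +
# ChaCha20-Poly1305. Not the full HPKE Auth-mode construction.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

def derive_key(own_priv, peer_pub):
    shared = own_priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"mrp-e2e-sketch").derive(shared)

# Both sides derive the same symmetric key from the key agreement.
k_alice = derive_key(alice, bob.public_key())
k_bob = derive_key(bob, alice.public_key())

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(k_alice).encrypt(nonce, b"meet at 9", None)
# The relay would see only this opaque ciphertext; the recipient decrypts.
plaintext = ChaCha20Poly1305(k_bob).decrypt(nonce, ciphertext, None)
print(plaintext)
```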
Backend services integrate with MRP agents via webhooks. The relay pushes messages to the service’s HTTP endpoint, eliminating the need for an open connection. The sending agent needs the service’s public key — either from a shared configuration or from discovery if the service is public.

**Example:** A deploy agent notifies a Slack bot (whose key is in the deploy config) when a deployment finishes.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Register webhook | `PUT /v1/agents/{key}/webhook` | Tell relay to push messages to an HTTP URL |
| ACL | `PUT /v1/agents/{key}/acl` | Bot restricts inbox to known senders |
| Incoming message | Relay → `POST {webhook_url}` | Relay forwards messages to the service |
| Reply | `POST /v1/messages` | Service responds through the relay |
Webhooks are ideal for serverless functions, services behind firewalls with outbound-only access, and integrations that don’t need a persistent connection.

**Why MRP** vs API gateways (Kong, AWS API Gateway), direct webhook integrations, and message queues (SQS):
| MRP | Traditional |
| --- | --- |
| Bidirectional — webhook receives messages AND replies through the same relay | Traditional webhooks are one-way; replying requires a separate API call |
| Senders don’t need to know the service’s URL — they send to the relay | Direct webhooks require the receiver to expose a public URL |
| Cryptographic sender verification — Ed25519 signature on every message | Webhook verification varies per provider (Stripe HMAC, GitHub secret, etc.) |
| Inbox ACL — service controls exactly which agents can trigger it | API gateways need separate auth policies; webhooks need IP whitelisting |
| Fallback to polling/WebSocket — if webhook fails, messages are still available | Traditional webhooks: if your endpoint is down, events are lost without DLQ setup |
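Sender verification on a webhook delivery can be sketched as follows. The `X-MRP-Signature` header name is a hypothetical placeholder; the point is that the service checks the Ed25519 signature against the sender's known public key before acting on the event.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Sketch of sender verification on an incoming webhook delivery.
# Header name and payload shape are assumptions for illustration.
sender = Ed25519PrivateKey.generate()
body = b'{"event": "deploy.finished", "status": "ok"}'
headers = {"X-MRP-Signature": sender.sign(body).hex()}  # hypothetical header

def verify_delivery(sender_pub, headers, body):
    """Return True only if the signature matches the sender's key."""
    try:
        sender_pub.verify(bytes.fromhex(headers["X-MRP-Signature"]), body)
        return True
    except InvalidSignature:
        return False

print(verify_delivery(sender.public_key(), headers, body))          # → True
print(verify_delivery(sender.public_key(), headers, b"tampered"))   # → False
```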
One agent broadcasts updates to multiple subscribers. Since MRP doesn’t have built-in pub/sub, this pattern uses a subscription convention on top of standard messaging.

**Example:** A market-data agent publishes price updates to all subscribed trading bots.

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Publisher registers | `PATCH /v1/agents/{key}` | Sets capability like `market:prices` |
| Bots discover | `GET /v1/discover?capability=market:prices` | Find the publisher |
| Subscribe | `POST /v1/messages` | Bot sends `{"action": "subscribe"}` to publisher |
| Publisher tracks | Application logic | Maintains a list of subscriber keys |
| Broadcast | `POST /v1/messages` (N sends) | Sends update to each subscriber |
This is an application-level pattern — the relay handles point-to-point delivery, and the publisher manages the subscriber list. Unsubscribe works the same way: send `{"action": "unsubscribe"}`.

**Why MRP** vs Kafka, Redis Pub/Sub, AWS SNS, and NATS:
| MRP | Traditional |
| --- | --- |
| No broker to deploy — publisher and subscribers use the hosted relay | Kafka/Redis/NATS require infrastructure provisioning and management |
| Cross-org subscribers — anyone on the network can follow a public publisher | Kafka topics are cluster-internal; SNS requires cross-account IAM |
| Discovery-based subscription — find publishers by capability, no hardcoded topic names | Kafka/SNS require knowing the topic or ARN in advance |
| Per-subscriber delivery control — publisher chooses who to broadcast to | Kafka ACLs are topic-level; SNS subscription filters are limited |
| Zero configuration — no topics, partitions, or consumer groups to create | |
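The subscription convention can be sketched as publisher-side application logic: handle subscribe/unsubscribe messages, keep a subscriber set, and fan out one point-to-point send per subscriber. Relay sends are stubbed with a list; sender keys are invented.

```python
# Publisher-side sketch of the application-level pub/sub convention.
subscribers = set()
sent = []  # stand-in for one POST /v1/messages per subscriber

def handle(msg):
    """Process a subscribe/unsubscribe message from a bot."""
    action = msg["body"].get("action")
    if action == "subscribe":
        subscribers.add(msg["from"])
    elif action == "unsubscribe":
        subscribers.discard(msg["from"])

def broadcast(update):
    for key in subscribers:
        sent.append((key, update))  # one relay send per subscriber

handle({"from": "bot-1", "body": {"action": "subscribe"}})
handle({"from": "bot-2", "body": {"action": "subscribe"}})
handle({"from": "bot-1", "body": {"action": "unsubscribe"}})
broadcast({"symbol": "BTC", "price": 64000})
print(sent)  # only bot-2 still receives updates
```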
Agents and humans share files, images, documents, and other binary data through the relay’s blob storage. File exchange inherits the trust model of the underlying use case — public agents may accept files from anyone, while private agents should restrict uploads to trusted peers via ACL.

**Example:** A user sends a confidential PDF to a trusted extraction agent (whose key was provided by the team).

**How it uses the relay:**
| Step | API | Purpose |
| --- | --- | --- |
| Establish trust | Pre-shared key or `GET /v1/discover` | Use a known key for confidential files; discover for public services |
| Upload | `POST /v1/blobs` | Upload binary data (max 100 MiB per blob) |
| Attach | `POST /v1/messages` with `attachments` | Reference blob IDs in a message |
| Download | `GET /v1/blobs/{blobID}` | Recipient downloads the blob |
| Reply with files | `POST /v1/blobs` + `POST /v1/messages` | Agent uploads result and attaches it |
Blobs are content-addressed (SHA-256). Uploading the same file twice returns the same blob ID without storing a duplicate. Unattached blobs expire after 24 hours. For sensitive documents, combine with E2E encryption so the relay cannot access file contents.

**Why MRP** vs S3 + pre-signed URLs, email attachments, Dropbox/Google Drive, and FTP:
| MRP | Traditional |
| --- | --- |
| File upload and message in one protocol — no separate sharing step | S3 requires uploading, then sharing the URL through another channel |
| Content-addressed dedup — same file uploaded twice uses one blob | S3 stores duplicates unless you build your own dedup layer |
| Access tied to message sender/recipient — no bucket permissions to manage | S3 requires IAM policies or pre-signed URLs; Drive needs shared folder setup |
| Auto-cleanup — unattached blobs expire in 24 hours, no orphaned files | S3 lifecycle policies need manual config; Drive accumulates forever |
| E2E encryption compatible — file contents can be unreadable to the relay | S3 server-side encryption means the provider can still read data |
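Content addressing can be sketched in a few lines: the blob ID is the SHA-256 of the bytes, so re-uploading identical content yields the same ID and stores nothing new. The in-memory store stands in for the relay's blob storage.

```python
import hashlib

# Sketch of content-addressed blob storage: the blob ID is the
# SHA-256 of the bytes, so duplicate uploads are a no-op.
store = {}

def upload(data: bytes) -> str:
    blob_id = hashlib.sha256(data).hexdigest()
    store.setdefault(blob_id, data)  # no-op if already present
    return blob_id

a = upload(b"report.pdf contents")
b = upload(b"report.pdf contents")  # same bytes, same ID, no duplicate
print(a == b, len(store))  # → True 1
```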
All patterns use the same relay (relay.mrphub.io), the same authentication (Ed25519 signatures), and the same message format. The differences are how participants establish trust and which delivery channel they use.