Beyond the Single Agent: Building a "Jira + Slack" for AI Swarms with LangGraph, Google A2A, and MCP
By Gad Benram
Now that the industry has moved past the era where the challenge was simply "prompting the model correctly," the real bottleneck in AI engineering isn't intelligence; it's synchronization.
Most of us still think in terms of single-threaded tools (like Claude Code or Cursor). But recently, we worked with an organization needing to screen 50,000 resumes a month. You cannot solve that with a single agent. You need a recruiting team.
The problem is no longer "who is the smartest agent." The problem is: How do we get dozens of agents to collaborate without descending into chaos?
To solve this, we at TensorOps built an internal tool we call "Monday for Agents." It’s essentially a JIRA + Slack infrastructure designed specifically for AI swarms. Here is a technical deep dive into how we built it using LangGraph, Google’s Agent-to-Agent (A2A) protocol, and the Model Context Protocol (MCP).
The Architecture: The Trinity of Swarm Orchestration
We needed a system where agents could define their own tasks, discover the right peers for help, and interact with external systems in a standardized way.
1. The Brain: LangGraph
We use LangGraph to define the state machine of each individual agent. Through our UI, we construct an agent by defining three core properties:
- Base Context: The System Prompt + The Base Model (e.g., Gemini 1.5 Pro, GPT-4o).
- Toolbelt: The specific tools available to that agent.
- Capabilities: A semantic definition of what skills this agent can "export" to the swarm.
LangGraph lets us maintain the state of each agent's reasoning loop, ensuring it doesn't get amnesia in the middle of a complex task breakdown.
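For illustration, here is a minimal sketch of how those three properties map onto LangGraph's prebuilt ReAct agent. Parameter names shift slightly between LangGraph versions, and the `create_ticket` tool is a stand-in:

```python
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.memory import MemorySaver
from langchain_core.tools import tool

@tool
def create_ticket(title: str, description: str) -> str:
    """Create a ticket in the task database (stub for illustration)."""
    return f"Created ticket: {title}"

# The checkpointer persists the reasoning loop between turns, so the agent
# can resume a half-finished task breakdown instead of starting over.
agent = create_react_agent(
    model="openai:gpt-4o",    # Base Context: the base model
    tools=[create_ticket],    # Toolbelt: tools available to this agent
    prompt="You are a PM agent. Break specs into tickets.",  # Base Context: system prompt
    checkpointer=MemorySaver(),
)

# Each thread_id keeps an independent, resumable state for one agent instance.
agent.invoke(
    {"messages": [{"role": "user", "content": "Break the login spec into tasks"}]},
    config={"configurable": {"thread_id": "pm-agent-1"}},
)
```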
2. The Nervous System: Google A2A
This is where it gets interesting. We didn't want hard-coded API routes between agents (e.g., if task == code, call agent_id_5). That doesn't scale.
We implemented Google’s A2A protocol to handle discovery and communication.
- Discovery: Agents broadcast their capabilities. When a Product Manager (PM) agent needs specs broken down, it queries the A2A network for an agent with the "Developer" capability (see the sketch after this list).
- Communication: The PM says, "I've finished the specs; break these stories into tasks."
- Human-in-the-Loop: We treat the human user as a "client" node in the A2A network. The human talks to the Product Owner (PO), and the PO propagates instructions to the swarm.
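To make discovery concrete, here is a sketch of an A2A-style Agent Card, the JSON document each agent serves at `/.well-known/agent.json` so peers can find it. The field names follow the public A2A spec; the endpoint, skill definitions, and matching logic are illustrative:

```python
developer_agent_card = {
    "name": "Developer Agent",
    "description": "Breaks specs into implementation tasks and writes code.",
    "url": "https://agents.example.internal/developer",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "break-down-specs",
            "name": "Developer",
            "description": "Turns PM specs into scoped implementation tasks.",
            "tags": ["developer", "planning"],
        }
    ],
}

def find_agents_with_skill(agent_cards: list[dict], skill_tag: str) -> list[dict]:
    """Naive discovery: filter known Agent Cards by skill tag."""
    return [
        card
        for card in agent_cards
        if any(skill_tag in skill.get("tags", []) for skill in card["skills"])
    ]

# The PM agent queries the network for a peer exporting the "developer" skill.
matches = find_agents_with_skill([developer_agent_card], "developer")
```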
3. The Hands: Model Context Protocol (MCP)
How do agents actually do work? We use MCP to standardize interactions with our "Jira" database and external environments.
- The "Jira" Database: We treat our task management system as an MCP server. Agents can query the schema to understand how to create_ticket, update_status, or link_dependency.
- The Environment: A Coder agent uses MCP to access the file system to write code. A Reviewer agent uses MCP to read that same code.
Because interactions are standardized via MCP, we don't have to write custom tool wrappers for every new integration. The agents simply "read" the protocol.
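As a sketch of the "Jira" side, here is a minimal MCP server built with the official Python SDK's FastMCP helper. The in-memory dict stands in for our real task database:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jira-tasks")
_tickets: dict[str, dict] = {}  # stand-in for the real task database

@mcp.tool()
def create_ticket(title: str, description: str) -> str:
    """Create a ticket and return its id."""
    ticket_id = f"TASK-{len(_tickets) + 1}"
    _tickets[ticket_id] = {"title": title, "description": description, "status": "To_Do"}
    return ticket_id

@mcp.tool()
def update_status(ticket_id: str, status: str) -> str:
    """Move a ticket to a new status (e.g. In_Review)."""
    _tickets[ticket_id]["status"] = status
    return f"{ticket_id} -> {status}"

@mcp.tool()
def link_dependency(ticket_id: str, depends_on: str) -> str:
    """Record that one ticket blocks another."""
    _tickets[ticket_id].setdefault("depends_on", []).append(depends_on)
    return f"{ticket_id} depends on {depends_on}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; agents connect as MCP clients
```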
The Workflow: "JIRA" and "Slack" for Machines
We realized early on that agents need two distinct types of interaction: Structured State and Unstructured Chat.
The "Slack" (Communication Layer)
We built a chat interface that mirrors a Slack channel. Because we hook into the A2A protocol, human users can "join" the channel. You can watch the Product Owner debating with the Developer about scope in real-time and intervene if they are hallucinating requirements.
The "JIRA" (State Layer)
Conversation is fleeting; tickets are permanent. We use the task database to manage process context.
- The PM Agent creates tickets via MCP.
- The Dev Agent picks up a ticket, writes code, and updates the status to In_Review.
- The Reviewer Agent sees the state change and initiates a code review.
This separation of concerns is vital. If you try to keep the entire project state in the context window (chat history), the model gets confused. Offloading state to a database ("Jira") keeps the context window clean.
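As a sketch, the ticket lifecycle the agents drive looks roughly like this (the states and transitions shown are illustrative). The point is that this state machine lives in the database, not in anyone's context window:

```python
from enum import Enum

class Status(Enum):
    TO_DO = "To_Do"
    IN_PROGRESS = "In_Progress"
    IN_REVIEW = "In_Review"
    DONE = "Done"

# Legal transitions; the Reviewer agent listens for the IN_REVIEW edge.
TRANSITIONS = {
    Status.TO_DO: {Status.IN_PROGRESS},
    Status.IN_PROGRESS: {Status.IN_REVIEW},
    Status.IN_REVIEW: {Status.DONE, Status.IN_PROGRESS},  # review can bounce back
}

def advance(current: Status, new: Status) -> Status:
    """Guard ticket moves so agents can't skip steps."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {new.value}")
    return new
```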
The Orchestration Experiment: Enter the Scrum Master
During our initial runs, we noticed a recurring issue: Deadlocks.
Agents would get stuck in perfectionist loops or wait indefinitely for inputs that weren't coming. The swarm was smart, but it lacked drive.
The Solution: We introduced a Scrum Master Agent.
We defined this agent's role strictly (sketched in code after the list):
- Review the "Jira" Board: Monitor tickets that haven't moved in X time steps.
- Nag: Use A2A to message the owners of stuck tickets.
- Unblock: Ask, "What do you need to finish this?"
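Sketched as code, with hypothetical helpers standing in for the MCP ticket store (`list_tickets`) and the A2A transport (`send_message`):

```python
STALE_AFTER_STEPS = 5  # the "X" above; tuned empirically per swarm

def scrum_master_pass(list_tickets, send_message, current_step: int) -> None:
    """One pass over the board: nag the owners of stuck tickets."""
    for ticket in list_tickets(status="In_Progress"):
        if current_step - ticket["last_moved_step"] >= STALE_AFTER_STEPS:
            send_message(
                to=ticket["owner"],
                text=(
                    f"{ticket['id']} hasn't moved in {STALE_AFTER_STEPS} steps. "
                    "What do you need to finish this?"
                ),
            )
```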
The results were immediate. The "social pressure" (simulated via A2A prompts) actually worked: the Scrum Master pushed other agents to accept slightly less polished output in favor of shipping, significantly increasing overall throughput.
Seeing the Swarm in Action
It’s hard to convey the dynamic nature of these interactions in text. To really understand how the A2A discovery happens in real-time, or to watch the Scrum Master agent "unblock" a developer, you need to see it moving.
We put together a 5-minute walkthrough showing a swarm building a simple CRUD app feature from scratch, from the initial spec to the final code review.
[Video: the "Monday for Agents" demo walkthrough]
Below are three key moments from that workflow that highlight why this architecture is necessary.
1. The "Slack" Channel: Resolving Ambiguity (A2A)
The structured ticket flow is crucial, but the magic often happens in the unstructured messiness of chat. We found that agents need to clarify ambiguity just like humans do before they commit to a task.
[Screenshot: the Product Owner and Lead Architect debating a database schema in the chat channel]
In the example above, watch how the "Product Owner" agent (acting on instructions from a client) and the "Lead Architect" agent debate the requirements for a new database schema. The Architect realizes the initial spec is too rigid for future scaling. They hash it out in real time in the channel over the A2A protocol. Because this is an open channel, I, as the human user, jumped into the thread mid-way (marked by the human avatar) to break the tie and approve the Architect's suggestion.
2. The "JIRA": Persistent Context (MCP)
The biggest issue with long-running agent operations is "context drift." If an agent works for 4 hours on complex code, it often "forgets" the original constraints. Our MCP-powered task database solves this by acting as the swarm's external, immutable memory.
[Screenshot: a ticket handed between four agents over two days, with the appended "Outputs" section]
This screenshot shows a ticket that has been passed between four different agents over two days. Notice the "Outputs" section at the bottom. Every time an agent completed a sub-step—like writing an interface definition or drafting tests—it used MCP to append that specific result back to the main ticket before wiping its own internal context window. The next agent picks up the ticket and immediately has the exact state needed to continue, without re-reading 50k tokens of irrelevant chat history.
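The handoff itself reduces to a small pattern. Here, `append_output` is a hypothetical MCP tool on the task server and `clear_context` a stand-in for resetting the agent's local state:

```python
def hand_off(ticket_id: str, agent_name: str, result: str,
             append_output, clear_context) -> None:
    """Persist this agent's result to the ticket, then drop local state."""
    # Append the concrete output (interface definition, draft tests, ...)
    # to the ticket's "Outputs" section via MCP.
    append_output(ticket_id, f"[{agent_name}] {result}")
    # Safe to wipe: the next agent resumes from the ticket, not the chat log.
    clear_context()
```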
3. Self-Correction: Agents Finding Bugs (LangGraph + A2A)
The true power of the swarm emerges when agents start policing each other. In a recent run, a "Frontend Developer" agent marked a complex React component task as complete.
[Screenshot: the QA agent failing the ticket and messaging the developer agent]
Almost immediately, an automated "QA Agent," which was listening for status changes via MCP, spun up. It pulled the new code, ran the test suite, and failed it. As you can see in the image above, the QA agent didn't just silently reject the ticket in our "Jira." It proactively used the A2A network to send a direct message to the developer agent in the "Slack" channel.
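Roughly, the QA agent's trigger looks like this; `watch_status`, `pull_code`, `run_tests`, and `a2a_send` are hypothetical helpers over our MCP and A2A layers:

```python
def qa_pass(watch_status, pull_code, run_tests, a2a_send, update_status) -> None:
    """React to tickets entering In_Review: run the tests, escalate on failure."""
    for ticket in watch_status(new_status="In_Review"):
        passed, report = run_tests(pull_code(ticket["branch"]))
        if not passed:
            update_status(ticket["id"], "In_Progress")  # bounce the ticket back
            # Don't reject silently: message the owner directly over A2A.
            a2a_send(
                to=ticket["owner"],
                text=f"Tests failed on {ticket['id']}:\n{report}",
            )
```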
The Vision: Why This Matters
Building a single agent that can code is solved. Building a system where a Product Owner talks to a Human, translates that into specs for an Architect, who delegates to Developers, who are nagged by a Scrum Master, all while updating a central database... that is the frontier.
This requires thinking beyond LLMs. It requires thinking about Topology and Protocol.
Call for Collaboration
We want to release this "Monday for Agents" platform as Open Source.
To be transparent: it currently has some embarrassing bugs. Synchronization is hard, and race conditions between agents are a nightmare I wouldn't wish on my worst enemy.
But the core logic is sound.
We are looking for collaborators—architects, prompt engineers, and backend developers—who want to help us polish this for a public release. If you are interested in solving the orchestration layer of the AI stack, let's talk.
Let's build the workspace of the future, so the agents can finish the rest of the work for us. 🛠️


