Stop building single agents. Learn how we built a "Jira" for AI swarms using LangGraph, Google A2A, and MCP to synchronize teams and scale complex ops.
Single agents are solved. The hard problem is the swarm: how dozens of agents collaborate without descending into chaos.

As the industry has moved past the era where the challenge was simply "prompting the model correctly," the real bottleneck in AI engineering is no longer intelligence; it's synchronization.
Most of us still think in terms of single-threaded tools (like Claude Code or Cursor). But recently, we worked with an organization needing to screen 50,000 resumes a month. You cannot solve that with a single agent. You need a recruiting team.
The problem is no longer "who is the smartest agent." The problem is: How do we get dozens of agents to collaborate without descending into chaos?
To solve this, we at TensorOps built an internal tool we call "Monday for Agents." It’s essentially a JIRA + Slack infrastructure designed specifically for AI swarms. Here is a technical deep dive into how we built it using LangGraph, Google’s Agent-to-Agent (A2A) protocol, and the Model Context Protocol (MCP).
We needed a system where agents could define their own tasks, discover the right peers for help, and interact with external systems in a standardized way.
We use LangGraph to define the state machine of each individual agent. Through our UI, we construct an agent by defining three core properties.
LangGraph allows us to maintain the state of each agent's reasoning loop, ensuring it doesn't get amnesia in the middle of a complex task breakdown.
This is where it gets interesting. We didn't want hard-coded API routes between agents (e.g., `if task == "code": call agent_id_5`). That doesn't scale.
We implemented Google’s A2A protocol to handle discovery and communication.
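The core idea of A2A discovery is that each agent publishes a card describing what it can do, and peers find each other by skill rather than by hard-coded ID. Here is a stdlib-only sketch of that pattern; the card fields are loosely modeled on A2A's Agent Card (which the real protocol serves as JSON from a well-known URL), but the registry class and URLs are illustrative inventions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Simplified stand-in for an A2A Agent Card; trimmed for illustration.
@dataclass
class AgentCard:
    name: str
    url: str
    skills: List[str] = field(default_factory=list)

class Registry:
    """Match work to agents by the skills they advertise,
    instead of hard-coding routes between agent IDs."""

    def __init__(self) -> None:
        self.cards: List[AgentCard] = []

    def register(self, card: AgentCard) -> None:
        self.cards.append(card)

    def discover(self, skill: str) -> Optional[AgentCard]:
        return next((c for c in self.cards if skill in c.skills), None)

registry = Registry()
registry.register(AgentCard("developer", "http://agents.local/dev", ["code", "review"]))
registry.register(AgentCard("recruiter", "http://agents.local/hr", ["screen-resume"]))

peer = registry.discover("code")
```

Adding a new agent to the swarm is then just publishing a new card; no routing code changes anywhere else.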
How do agents actually do work? We use MCP to standardize interactions with our "Jira" database and external environments.
Because interactions are standardized via MCP, we don't have to write custom tool wrappers for every new integration. The agents simply "read" the protocol.
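The payoff of this standardization is that a tool is just a schema plus a handler: the agent reads the schema and knows how to call the tool, with no bespoke wrapper per integration. The following stdlib-only sketch mimics the shape of an MCP tool listing and a `tools/call` dispatch; the tool name and in-memory ticket store are hypothetical:

```python
from typing import Any, Callable, Dict

# Hypothetical in-memory stand-in for the "Jira" task database.
TICKETS: Dict[str, Dict[str, Any]] = {}

def create_ticket(title: str, assignee: str) -> str:
    ticket_id = f"TCK-{len(TICKETS) + 1}"
    TICKETS[ticket_id] = {"title": title, "assignee": assignee, "status": "open"}
    return ticket_id

# Tools are self-describing: the agent "reads" the schema instead of
# relying on a hand-written wrapper.
TOOLS: Dict[str, Dict[str, Any]] = {
    "create_ticket": {
        "description": "Create a ticket in the task database",
        "inputSchema": {
            "type": "object",
            "properties": {"title": {"type": "string"},
                           "assignee": {"type": "string"}},
            "required": ["title", "assignee"],
        },
        "handler": create_ticket,
    },
}

def call_tool(name: str, arguments: Dict[str, Any]) -> Any:
    # An MCP client would send this as a JSON-RPC "tools/call" request;
    # here we dispatch locally for illustration.
    return TOOLS[name]["handler"](**arguments)

ticket_id = call_tool("create_ticket", {"title": "Draft schema", "assignee": "architect"})
```

Swapping the in-memory store for a real MCP server changes the transport, not the agent-facing contract.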
We realized early on that agents need two distinct types of interaction: Structured State and Unstructured Chat.
We built a chat interface that mirrors a Slack channel. Because we hook into the A2A protocol, human users can "join" the channel. You can watch the Product Owner debating with the Developer about scope in real-time and intervene if they are hallucinating requirements.
Conversation is fleeting; tickets are permanent. We use the task database to manage process context.
This separation of concerns is vital. If you try to keep the entire project state in the context window (chat history), the model gets confused. Offloading state to a database ("Jira") keeps the context window clean.
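In practice this means the prompt an agent sees is rebuilt from the structured ticket, never from the raw transcript. A minimal sketch, with a hypothetical ticket shape:

```python
from typing import Any, Dict, List

# Durable "Jira" state lives here, not in the chat transcript.
ticket: Dict[str, Any] = {
    "id": "TCK-42",
    "title": "Add pagination to /candidates",
    "constraints": ["page size <= 100", "cursor-based"],
    "outputs": ["interface definition drafted"],
}

# The full debate stays in the channel; it is never fed to the model.
chat_history: List[str] = ["...thousands of lines of back-and-forth..."]

def build_context(ticket: Dict[str, Any]) -> str:
    """Build the model's working context from structured state only."""
    lines = [f"Ticket {ticket['id']}: {ticket['title']}"]
    lines += [f"- constraint: {c}" for c in ticket["constraints"]]
    lines += [f"- done: {o}" for o in ticket["outputs"]]
    return "\n".join(lines)

prompt = build_context(ticket)
```

The prompt stays a few hundred tokens regardless of how long the project has been running.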
During our initial runs, we noticed a recurring issue: Deadlocks.
Agents would get stuck in perfectionist loops or wait indefinitely for inputs that weren't coming. The swarm was smart, but it lacked drive.
The Solution: We introduced a Scrum Master Agent.
We defined this agent's role strictly.
The results were immediate. The "social pressure" (simulated via A2A prompts) actually worked. The Scrum Master forced other agents to degrade their output slightly in favor of shipping, significantly increasing overall throughput.
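The deadlock-breaking logic itself is simple: the Scrum Master watches for tickets that are in progress but haven't moved, and nudges the assignee over the channel. A sketch of that watchdog, with a hypothetical stall threshold and message format:

```python
import time
from typing import Any, Dict, List

STALL_SECONDS = 15 * 60  # hypothetical threshold before the Scrum Master steps in

def find_stalled(tickets: List[Dict[str, Any]], now: float) -> List[Dict[str, Any]]:
    """Tickets that are in progress but haven't been updated recently."""
    return [t for t in tickets
            if t["status"] == "in_progress" and now - t["updated_at"] > STALL_SECONDS]

def nudge(ticket: Dict[str, Any]) -> str:
    # In the real system this goes out as an A2A message to the assignee;
    # here we just format the prompt that applies the "social pressure".
    return (f"[scrum-master -> {ticket['assignee']}] {ticket['id']} has been idle. "
            f"Ship the current version or state what is blocking you.")

now = time.time()
tickets = [
    {"id": "TCK-7", "assignee": "developer", "status": "in_progress",
     "updated_at": now - 3600},
    {"id": "TCK-8", "assignee": "qa", "status": "in_progress",
     "updated_at": now - 60},
]
messages = [nudge(t) for t in find_stalled(tickets, now)]
```

Tuning `STALL_SECONDS` is a trade-off: too short and the Scrum Master becomes noise; too long and perfectionist loops run unchecked.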
It’s hard to convey the dynamic nature of these interactions in text. To really understand how the A2A discovery happens in real-time, or to watch the Scrum Master agent "unblock" a developer, you need to see it moving.
We put together a 5-minute walkthrough showing a swarm building a simple CRUD app feature from scratch, from the initial spec to the final code review.
Click above to watch the "Monday for Agents" demo.
Below are three key moments from that workflow that highlight why this architecture is necessary.
The structured ticket flow is crucial, but the magic often happens in the unstructured messiness of chat. We found that agents need to clarify ambiguity just like humans do before they commit to a task.

In the example above, watch how the "Product Owner" agent (acting on instructions from a client) and the "Lead Architect" agent debate the requirements for a new database schema. The Architect realizes the initial spec is too rigid for future scaling. They hash it out in real-time in the channel using A2A protocols. Because this is an open channel, I, as the human user, actually jumped into this thread mid-way (as seen by the human avatar) to break the tie and approve the Architect's suggestion.
The biggest issue with long-running agent operations is "context drift." If an agent works for 4 hours on complex code, it often "forgets" the original constraints. Our MCP-powered task database solves this by acting as the swarm's external, immutable memory.

This screenshot shows a ticket that has been passed between four different agents over two days. Notice the "Outputs" section at the bottom. Every time an agent completed a sub-step—like writing an interface definition or drafting tests—it used MCP to append that specific result back to the main ticket before wiping its own internal context window. The next agent picks up the ticket and immediately has the exact state needed to continue, without re-reading 50k tokens of irrelevant chat history.
The true power of the swarm emerges when agents start policing each other. In a recent run, a "Frontend Developer" agent marked a complex React component task as complete.
Almost immediately, an automated "QA Agent," which was listening for status changes via MCP, spun up. It pulled the new code, ran the test suite, and failed it. As you can see in the image above, the QA agent didn't just silently reject the ticket in our "Jira." It proactively utilized the A2A network to send a direct Slack message to the developer agent.
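This "agents policing each other" pattern is just an event subscription: a status change on a ticket fires any listener that registered interest. Here is a stdlib-only sketch of the QA reaction loop; the event bus, the stubbed test run, and the message format are all illustrative inventions:

```python
from typing import Any, Callable, Dict, List

# Minimal event bus standing in for MCP status-change notifications.
listeners: List[Callable[[Dict[str, Any]], None]] = []
outbox: List[str] = []  # messages sent over the A2A channel

def on_status_change(handler: Callable[[Dict[str, Any]], None]) -> None:
    listeners.append(handler)

def set_status(ticket: Dict[str, Any], status: str) -> None:
    ticket["status"] = status
    for handler in listeners:
        handler(ticket)

def run_tests(ticket: Dict[str, Any]) -> bool:
    # Stub: the real QA agent pulls the new code and runs the suite.
    return ticket.get("tests_pass", False)

def qa_agent(ticket: Dict[str, Any]) -> None:
    if ticket["status"] != "done":
        return  # only react when work is marked complete
    if not run_tests(ticket):
        set_status(ticket, "rejected")
        outbox.append(f"[qa -> {ticket['assignee']}] {ticket['id']} failed the test suite.")

on_status_change(qa_agent)
ticket = {"id": "TCK-12", "assignee": "frontend-dev", "tests_pass": False,
          "status": "open"}
set_status(ticket, "done")
```

Note that the rejection itself is a status change, so it flows back through the same bus: any other interested agent (or human) sees it without the QA agent knowing who is listening.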
Building a single agent that can code is solved. Building a system where a Product Owner talks to a Human, translates that into specs for an Architect, who delegates to Developers, who are nagged by a Scrum Master, all while updating a central database... that is the frontier.
This requires thinking beyond LLMs. It requires thinking about Topology and Protocol.
We want to release this "Monday for Agents" platform as Open Source.
To be transparent: it currently has some embarrassing bugs. Synchronization is hard, and race conditions between agents are a nightmare I wouldn't wish on my worst enemy.
But the core logic is sound.
We are looking for collaborators—architects, prompt engineers, and backend developers—who want to help us polish this for a public release. If you are interested in solving the orchestration layer of the AI stack, let's talk.
Let's build the workspace of the future, so the agents can finish the rest of the work for us. 🛠️