June 14-15, 2026
Mumbai, India

The Sched app allows you to build your schedule but is not a substitute for your event registration. You must be registered for MCP Dev Summit Mumbai to participate in the sessions. If you have not registered but would like to join us, please go to the event registration page to purchase a registration.

IMPORTANT NOTE: Timing of sessions and room locations are subject to change.


Sunday, June 14
 

6:05pm IST

The Coordination Tax: Why Your MCP Multi-agent System Degrades at Scale, and How To Fix It - Rudra Kushwah & Jay Shukla, Indian Institute of Information Technology, Nagpur; Shivaprasad Gowda, Indian Institute of Technology Roorkee; Krishna Padia, Shri Vil
Sunday June 14, 2026 6:05pm - 6:30pm IST
Multi-agent MCP systems work beautifully in staging. They fail in production. We learned this the hard way: three agents, nine tools, accuracy that quietly degraded under real load, and a job that blew past its token budget before anyone noticed.

We weren't alone. Google DeepMind and MIT's December 2025 paper "Towards a Science of Scaling Agent Systems" measured up to 17× error amplification in naive multi-agent setups and found coordination yields negative returns past a saturation threshold. Separate work (MAFBench, 2025) shows framework design choices alone can cut planning accuracy by 30% and collapse coordination success from over 90% to under 30%. Most MCP deployments hit this wall and misdiagnose it as a model problem.

This talk walks through three failure modes (Infinite Loop, False Consensus, Silent Fallback), each with message traces, token costs, and detection times. We then introduce the "topology contract": a lightweight JSON schema embedded in MCP server metadata, compatible with the 2026 Server Cards roadmap. It is additive to the spec and requires zero protocol changes.

Attendees leave with a reproducible benchmark suite and a schema they can adopt in an afternoon.
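The abstract does not publish the topology contract itself, but a rough sketch helps picture what "a lightweight JSON schema embedded in MCP server metadata" might contain. Every field name below (max_fan_out, loop_budget, consensus_quorum, fallback_policy) is an illustrative assumption keyed to the three failure modes, not the speakers' actual schema:

```python
# Hypothetical sketch of a "topology contract" embedded in MCP server
# metadata. All field names are assumptions for illustration only.

TOPOLOGY_CONTRACT = {
    "topology_contract_version": "0.1",
    "max_fan_out": 3,                # downstream agents invokable per call
    "allowed_callers": ["planner"],  # agent roles permitted to call this server
    "loop_budget": 5,                # cap on repeated identical calls (Infinite Loop guard)
    "consensus_quorum": 2,           # votes required to accept a result (False Consensus guard)
    "fallback_policy": "fail_loud",  # never degrade silently (Silent Fallback guard)
}

def validate_contract(contract: dict) -> list[str]:
    """Return a list of violations; an empty list means the contract is well-formed."""
    errors = []
    for key in ("max_fan_out", "loop_budget"):
        if not isinstance(contract.get(key), int) or contract[key] < 1:
            errors.append(f"{key} must be a positive integer")
    if contract.get("fallback_policy") not in ("fail_loud", "fail_silent"):
        errors.append("fallback_policy must be 'fail_loud' or 'fail_silent'")
    return errors
```

Because the contract lives in server metadata rather than the protocol itself, a client that does not understand it can simply ignore it, which is what "additive to the spec" implies.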
Speakers

Rudra Pratap Singh

AI Systems Developer, Indian Institute of Information Technology, Nagpur
Rudra Pratap Singh is a CSE (AI/ML) student at IIIT Nagpur and Research Intern at IIT Mandi, working on medical imaging with deep learning. Skilled in Python, TensorFlow, and GenAI, he has built impactful AI systems and led hackathons, mentoring 60+ students.

Shivaprasad Gowda

Research Intern, Indian Institute of Technology Roorkee
Research Intern at IIT Roorkee (Prof. Sparsh Mittal). Second-year undergrad at IIIT Nagpur. Independent research on Destructive Rank Collapse in deep networks. Built a financial documents GenAI solution at scale. Experience in LiDAR 3D perception, synthetic EEG generation, and multi-GPU... Read More →

Krishna Padia

BTech Student (Computer Engineering), Shri Vile Parle Kelavani Mandal's Shri Bhagubhai Mafatlal Polytechnic And College of Engineering
Krishna Padia is a Computer Engineering student with a strong interest in technology, problem-solving, and innovation. She enjoys exploring new ideas and applying logical thinking to real-world challenges. With a curious mindset and a drive to learn, she actively engages in activities... Read More →

Jay Shukla

AI System Developer, Indian Institute of Information Technology, Nagpur
Jay Shukla is a CSE (AI/ML) student at IIIT Nagpur and a Research Intern at SVNIT Surat, working on energy time forecasting using deep learning. He is skilled in Python, TensorFlow, and Generative AI, and is also exploring Reinforcement Learning, with experience in building AI models... Read More →
Lotus 2
 
Monday, June 15
 

11:30am IST

The Benchmark That Almost Convinced Us MCP Was Wrong - Ravi Madabhushi, Scalekit
Monday June 15, 2026 11:30am - 11:55am IST
We ran 75 benchmark runs comparing CLI and MCP for identical agent tasks. CLI won every efficiency metric.

For the simplest task, identifying a repo's language, CLI used 1,365 tokens and MCP used 44,026. That's a 32x difference, almost entirely schema overhead: 43 tool definitions injected into every conversation, most never touched.

CLI was also 100% reliable. MCP failed 28% of the time, mostly TCP-level timeouts on a remote server that never responded.

If we'd stopped there: use CLI, skip MCP, move on.

But that benchmark tested one scenario: a single developer automating their own workflow. Not what production agent products look like.

The moment your agent acts on behalf of your customers' employees inside their orgs, across services they control, CLI becomes a liability. It inherits your credentials. It can't scope per user, per org, or per action. There is no audit trail and no consent flow.

The data is real. CLI is faster, cheaper, simpler. For personal dev tools, use it.

But if you're building a product that handles data belonging to someone else, CLI works in demos; you won't catch the problem until a customer asks why your agent touched their Salesforce without asking.
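The numbers in the abstract imply how heavy each injected tool definition must be. A back-of-envelope check, using only the totals quoted above (the per-tool average is derived, not measured):

```python
# Back-of-envelope check of the overhead figures quoted in the abstract.
cli_tokens = 1_365
mcp_tokens = 44_026

ratio = mcp_tokens / cli_tokens
print(f"{ratio:.1f}x")  # roughly 32x, matching the talk's headline number

# If the gap is almost entirely schema overhead from 43 injected tool
# definitions, the implied average definition size is about:
n_tools = 43
avg_schema_tokens = (mcp_tokens - cli_tokens) / n_tools
print(f"~{avg_schema_tokens:.0f} tokens per tool definition")
```

A thousand tokens per tool definition, paid on every conversation whether or not the tool is used, is where the 32x comes from.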
Speakers

Ravi Madabhushi

Cofounder & CTO, Scalekit
Ravi has been building infra for how software talks to other software for more than a decade. He co-founded Pipemonk — a SaaS integration platform acq. by Freshworks (NASDAQ listed) then spent years leading product on Freshworks' auth platform as it scaled to 50K+ businesses and... Read More →
Lotus 1
  Building with MCP

11:55am IST

Rethinking Testing for MCP Servers - Puja Jagani, BrowserStack
Monday June 15, 2026 11:55am - 12:20pm IST
MCP servers introduced a new kind of client: one driven by an LLM. Unlike traditional clients, this one is non-deterministic. It can call tools in unexpected sequences, with unpredictable inputs.

We can’t reliably test how an LLM will call our tools. This makes the MCP server the only component under our control and the one that must be tested rigorously. A well-designed server is a well-tested server.

While reviewing several popular MCP servers, I found that “works correctly” often means testing happy paths and checking that tool descriptions exist.

This talk introduces a practical testing model for MCP servers. Attendees will learn how to treat tool descriptions as functional contracts in their tests, how to design tests that cover real-world and deliberate out-of-order tool call sequences, essentially what an LLM would attempt, and how to validate error channels so that, when things go wrong, the server returns errors that are actually useful for an LLM to recover or respond appropriately. These are practical techniques that can be applied immediately.

The goal is to establish a testing mental model that can keep up with the fast-moving MCP ecosystem.
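One way to picture the "deliberate out-of-order tool call" technique from this abstract is a test that calls a dependent tool before its prerequisite and asserts the error is actionable. The server below is a toy stand-in with a simplified `call_tool` interface, not a real MCP SDK API:

```python
# Sketch of an out-of-order tool-call test in the spirit of the abstract.
# FakeCheckoutServer and its call_tool signature are illustrative stand-ins.

class FakeCheckoutServer:
    """Toy server: 'pay' is only valid after 'create_cart'."""
    def __init__(self):
        self.cart = None

    def call_tool(self, name: str, args: dict) -> dict:
        if name == "create_cart":
            self.cart = []
            return {"ok": True}
        if name == "pay":
            if self.cart is None:
                # A useful error tells the LLM *how* to recover.
                return {"error": "no cart exists; call create_cart first"}
            return {"ok": True}
        return {"error": f"unknown tool {name!r}"}

def test_out_of_order_pay_returns_actionable_error():
    server = FakeCheckoutServer()
    result = server.call_tool("pay", {})     # deliberately out of order
    assert "error" in result
    assert "create_cart" in result["error"]  # error names the recovery step
```

The assertion on the error text is the point: an LLM client can only recover if the error channel names the missing prerequisite, so the test treats the error message itself as part of the contract.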
Speakers

Puja Jagani

Lead of Open Source and Developer Advocacy, BrowserStack
I lead Open Source and Developer Advocacy at BrowserStack, working at the intersection of engineering, community, and product. I’m a core committer and Technical Leadership Committee member for Selenium, collaborating with browser vendors to advance web automation. As a Developer... Read More →
Lotus 1
  Building with MCP

11:55am IST

Who Watches the MCP Servers? Building Observability for the MCP Layer - Koteswara Rao Vellanki, TransUnion
Monday June 15, 2026 11:55am - 12:20pm IST
I have nine MCP servers in production wrapping kubectl, Prometheus, Grafana, OpenSearch, and CI/CD. Three months in, one started silently dropping tool calls. Connection pool exhausted but health checks passing. Logs clean.
The agent did not raise any error. It stopped using that tool and started making up answers. For two weeks nobody caught it. Then an incident response went wrong because the agent was working with data it had generated itself.
After that I realised we were building this whole MCP layer with zero visibility into whether it is healthy. Normal monitoring does not catch these failures.
So I built an observability platform for MCP servers. OpenTelemetry hooks that plug into any FastMCP server in two lines of code. Prometheus metrics built for MCP: tool call latency, error rates, connection pool usage. The important one is tool call frequency deviation. It tells you when an agent gradually stops using a tool. That is how you catch the worst failure: agent walking away from a broken server and hallucinating.
In the demo I degrade a server on stage. Normal monitoring will not catch it. Frequency alert fires in under two minutes.
Open-source, packaged as a Helm chart.
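The "tool call frequency deviation" metric described above can be sketched as a simple rate comparison: measure how much of a tool's baseline call rate has disappeared and alert past a threshold. The threshold and the function names here are illustrative assumptions, not the speaker's implementation:

```python
# Minimal sketch of tool-call frequency deviation: compare a tool's recent
# call rate against its baseline and alert when traffic collapses.
# The 0.8 threshold is an illustrative assumption.

def frequency_deviation(baseline_calls_per_min: float,
                        recent_calls_per_min: float) -> float:
    """Fraction of baseline traffic that has disappeared (0.0 = healthy)."""
    if baseline_calls_per_min == 0:
        return 0.0
    drop = 1.0 - recent_calls_per_min / baseline_calls_per_min
    return max(0.0, drop)

def should_alert(baseline: float, recent: float, threshold: float = 0.8) -> bool:
    """Fire when the agent has quietly stopped using the tool."""
    return frequency_deviation(baseline, recent) >= threshold

# The failure described above: health checks pass, logs are clean,
# but the agent has walked away from the tool.
assert should_alert(baseline=12.0, recent=1.0)       # 92% drop -> alert
assert not should_alert(baseline=12.0, recent=10.0)  # normal jitter -> quiet
```

The design point is that this metric watches the client's behavior rather than the server's health endpoint, which is why it catches a server that is "up" but being silently abandoned.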
Speakers

Koti Vellanki

DevOps Engineer, TransUnion
Senior DevOps Engineer based in Bangalore with over a decade of experience in platform engineering and cloud infrastructure. I work mostly with Kubernetes, observability systems, and CI/CD at scale. Currently building open-source MCP tools that connect AI agents to production infrastructure... Read More →
Lotus 2

4:40pm IST

Agentic DX: Bringing Your IDP Into the IDE - Adnan Vahora & Dhwani Suthar, Motorola Solutions
Monday June 15, 2026 4:40pm - 5:05pm IST
Platform engineering has a chicken-and-egg problem: the platform needs adoption to justify investment, but adoption requires onboarding that teams resist when deadlines are tight. Our internal developer platform hit this hard. It serves 4,000+ developers across clouds and managed Kubernetes, yet many teams found the portal too unfamiliar.
We solved it with a second entry point built on MCP. Instead of learning a new UI, developers get 30+ platform capabilities directly in IDE chat, from namespace provisioning and Helm deployments to cost analysis and access management. An MCP App renders forms in chat, developers approve and execute, and a first deployment can happen with almost no onboarding.
This session covers the production architecture: sandboxed iframe-based MCP Apps, Elicitation for structured write approvals, an Adaptive Tool Router that keeps 30+ tool schemas from flooding the context window, a split between deterministic Agent Skills and ReAct reasoning, and a safety layer with a sub-500ms kill switch plus delegated RBAC tied to existing permissions. Attendees leave with a practical blueprint for meeting developers where they already work.
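The "Adaptive Tool Router" idea, exposing only the tool schemas relevant to the current request instead of all 30+, can be sketched with a toy ranker. The keyword matching below is a stand-in for whatever scoring the real system uses; tool names and keywords are invented for illustration:

```python
# Illustrative sketch of an adaptive tool router: surface only the few tool
# schemas relevant to a request so 30+ definitions never flood the context
# window. Tool names, keywords, and scoring are illustrative assumptions.

TOOLS = {
    "provision_namespace": {"keywords": {"namespace", "provision", "create"}},
    "deploy_helm_chart":   {"keywords": {"helm", "deploy", "chart"}},
    "cost_report":         {"keywords": {"cost", "spend", "billing"}},
    "grant_access":        {"keywords": {"access", "rbac", "permission"}},
}

def route(query: str, max_tools: int = 2) -> list[str]:
    """Return at most max_tools tool names ranked by keyword overlap."""
    words = set(query.lower().split())
    scored = [(len(spec["keywords"] & words), name)
              for name, spec in TOOLS.items()]
    scored = [(score, name) for score, name in scored if score > 0]
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return [name for _, name in scored[:max_tools]]
```

Only the schemas for the returned tools would then be injected into the conversation, keeping per-request context cost roughly constant as the catalog grows.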
Speakers

Dhwani Suthar

Software Engineer, Motorola Solutions
Everyone loves spinning up massive cloud infrastructure. Absolutely nobody loves figuring out who has to pay for it. That's where I come in. At Motorola Solutions, I'm a full-stack data engineer in FinOps, taking high-velocity streaming data and reverse-engineering it into beautiful... Read More →

Adnan Vahora

Software Engineer, Motorola Solutions
Building the roads and traffic lights for the next generation of AI at Motorola Solutions. I’m currently obsessed with solving the 'hard parts' of Agentic AI—like figuring out how to secure Agent-to-Agent traffic without slowing it down.

I’m a big believer in open standards (huge fan of Envoy & Wasm) and love turning chaotic problems into clean architecture. Always happy to swap stories about platform engineering, Rust, or the latest in AI governance. Come say hi... Read More →
Lotus 3
 