Ask HN: For vertical AI, does MCP remove the need for 3rd-party API libraries?
Hey HN, is it just me, or does MCP feel like one of those "this changes everything" moments for vertical AI apps and API integrations? If you've ever wrestled with middleware libraries... is this the exit ramp moment?
Goodbye spaghetti integrations: MCP flattens the M×N mess of custom connectors into a clean M+N setup. Models and tools just need to speak MCP, and they’re good to go.
Third-party libraries sweating bullets: why pay for middleware when MCP gives you a universal protocol out of the box? Libraries might have to pivot hard or risk becoming legacy tech.
Scalability++: MCP's structured primitives (Prompts, Resources, Tools) let models discover what an API exposes and interact with it dynamically at runtime. Debugging? Way easier. Maintenance? Less of a headache. (There's a rough sketch of what discovery looks like on the wire at the end of this post.)
What happens to the middleware giants? Are Merge, Finch, and Paragon doomed, or will they just rebrand as "MCP experts"?
Adoption pain: if your stack is built on third-party libraries, migrating to MCP might feel like open-heart surgery.
In reality though, customization probably isn't dead: vertical AI apps still need their niche workflows. How do we balance MCP's standardization with the bespoke stuff that makes these apps tick?
Is MCP the API integration library killer, or just another protocol to add to the pile?
Will third-party libraries adapt, or are we watching the beginning of the end of a Web 2.0 middleware chapter that no longer matches where the puck is moving?
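For anyone who hasn't looked at the wire format yet: the discovery piece is just a JSON-RPC exchange. Here's a hand-written sketch of a tools/list round trip (from memory, so treat the exact field names as approximate; create_ticket is a made-up tool name):

  -> {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
  <- {"jsonrpc": "2.0", "id": 1, "result": {"tools": [{
         "name": "create_ticket",
         "description": "Create a ticket in the issue tracker",
         "inputSchema": {
           "type": "object",
           "properties": {"title": {"type": "string"}},
           "required": ["title"]
         }
       }]}}

The model reads that list at runtime and decides what to call; nothing is hard-coded against a vendor SDK.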
So I built https://skeet.build and here are some things I learned along the way:
1. Yes, MCP is basically an API wrapper plus function calling. However, if you "just" port OpenAPI v3 specs, you get really weird results (e.g. "sorry, I can't move your Linear ticket because you need the UUIDs of the Todo and In Progress workflow states"). People don't memorize an API's UUIDs. APIs weren't designed for natural language, so there's a lot of work that has to go in before it "just" works. (Rough sketch of what I mean below, after point 3.)
2. MCP is still in its early days. There are a lot of community MCP servers, but more than half of them don't actually work. You also have to set up your own OAuth, go get your own keys, and read the auth docs carefully, because some servers authenticate at the app level rather than the user level, and you have to make sure the URLs are correct. So even though everyone is posting about how easy it is to build your own MCP server, in practice there's a lot of friction. (An example of the per-server config users end up hand-editing is below.)
3. SSE setup is trickier than you'd think. Remote MCP is something they're still working on. SSE needs a lot of low-level networking work: load balancer timeout tuning, heartbeats, etc. to keep connections alive reliably. Sure, it works on your local machine for 30 minutes, but getting it to run scalably and reliably on cloud infrastructure is really hard and not yet proven out. (See the last sketch below.)
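To make point 1 concrete, here's the kind of translation layer you end up writing so the model never has to see a workflow-state UUID. This is a simplified, hypothetical helper (the WorkflowState shape and resolveStateId name are mine, not Linear's SDK), but it's representative:

  // Hypothetical resolver: turn the state name a model naturally says
  // ("in progress", "todo") into the UUID the underlying API requires.
  type WorkflowState = { id: string; name: string };

  function resolveStateId(states: WorkflowState[], humanName: string): string {
    const wanted = humanName.trim().toLowerCase();
    const match = states.find((s) => s.name.toLowerCase() === wanted);
    if (!match) {
      // Fail with something the model can act on, not a bare 400 about an unknown UUID.
      const known = states.map((s) => s.name).join(", ");
      throw new Error(`Unknown workflow state "${humanName}". Known states: ${known}`);
    }
    return match.id; // the UUID the real API call needs
  }

Multiply that by every UUID-shaped parameter in a spec and the "just port the OpenAPI spec" plan falls apart.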
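On point 2, the friction shows up in what users are asked to paste into their client config today. From memory, a Claude Desktop entry for the reference GitHub server looks roughly like this (names may have drifted, so check the server's README), and every server comes with its own key you had to go mint yourself:

  {
    "mcpServers": {
      "github": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-github"],
        "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<paste your token here>" }
      }
    }
  }

Now repeat that for every tool you want connected, each with its own token story and auth model.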
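And on point 3, the plumbing that "works on my machine" demos skip is stuff like heartbeats so idle SSE connections survive load balancer idle timeouts. A minimal sketch, assuming a plain Express endpoint (the 15s interval is an arbitrary number you'd tune below your LB's timeout):

  import express from "express";

  const app = express();

  app.get("/sse", (req, res) => {
    // Standard SSE headers; many proxies also need response buffering disabled.
    res.writeHead(200, {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      "Connection": "keep-alive",
    });

    // SSE comment lines (starting with ":") are ignored by clients but keep
    // the connection from being reaped by idle timeouts along the path.
    const heartbeat = setInterval(() => res.write(": keepalive\n\n"), 15_000);

    req.on("close", () => clearInterval(heartbeat));
  });

  app.listen(3000);

And that's just one connection on one box; doing it reliably behind real cloud infrastructure is the part that's still unproven.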
Great points, and thanks for sharing your experience (Skeet looks sweet)—this is exactly the kind of real-world insight that’s missing from the MCP hype.
1. API design vs. natural language - totally agree; porting specs directly to MCP is a little like fitting a square peg into a round hole. APIs weren't built with conversational interfaces in mind, and the weird outputs show it. These quirks make it clear that MCP needs a lot of thoughtful abstraction to "just work" for end users.
2. Definitely early days still… the gap between the MCP promise and its current state is real. OAuth setup, inconsistent auth models, and half-baked community servers are huge speed bumps. It’s easy to talk about how MCP simplifies things, but the devil’s in the details—and those details are still messy.
3. SSE scalability - yeah, this is a big one. Getting SSE to work reliably at scale is a nightmare, and it's not something you can handwave away. Until MCP proves it can handle real-world, production-grade workloads, it's hard to fully buy into the vision.
MCP feels like it's addressing the right problems, but it's clear we're still in the "rough draft" phase. Curious to see how much of this friction gets smoothed out over the next year or so.