Earlier this month, Dax Raad (@thdxr) posted something that resonated with a lot of developers:
"idk how people manage infrastructure anymore. every service has their own bespoke cli / config file and they don't support terraform well anymore. your system is never just one provider so do people just have a mess of these smashed together?"
Within a day, over fifty thousand people saw it. The replies poured in. SST. Pulumi. Ansible. "Just stay on AWS." "Python scripts that make REST calls." "It's job security." "Infrastructure today is duct tape wearing a dashboard."
Everyone recognized the problem. But the solutions were all tools, not foundations. I've been building something to address this, and seeing fifty thousand developers feel the same frustration confirmed I wasn't imagining it. Lock-in is a symptom, fragmentation is the disease, and programming languages solved the underlying problem decades ago.
I started thinking about vendor lock-in
The original frustration was familiar. You build on one provider, they change pricing, deprecate an API, or just aren't the right tool anymore, and migrating is brutal. Not because the concepts are hard, but because every provider speaks a different language.
The obvious answer seems like abstraction. Build a layer on top. That's what Terraform tried to do, and SST, and a dozen other tools.
But abstraction layers don't actually solve the problem, they just move it. You're still dependent on someone else to keep up with every provider. You're still waiting for the plugin to be written. You're still one licensing change away from being back where you started.
The root cause
@Zenul_Abidin nailed the trajectory: "Abstractions are breaking down. Terraform worked when providers were predictable, now every service ships its own opinionated layer." And @aalachimo connected it to incentives: "The shift away from Terraform support says more about vendors optimizing for lock-in than about infrastructure evolving."
@jetpen got closest to the structural problem:
"There is no compatibility across infrastructure and platform vendors for provisioning anything, so there is no hope of having a single implementation that works across GCP, AWS, Azure, OCI..."
He's right that there's no compatibility. But I'd frame the root cause differently. There's no standard way for a service to describe itself.
And then I had a thought that reframed the whole thing for me. This is actually a solved problem inside software. Swift has Protocols. Go has Interfaces. Haskell has typeclasses. You write code that says "I don't care what this is, as long as it can do these things." You code to the shape, not the implementation. Swap the backing type and your code still compiles.
If it walks like a duck and quacks like a duck, treat it like a duck. The key insight is that this inverts the dependency. Instead of your code depending on a specific implementation, implementations depend on satisfying a shared shape. New providers can show up without changing anything upstream. Programming languages have had this for decades, but the web has never had a widely adopted equivalent at the network boundary. Every service requires prior knowledge. You can't just ask "can you store a file?" and have any compliant service answer.
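The same idea in Go, as a minimal sketch. The `upload` function depends only on the shape; either backing type can be swapped in and the code still compiles (the type names here are illustrative):

```go
package main

import "fmt"

// Storer is the shape we code against: anything that can store a file.
type Storer interface {
	Put(key string, data []byte) error
}

// Two unrelated implementations that both satisfy the shape.
type LocalDisk struct{}

func (LocalDisk) Put(key string, data []byte) error {
	fmt.Printf("wrote %d bytes to disk at %s\n", len(data), key)
	return nil
}

type MemoryStore struct{ files map[string][]byte }

func (m MemoryStore) Put(key string, data []byte) error {
	m.files[key] = data
	return nil
}

// upload depends on the shape, not on any concrete implementation.
func upload(s Storer, key string, data []byte) error {
	return s.Put(key, data)
}

func main() {
	// Swap the backing type; upload compiles and runs unchanged.
	upload(LocalDisk{}, "report.txt", []byte("hello"))
	upload(MemoryStore{files: map[string][]byte{}}, "report.txt", []byte("hello"))
}
```

Neither implementation knows about the other, and `upload` knows about neither. That's the dependency inversion: new implementations show up without changing anything upstream.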
What if you could?
The index.html problem
There's a related piece that kept bugging me. Every website has an index.html. It's the front door, a shared convention for discovery that requires no prior knowledge of how a site is structured. You go to the root, you start there.
APIs have nothing like this. You have to know where the docs are, which spec format they use (OpenAPI? AsyncAPI? gRPC? Their own proprietary format?), how to authenticate, what the operations are called. And even once you find all that, there's no way to say that "upload a file to S3" and "upload a file to R2" are semantically the same operation expressed through different protocols.
Last year the IETF published RFC 9727, a well-known URI (/.well-known/api-catalog) for discovering that APIs exist. That tells you what APIs a domain has. It doesn't tell you what those APIs do, or which ones are equivalent. The two are complementary. An api-catalog entry could link to an OBI. But discovery alone doesn't close the gap. You still need prior knowledge to use what you find.
So every tool that tries to unify providers has to do it by brute force: hard-code every integration, maintain every mapping, chase every API change. Forever.
AI has gotten remarkably good at papering over this, and I think that's actually made the underlying problem less visible. An LLM can read docs, figure out auth, and generate API calls. MCP gives AI agents a clean way to interact with services. But it's still brute force, just faster brute force. The LLM is guessing at structure that could be declared, parsing docs that could be machine-readable, inferring equivalence that could be verified.
The actual problem is fragmentation
Vendor lock-in is a symptom. The disease is fragmentation. Providers can't be interchangeable because there's no shared language for describing what they do and how to talk to them. Every tool that tries to solve this builds another layer on top, and the mess is still there underneath, waiting for the next breaking change.
@qrcey put it well: "instead of a standard for everything we would possibly need (terraform), it feels like theres a need for a unified interface to run whats effectively needed, anywhere." That's exactly the distinction. Not a better abstraction on top. A standard underneath.
What I built
That's what OpenBindings is. An open specification for describing what a service can do, portable across protocols and environments. The core of it is a document called an OBI (OpenBindings Interface). Here's a minimal one:
{
"openbindings": "0.1.0",
"name": "My Storage Service",
"operations": {
"createBucket": {
"input": { "type": "object", "properties": { "name": { "type": "string" } } },
"output": { "type": "object", "properties": { "id": { "type": "string" } } }
}
},
"sources": {
"rest": { "format": "openapi@3.1", "location": "./openapi.yaml" }
},
"bindings": {
"createBucket.rest": {
"operation": "createBucket",
"source": "rest",
"ref": "#/paths/~1buckets/post"
}
}
}

Operations define what a service can do, with input and output schemas. Sources point to existing spec artifacts: an OpenAPI doc, a proto file, an MCP server, whatever. Bindings map operations to specific entry points in those sources. The whole point is separating meaning from access. The operation is defined once, then bound to whatever protocols the provider actually uses.
Two providers can implement the same interface and a tool can verify their compatibility without running a single request. Like Go interfaces, but at the network boundary.
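To sketch what that looks like, here's a hypothetical second provider exposing the same `createBucket` operation, bound to gRPC instead of REST. The service name, the `proto@3` format string, and the gRPC ref syntax are all made up for illustration; only the operation shape matters:

```json
{
  "openbindings": "0.1.0",
  "name": "Other Storage Service",
  "operations": {
    "createBucket": {
      "input": { "type": "object", "properties": { "name": { "type": "string" } } },
      "output": { "type": "object", "properties": { "id": { "type": "string" } } }
    }
  },
  "sources": {
    "grpc": { "format": "proto@3", "location": "./storage.proto" }
  },
  "bindings": {
    "createBucket.grpc": {
      "operation": "createBucket",
      "source": "grpc",
      "ref": "storage.v1.StorageService/CreateBucket"
    }
  }
}
```

Same operation name, compatible schemas. A tool comparing this document against the earlier one can conclude the two providers are interchangeable for this operation without calling either of them.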
Services publish their OBI at /.well-known/openbindings, the same way websites serve index.html at /. Tools discover it, understand it, interact with it, no hard-coded knowledge of any specific provider needed. This changes the AI story too. An LLM that discovers an OBI gets machine-readable operations, schemas, and bindings instead of having to guess at endpoints or infer equivalence from docs.
This is a different layer from tools like AWS Smithy or Microsoft's TypeSpec, which are model-first frameworks for API providers. Define your service once, generate OpenAPI docs, gRPC stubs, clients for multiple protocols. They're great at what they do. But the output is still your API. There's no concept of cross-provider equivalence. No way to say "my storage API satisfies the same interface as yours." Smithy or TypeSpec could be great inputs to OpenBindings though: define your service in either, generate your artifacts, then wrap them in an OBI.
Why now
Specs like OpenAPI, AsyncAPI, gRPC, and MCP already did the hard work of protocol-level description. None of them need to be replaced. What's been missing is a layer above them that connects descriptions together and says "these are the same capability." Let each spec do what it's good at.
I know how this sounds. The xkcd about 15 competing standards is probably already in your head. But OpenBindings structurally can't replace OpenAPI, gRPC, or MCP. An OBI without sources and bindings that point to those specs is just an unbound contract, not an actionable interface. The dependency runs one way. Those specs are inputs, not competitors.
And yeah, this has been tried before. WSDL, UDDI, CORBA. They all failed, but for reasons that don't quite apply here. WSDL was built for the SOAP era and never escaped it. UDDI required manual registration into a central registry. CORBA needed runtime infrastructure on both ends, and vendors couldn't even agree on that. Each of them required shared infrastructure. An OBI is a JSON file. The other difference is timing. In the WSDL era, most services spoke one protocol. Today a single service routinely exposes REST, gRPC, MCP, and GraphQL simultaneously. That multi-protocol reality is new, and it's the specific problem OpenBindings is designed for.
What one OBI gives you today
You don't need ecosystem adoption to get value from this. If you have an existing spec artifact, the ob CLI generates an OBI from it:
$ ob create openapi.json
Created my-service.obi.json (12 operations, 1 source, 12 bindings)

Same for a gRPC server, an MCP endpoint, or a proto file. One command, and your service has a portable interface document. When your source spec changes, you regenerate. It's a build step, not a manual mapping.
That single OBI gives you concrete things right now:
- Unified execution across protocols. One operation name, one input schema. The binding executor for each format handles the protocol details. Your tools never contain protocol code.
- Discoverability. Publish at /.well-known/openbindings and any tool can find your operations, schemas, and access methods without reading docs or guessing at endpoints.
- Machine-readable structure for AI. An LLM that discovers your OBI gets typed operations and bindings instead of parsing documentation. It can call your service correctly on the first try.
That's the full on-ramp. No providers need to opt in. No ecosystem needs to exist. One service, one file, immediate value.
The bigger picture
The immediate value is for one service, but the architecture is designed for what comes next.
Think about what switching a payment processor looks like today. You read two sets of docs. You map "create charge" in one API to "create payment" in another. You rewrite the integration, update error handling, test everything, and hope the edge cases match. Every step is manual and none of the knowledge transfers to the next migration.
Now consider what it looks like when both processors publish OBIs. If their operations have matching names and compatible schemas, a tool can detect that structurally, no coordination required. That's the duck typing payoff from earlier. Two services that independently define a createPayment operation with compatible input and output schemas are interchangeable, whether or not they've ever heard of each other.
For the explicit path, OBI has a concept called a role, a declaration that says "my operations satisfy this other interface." A provider adds a role reference to their OBI, maps their operations to the ones the interface defines, and tools can verify the claim by comparing schemas. Roles are useful when you want to be deliberate about conformance, but they're not required. The structural match works either way.
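As a sketch of the idea only — the field names and role syntax below are my shorthand, not the actual spec, so check the spec for the real shape — a role declaration might look something like this:

```json
{
  "openbindings": "0.1.0",
  "name": "Other Payments",
  "roles": {
    "stripe.payments": {
      "operations": { "createPayment": "createCharge" }
    }
  }
}
```

The declaration names the interface being satisfied and maps local operation names onto the ones that interface defines, which is what a tool verifies by comparing schemas.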
Both paths give you the same result. A tool can verify schema compatibility before you write a line of code. Your client targets the interface, not the provider. The happy path becomes a configuration change instead of a rewrite.
OpenBindings doesn't ship a payment-processor interface, or a storage interface, or an auth interface. It ships the foundation. Instead of one integration per provider, you maintain one binding executor per protocol. A single OpenAPI executor works for every service that speaks OpenAPI. The interfaces themselves come from the people who know each domain. Stripe publishes an OBI, other payment providers declare a stripe.payments role, or a community defines a neutral payment-processor interface. No central registry, no governance bottleneck.
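The "one executor per protocol" shape is easy to picture as a Go interface. This is a sketch of the idea, not the actual SDK API — the names and signatures here are assumptions:

```go
package main

import "fmt"

// Executor knows how to invoke an operation over one protocol.
// One implementation covers every service that speaks that protocol.
type Executor interface {
	Format() string // e.g. "openapi@3.1", "grpc"
	Exec(binding string, input map[string]any) (map[string]any, error)
}

// fakeRESTExecutor stands in for a real OpenAPI executor.
type fakeRESTExecutor struct{}

func (fakeRESTExecutor) Format() string { return "openapi@3.1" }

func (fakeRESTExecutor) Exec(binding string, input map[string]any) (map[string]any, error) {
	// A real executor would build and send the HTTP request the binding describes.
	return map[string]any{"called": binding}, nil
}

// registry dispatches a binding to the executor for its source format.
type registry map[string]Executor

func (r registry) exec(format, binding string, input map[string]any) (map[string]any, error) {
	ex, ok := r[format]
	if !ok {
		return nil, fmt.Errorf("no executor for format %q", format)
	}
	return ex.Exec(binding, input)
}

func main() {
	r := registry{"openapi@3.1": fakeRESTExecutor{}}
	out, _ := r.exec("openapi@3.1", "createBucket.rest", map[string]any{"name": "photos"})
	fmt.Println(out["called"]) // createBucket.rest
}
```

Adding a protocol means registering one more executor; no per-provider code appears anywhere.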
Try it
Install the CLI and run the built-in demo to see this working in under a minute:
brew install openbindings/tap/ob
ob demo # starts a coffee shop service on six protocols
ob op exec http://localhost:8080 getMenu # calls it via REST
ob op exec http://localhost:8080 getMenu --binding getMenu.grpcServer # same operation, gRPC

The CLI fetched the OBI from localhost:8080/.well-known/openbindings, found the operation, selected a binding, and made the call. No configuration, no prior knowledge of the service, no protocol code.
The spec is open. The ob CLI handles creating, validating, and executing OBI documents. SDKs are available for Go and TypeScript, with binding executors for OpenAPI, gRPC, MCP, GraphQL, Connect, and AsyncAPI. See Creators and Executors for how the plugin architecture works.
Start here: Getting started guide.
Matthew Clevenger Creator of OpenBindings · @clevengermatt