
How Event-Native API Management Unlocks Full Potential of Today's Data

Here's how event-native API management solves the problems with conventional approaches to API management.

In the real world, information changes continuously. But you wouldn't know that from the way conventional approaches to API management work. Traditional API management solutions can process data only via a request-response model — a model that makes it impossible to ensure that data consumers receive updates in real time.

Fortunately, there's a solution to this limitation. It's called event-native API management, and it's increasingly central for businesses that want to make sure their APIs can move as fast as their actual data.

This article unpacks the problems with conventional approaches to API management, explains how event-native API management solves them, and highlights some of the key use cases for event-native API management for today's businesses.

The Limitations of Synchronous APIs

Traditional APIs work in a synchronous way. A client makes a request to get data, and a server responds to the request. If the client wants updated data, it has to issue a new request, and the server has to respond again.
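To make the pattern concrete, here's a minimal sketch of request-response polling in Python (the endpoint URL and polling interval are illustrative):

```python
# Request-response polling: each fresh view of the data costs the client
# another round trip, and the data is stale between polls.
import time

import requests

ENDPOINT = "https://api.example.com/records"  # hypothetical endpoint

while True:
    response = requests.get(ENDPOINT, timeout=5)
    response.raise_for_status()
    print("Current data:", response.json())
    time.sleep(10)  # data can be up to 10 seconds out of date between polls
```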

This approach works well enough if you have data that is updated only periodically, such as a list of local real estate transactions or patient data that doctors modify whenever an office visit takes place. In those use cases, data doesn't change frequently, so there is no need for consumers to make repeated API requests on an ongoing basis to keep their data up-to-date.

However, synchronous API management doesn't work well at all in situations where data changes continuously. If you want to provide data consumers with a stream of information about the availability of free spots in a parking garage, for example, or allow them to process payments in true real time, you need a way of ensuring that API clients become aware of data changes within milliseconds of when they occur. A delay of even a few seconds is a problem in these contexts, and the only way to keep up is to consume the data as a stream rather than through repeated requests and responses.
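Contrast that with consuming the same data as a stream. The sketch below uses Server-Sent Events over HTTP (one of several streaming options) against an illustrative URL; updates arrive as they happen, with no polling loop:

```python
# Stream consumption via Server-Sent Events: the connection stays open
# and each update is pushed to the client as soon as it occurs.
import requests

STREAM_URL = "https://api.example.com/parking/availability/stream"  # hypothetical

with requests.get(STREAM_URL, stream=True, timeout=None) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            print("Update:", line[len("data:"):].strip())
```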

The Streaming Data Age

The limitations of synchronous API management wouldn't be so pressing if most data sources didn't change constantly. But the fact is that data streams are increasingly becoming the primary way in which organizations seek to expose information.

To be sure, non-streaming data sources — or sources that don't change frequently — still exist. But today, gaining a competitive edge often means having the ability to stream data so that you can make it available to your customers, partners, internal stakeholders, and anyone else who needs to access it as quickly and easily as possible.

What's more, the shift toward distributed, decentralized application architectures and hosting environments — a trend commonly summed up as "cloud-native computing" — amplifies the importance of being able to expose and consume data as streams. Streaming makes it possible to decouple data from specific endpoints. That way, data consumers don't need to worry about which endpoints they are connected to, or about having their data access broken if a particular endpoint goes down. When businesses can expose data as a stream, anyone who wants to consume the data can do so without worrying about the back-end configuration that drives the stream.
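To illustrate the decoupling, here's a minimal Kafka consumer sketch using the confluent-kafka Python client (broker addresses and the topic name are illustrative). The consumer subscribes to a topic, not to a specific server, so any live broker in the cluster can serve it:

```python
# Endpoint decoupling: the consumer names the data it wants (a topic),
# not the host that serves it.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker-1:9092,broker-2:9092",  # any live broker will do
    "group.id": "parking-dashboard",
    "auto.offset.reset": "latest",
})
consumer.subscribe(["parking.availability"])  # subscribe to the data, not a host

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        print("Update:", msg.value().decode("utf-8"))
finally:
    consumer.close()
```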

The Limitations of Data Streaming

Recognizing the importance of being able to expose data in streams is one thing. Finding a way to do it — and, in particular, to do it securely, scalably, and efficiently — is another.

There are plenty of solutions — like Kafka and RabbitMQ, to name a couple of popular open source options — that can expose data within an organization's own IT estate as streams. A major limitation of these technologies, however, is that they're not designed to expose data streams to the outside world. You can't, in other words, securely make your Kafka stream available to the world at large. And even if you ignored the security risks of doing so, Kafka itself wouldn't give you the fine-grained controls you'd need to determine exactly how data is formatted and reshaped as it's streamed to external consumers.
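A sketch of what direct exposure would look like makes the problem concrete: every external consumer would need broker addresses and shared credentials, and Kafka's built-in access controls stop at the topic level, with no per-consumer reshaping of the data (all values below are illustrative):

```python
# Exposing Kafka directly: credentials and broker addresses must be
# distributed to every outside party, and access is all-or-nothing
# at the topic level.
from confluent_kafka import Consumer

external_consumer = Consumer({
    "bootstrap.servers": "broker-1.internal:9093",  # internal brokers, now reachable publicly
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "partner-acme",    # one credential per partner to manage
    "sasl.password": "shared-secret",   # a secret you must hand out and rotate
    "group.id": "partner-acme-consumers",
})
external_consumer.subscribe(["parking.availability"])  # whole topic or nothing
```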

The Advent of Event-Native API Management

That's where event-native API management solutions come in. Event-native API management is the ability to expose and control streaming data via secure APIs, which external data consumers can connect to in order to access continuously updated information. Event-native API management brings traditional API management policies, such as rate limits, quotas, and traffic shaping, to asynchronous data streams.
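As a rough illustration of what applying such policies to a stream means, here's a sketch of a token-bucket rate limiter of the kind a gateway might enforce per consumer (the rates and function names are illustrative, not any particular product's API):

```python
# Traffic shaping on an asynchronous stream: a token bucket caps how many
# messages per second a given consumer receives.
import time

class TokenBucket:
    """Allow roughly `rate` messages per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def deliver_to_consumer(message: bytes) -> None:
    print("delivered:", message)  # stand-in for the real push to the consumer

bucket = TokenBucket(rate=100, capacity=200)  # quota: 100 msg/s, bursts of 200

def forward(message: bytes) -> None:
    # Messages beyond the allowed rate are dropped here; a real gateway
    # might buffer or back-pressure instead.
    if bucket.allow():
        deliver_to_consumer(message)
```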

As a result, exposing data streams to the outside world becomes far easier. Developers can simply connect a back-end streaming platform, such as Kafka, to an event-native API gateway, then let the gateway do the heavy lifting needed to expose that data externally and securely.
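From the consumer's perspective, the result looks something like the sketch below: the client connects to a managed endpoint, in this case over WebSocket, and never touches the brokers behind it (the gateway URL is hypothetical):

```python
# Consuming a gateway-exposed stream: the client sees one secured URL,
# not the Kafka cluster behind it.
import asyncio

import websockets

async def watch() -> None:
    url = "wss://gateway.example.com/streams/parking-availability"  # hypothetical
    async with websockets.connect(url) as socket:
        async for update in socket:  # each message is pushed by the gateway
            print("Update:", update)

asyncio.run(watch())
```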

Of course, some data consumers will still want to access data using synchronous request-response models. Event-native API managers support that approach, too, even as they simultaneously make data available as a continuously updated stream. Even better, it's possible to mix and match protocols to get the best of both worlds in contexts where it makes sense. For example, you can expose a traditional synchronous REST API on top of a Kafka instance.
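Here's a sketch of one way that mixing could work on the gateway side: a background consumer keeps the latest message from a Kafka topic, and a plain HTTP GET returns it as a snapshot (topic, broker, and route names are illustrative):

```python
# REST on top of Kafka: streaming clients subscribe to the topic for
# continuous updates, while request-response clients GET a snapshot.
import threading

from confluent_kafka import Consumer
from flask import Flask, jsonify

latest = {"value": None}
app = Flask(__name__)

def consume_latest() -> None:
    consumer = Consumer({
        "bootstrap.servers": "broker-1:9092",
        "group.id": "rest-facade",
        "auto.offset.reset": "latest",
    })
    consumer.subscribe(["parking.availability"])
    while True:
        msg = consumer.poll(1.0)
        if msg is not None and not msg.error():
            latest["value"] = msg.value().decode("utf-8")

@app.route("/parking/availability")
def availability():
    return jsonify({"latest": latest["value"]})

if __name__ == "__main__":
    threading.Thread(target=consume_latest, daemon=True).start()
    app.run(port=8080)
```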

Conclusion: Event-Native API Management and the Future of Data

When it comes to the future of data, two trends are clear: Data will continue to grow in volume, and the pace at which it changes will accelerate.

Event-native API management empowers businesses to embrace these trends and leverage the greatest possible value from large-scale data streams. By shifting to event-native API management, businesses gain maximum flexibility over how they share data — and their customers and partners enjoy the same flexibility over how they consume it. Instead of settling for periodic updates to information, data consumers can receive updates in true real time, while data producers maintain the ability to determine exactly how data is shared.

About the author: Rory Blundell is the Founder and CEO of Gravitee.
