Axual Platform Documentation
Welcome to the Axual documentation. Use the menu on the left to navigate through the various sections.
In the latest Axual release we have added some nice features that help users to be truly self-sufficient. You can more easily find out whether your producer or consumer works as expected using Stream Browse & Search, and you can upload schemas with the new Schema Upload feature. Lastly, REST Proxy allows you to easily produce and consume data to and from Axual via REST. The documentation has also received many updates, and we have greatly improved the Getting Started section, linking to our collection of open-sourced examples.
|For urgent production issues, we are on standby 24/7 to answer your questions. Please use the standby number communicated to you.|
For non-urgent requests, you’ll find our support options here.
As soon as you understand the basic concepts and want to get started building your application, creating a schema, or simply want a guide to follow along with, please refer to the Getting Started section.
Wherever you are, information is all around you. It is created by people, devices, applications, processes, and so on. From a bank transfer between accounts, to a car approaching a camera-operated parking garage entrance, to an energy meter emitting energy measurements, information is generated continuously. Moreover, this information is not just being stored; it is being used to put things in motion. In your personal banking app you want the transaction to appear as soon as it has taken place, you want the garage door to open as soon as you approach it as an authorized person, and the energy company wants to bill based on the energy meter readings of its customers. You could consider these examples to be one-offs: as soon as the bank transfer is processed, it’s fine to forget about it; you parked your car, so why use this information again; and once the energy bill has been generated, why store the meter reading?
If you look closely at the processes happening within an organization, you will notice that the same information is reused a lot, in different ways. APIs might be called multiple times to obtain the same information, or database queries might be executed for the same reason. Wouldn’t it be great if you could grasp this piece of information the moment it occurs, and use it in real time, for whatever purpose you have?
This is where streaming data comes into play. The term streaming may make you think of water, or of streaming audio and video as offered by Spotify or Netflix. There are similarities, but streaming data is basically data (or pieces of information) created and made available close to the moment it comes into existence. The moment a transaction has taken place, a piece of information is created describing the transaction. As soon as you approach a garage sensor with your car, an approaching-door event is generated by the device’s sensors, and when your energy meter performs a reading of the current meter values, it sends out an energy-meter-read event. When we mention streaming data, it is this continuous flow of information we are talking about.
Let’s assume you build an application that wants to detect fraudulent bank transfers, or to start billing as soon as the last energy meter reading for a customer’s contract period comes in. It would be great if you could access the relevant data stream in real time, so you can act the moment it’s relevant. The streaming platform is the central place within an organization responsible for capturing data streams and making them available to anyone authorized to access them. At the platform level, stability, high availability, and security are guaranteed; it is the job of producers and consumers to create value for the business.
Messages on data streams are created by producers. In essence, producers are the "listeners" or "observers", noticing events taking place or pieces of information coming into existence, and sending them to the streaming platform. Producers don’t care who is interested in their information; their motto is: fire and forget! Consumers, on the other hand, subscribe to the data streams they are interested in and use the messages on those streams to "do something" for the business, whether that is opening a garage door or billing a customer.
The cool part is that, the moment this data stream is available on the platform and the producer is actively producing data, you can have 1..n consumers subscribing to the same stream, each with its own distinct use case. This decoupling of producer and consumer, and the easy reuse of information, is where the true value of the streaming platform lies for the business and its consumers.
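To make the decoupling concrete, here is a minimal in-memory sketch in plain Python. It is not the Axual or Kafka client API, just an illustration of the pattern: one producer publishes to a stream without knowing its audience, while any number of consumers subscribe independently, each with its own use case.

```python
from typing import Callable

class Stream:
    """Minimal in-memory stand-in for a data stream: one producer, 1..n consumers."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        # Consumers register independently; the producer never knows about them.
        self._subscribers.append(handler)

    def produce(self, message: dict) -> None:
        # Fire and forget: the producer simply hands the message to the stream.
        for handler in self._subscribers:
            handler(message)

# Two consumers with distinct use cases subscribe to the same stream.
readings = Stream()
bills: list[float] = []
alerts: list[dict] = []

# Consumer 1: billing, charges a (made-up) rate per kWh.
readings.subscribe(lambda m: bills.append(m["kwh"] * 0.25))
# Consumer 2: anomaly detection, flags unusually high readings.
readings.subscribe(lambda m: alerts.append(m) if m["kwh"] > 100 else None)

readings.produce({"meter": "A1", "kwh": 120})
readings.produce({"meter": "B2", "kwh": 30})
# bills now holds both charges; alerts holds only the high reading.
```

A real streaming platform adds what this toy lacks: durable storage, ordering guarantees, authorization, and consumers reading at their own pace rather than being called synchronously.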
While producers might not care who is reading their messages, they do need to speak a language understood by the other party: the consumers. In other words, the messages they produce to a data stream need to adhere to a specific schema, comparable to an API specification or a database table structure. You might conclude that, strictly speaking, there isn’t really decoupling between producer and consumer, because they need to agree on which schema to use for messages on a particular data stream. This is partly true. Producers are allowed to change schemas, provided they take the backward compatibility of the schemas into account.
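As an illustration, a hypothetical Avro schema (schemas of this kind are defined in JSON; the names below are invented for the energy-meter example, not taken from any real stream) could look like this. Adding a new field with a default value, as done with `unit` here, is a typical backward-compatible change: consumers using the old schema can still read new messages.

```json
{
  "type": "record",
  "name": "EnergyMeterRead",
  "namespace": "com.example.energy",
  "fields": [
    { "name": "meterId",   "type": "string" },
    { "name": "kwh",       "type": "double" },
    { "name": "timestamp", "type": "long" },
    { "name": "unit",      "type": "string", "default": "kWh" }
  ]
}
```

Removing a field that consumers rely on, or adding a required field without a default, would by contrast break backward compatibility and should be treated as a new schema version to be coordinated with consumers.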
The examples above could be described as reactive patterns: consumers respond in their own way to an event taking place. This pattern has the added benefit that a consumer can not only respond to those events in its own way, it can also do so whenever it wants to (within certain limits). Asynchronous communication, in a streaming fashion, is increasingly used to replace existing request-response communication, e.g. between APIs.