We have already discussed on our blog how APIs are able not only to streamline integrations and the internal IT architecture, but also to enable new business strategies.
Nevertheless, there is a recurring concern about how to integrate legacy systems with new strategic technologies (IoT, AI + analytics, etc.) without a massive impact on the backend and while minimizing risks. APIs can help!
We know that it is not always possible to work in a green field – the ideal scenario where we can start everything from scratch.
Many corporate environments have legacy systems that are crucial to their operations but difficult to evolve and integrate. Legacy systems carry a certain stigma: they are seen as monolithic, problematic, hard to evolve, burdened with long delivery cycles and zombie technologies (those that nobody masters anymore, but that keep running). This is, however, not always a bad thing: the system may operate in a stable, satisfactory manner and with a low cost/benefit ratio for replacement.
Even so, the growing complexity and opportunities in digital business require high connectivity between a variety of systems, protocols and applications in the cloud, as well as agility in new implementations, with security and compatibility.
By exposing APIs, these crucial legacy systems can be kept in place and integrated with systems on other platforms in a secure, loosely coupled, standardized, simple and interoperable way.
There are several approaches to exposing APIs. First, we must talk about the anti-pattern, also known as "what not to do."
That is to say, grabbing an off-the-shelf market tool that hooks into the legacy system's interface or database and exposes APIs straight from it.
This design exposes developers to every issue present in the legacy system. Ideal API exposure is focused on the developers who will consume the APIs: simple to use, following good practices, answering what they actually need, and offering documentation and easy-to-access support.
These tools can even provide agility when creating an MVP, but we do not recommend this approach as a definitive solution.
First, we must define the best design for the APIs, keeping in mind that an API is not consumed only by machines: it is aimed at developers, so it must be easy for them to understand and use.
Thus, we suggest establishing a layer that we call API-front, connected to the legacy system and composed of 2 sub-layers:
API Facade – abstracts away all the legacy details and allows you to compose, for example, SaaS APIs with a mainframe, exposing the result in a suitable format.
Mediation – routes calls from the API Facade to the backend. In this way, the API can be exposed in an ideal format for the different devices and frontends (see the sketch just after this list).
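To make the two sub-layers concrete, here is a minimal sketch in TypeScript (using Express and Node 18+'s global fetch). The endpoints, paths and resource names are hypothetical assumptions for illustration only, and the legacy side is assumed to already answer HTTP; the examples further down deal with backends that do not.

```typescript
import express from "express";

const app = express();

// Hypothetical backend addresses for this sketch.
const LEGACY_URL = process.env.LEGACY_URL ?? "http://legacy.internal/orders";
const SAAS_URL = process.env.SAAS_URL ?? "https://saas.example.com/api/customers";

// Mediation sub-layer: routes each facade call to the appropriate backend.
async function mediate(target: "legacy" | "saas", path: string): Promise<unknown> {
  const base = target === "legacy" ? LEGACY_URL : SAAS_URL;
  const response = await fetch(`${base}${path}`);
  return response.json();
}

// API Facade sub-layer: composes a SaaS API with the legacy backend and
// exposes the result as a single, standardized REST/JSON resource.
app.get("/api/v1/customers/:id/summary", async (req, res) => {
  const [customer, orders] = await Promise.all([
    mediate("saas", `/${req.params.id}`),
    mediate("legacy", `?customerId=${req.params.id}`),
  ]);
  res.json({ customer, orders });
});

app.listen(8080);
```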
For the legacy backend, there are two lines of strategy that can be followed: exposing APIs directly from the legacy code, or extracting data through its existing interface (screen scraping).
Both present advantages and disadvantages, and the choice depends a lot on the architecture of the legacy system.
If you can go into your legacy code and expose standardized APIs (HTTP/REST) from it, do so.
This is the strategy that yields the most accurate solution.
Example 1 – Web or client-server architecture
In this type of architecture, there will probably be a way to connect to the server via some type of protocol (TCP socket, CORBA, SOAP, etc.). If you have a gateway for your application (we call it a "messed-up API," that is, one that sits outside a standardized design), it is possible to adopt the API-First strategy, establishing the API-front layer to expose the APIs with a standardized protocol (basically, REST and JSON).
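As an illustration of this strategy, the sketch below wraps a legacy gateway reachable over a raw TCP socket in a REST/JSON endpoint. The gateway host, port and line-based command convention are hypothetical assumptions, not something prescribed by the talk.

```typescript
import net from "node:net";
import express from "express";

// Hypothetical gateway address and line-based request/response convention.
const GATEWAY_HOST = "legacy-gateway.internal";
const GATEWAY_PORT = 9000;

// Sends one command to the legacy gateway and collects the raw reply.
function callGateway(command: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const socket = net.createConnection({ host: GATEWAY_HOST, port: GATEWAY_PORT }, () => {
      socket.write(command + "\n");
    });
    let buffer = "";
    socket.on("data", (chunk) => (buffer += chunk.toString()));
    socket.on("end", () => resolve(buffer));
    socket.on("error", reject);
  });
}

const app = express();

// API-front: exposes the proprietary gateway as a standardized REST/JSON resource.
app.get("/api/v1/invoices/:id", async (req, res) => {
  const raw = await callGateway(`GET-INVOICE ${req.params.id}`);
  res.json({ id: req.params.id, raw }); // mapping `raw` into proper fields comes next
});

app.listen(8080);
```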
Example 2 – Mainframe architecture
In this architecture, you have access protocols (HPR/IP, TCP/IP, etc.) to the mainframe, and the system generally returns a positional (fixed-width) string. The API-front layer works on that string and puts it into the appropriate format.
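A small parser like the sketch below, written against an entirely hypothetical record layout, is usually all the API-front needs to turn that positional string into JSON.

```typescript
// Describes where each field sits in the fixed-width (positional) string.
interface FieldSpec {
  name: string;
  start: number;  // zero-based offset
  length: number; // number of characters
}

// Hypothetical layout of a customer record returned by the mainframe.
const customerLayout: FieldSpec[] = [
  { name: "customerId", start: 0, length: 10 },
  { name: "name", start: 10, length: 30 },
  { name: "balance", start: 40, length: 12 },
];

// Slices the raw string according to the layout and trims the padding.
function parsePositional(raw: string, layout: FieldSpec[]): Record<string, string> {
  const record: Record<string, string> = {};
  for (const field of layout) {
    record[field.name] = raw.slice(field.start, field.start + field.length).trim();
  }
  return record;
}

// Example: "0000012345" + "JOHN DOE".padEnd(30) + "000000015000" parses to
// { customerId: "0000012345", name: "JOHN DOE", balance: "000000015000" }
```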
One point of attention is scalability: the backend or the protocol may not scale. In that case, it is necessary to consider other, more modern architectures with good scalability.
If the application offers no access protocols at all, a possible way out is to connect directly to the database and expose the API in a standardized way.
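A minimal sketch of that approach, assuming (purely for illustration) that the legacy database is reachable with the standard pg PostgreSQL client and holds a legacy_products table:

```typescript
import express from "express";
import { Pool } from "pg"; // assumes the legacy database speaks PostgreSQL

// Hypothetical connection string pointing at the legacy schema (ideally a read-only user).
const pool = new Pool({ connectionString: process.env.LEGACY_DB_URL });

const app = express();

// Exposes a legacy table as a standardized REST/JSON resource.
app.get("/api/v1/products/:id", async (req, res) => {
  const { rows } = await pool.query(
    "SELECT id, description, price FROM legacy_products WHERE id = $1",
    [req.params.id]
  );
  if (rows.length === 0) {
    res.status(404).json({ error: "not found" });
    return;
  }
  res.json(rows[0]);
});

app.listen(8080);
```

Keep in mind that this couples the API-front to the legacy data model, so the mapping to a cleaner resource format should live in the facade sub-layer.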
The second line of strategy, screen scraping, consists of simulating navigation in the application and extracting the data from its web interface.
For these cases, the Node.js x-ray module, which lets you traverse the resulting DOM and extract data from it, can prove very useful.
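As a rough illustration of how x-ray is typically used, the sketch below crawls a hypothetical legacy page; the URL and CSS selectors are assumptions that would depend on the actual web interface.

```typescript
// x-ray crawls the page and extracts data from DOM elements by CSS selector.
const Xray = require("x-ray");
const x = Xray();

x("http://legacy-app.internal/orders", "table#orders tr", [
  {
    orderId: "td.order-id",
    status: "td.status",
  },
])((err: Error | null, orders: Array<{ orderId: string; status: string }>) => {
  if (err) throw err;
  // The API-front can now expose `orders` as a standardized REST/JSON resource.
  console.log(JSON.stringify(orders, null, 2));
});
```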
Again, all strategies have their advantages and disadvantages, and choosing the best option will depend a lot on the architecture present in the backend.
Want to know more about how to create an API-front? What is the most appropriate strategy for your environment? Have any more questions about how to expose APIs?

* Article based on the lecture "Exposing legacy and trapped backends APIs," presented at QCon 2016 by Fábio Rosato, Professional Services Manager at Sensedia.