Based on Gibson Nascimento's webinar
We've already discussed here on the blog, on a few occasions, the three forces that are driving companies' architectural models. These pillars guide technology choices and how IT infrastructures will need to adapt to these new demands.
To recap, here is a brief summary of these forces:
The evolution of experience delivery has been remarkable over the past decades. Thirty years ago this was not even a particularly relevant topic, since the availability of products and services was much lower. Today we have evolved into a much more complex landscape, with an immense variety of suppliers for the same type of product or service. At the same time, experience has become a central theme in business strategies, requiring specialists to improve what is delivered.
In the first scenario, we can recall the single (physical) channels for contracting: the client needed to go to a store to make a purchase or hire a service. Later, new channels appeared, such as telesales, mobile, and more recently the internet. However, there was still practically no connection between these channels.
Today we see channels expanding, with omnichannel becoming a reality and APIs serving as the basis for them to work together organically. This technological evolution opens up many possibilities, but the complexity of managing all these channels grows exponentially. On top of that come integrations with the various partners in your ecosystem, adding a whole range of APIs that need to be monitored.
The evolution of infrastructures has also contributed significantly to the increased complexity of management. In the past, infrastructures were internal, with multiple virtual machines that could be controlled directly, since they were all part of the same environment. Later, migration to the cloud began, increasing the difficulty of management, since the infrastructure is now operated by a third party.
Today, containerization is solving many of the difficulties IT teams face; on the other hand, it brings challenges of its own, especially in the portability of deployments of these services across different clouds.
Beyond that, we have the hybrid model, which has been increasingly adopted. In this scenario the approach is broader, as multiple services from different clouds talk to each other. The scenarios can vary a lot, such as services in private clouds connecting to services in public clouds. The model is quite advantageous in terms of cost, and even of service stability, but here too there is a high degree of complexity that needs to be analyzed, especially regarding the traffic of data between clouds and whether there are gaps that could be exploited, since we are dealing with internal and third-party clouds that are connected and exchanging data.
In short, this is the evolution of the scenarios IT teams face, and dealing with it all demands effort from companies, often placing a heavy burden on teams, which need specialists dedicated exclusively to functions that ensure the performance and security of the APIs and of the business as a whole.
When we talk about complexity management, this is perhaps the most critical point of the operation. In the beginning there was a single monolith, which was gradually broken into smaller pieces. Today, with the expansion of microservices, the number of components to be managed is quite large, and it doesn't stop there, because we can also have serverless functions within the architecture. In other words, the complexity of this scenario is considerable, and it creates great challenges for companies to deal with all of this while, of course, still providing new products and services through APIs.
Having clarity about everything that is happening internally is fundamental to the health of the business. Imagine having to deal with all these factors, which add a very high level of complexity, without the right tools for it. It is not hard to see how much this can burden your teams, until you suddenly find yourself in a scenario where more human effort goes into avoiding problems than into solving them. The costs are very high, and so is the chance of things getting out of control.
Let's look at three models of governance that are happening today.
Centralized model: a single team centralizes all reviews and approvals of any kind of architectural change, whether new features or updates. This approach is quite costly for the team, since every revision depends on the human factor.
Decentralized model: quite similar to the previous one, but with subdivisions that take care of specific topics. There is a gain here, since revisions are broken into smaller pieces, but all teams still have to stay aligned with each other to ensure standardized routines.
Distributed model: in this third approach, there are several teams for specific products, and each team specializes only in its product, knowing in detail what it does and how it can best be exposed, thus ensuring governance around each of these products.
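As an illustration only, the distributed model can be thought of as an ownership map, where each API product points to exactly one specialist team. The product and team names below are hypothetical, and the check is a minimal sketch of a governance rule ("every API must have an owner"):

```python
# Hypothetical ownership map for a distributed governance model:
# each API product is owned by exactly one specialist team.
API_OWNERS = {
    "payments-api": "payments-team",
    "catalog-api": "catalog-team",
    "customers-api": "crm-team",
}

def find_unowned(apis, owners=API_OWNERS):
    """Return the APIs that have no specialist team assigned,
    a basic governance check under the distributed model."""
    return [api for api in apis if api not in owners]

# Example: a newly registered API with no owner is flagged.
print(find_unowned(["payments-api", "loyalty-api"]))  # -> ['loyalty-api']
```

In practice this kind of mapping usually lives in a catalog or a CODEOWNERS-style file rather than in code, but the principle is the same: governance follows the product, not a central committee.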
Ensure that your APIs are always running, especially by managing the available versions and verifying that they are all working.
Provide the inputs your team needs for a full understanding of the available APIs, ensuring that your developers have clarity about the complexity of the environments.
Ensuring correct access to APIs is critical to security. So be clear about who has access to your APIs and whether they are only the right people.
APIs need to deliver value to the business. So make sure your APIs fulfill that role; otherwise, they will be useless.
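To make the access point above concrete, here is a minimal sketch of an explicit grant list: a consumer can only call an API it was deliberately given access to. The consumer and API names are hypothetical; a real platform would enforce this at the gateway with keys, tokens, or OAuth scopes:

```python
# Hypothetical registry of API consumers and the APIs they may call.
ACCESS_GRANTS = {
    "mobile-app": {"catalog-api", "customers-api"},
    "partner-x": {"catalog-api"},
}

def is_allowed(consumer: str, api: str) -> bool:
    """True only if the consumer was explicitly granted access to the API.
    Unknown consumers get no access at all (deny by default)."""
    return api in ACCESS_GRANTS.get(consumer, set())

print(is_allowed("partner-x", "catalog-api"))    # True
print(is_allowed("partner-x", "customers-api"))  # False
```

The deny-by-default design is the important part: access that was never granted is simply impossible, which is exactly the clarity about "who has access" that governance asks for.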
As the name suggests, Adaptive Governance is the ability to adapt to different business scenarios, guaranteeing agility and a fit to specific operational contexts. For example, Control-based models require much more complex and essential routines and standards to ensure a good operation, as in the case of banks and financial institutions that deal with sensitive customer data. In Agility-based models, each team has clarity of purpose and control over its own APIs. Finally, the Autonomy-based model automates all of these governance processes, ensuring that only the APIs that meet the necessary requirements are released.
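The autonomy idea, releasing only APIs that meet the requirements, can be sketched as an automated gate in a pipeline. The required fields below (version, owner, authentication scheme) are assumed governance rules for illustration, not a definitive policy:

```python
# Hypothetical governance rules: every API descriptor must declare these fields.
REQUIRED_FIELDS = ("version", "owner", "auth")

def governance_gate(api_descriptor: dict) -> tuple[bool, list[str]]:
    """Return (approved, missing_fields): the API is released
    only if every required governance field is present and non-empty."""
    missing = [f for f in REQUIRED_FIELDS if not api_descriptor.get(f)]
    return (len(missing) == 0, missing)

ok, _ = governance_gate(
    {"version": "1.2.0", "owner": "payments-team", "auth": "oauth2"}
)
print(ok)  # True

blocked, missing = governance_gate({"version": "0.1.0"})
print(blocked, missing)  # False ['owner', 'auth']
```

A check like this would typically run against an OpenAPI document in CI, so that a non-compliant API never reaches production in the first place.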
These concepts, when well applied and adapted to each company's scenario, have a direct impact on the business, mainly in cost reduction, since teams can be relieved and redirected to more strategic activities. Another important point is risk mitigation and compliance, since automated and standardized routines ensure that deployments are made with higher quality and that control of the operation is maintained.
Sensedia Adaptive Governance is the new module of the Sensedia API Platform that offers a low-code interface with advanced features for Adaptive API Governance, including:
These features add to the governance-related features already native to the Sensedia API Platform, for example:
In addition to the Sensedia Adaptive Governance module, Sensedia's consulting team has developed an API Governance playbook to support its clients in setting up an API Governance team and defining governance models, policies, standards, security mechanisms, KPIs, impact analysis, API prioritization, workflow configuration... in order to ensure the control and evolution of their digital strategies with APIs.