AI Is Not a Silver Bullet: Lessons from Cloud Migration and Legacy Systems
Source: AI Cyber Magazine
A Q&A WITH FILIPE TORQUETO
Filipe, let’s start with a foundational concept. What does “API-First AI” truly mean in practice, and why is this approach a fundamental shift from how most companies have built their AI systems to date?
API First is not new; it has been around for almost a decade. The difference now is that AI brings another challenge. We usually talk about syntax in the API space, but now we are shifting into semantics. If API First was important with traditional APIs, it is even more critical now. Without fixing fundamental issues in API enablement and human API consumption, there will be hurdles in the AI era.
We are using specific protocols and tools not only to document APIs, but also to provide context about what an API is and the data it shares. This is not just about maintaining the importance of API First for software development; it is becoming indispensable in the AI era, because APIs form the link between the deterministic and non-deterministic worlds, shaping how we build APIs while AI reshapes how we use them. That is the real shift from the API First of a decade ago to API First with AI: the integration challenges are not going away, they are being amplified.
While you were explaining, you mentioned hurdles, can you describe just one of those hurdles for us?
Right now, everybody wants to implement AI. The big hurdle is not implementation but control. AI without control is just a toy, and nobody wants a toy at the enterprise level. What we want is productivity, automation, and real production gains. No enterprise wants an autonomous agent going wild, so APIs and API guardrails are central here. Expanding governance means putting in place the right guardrails, the right security levels, and ensuring AI agents work in a proper manner. That is the bigger hurdle today.
“AI without control is just a toy”
You’ve seen the good, the bad, and the ugly of digital transformation. What’s the biggest misconception that business leaders have about integrating AI, and what are the immediate risks of treating AI as just another application to plug in?
We have seen this pattern before with cloud migration. The belief was, “I will migrate to the cloud and all my problems will be solved.” Now the topic has shifted to AI and the same expectation is repeated: “I will apply AI and everything will be solved.” The reality is far more brutal: there are no silver bullets in technology. We deal with legacy systems, disparate systems, cloud and on-premises data centers, and all types of sensitive data. AI only adds more complexity and abstraction. To use AI securely, you cannot skip the baseline.
Another challenge is the neglect of integration. Excitement surrounds new models or business functions, but rarely integration lifecycle management. Many businesses
assume the cloud or CRM has solved the problem, but they fail to see the touchpoints or the origins of their data until leakage or outage occurs.
Point-to-point integrations without management remain common, yet this neglect of APIs and integrations is a huge mistake.
“You must know where your data is going, how it moves, and who is accessing it.”
Cloud tools and vendors often present integrations as ready to go, but this is misleading. Take CRM as an example. The biggest challenge is not the CRM itself but the data. Digital channels must be integrated to feed the CRM before it can create value.
Sensedia focuses on governed APIs. Why is governance, the security, trust, and explainability you mention, the single most critical factor for AI adoption, and what are the consequences of getting it wrong?
The consequences of getting it wrong can be brutal. Data leakage and security breaches can hurt you, but it can also be something simpler: an agent you built and deployed derails, starts to hallucinate, or fails because the data lacks context. You may also not know where the data is, or it may be too hard for the agent to reach it.
There are many ways to derail an AI project without proper integration. Failures can range from security breaches and damaged reputation to an agent or final product not working as expected because governance is weak and the integration layer is flawed. It is not just about syntax in an API or getting an API sorted. APIs now need context, which is the link between these systems.
The risks span from wasted money and no results to the worst-case scenario of a full security breach. Everything in between is possible. This is why integrations for AI must be taken seriously.
AI adoption is often a C-suite mandate. What’s the one piece of advice you’d give a C-level executive who wants to lead an API-First AI strategy but doesn’t have a technical background?
AI is a gigantic topic. If there is one piece of advice, it is to break AI into pieces. Before you adopt AI, decide where, how, and what outcome you want. Saying “I want to put AI in my business” without knowledge or preparation takes you from zero to major risk, which is not a good idea.
You can experiment with AI, but respect the journey. Decide where you want to implement it, learn the terms and know what you want to achieve. For example, you can start with AI in your internal development team using co-pilots. This is one implementation. Afterward, you could move to something more conversational for internal teams, like support or chat. Only then should you study where AI can enhance your main business.
Do not go from zero to applying AI in your critical business. Yes, you may succeed, but more often you will fail because you lack experience in implementation.
“Before you adopt AI, decide where, how, and what outcome you want.”
You’ve pioneered a roadmap for MCP implementation. How is this protocol different from the API strategies companies have been using for the last decade, and can you break down the crucial first step of “context mapping” for our audience?
It is not rocket science, it is simple. Instead of dealing only with API syntax, you are also dealing with semantics. Imagine you need to explain to a four-year-old what an API needs to do and what data it contains. This is what the MCP is doing. MCP is the door for AI to understand what an API is for and the semantics of the data, so the model or AI agent can use the API in the right context.
For example, if you create an MCP for your account API, your agent knows it has an integration point to manage account data. You no longer need to build specific integrations to that API. This is where governance becomes more complex. In the past, APIs were built for humans, supported by developer portals and documentation, where context was naturally understood. With AI, context must be explicit. The right protocol for the right job matters, and MCP provides that.
We are no longer developing APIs only for human consumption but for agents and autonomous models as well. The challenge is real, but the concept is straightforward. It is semantics: why the API exists and what it is doing. MCP was created for this purpose.
Before MCP, integrations were built by hand so agents and models could understand the APIs. MCP streamlines this for the market. While it is simple to explain, it is not simple to implement.
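To make the "syntax versus semantics" distinction concrete, here is a minimal illustrative sketch in Python. This is not the actual MCP wire format or SDK; the function and field names (`describe_tool`, `purpose`, `data_semantics`) are hypothetical, chosen only to show how a tool description can carry explicit context alongside a traditional schema.

```python
def describe_tool(name, purpose, data_semantics, schema):
    """Bundle an API endpoint with explicit, machine-readable context:
    not only the input syntax, but why the API exists and what its
    data means, so an agent can use it in the right context."""
    return {
        "name": name,
        "purpose": purpose,                # the semantics: why the API exists
        "data_semantics": data_semantics,  # what each field actually means
        "input_schema": schema,            # the traditional syntax still matters
    }

# A hypothetical account API described for agent consumption:
account_tool = describe_tool(
    name="get_account",
    purpose="Look up a customer account so an agent can answer balance questions.",
    data_semantics={"account_id": "internal customer identifier",
                    "balance": "current balance, in the account currency"},
    schema={"type": "object",
            "properties": {"account_id": {"type": "string"}},
            "required": ["account_id"]},
)
```

The point of the sketch is that everything a human used to infer from a developer portal (purpose, meaning of fields) becomes an explicit, structured part of the interface.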
Let’s talk about the transition from “context mapping” to “optimization and scaling.” What is the single biggest technical challenge companies face at this stage, and how can a robust API management strategy solve it?
The challenges are many and depend on the maturity cycle of the company. If you already have API governance and solid integrations, implementing MCP is not hard, since it only adds semantics to what you know. The bigger hurdle is shifting the mentality from developing for humans to developing machine-readable interfaces and giving proper semantics to the MCP server. Some of the most common hurdles are implementing an MCP server with the right security points, getting the right guardrails in place, and implementing observability, largely because all of this is still new.
My advice is do not rush: first understand your integrations, map your APIs, and define your API governance. Going from zero to MCP is too much, because legacy systems will not disappear tomorrow. If you think API governance is not important, think again. It is crucial if you want to implement MCP.
Could you provide a tangible example of a company that has successfully moved through your MCP roadmap, and what kind of competitive advantage they’ve gained as a result?
We have customers implementing MCP today who had already built their API governance with us, and when MCP arrived, they began to experiment. Because they experiment quickly, we can see clear differences. One of the early problems with AI was that customers with thousands of APIs were not going to ignore them to implement AI. Those APIs remain the entrance point, and the question was how to expose them to AI. MCP provides the answer.
Take healthcare as an example. A patient in triage is seen in general terms, such as insurance status. The same patient in the ICU is understood completely differently, where heartbeat and breathing matter far more than insurance. APIs may have the same patient structure, but governance ensures agents understand the difference. This is where guardrails matter.
An AI agent can screen patients at triage with little risk, but in an ICU, autonomous decision-making is a no-go. Governance here requires a human in the loop as non-negotiable. It also requires strict permission mapping and ensuring data is not exposed more than necessary.
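The triage-versus-ICU example can be sketched as a context-dependent policy check. This is a hypothetical illustration, not a real governance product: the policy table, field names, and `authorize` function are all assumptions, shown only to make "permission mapping plus human in the loop" tangible.

```python
# Per-context guardrails for agents calling the same patient API.
POLICIES = {
    "triage": {"autonomous": True,  "fields": {"name", "insurance_status"}},
    "icu":    {"autonomous": False, "fields": {"heart_rate", "breathing_rate"}},
}

def authorize(context, requested_fields, human_approved=False):
    """Allow an agent call only within the context's guardrails:
    some contexts forbid autonomous decisions, and every context
    limits which fields may be exposed."""
    policy = POLICIES[context]
    if not policy["autonomous"] and not human_approved:
        return False, "human-in-the-loop approval required"
    excess = set(requested_fields) - policy["fields"]
    if excess:
        return False, f"fields not permitted in this context: {sorted(excess)}"
    return True, "ok"
```

With this shape, an agent screening at triage proceeds autonomously, while the same agent in an ICU context is blocked until a human approves, and any request for fields outside the context's mapping is refused.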
Governance is exploding in complexity, and the meaning of governance itself is changing. AI governance, API governance, and integration governance are now mixing together. The protocol is well designed and evolving, but governance frameworks for AI agents are still developing. The main hurdle is linking API governance with AI governance.
I was discussing with a CISO the other day who was worried about employees setting up shadow MCP servers. What would be your advice to that CISO to detect this in an enterprise?
AI moves fast, and doing bad things with AI is just as fast. Another shift is that AI now allows almost anyone to vibe code. I am not saying it is right or wrong, but people can now create code and integrations without really understanding what they are doing. This is a huge risk. Someone can build a “cool” feature with AI, link it to a production integration, and suddenly a company is exposed while the CISO does not even know it happened.
Everyone wants to experiment with AI, but if you do not fully understand what you are putting into production and its implications, the answer is simple: stop. AI makes great demos, but enterprise-level software is not a demo. Without discipline, it can lead to lawsuits. Train people first, because the biggest risk is untrained users who want to play with AI without knowing the consequences. When it comes to software and guardrails, shadow MCP and shadow APIs exist. We have tooling to deal with shadow APIs and are evolving tools to avoid shadow MCP servers, prompt injection, and related issues. But everything is still new. The key is to know where implementations are happening. Choose platforms built for enterprise-level software. Yes, this may be slower, but slower and safe is better than fast and reckless.
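One practical starting point for the shadow MCP problem is the same approach used for shadow APIs: keep a registry of sanctioned servers and diff it against what gateways or network scans actually observe. The sketch below is illustrative only; the endpoint names are invented, and real discovery would come from gateway logs or network tooling rather than a hard-coded set.

```python
def find_shadow_servers(registered, observed):
    """Return endpoints seen in traffic but absent from the sanctioned
    registry, sorted for stable reporting."""
    return sorted(set(observed) - set(registered))

# Hypothetical data: what the platform team sanctioned vs what was observed.
registered = {"mcp://accounts.internal", "mcp://support.internal"}
observed = {"mcp://accounts.internal", "mcp://support.internal",
            "mcp://vibe-coded-demo.internal"}

shadow = find_shadow_servers(registered, observed)  # flags the unsanctioned server
```

The value is less in the set arithmetic than in the discipline it forces: you cannot diff against a registry you never built, which is why knowing where implementations are happening comes first.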
One simple question always cuts through the noise: is this AI functionality actually going to help the business? Put ROI, cost, risk, data, and training on paper. When you do the math with enterprise standards in mind, you find what will really work. That is the path to success.
“The biggest risk is untrained users who want to play with AI without knowing the consequences.”
The concept of “autonomous integration” sounds like something out of a sci-fi movie. Can you explain Agent-to-Agent (A2A) communication in simple terms, and what’s the most exciting real-world application you’re seeing right now?
In practice, you will have different agents with different responsibilities. A fraud detection agent cannot serve as customer support. But imagine a scenario where a customer support agent is handling a complaint about suspected credit card fraud. That agent must communicate with the fraud detection agent to retrieve the necessary data. This is where agent-to-agent communication comes in; it allows agents to talk to each other effectively, in a machine-readable way, without friction.
The concept is simple: agents need to communicate. But the challenge lies in governance. How do you observe their communication? How do you ensure the conversation is accurate? And how do you make sure those agents do not run wild? That is where guardrails come in.
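The support-to-fraud hand-off described above can be sketched as structured, auditable messages between agents. This is a toy illustration, not the actual A2A protocol: the message shape, task names, and case ID are all hypothetical, chosen to show why structured messages make observation and guardrails possible.

```python
def a2a_request(sender, receiver, task, payload):
    """Build a structured inter-agent message; because it is structured
    rather than free-form text, it can be logged, observed, and policed."""
    return {"from": sender, "to": receiver, "task": task, "payload": payload}

def fraud_agent_handle(message):
    """The fraud-detection agent answers only scoped fraud queries; it
    refuses tasks outside its responsibility (a simple guardrail)."""
    if message["task"] != "check_card_fraud":
        return {"status": "rejected",
                "reason": "task outside this agent's responsibility"}
    return {"status": "ok", "suspicious": True, "case_id": "F-1042"}

# The support agent asks the fraud agent about a suspicious card:
msg = a2a_request("support-agent", "fraud-agent", "check_card_fraud",
                  {"card_last4": "4242"})
reply = fraud_agent_handle(msg)
```

Observation and guardrails hang off exactly these structured fields: you can log every `from`/`to`/`task` triple, and reject tasks an agent was never meant to perform.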
What’s the biggest security threat introduced by autonomous, agent-to-agent communication, and could A2A communication become the new insider threat vector if not governed properly?
MCP opens your front door, while A2A opens your side door. MCP is public and visible, so it carries external risks. However, A2A exposes internal threats. Without proper governance and guardrails, an internal agent could go wild, sharing user data with another user and causing a leakage. That is why a human in the loop is non-negotiable.
I don’t always see this due diligence; some believe an agent can do everything and even replace support teams. But agents can invent programs or misrepresent systems. Imagine a credit card company with a clear rewards structure. If an agent tells a customer the rewards work differently, a lawsuit could follow. When AI agents are used for training and deliver wrong information, employees may follow rules that do not exist, and the result is confusion, misconceptions, and unnecessary risk.
What is one controversial opinion you hold about the future of API management in the age of AI?
When a customer says they want to implement AI, the first question I ask is: how are your APIs? Often, they start talking about apps. This is the first glitch. We have a lot of APIs in place, and they are not going anywhere. Now that we are talking about AI, API management will no longer focus only on APIs. Concepts like AI gateways are becoming essential, not just for traditional APIs but also for LLM APIs. This leads to a common misconception. AI gateways are not just about AI. They still deal with APIs, but at a different layer. Autonomous agents consume A2A, MCP links the deterministic to the non-deterministic, and all of this requires security and governance. A key question we face is how to guarantee that an agent has the correct permissions. Humans can be authenticated and authorized with credentials, but how will we authenticate and authorize agents?
The shift is already happening: AI governance is mixing with API governance, and common practices like point-to-point integration make no sense today. If you are still doing it, you have no visibility into what is happening. A simple example: if ten systems must work together through point-to-point integration, just do the math. It becomes impossible to manage.
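Doing the math for the ten-system example is straightforward: point-to-point integration requires a link for every pair of systems, which grows quadratically, while routing through a managed hub or gateway needs only one link per system. A quick sketch:

```python
def point_to_point_links(n):
    """Direct integrations: every pair of systems needs its own link."""
    return n * (n - 1) // 2

def hub_links(n):
    """Managed hub/gateway: each system connects once, to the hub."""
    return n

# For the ten systems in the example: 45 direct links to build, secure,
# and monitor, versus 10 governed connections through a hub.
```

At ten systems the gap is 45 versus 10; at fifty systems it is 1,225 versus 50, which is why unmanaged point-to-point integration stops being a style choice and becomes unmanageable.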
Do you think regulators will eventually require companies to adopt governed APIs for AI accountability, just like financial audits today?
In my perspective, regulations are inevitable. They will come, just as PCI compliance, ISO certifications, and data privacy laws did. Even with AI, PII and PCI data remain PII and PCI data. If a breach comes from an AI tool, it is still a breach. Regulation for AI will be inevitable because, like with all technology, we cannot let everything go wild. We need rules and guardrails to ensure we build properly and to help adoption.
Right now, everybody is somewhat lost. Nobody can say with certainty that this is the one way to implement AI. Regulations will help by setting guardrails, clarifying legal responsibilities, and establishing accountability.
If something goes wrong with a model, who is responsible—the builder, the implementer, or the user? Regulations will not hinder progress; they will clear the fog and guide safe, responsible implementation.
How should organizations restructure their teams, not just their technology, to successfully implement an API-First AI strategy?
I always say that every problem eventually is an integration problem. Take any business issue. Whether it is building a new app or adding AI functionality, the real issue is where the data is coming from and going to. Businesses often assume integration is solved by using the cloud, but it is not. You must think about integrations and APIs in a proper manner because they can either make your life easier or frustrate it entirely. A simple example is the omni-channel perspective. A client expects the same experience across every channel, whether on a website, in a shopping mall, through a web app, or on a mobile app. Yet many companies still fail at this. This is because most of them do not think about integrations. They let systems grow wild, developing one thing in one way and another in a different way. The result is systems that do not talk to each other.
Now place this in the AI era: it is still the same problem, but amplified. How many agents will you have? What will the experience look like? Where will the data come from and where will you see it? From an observability perspective, when an agent breaks at some point, how will you debug it? How will you see it? Who will be alerted?
But Filipe, if you had to project five years into the future, what, in your opinion, is the single biggest disruption AI will cause in the API management space?
This concept of AI, APIs, and integrations will only grow more intricate. We are going to see more use cases that require additional integration layers, abstractions, and protocols. The market is already disrupted, but it will continue to evolve as this new AI conception, with autonomous agents and related technologies, demands new ways of thinking about integrations, integration governance, and APIs.
Simply saying you have API management in place is no longer enough, because API management itself is evolving. The future will be more intricate, more complex, and more exciting. The non-deterministic world will coexist with the deterministic one without replacing it. The question is how and when this balance will happen, and the truth is that nobody really knows.
What is the one thing every company should be doing today to prepare for a world of autonomous, real-time AI systems?
Again, pay attention to your integrations. It is not only because we are an API management company; from my experience working in many companies, integrations are becoming even more important. Today, data flows and integrations sit at the core of how businesses operate.
In the age of AI, will human developers become more important or less important in the integration process?
Human developers are more important than ever. Those who say software development is going to end are mistaken. If you deep dive into AI, you see it repeats the past; it does not build the future. Humans build the future: we feed AI with the past, so it can only give back the past, something already done and built.
When you ask AI to create a new book, it is not completely new. You can see the references it uses to build the book. Developers are not going anywhere. What will change is the busy work, the plumbing, and the non-functional requirements. These tasks, while necessary today, will become easier and faster with AI. That speed will free developers to focus on what truly matters: building business capabilities.