Why OCI Dedicated Region Is the Missing Piece for Agentic Workloads

In my last blog post, I explored how OCI Dedicated Region helps enterprises retrofit AI workloads into their existing data centers. We discussed how bringing Oracle’s cloud infrastructure on-premises addresses challenges such as GPU availability, latency, and data sovereignty, thereby removing many barriers to AI adoption.

Today, I want to take this further and explore the next wave of AI evolution: agentic AI, which not only responds to prompts but also takes autonomous action. This isn’t just about having powerful models; it’s about embedding intelligence where it counts most: right next to your critical legacy systems.

The Rise of Agentic AI and Why It’s Different

Agentic AI represents a shift from passive AI tools to systems that can observe, decide, and act independently. Imagine AI agents that don’t just answer questions but manage workflows, orchestrate cloud resources, or automate incident response. This means giving AI the ability to interact with APIs, monitor real-time data streams, and adjust systems dynamically without human intervention.
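
To make this concrete, here is a minimal sketch of such an observe-decide-act loop in Python. The endpoint URLs, metric name, and threshold are hypothetical placeholders, and a real agent would of course sit behind proper authentication and guardrails.

```python
import json
import time
import urllib.request

METRICS_URL = "http://localhost:8080/metrics"    # hypothetical monitoring endpoint
REMEDIATE_URL = "http://localhost:8080/restart"  # hypothetical remediation API

def observe() -> dict:
    """Poll a monitoring endpoint for the current system state."""
    with urllib.request.urlopen(METRICS_URL, timeout=5) as resp:
        return json.load(resp)

def decide(state: dict) -> bool:
    """Simple policy: act once the error rate crosses a threshold."""
    return state.get("error_rate", 0.0) > 0.05

def act() -> None:
    """Trigger remediation through an API call instead of paging a human."""
    req = urllib.request.Request(REMEDIATE_URL, method="POST")
    urllib.request.urlopen(req, timeout=5)

if __name__ == "__main__":
    while True:
        if decide(observe()):
            act()
        time.sleep(10)  # polling interval in seconds
```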

The challenge? Most organizations’ critical data and applications still live in legacy platforms or tightly controlled environments. These environments were never built with autonomous AI in mind. Simply putting agentic AI in the public cloud and hoping it will integrate smoothly is not realistic. The physical and architectural distance creates latency, security risks, and compliance headaches that slow down adoption.

Legacy Systems and the Limits of Retrofitting

In my previous article, I described how OCI Dedicated Region helps organizations retrofit their existing infrastructure to support AI workloads by providing cloud-native GPU compute and AI services on-premises. While this approach is a game changer for many pilot projects and inference jobs, agentic AI demands something more foundational.

Agentic AI needs to be deeply integrated into the operational fabric of an enterprise. It requires direct, low-latency connections to databases, enterprise resource planning systems, and the mission-critical applications that govern day-to-day business. Integrating AI compute into existing infrastructure is a good first step, but it frequently results in complicated network and security setups that raise operational risk.

Beyond Retrofit – OCI Dedicated Region as a Fully Integrated AI Platform

OCI Dedicated Region is not just an add-on for AI; it’s a cloud region deployed inside your data center, delivering the same cloud services and infrastructure as Oracle’s public cloud, but physically under your control. This means you get a fully operational cloud region with high-performance computing, GPU acceleration, storage, networking, and AI services—all seamlessly integrated and ready to connect with your existing systems.

This is a fundamental shift. Instead of adapting your legacy environment to AI, you now place a full cloud region right next to your workloads. The AI agents you deploy can access real-time data, interact with legacy applications through native APIs, and operate within your strict security and compliance boundaries.

This proximity eliminates the latency and trust issues that come with remote public cloud AI deployments. It also reduces the need for complex VPNs or data synchronization layers, making agentic AI not just possible but practical.

Why Proximity Matters for Autonomous AI

Agentic AI thrives on context and immediacy. The closer it is to the systems it manages, the better decisions it can make and the faster it can act. For instance, if an AI agent detects a fault in a manufacturing control system or a spike in financial transaction anomalies, it must respond quickly to minimize disruption.

Running these AI systems in a public cloud region thousands of miles away adds delays and potential security risks, which can be unacceptable in regulated industries or mission-critical environments. OCI Dedicated Region removes those barriers by bringing the cloud to you.

By combining cloud agility with on-premises control, you get a hybrid environment where agentic AI can operate with the speed, reliability, and security enterprises demand.

The Strategic Advantage of OCI Dedicated Region

Most organizations aren’t looking for AI experiments; they want to operationalize AI at scale and embed it within their core processes. OCI Dedicated Region provides the infrastructure foundation to do just that.

It offers enterprise-ready cloud services inside your data center, enabling agentic AI to interact naturally with legacy systems without requiring costly or risky migrations. This means AI-powered automation, orchestration, and decision-making become achievable realities instead of distant goals.

If you want to move beyond retrofitting and truly modernize your AI journey, keeping the cloud close to your data, and your data close to the cloud, is essential. OCI Dedicated Region delivers exactly that.

Retrofitting AI Workloads with OCI Dedicated Region

As AI adoption becomes a strategic priority across nearly every industry, enterprises are discovering that scaling these workloads isn’t as simple as adding more GPUs to their cloud bill. Public cloud platforms like AWS and Azure offer extensive AI infrastructure, but many organizations are now facing steep costs, unpredictable pricing models, and growing concerns about data sovereignty, compliance, long-term scalability, and operational complexity. There are also physical challenges: most enterprise data centers were never designed for the high power, cooling, and interconnect demands of AI infrastructure.

A typical GPU rack can draw between 40 and 100 kW, far beyond the 5-10 kW that traditional racks can handle. Retrofitting a legacy data center to support such density requires high-density power delivery, advanced cooling, reinforced flooring, low-latency networking, and highly parallel storage systems. The investment often ranges from $4-8M per megawatt for retrofits and up to $15M for greenfield builds. Even with this capital outlay, organizations still face integration complexity, deployment delays, and fragmented operations.
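
As a rough illustration of what these figures imply, here is a back-of-the-envelope calculation in Python; the 2 MW capacity target is an assumed example, and the per-rack and per-megawatt ranges are simply the numbers quoted above.

```python
# Back-of-the-envelope sizing based on the figures quoted above.
target_mw = 2.0                       # assumed AI capacity target (hypothetical)
rack_kw_min, rack_kw_max = 40, 100    # per-rack power draw range
cost_per_mw_min, cost_per_mw_max = 4e6, 8e6  # retrofit cost range in USD

max_racks = int(target_mw * 1000 / rack_kw_min)  # lighter racks, more of them
min_racks = int(target_mw * 1000 / rack_kw_max)  # denser racks, fewer of them

print(f"{target_mw:.0f} MW supports roughly {min_racks}-{max_racks} GPU racks")
print(f"Retrofit capex estimate: ${target_mw * cost_per_mw_min / 1e6:.0f}M"
      f"-${target_mw * cost_per_mw_max / 1e6:.0f}M")
```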

This creates a challenging question: how can enterprises gain the agility, scale, and services of the public cloud for AI, without incurring its spiraling costs or rebuilding their entire infrastructure?

Oracle Cloud Infrastructure (OCI) Dedicated Region presents a compelling answer. It delivers the full OCI public cloud experience, including GPU compute, AI services, and cloud-native tooling, within your own data center. Oracle operates and manages the region, while you maintain full control. The result: public cloud performance and capabilities, delivered on-premises, without the compromises.

The Infrastructure Challenge of AI at Scale

AI workloads are no longer experimental; they are driving real business impact. Whether it’s training foundation models, deploying LLMs, or powering advanced search capabilities, these workloads require specialized infrastructure.

Unlike traditional enterprise IT, AI workloads place massive demands on power density, cooling, networking, and storage. GPU racks housing Nvidia H100 or A100 units can exceed 100 kW. Air cooling becomes ineffective, and liquid or hybrid cooling systems become essential. High-throughput, low-latency networks, such as 100/400 Gbps Ethernet or InfiniBand, are needed to connect compute clusters efficiently. AI workloads also rely heavily on large datasets and require high-bandwidth storage located close to compute.

In many enterprise data centers, this level of performance is simply out of reach. The facilities can’t provide the power or cooling, the racks can’t carry the weight, and the legacy networks can’t keep up.

The High Cost of Retrofitting for AI

For organizations considering bringing AI workloads back on-premises to manage costs, retrofitting is often seen as the obvious next step. But it rarely delivers the value expected.

Upgrading power infrastructure alone demands new transformers, PDUs, backup systems, and complex energy management. Cooling must shift from traditional air-based systems to liquid cooling loops or immersion techniques, requiring structural and spatial changes. Standard enterprise racks are often too lightweight or too densely packed for GPU servers, which can weigh over a ton each. Existing data center floors may need reinforcement.

Meanwhile, storage and networking systems must evolve to support I/O-intensive workloads. Parallel file systems, NVMe arrays, and tightly coupled fabrics are all essential, but rarely available in legacy environments. On top of that, most traditional data centers lack the cloud-native software stack needed for orchestration, security, observability, and automation.

Retrofits cost $4-8M per megawatt; a greenfield build costs $11-15M per megawatt. These figures exclude operational overhead, integration timelines, training, and change management. For many organizations, this is a non-starter.

OCI Dedicated Region – A True Public Cloud in Your Data Center

OCI Dedicated Region sidesteps these challenges. Oracle delivers a complete public cloud region, fully managed and operated by Oracle, inside your own facility. You get all the same infrastructure, services, and APIs as OCI’s public regions, with no loss of capability.

This includes GPU-accelerated compute (with a broad choice of Nvidia GPUs), AI services (like Data Science, Generative AI, and Vector Search), high-performance block and object storage, Oracle Autonomous Database, Exadata, analytics, low-latency networking, and full DevOps toolchains.

You also benefit from service mesh, load balancing, Kubernetes (OKE), serverless, observability, and zero-trust security services. From a developer perspective, it’s the same OCI experience – tools, SDKs, Terraform modules, and management consoles all work identically.
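
To show what this parity means in practice, here is a minimal sketch using the standard OCI Python SDK; the DEDICATED_REGION profile name is a hypothetical entry in ~/.oci/config pointing at the dedicated region’s endpoint, and the same code would run unchanged against a public region.

```python
import oci

# The same code path works against a public region or a Dedicated Region;
# only the profile (region/endpoint) in ~/.oci/config differs.
config = oci.config.from_file(profile_name="DEDICATED_REGION")  # hypothetical profile

identity = oci.identity.IdentityClient(config)
ads = identity.list_availability_domains(config["tenancy"]).data

for ad in ads:
    print(ad.name)  # availability domains inside your own data center
```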

Importantly, data locality and sovereignty remain fully under your control. You manage access policies, audit trails, physical security, and compliance workflows.

Shifting from Capital Investment to Operational Efficiency

OCI Dedicated Region transforms infrastructure investment into an operating model. Rather than pouring capital into facilities, power systems, and integration, enterprises consume cloud resources on a predictable subscription basis. This eliminates hidden costs. No GPU spot market pricing, no surprise egress fees, no peak-hour surcharges.

Deployment is significantly faster compared to building or retrofitting infrastructure. Oracle delivers the region as a turnkey service, with pre-integrated compute, storage, AI, networking, and security. This minimizes integration complexity and accelerates time to value.

Operations are also simplified. OCI Dedicated Region maintains service parity with public OCI, which means your teams don’t need to adapt to different environments for hybrid or multi-cloud strategies. Everything runs on a consistent stack, which reduces friction and operational risk.

This model is particularly well-suited to highly regulated industries that require absolute control over data and infrastructure without losing access to modern AI tools.

Built for the Future of AI

OCI Dedicated Region supports a broad range of next-generation AI architectures and operational models. It enables federated AI, edge inference, and hybrid deployment strategies, allowing enterprises to place workloads where they make the most sense, without sacrificing consistency.

For instance, organizations can run real-time inference close to data sources at the edge (for example with Oracle Compute Cloud@Customer connected to your OCI Dedicated Region), while managing training and orchestration centrally. Workloads can burst into the public cloud when needed, leveraging OCI’s public regions without migrating entire stacks. Container-based scaling through Kubernetes ensures policy-driven elasticity and workload portability.
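
One way to picture such a policy-driven placement model is a simple decision function; the rules and target names below are illustrative assumptions, not an Oracle API.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_sensitive: bool  # needs real-time inference next to the data source
    data_residency: bool     # must the data stay inside your facility?
    burstable: bool          # may it overflow into a public region?

def place(w: Workload) -> str:
    """Illustrative placement rules for the hybrid topology described above."""
    if w.latency_sensitive:
        return "edge"              # e.g. Compute Cloud@Customer at the data source
    if w.data_residency:
        return "dedicated-region"  # stays on-premises with full service parity
    if w.burstable:
        return "public-region"     # burst capacity without migrating the stack
    return "dedicated-region"

print(place(Workload("vision-inference", True, True, False)))   # edge
print(place(Workload("model-training", False, True, False)))    # dedicated-region
print(place(Workload("batch-scoring", False, False, True)))     # public-region
```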

As power and cooling demands continue to rise, most enterprise data centers will be unable to keep pace. OCI Dedicated Region is designed to absorb these demands, both technically and operationally.

Conclusion – Cloud Economics and Control Without Compromise

AI is quickly becoming a core part of enterprise infrastructure, and it’s exposing the limitations of both traditional data centers and conventional cloud models. Public cloud offers scale and agility, but often at unsustainable cost. On-prem retrofits are slow, expensive, and hard to manage.

OCI Dedicated Region offers a balanced alternative. It provides a complete cloud experience, GPU-ready and AI-optimized, within your own facility. You get the innovation, scale, and flexibility of public cloud, without losing control over data, compliance, or budget.

If your cloud bills are climbing and your infrastructure can’t keep up with the pace of AI innovation, OCI Dedicated Region is worth a serious look.

The State of Application Modernization 2025

Every few weeks, I find myself in a conversation with customers or colleagues where the topic of application modernization comes up. Everyone agrees that modernization is more important than ever. The pressure to move faster, build more resilient systems, and increase operational efficiency is not going away.

But at the same time, when you look at what has actually changed since 2020… it is surprising how much has not.

We are still talking about the same problems: legacy dependencies, unclear ownership, lack of platform strategy, organizational silos. New technologies have emerged, sure. AI is everywhere, platforms have matured, and cloud-native patterns are no longer new. And yet, many companies have not even started building the kind of modern on-premises or cloud platforms needed to support next-generation applications.

It is like we are stuck between understanding why we need to modernize and actually being able to do it.

Remind me, why do we need to modernize?

When I joined Oracle in October 2024, some people reminded me that most of us do not know why we are where we are. One could argue that this is not important to know. In my opinion, it very much is. Something fundamentally changed along the way that led us to our current situation.

In the past, when we moved from physical servers to virtual machines (VMs), apps did not need to change. You could lift and shift a legacy app from bare metal to a VM and it would still run the same way. The platform changed, but the application did not care. It was an infrastructure-level transformation without rethinking the app itself. So the physical-to-virtual (P2V) transition of an application was smooth and uncomplicated.

But now? The platform demands change.

Cloud-native platforms like Kubernetes, serverless runtimes, or even fully managed cloud services do not just offer a new home. They offer a whole new way of doing things. To benefit from them, you often have to re-architect how your application is built and deployed.

That is the reason why enterprises have to modernize their applications.

What else is different?

User expectations, business needs, and competitive pressure have exploded as well. Companies need to:

  • Ship features faster
  • Scale globally
  • Handle variable load
  • Respond to security threats instantly
  • Reduce operational overhead

A Quick Analogy

Think of it like this: moving from physical servers to VMs was like transferring your VHS tapes to DVDs. Same content, just a better format.

But app modernization? That is like going from DVDs to Netflix. You do not just change the format, but you rethink the whole delivery model, the user experience, the business model, and the infrastructure behind it.

Why Is Modernization So Hard?

If application modernization is so powerful, why has not everyone done it already? The truth is, it is complex, disruptive, and deeply intertwined with how a business operates. Organizations often underestimate how much effort it takes to replatform systems that have evolved over decades. Here are six common challenges companies face during modernization:

  1. Legacy Complexity – Many existing systems are tightly coupled, poorly documented, and full of business logic buried deep in spaghetti code. 
  2. Skill Gaps – Moving to cloud-native tech like Kubernetes, microservices, or DevOps pipelines requires skills many organizations do not have in-house. Upskilling or hiring takes time and money.
  3. Cultural Resistance – Modernization often challenges organizational norms, team structures, and approval processes. People do not always welcome change, especially if it threatens familiar workflows.
  4. Data Migration & Integration – Legacy apps are often tied to on-prem databases or batch-driven data flows. Migrating that data without downtime is a massive undertaking.
  5. Security & Compliance Risks – Introducing new tech stacks can create blind spots or security gaps. Modernizing without violating regulatory requirements is a balancing act.
  6. Cost Overruns – It is easy to start a cloud migration or container rollout only to realize the costs (cloud bills, consultants, delays) are far higher than expected.

Modernization is not just a technical migration. It’s a transformation of people, process, and platform (technology). That is why it is hard and why doing it well is such a competitive advantage!

Technical Debt Is Also Slowing Things Down

Also known as the silent killer of velocity and innovation: technical debt

Technical debt is the cost of choosing a quick solution now instead of a better one that would take longer. We have all seen/done it. 🙂 Sometimes it is intentional (you needed to hit a deadline), sometimes it is unintentional (you did not know better back then). Either way, it is a trade-off. And just like financial debt, it accrues interest over time.

Here is the tricky part: technical debt usually doesn’t hurt you right away. You ship the feature. The app runs. Management is happy.

But over time, debt compounds:

  • New features take longer because the system is harder to change

  • Bugs increase because no one understands the code

  • Every change becomes risky because there is no test safety net

Eventually, you hit a wall where your team is spending more time working around the system than building within it. That is when people start whispering: “Maybe we need to rewrite it.”  Or they just leave your company.

Let me say it: Cloud Can Also Introduce New Debt

Cloud-native architectures can reduce technical debt, but only if used thoughtfully.

You can still:

  • Over-complicate microservices

  • Abuse Kubernetes without understanding it

  • Ignore costs and create “cost debt”

  • Rely on too many services and lose track

Use the cloud to eliminate debt by simplifying, automating, and replacing legacy patterns, not just lifting them into someone else’s data center.

It Is More Than Just Moving to the Cloud 

Modernization is about upgrading how your applications are built, deployed, run, and evolved, so they are faster, cheaper, safer, and easier to change. Here are some core areas where I have seen organizations make real progress:

  • Improving CI/CD. You can’t build modern applications if your delivery process is stuck in 2010.
  • Data Modernization. Migrate from monolithic databases to cloud-native, distributed ones.
  • Automation & Infrastructure as Code. It is the path to resilience and scale.
  • Serverless Computing. It is the “don’t worry about servers” mindset and ideal for many modern workloads.
  • Containerizing Workloads. Containers are a stepping stone to microservices, Kubernetes, and real DevOps maturity.
  • Zero-Trust Security & Cybersecurity Posture. One of the biggest priorities at the moment.
  • Cloud Migration. It is not about where your apps run; it is about how well they run there. “The cloud” should make you faster, safer, and leaner.
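
As promised above, here is a toy sketch of the declarative, compare-and-converge idea behind Infrastructure as Code; the resource names and replica counts are hypothetical, and real tools such as Terraform apply the same principle to cloud resources.

```python
# Toy reconciliation loop: compare declared state with observed state and
# compute only the actions needed to converge, the core idea behind IaC.
desired = {"web": 3, "worker": 5}              # declared replica counts (hypothetical)
observed = {"web": 3, "worker": 2, "cron": 1}  # what is actually running

def reconcile(desired: dict, observed: dict) -> list[str]:
    actions = []
    for name, count in desired.items():
        delta = count - observed.get(name, 0)
        if delta > 0:
            actions.append(f"scale {name} up by {delta}")
        elif delta < 0:
            actions.append(f"scale {name} down by {-delta}")
    for name in observed.keys() - desired.keys():
        actions.append(f"delete {name}")  # running but not declared, so remove it
    return actions

print(reconcile(desired, observed))
# ['scale worker up by 3', 'delete cron']
```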

As you can see, application modernization is not one thing; it is many things. You do not have to do all of these at once. But if you are serious about modernizing, these points (and more) must be part of your blueprint. Modernization is a mindset.

Why (replatforming) now?

There are a few reasons why application modernization projects are increasing:

  • The maturity of cloud-native platforms: Kubernetes, managed databases, and serverless frameworks have matured to the point where they can handle serious production workloads. It is no longer “bleeding edge”.
  • DevOps and Platform Engineering are mainstream: We have shifted from siloed teams to collaborative, continuous delivery models. But that only works if your platform supports it.
  • AI and automation demand modern infrastructure: To leverage modern AI tools, event-driven data, and real-time analytics, your backend can’t be a 2004-era database with a web front-end duct-taped to it.

Conclusion

There is no longer much debate: (modern) applications are more important than ever. Yet despite all the talk around cloud-native technologies and modern architectures, the truth is that many organizations are still trying to catch up and work hard to modernize not just their applications, but also the infrastructure and processes that support them.

The current progress is encouraging, and many companies have learned from the experience of their first modernization projects.

One thing that is becoming harder to ignore is how much the geopolitical situation is starting to shape decisions around application modernization and cloud adoption. Concerns around data sovereignty, digital borders, national cloud regulations, and supply chain security are no longer just legal or compliance issues. They are shaping architecture choices.

Some organizations are rethinking their cloud and modernization strategies, looking at multi-cloud or hybrid models to mitigate risk. Others are delaying cloud adoption due to regional uncertainty, while a few are doubling down on local infrastructure to retain control. It is not just about performance or cost anymore, but also about resilience and autonomy.

The global context (suddenly) matters, and it is influencing how platforms are built, where data lives, and who organizations choose to partner with. If anything, it makes the case even stronger for flexible, portable, cloud-native architectures, so you are not locked into a single region or provider.

Sovereign Clouds and Sovereign AI

The concept of sovereign AI is gaining traction. It refers to artificial intelligence systems that are designed, deployed, and managed within the borders and legal frameworks of a specific nation or region. As the global reliance on AI intensifies, the necessity to establish control over these powerful systems has become paramount for governments, businesses, and citizens alike. The rise of sovereign AI is not only a technological shift – it is a rethinking of data sovereignty, privacy, and national security.

Data Sovereignty

Sovereign AI offers a solution by ensuring that data, AI models, and the insights derived from them remain within the jurisdiction of the entity that owns them. This is particularly critical for industries such as finance, healthcare, defense, and public administration, where data is not only sensitive but also strategically important.

However, achieving data sovereignty is not without challenges. Traditional cloud and data management systems often involve storing and processing data across multiple jurisdictions, making it difficult to ensure compliance with local laws. We need to ensure that data remains within a specific legal framework, while still benefiting from the scalability, performance, and flexibility of cloud-based solutions.

Building the Infrastructure for Sovereign AI

Does sovereign AI imply that enterprises and national clouds need to be truly “sovereign”? I am not so sure about that, but the underlying infrastructure must be robust, secure, and compliant with local regulations. IT teams and CIOs need to think about the deployment of cloud and data management solutions that are tailored to the needs of specific regions, ensuring data sovereignty is maintained without sacrificing the benefits of cloud computing:

  • Localized Cloud Infrastructure: Think about an infrastructure that ensures data does not leave the country’s borders. Private data centers in different regions must offer the same level of performance, security, and availability.
  • Data Security: Here we talk about end-to-end encryption, access controls, and continuous monitoring to prevent unauthorized access (see the sketch after this list).
  • Compliance: Infrastructure must be built with compliance in mind, which means adhering to local laws and regulations regarding data protection, privacy, and AI ethics.
  • Interoperability and Integration: The goal here is to achieve a balance between control and an adaptable cloud infrastructure.
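
As referenced in the list above, here is a minimal sketch of encrypting data with a key that never leaves your jurisdiction; it assumes the widely used cryptography package, and the record contents are fabricated.

```python
# Minimal illustration of keeping data encrypted with a key you control.
# Assumes the third-party "cryptography" package (pip install cryptography);
# in practice the key would live in an HSM or a local key-management service.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # generated and stored inside your jurisdiction
f = Fernet(key)

record = b"record-id=4711; classification=sensitive"  # fabricated example data
token = f.encrypt(record)    # ciphertext is safe to store on shared infrastructure

print(f.decrypt(token) == record)  # True: only the local key holder can read it
```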

Use AI to provide Artificial Intelligence

What happened to compute, storage, and networking is happening with data (management) as well. We see different vendors enhancing their platform and database offerings with artificial intelligence. The basic idea is to use AI and machine learning to provide automation, which increases speed and security.

Think about self-optimizing, intelligence-driven databases and systems that use AI to monitor performance, identify bottlenecks, and make adjustments without human intervention, ensuring that data is always available and secure. Think also of systems and databases that automatically detect and respond to threats based on anomaly detection.
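
A simple statistical check illustrates the principle behind such anomaly detection; the latency samples below are fabricated, and production systems would use far richer models.

```python
import statistics

def is_anomaly(history: list[float], value: float, threshold: float = 3.0) -> bool:
    """Flag a metric sample whose z-score against recent history exceeds the threshold."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(value - mean) / stdev > threshold

# Fabricated query-latency samples in milliseconds, followed by a suspicious spike.
baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 12.0]
print(is_anomaly(baseline, 12.5))  # False: within normal variation
print(is_anomaly(baseline, 45.0))  # True: trigger an automated response
```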

Balancing Innovation with Responsibility

One of the key challenges of sovereign AI is finding the right balance between innovation and responsibility (not only regulation). While it is important to protect data and ensure compliance with local laws, it is also essential that AI systems remain sustainable, innovative and able to leverage global advancements. 

Large Language Models (LLMs) have become a cornerstone of modern AI, enabling machines to generate human-like text, understand natural language, and perform a wide array of tasks from translation to summarization. These models, built on huge datasets and advanced neural architectures, represent a significant leap in AI capabilities. However, the creation and deployment of LLMs come with substantial costs in terms of time, financial investment, and environmental impact.

Green AI initiatives are one promising approach, focusing on reducing the environmental impact of AI development by using renewable energy sources, designing energy-efficient infrastructures, and promoting transparency around the energy consumption and carbon footprint of AI models. Collaboration and open research are also key, allowing the AI community to share resources, reduce duplication of effort, and accelerate the development of more efficient and sustainable models.

Conclusion

Recent trends indicate a growing emphasis on localized cloud infrastructure, where providers are building new data centers within national borders to comply with data sovereignty laws. This trend is driven by a combination of factors, including the rise of GDPR-like regulations and growing concerns over foreign surveillance and cyber threats. Additionally, the Digital Operational Resilience Act (DORA), introduced by the European Union, emphasizes the need for robust digital infrastructure resilience, pushing organizations to adopt sovereign cloud solutions that can guarantee operational continuity while adhering to regulatory requirements. This involves not only the localized deployment of AI models but also the creation of AI governance frameworks that ensure transparency, accountability, and fairness.

The integration of sovereign cloud and sovereign AI will likely become a standard practice for public sector organizations and industries dealing with sensitive data. The latest advancements in edge computing, federated learning, and secure multi-party computation are further enabling this shift, allowing AI systems to process data locally while maintaining global collaboration and innovation.

Navigating the AI Buffet – Strategies and Metrics for Successful Enterprise Implementations

Artificial intelligence (AI) is gaining momentum everywhere. We see new solutions, partnerships, and even reference architectures popping up almost daily. Additionally, organizations, lawyers, and country leaders are looking for the right balance between business value and compliance needs. Without going too much into detail, I said to myself that artificial intelligence has a lot in common with cloud computing and multi-clouds. Just because it is out there everywhere, does that mean we should, or are even allowed to, use it? Organizations are going to use both public and private clouds to host their non-AI and AI workloads, but what is their strategy? How do enterprises implement and successfully manage AI-based technologies and processes in order to generate a sustainable strategy and long-term competitive advantage?

What I won’t do

So, I asked myself: What is my role in this whole (crazy) AI world? What do I need to know? What do I have to do?

First, let me tell you what I won’t or cannot do:

  • I do not have 4+ years of experience working with machine learning
  • I have no competencies to write ML code using TensorFlow, PyTorch or Keras
  • Python? No, no experience, sorry
  • I do not do data engineering either
  • I understand storage and compute, yes, but have no clue when it comes to correlating models with parameters and data
  • No, I don’t have real knowledge of Large Language Models (LLMs) or Hugging Face models
  • I do not understand a full MLOps technical stack
  • I cannot fine-tune or tweak AI models
  • No, I don’t fully understand the possibilities of confidential computing or confidential AI

All the things above? That is not me.

What are my questions?

I think most of us start at the same place. First, when this hype started, we had to figure out what AI really means, where it came from, and what types of AI exist.

After that, how did you continue? Probably like me and many others, you tried out ChatGPT, read about LLMs and generative AI (genAI). Eventually, you also tried out new plugins or tools to enhance your productivity.

A few months ago, I had a short conversation with a CTO from a large bank. A really large bank.

Guess what? He could not tell me how they are moving forward with artificial intelligence. They had not yet figured out or decided what to do in terms of data privacy and control.

Decision-Makers and Data Scientists

This conversation led me to two important questions, and I believe this is what I want to do in the next few months and coming years:

  1. What does it take to implement AI in organizations?
  2. How can the success of an AI strategy and implementation be measured?

These are the topics I want to specialize in. This is the homework I and many others need to do first. These are the conversations I want to have with my customers first before we talk about infrastructure, data, and reference architectures.

My focus

I would like to get a better understanding of how organizations plan to get value from artificial intelligence. As we had to learn with cloud computing and hybrid or multi-cloud architectures over the past decade or so, it is important to get a complete view and understanding of the opportunities and risks, as well as of the financial and organizational resources an enterprise might need.

What are the business models and frameworks one has to implement? What is a “good” strategy and how do you manage and measure that? What are the KPIs? What about feasibility and cost-effectiveness?

I want to understand the best practices and how some decision-makers have implemented a successful long-term strategy including processes, culture and technology.

I recently learned that artificial intelligence and machine learning implementations require a huge software stack. Do we really need to understand all the options and solutions from different vendors? If not, who has this knowledge? Data scientists?

Conclusion

In conclusion, the journey of implementing artificial intelligence in enterprises mirrors the experience of navigating an all-you-can-eat buffet.

I (still) have so many questions. My mission is to find answers to these questions and gather opinions, and I would not be surprised if it takes 12 to 24 months.

The history of AI is more than 70 years old, but it seems we have only just begun. While I understand that we live with AI every day now, I also want to understand how this field will develop and what comes next. What are the trends?

As enterprises continue to embrace the AI buffet, it is not just about filling plates with technology. It is about crafting a menu that satisfies the hunger for innovation and excellence.
