From Monolithic Data Centers to Modern Private Clouds

Behind every shift from old-school to new-school, there is a bigger story about people, power, and most of all, trust. And nowhere is that clearer than in the move from traditional monolithic data centers to what we now call a modern private cloud infrastructure.

A lot of people still think this evolution is just about better technology, faster hardware, or fancier dashboards. But it is not. If you zoom out, the core driver is not features or functions; it is trust in the executive vision and the willingness to break from the past.

Monolithic data centers stall innovation

But here is the problem: monoliths do not scale in a modern world (or a modern cloud). They slow down innovation, force one-size-fits-all models, and lock organizations into inflexible architectures. And as organizations grew, the burden of managing these environments became more political than practical.

The tipping point was not when better tech appeared. It was when leadership stopped trusting that the monolithic data centers with the monolithic applications could deliver what the business actually needed. That is the key. The failure of monolithic infrastructure was not technical – it was cultural.

Hypervisors are not the platform you think

Let us be clear: hypervisors are not platforms! They are just silos and one piece of a bigger puzzle.

Yes, they play a role in virtualization. Yes, they helped abstract hardware and brought some flexibility. But let us not overstate it: they do not define modern infrastructure or a private cloud. Hypervisors solve a problem from a decade ago. Modern private infrastructure is not about stacking tools; it is about breaking silos, including the ones created by legacy virtualization models.

Private Cloud – Modern Infrastructure

So, what is a modern private infrastructure? What is a private cloud? It is not just cloud-native behind your firewall. It is not just running Kubernetes on bare metal. It is a mindset.

You do not get to “modern” by chasing features or by replacing one virtualization solution with another vendor. You get there by believing in the principles of openness, automation, decentralization, and speed. And that trust has to start from the top. If your CIO or CTO is still building for audit trails and risk reduction as their north star, you will end up with another monolithic data center stack. Just with fancier logos.

But if leadership leans into trust – trust in people, in automation, in feedback loops – you get a system that evolves. Call it modern. Call it next-gen.

It was never about the technology

We moved from monolithic data centers not because the tech got better (though it did), but because people stopped trusting the old system to serve the new mission.

And as we move forward, we should remember: it is not hypervisors or containers or even clouds that shape the future. It is trust in execution, leadership, and direction. That is the real platform everything else stands on. If your architecture still assumes manual control, ticketing systems, and approvals every step of the way, you are not building a modern infrastructure. You are simply replicating bureaucracy in YAML. A modern infrastructure is about building a cloud that does not need micromanagement.

Platform Thinking versus Control

A lot of organizations say they want a platform, but what they really want is control. Big difference.

Platform thinking is rooted in enablement. It is about giving teams consistent experiences, reusable services, and the freedom to ship without opening a support ticket every time they need a VM or a namespace.

And platform thinking only works when there is trust:

  • Trust in dev teams to deploy responsibly
  • Trust in infrastructure to self-heal and scale
  • Trust in telemetry and observability to show the truth

Trust is a leadership decision. It starts when execs stop treating infrastructure as a cost center and start seeing it as a product. Something that should deliver value, be measured, and evolve.

It is easy to get distracted. A new storage engine, a new control plane, a new AI-driven whatever. Features are tempting because they are measurable. You can point at them in a dashboard or a roadmap.

But features don’t create trust. People do. The most advanced platform in the world is useless if teams do not trust it to be available, understandable, and usable. 

So instead of asking “what tech should we buy?”, the real question is:

“Do we trust ourselves enough to let go of the old way?”

Because that is what building a modern private cloud is really about.

Trust at Scale

In Switzerland, we like things to work. Predictably. Reliably. On time. With the current geopolitical situation in the world, and especially when it comes to public institutions, that expectation is non-negotiable.

The systems behind those services are under more pressure than ever. Demands are rising and talent is shifting. Legacy infrastructure is getting more fragile and expensive. And at the same time, there is this quiet but urgent question being asked in every boardroom and IT strategy meeting:

Can we keep up without giving up control?

Public sector organizations (not only in Switzerland) face a unique set of constraints:

  • Critical infrastructure cannot go down, ever
  • Compliance and data protection are not just guidelines, they are legal obligations
  • Internal IT often has to serve a wide range of users, platforms, and expectations

So, it is no surprise that many of these organizations default to monolithic, traditional data centers. The logic is understandable: “If we can touch it, we can control it.”

But here is the reality: control does not scale. And legacy does not adapt. Staying “safe” with old infrastructure might feel responsible, but it actually increases long-term risk, cost, and technical debt. There is a temptation to approach modernization as a procurement problem: pick a new vendor, install a new platform, run a few migrations, and check the box. Done.

But transformation does not work that way. You cannot buy your way out of a culture that does not trust change.

I understand, this can feel uncomfortable. Many institutions are structured to avoid mistakes. But modern IT success requires a shift from control to resilience, and resilience is not about perfection. A system is only “perfect” until it needs to adapt again.

How to start?

By now, it is clear: modern private cloud infrastructure is not about chasing trends or blindly “moving to the cloud.” It’s about designing systems that reflect what your organization values: reliability, control, and trust, while giving teams the tools to evolve. But that still leaves the hardest question of all:

Where do we start?

First, transparency. It is the first ingredient of trust. You cannot fix what you will not name.

Second, modernizing safely does not mean boiling the ocean. It means starting with a thin slice of the future.

The goal is to identify a use case where you can:

  • Show real impact in under six months
  • Reduce friction for both IT and internal users
  • Create confidence that change is possible without risk

In short, it is about finding use cases with high impact but low risk.

Third, this is where a lot of transformation efforts stall. Organizations try to modernize the tech, but keep the old permission structures. The result? A shinier version of the same bottlenecks. Instead, shift from control to guardrails. Think less about who can approve what, and more about how the system enforces good behavior by default. For example:

  • Implement policy-as-code: rules embedded into the platform, not buried in documents
  • Automate security scans, RBAC, and drift detection
  • Give teams safe, constrained freedom instead of needing to ask for access

Guardrails enable trust without giving up safety. That’s the core of a modern infrastructure (private or public cloud).
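
To make the guardrail idea concrete, here is a minimal policy-as-code sketch in Python. It is illustrative only: the region names and limits are invented, and a real platform would typically enforce such rules with a policy engine such as Open Policy Agent. The principle, though, is the same – rules evaluated automatically by the system instead of a human approving each request.

```python
# Minimal policy-as-code sketch: guardrails expressed as code and
# evaluated automatically, instead of a ticket-and-approval flow.
# Region names and limits are invented for illustration.

ALLOWED_REGIONS = {"eu-zurich-1", "eu-frankfurt-1"}
MAX_CPUS_SELF_SERVICE = 16

def check_deployment(request: dict) -> list[str]:
    """Return guardrail violations; an empty list means auto-approve."""
    violations = []
    if request.get("region") not in ALLOWED_REGIONS:
        violations.append(f"region {request.get('region')!r} is not permitted")
    if request.get("cpus", 0) > MAX_CPUS_SELF_SERVICE:
        violations.append("CPU count exceeds the self-service limit")
    if not request.get("encrypted", False):
        violations.append("storage encryption must be enabled by default")
    return violations

request = {"region": "eu-zurich-1", "cpus": 8, "encrypted": True}
violations = check_deployment(request)
print("auto-approved" if not violations else violations)
```

The point is not the specific rules but where they live: in the platform, versioned and testable, not buried in a document that someone has to remember to read.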

And lastly, make trust measurable. Not just with uptime numbers or dashboards but with real signals:

  • Are teams delivering faster?
  • Are incidents down?
  • etc.

Make this measurable, visible, and repeatable. Success builds trust. Trust creates momentum.
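
As a toy example of what “measurable” can look like, here is a small sketch. The numbers are invented; in practice they would come from your CI/CD pipeline and incident tooling:

```python
# Sketch: turning delivery and incident data into simple trust signals.
# The monthly figures below are invented for illustration.

deployments_per_month = [4, 6, 9, 12]   # last four months
incidents_per_month = [7, 6, 4, 3]

def trend(series: list[int]) -> float:
    """Percentage change from the first to the last data point."""
    return (series[-1] - series[0]) / series[0] * 100

print(f"Delivery speed: {trend(deployments_per_month):+.0f}%")  # +200%
print(f"Incidents:      {trend(incidents_per_month):+.0f}%")    # -57%
```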

Final Thoughts

IT organizations do not need moonshots. They need measured, meaningful modernization. The kind that builds belief internally, earns trust externally, and makes infrastructure feel like an asset again.

The technology matters, but how you introduce it matters even more. 

Private Cloud Autarky – You Are Safe Until The World Moves On

I believe it was 2023 when the term “autarky” first came up in my conversations with several customers who maintained their own data centers and private clouds. Interestingly, the word popped up again recently at work. Until then, I only knew it from photovoltaic systems. And it kept my mind busy for several weeks.

What is autarky?

To understand autarky in the IT world and its implications for private clouds, an analogy from the photovoltaic (solar power) system world offers a clear parallel. Just as autarky in IT means a private cloud that is fully self-sufficient, autarky in photovoltaics refers to an “off-grid” solar setup that powers a home or facility without relying on the external electrical grid or outside suppliers.

Imagine a homeowner aiming for total energy independence – an autarkic photovoltaic system. Here is what it looks like:

  • Solar Panels: The homeowner installs panels to capture sunlight and generate electricity.
  • Battery: Excess power is stored in batteries (e.g., lithium-ion) for use at night or on cloudy days.
  • Inverter: A device converts solar DC power to usable AC power for appliances.
  • Self-Maintenance: The homeowner repairs panels, replaces batteries, and manages the system without calling a utility company or buying parts. 

This setup cuts ties with the power grid – no monthly bills, no reliance on power plants. It is a self-contained energy ecosystem, much like an autarkic private cloud aims to be a self-contained digital ecosystem.

Question: which installation partner keeps enough spare parts on hand, and how many homeowners can repair the whole system by themselves?

Let’s align this with autarky in IT:

  • Solar Panels = Servers and Hardware: Just as panels generate power, servers (compute, storage, networking) generate the cloud’s processing capability. Theoretically, an autarkic private cloud requires the organization to build its own servers, similar to crafting custom solar panels instead of buying from any vendor.
  • Battery = Spares and Redundancy: Batteries store energy for later; spare hardware (e.g., extra servers, drives, networking equipment) keeps the cloud running when parts fail. 
  • Inverter = Software Stack: The inverter transforms raw power into usable energy, like how a software stack (OS, hypervisor) turns hardware into a functional cloud.
  • Self-Maintenance = Internal Operations: Fixing a solar system solo parallels maintaining a cloud without vendor support – both need in-house expertise to troubleshoot and repair everything.

Let me repeat it: both need in-house expertise to troubleshoot and repair everything. Everything.

The goal is self-sufficiency and independence. So, what are companies doing?

An autarkic private cloud might stockpile Dell servers or Nvidia GPUs upfront, but that first purchase ties you to external vendors. True autarky would mean mining silicon and forging chips yourself – impractical, just like growing your own silicon crystals for panels.

The problem

In practice, autarky for private clouds is an extreme goal. It promises maximum control – ideal for scenarios like military secrecy, regulatory isolation, or distrust of global supply chains – but it clashes with the realities of modern IT:

  • Once the last spare dies, you are done. No new tech without breaking autarky.
  • Autarky trades resilience for stagnation. Your cloud stays alive but grows irrelevant.
  • Autarky’s price tag limits it to tiny, niche clouds – not hyperscale rivals.
  • Future workloads are a guessing game. Stockpile too few servers, and you can’t expand. Too many, and you have wasted millions. A 2027 AI boom or quantum shift could make your equipment useless.

But where does this idea of self-sufficiency or sovereign operations come from nowadays? Geopolitical resilience.

Sanctions or trade wars will not starve your cloud: a private (hyperscale) cloud that answers to no one, free from external risks or influence. That is the whole idea.

What is the probability of such sanctions? Who knows… but this is a number that has to be defined for each case depending on the location/country, internal and external customers, and requirements.

If it happens, is it foreseeable, and what does it force you to do? Does it trigger a cloud-exit scenario?

I just know that if there are sanctions, any hyperscaler in your country has the same problems, no matter whether it is a public or a dedicated region. That is the blast radius. It is not only about you and your infrastructure anymore.

What about private disconnected hyperscale clouds?

When hosting workloads in the public cloud, organizations care more about data residency, regulations, and the US CLOUD Act than about autarky.

Hyperscale clouds like Microsoft Azure and Oracle Cloud Infrastructure (OCI) are built to deliver massive scale, flexibility, and performance but they rely on complex ecosystems that make full autarky impossible. Oracle offers solutions like OCI Dedicated Region and Oracle Alloy to address sovereignty needs, giving customers more control over their data and operations. However, even these solutions fall short of true autarky and absolute sovereign operations due to practical, technical, and economic realities.

A short explanation from Microsoft gives us a hint why that is the case:

Additionally, some operational sovereignty requirements, like Autarky (for example, being able to run independently of external networks and systems) are infeasible in hyperscale cloud-computing platforms like Azure, which rely on regular platform updates to keep systems in an optimal state.

So, what are customers asking for when they are interested in hosting their own dedicated cloud region in their data centers? Disconnected hyperscale clouds.

But hosting an OCI Dedicated Region in your data center does not change the underlying architecture of Oracle Cloud Infrastructure (OCI). Nor does it change the upgrade or patching process, or the whole operating model.

Hyperscale clouds do not exist in a vacuum. They lean on a web of external and internal dependencies to work:

  • Hardware Suppliers. For example, most public clouds use Nvidia’s GPUs for AI workloads. Without these vendors, hyperscalers could not keep up with the demand.
  • Global Internet Infrastructure. Hyperscalers need massive bandwidth to connect users worldwide. They rely on telecom giants and undersea cables for internet backbone, plus partnerships with content delivery networks (CDNs) like Akamai to speed things up.
  • Software Ecosystems. Open-source tools like Linux and Kubernetes are part of the backbone of hyperscale operations.
  • Operations. Think about telemetry data and external health monitoring.

Innovation depends on ecosystems

The tech world moves fast. Open-source software and industry standards let hyperscalers innovate without reinventing the wheel. OCI’s adoption of Linux or Azure’s use of Kubernetes shows they thrive by tapping into shared knowledge, not isolating themselves. Going it alone would skyrocket costs. Designing custom chips, giving away or sharing operational control or skipping partnerships would drain billions – money better spent on new features, services or lower prices.

Hyperscale clouds are global by nature, and this includes Oracle Dedicated Region and Alloy. In return you get:

  • Innovation
  • Scalability
  • Cybersecurity
  • Agility
  • Reliability
  • Integration and Partnerships

Again, by nature and design, hyperscale clouds – even those hosted in your data center as private clouds (OCI Dedicated Region and Alloy) – are still tied to a hyperscaler’s software repositories, third-party hardware, operations personnel, and global infrastructure.

Sovereignty is real, autarky is a dream

Autarky sounds appealing: a hyperscale cloud that answers to no one, free from external risks or influence. Imagine OCI Dedicated Region or Oracle Alloy as self-contained kingdoms, untouchable by global chaos.

Autarky sacrifices expertise for control, and the result would be a weaker, slower and probably less secure cloud. Self-sufficiency is not cheap. Hyperscalers spend billions of dollars yearly on infrastructure, leaning on economies of scale and vendor deals. Tech moves at lightning speed. New GPUs drop yearly, software patches roll out daily (think about 1’000 updates/patches a month). Autarky means falling behind. It would turn your hyperscale cloud into a relic.

Please note, there are other solutions like air-gapped isolated cloud regions, but those are for a specific industry and set of customers.

OCI Dedicated Region – The Next-Generation Private Cloud

Private clouds and IT infrastructures deployed in on-premises data centers are going through the next evolution. We see vendors and solutions shifting from siloed private clouds towards a platform approach. A platform that no longer consists of different solutions (products) and components, but rather provides the right foundation, a feature set, and interfaces that let you expose and consume services like IaaS, PaaS, DBaaS, DRaaS, and so on.

If we talk about a platform, we usually mean something that is unified, not just “integrated” or stitched together. Integrated would imply that we still have different products (possibly from the same vendor), and this approach is becoming less popular now – unless best-of-breed is your way to attract talent. Do not forget: it increases your technical debt and hence the complexity massively.

This article highlights a private cloud platform that brings true public cloud characteristics to private clouds. As a matter of fact, it brings the public cloud to your on-premises data center: OCI Dedicated Region.

The Cloud Paradox

We could start an endless discussion about technical debt, the so-called public cloud sprawl, and the wish for cloud repatriation. Many people believe that “the” public cloud has failed to deliver its promise. Organizations and decision-makers are still figuring out the optimal way for their business to operate in a multi-cloud world.

In my opinion, the challenge today is that you have so many more topics to consider than ever before. New technologies, new vendors, new solutions, new regulations, and in general so many new possibilities for how to deliver a solution.

IT organizations have invested a lot of money, time, and resources over the past few years to familiarize themselves with these possibilities: hybrid cloud, multi-cloud, application modernization, security, data management, and artificial intelligence.

The public cloud has not failed – it is just failing forward, which means it is still maturing as well!

Other (private) cloud and virtualization companies grew by developing homegrown products and by acquiring different companies to close feature gaps, which then led to heavy integration efforts. Since private cloud vendors are also still evolving and maturing – while still trying to fix the technical debt they have delivered to their customers and partners – there seems to be no single private cloud vendor in the market that can provide a truly unified platform for on-premises data centers.

Interoperability, Portability, Data Gravity

Around 2010, different companies and researchers were looking for ways to make private and public clouds more interoperable. The idea was a so-called “intercloud” that would allow organizations to move applications securely and freely between clouds at an acceptable cost. While this cost problem has not been solved yet, the following illustration from 2023 (perhaps not accurate, please verify) should give you an idea of where we stand:

Source: https://medium.com/@alexandre_43174/the-surprising-truth-about-cloud-egress-costs-d1be3f70d001 

Constantly moving applications and their data between clouds is not something that CIOs and application owners want. Do not forget: We are still figuring out how to move applications to the right cloud based on the right reasons.

Thought: AI/ML-based workload mobility and cost optimization could become a reality, but that is still far away.

That brings us to interoperability. The idea almost 15 years ago was based on standardized protocols and common standards that would allow VM/application mobility, which could then be seen as cloud interoperability.

So, how are cloud providers trying to solve this challenge? By providing their proprietary solutions in other clouds.

While these hybrid or hybrid multi-cloud solutions bring advantages and solve some of the problems, depending on an organization’s strategy and partnerships, we face the next obstacle called data gravity.

The larger a dataset or database is, the more difficult it is to move, which incentivizes organizations to bring computing resources and applications closer to the data, rather than moving the data to where the processing is done. That is why organizations are using different database solutions and DBaaS offerings in their private and public cloud(s).
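
A quick back-of-the-envelope sketch shows why data gravity is so real. The dataset size, link speed, and egress price below are assumptions for illustration only:

```python
# Rough estimate of what moving a large dataset between clouds costs.
# All figures are assumptions for illustration.

dataset_tb = 500          # dataset size in terabytes
link_gbps = 10            # effective network throughput in Gbit/s
egress_usd_per_gb = 0.08  # assumed egress price per GB

transfer_seconds = (dataset_tb * 1e12 * 8) / (link_gbps * 1e9)
egress_cost = dataset_tb * 1000 * egress_usd_per_gb

print(f"Transfer time: {transfer_seconds / 86400:.1f} days")  # ~4.6 days
print(f"Egress cost:   ${egress_cost:,.0f}")                  # ~$40,000
```

Even under these friendly assumptions, the move takes days and costs tens of thousands of dollars – and that is before re-testing, re-pointing applications, and compliance reviews.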

Distributed Cloud Architecture

Oracle’s distributed cloud architecture enables customers to run their workloads in geographically diverse locations while maintaining a consistent operational model across different environments:

  • Oracle Cloud Infrastructure (OCI). Oracle has built OCI to deliver high-performance computing and enterprise-grade cloud services with global availability across its various regions.
  • Hybrid Cloud and Interoperability. Oracle’s hybrid cloud capabilities, such as Exadata Cloud@Customer and OCI Dedicated Region, enable organizations to run Oracle Cloud services in their own data center. These services give customers the full benefits of Oracle Cloud Infrastructure while maintaining data within their data centers, which is ideal for industries with strict data residency or security policies.
  • Multi-Cloud. Oracle is the first hyperscaler that offers databases in all the major public clouds (Azure, Google Cloud and AWS). Then there is HeatWave MySQL on AWS and the different interconnect options (Google Cloud, Azure).

These offerings address the mobility, interoperability, egress costs, and data gravity challenges mentioned above. In my opinion, there is no other vendor yet who achieved the same level of partnerships and integrations that brings us closer to cloud interoperability.

This is the Gartner Magic Quadrant (MQ) for Distributed Hybrid Infrastructure from August 2023:

Gartner Magic Quadrant for Distributed Hybrid Infrastructure

I do not know when the next MQ for Distributed Hybrid Infrastructure comes out (update: the 2024 Gartner MQ for DHI came out on October 10), but I guess that Oracle will be positioned even better then, because of the Oracle CloudWorld 2024 announcements and the future release of OCI Dedicated Region 25. If you missed the Dedicated Region 25 announcement, have a look at this interview:

Let us park OCI Dedicated Region for a minute and talk about data centers quickly.

Monolithic Data Centers for Modern Applications

As many of us know, the word “monolithic” describes something very large and difficult to change. Something inflexible.

It is very interesting to see that so many organizations talk about modern applications but are still managing and maintaining what one could call a “monolithic” data center. I had customers discussing a modern infrastructure for their modern (or to-be-modernized) applications. With “modern” they were referring to an infrastructure that, for them, means “public cloud”.

So, it still surprises me that almost nobody talks about monolithic infrastructures or monolithic private clouds. Perhaps this has something to do with the (still) mostly monolithic applications, which implies that these workloads are running on a “legacy” or monolithic infrastructure.

So, what happens to the applications that have to stay in your data center, because you cannot or do not want to migrate them to the public cloud?

Some of those apps are for sure still important to the business, need to be lifecycled and patched, and some of them need to be modernized for you to stay competitive with the market.

What about a modern private cloud?

If your goal is to put modern applications on a modern platform, what is stopping you from investing in a more modern platform that can host not only your modern apps, but also your legacy apps and anything that might come in the future?

Where do you deploy your AI-based workloads and data services if such applications/workloads and their data have to stay in your private cloud?

And what is Gartner saying about the trend for public services spend?

All segments of the cloud market are expected to see growth in 2024. Infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth at 25.6%, followed by platform-as-a-service (PaaS) at 20.6%…

Why do I mention this?

Because some people think that virtual machines and IaaS are legacy, and then come to the false conclusion that an on-premises cloud is obsolete. If that were true, why does Gartner regularly forecast the highest spending growth for IaaS? And would it not mean that the modern public cloud is hosting a huge number of legacy workloads and is hence becoming obsolete as well?

I do not think so. 😀

The Next Generation

One of the main challenges with existing private clouds is the operating model. Imagine how organizations have built data centers in the past. You started to virtualize compute, then networking, and then storage. A little later you had to figure out how to automate, deploy, integrate, and maintain these components, without forgetting security in the end.

A few years later, you had to start thinking about container orchestration and go through the same process again: how to build, run, connect, and secure container-based workloads.

Why? Because people believe that on-premises data centers will disappear and that applications must be cloud-native, containerized, and therefore orchestrated with Kubernetes. That is the very short and extremely simplified version of 20 years of virtualization history.

So, suddenly, you are stuck in both worlds, the monolithic data center and the modern public cloud, with different people (engineering, architecture, operations), processes, and technologies. Different integrations (ecosystem), strategic partnerships and operating models for different clouds.

What are the options at this point? Well, there are not so many:

  1. Stretch the private cloud to the public cloud (e.g., VMware Cloud Foundation, Nutanix)
  2. Stretch the public cloud to your data center (AWS Outposts, Azure Stack, OCI Dedicated Region or Oracle’s Cloud@Customer offerings)
  3. Leave all as it is and try to abstract the underlying infrastructure, but build a control plane on top for everything (Azure Arc, Google Anthos, VMware Tanzu)

The (existing) private cloud will always be seen as the legacy, outdated private cloud if nobody changes the processes and the capabilities that the data center platform can deliver.

Note: that might be okay, depending on an organization’s size and requirements.

What am I trying to say here? It is not only the operating model that has to change but also how the private cloud services are consumed by developers and operators. Some of the key features and “characteristics” they seek include:

  • Elastic scalability: The ability to automatically scale resources up and down based on demand, without the need for manual intervention or hardware provisioning (see the sketch after this list).
  • Cost transparency and efficiency: Pay-as-you-go pricing models that align costs with actual resource consumption, improving financial efficiency.
  • Cloud-native services: Access to a wide range of managed services, such as databases, AI/ML tools, and serverless platforms, that accelerate application development and deployment.
  • Low operational overhead: Outsourcing the management of underlying infrastructure to reduce operational complexity and allow teams to focus on business outcomes.
  • Compliance and data sovereignty: The ability to meet strict regulatory requirements while ensuring that data and workloads remain under the enterprise’s control.
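
As a toy illustration of the elastic scalability point above, here is a minimal scaling loop. The thresholds and limits are invented; real platforms implement this with managed autoscaling services:

```python
# Minimal sketch of elastic scalability: capacity follows demand
# automatically instead of being provisioned by ticket.
# Thresholds and limits are invented for illustration.

def desired_nodes(current: int, cpu_utilization: float,
                  minimum: int = 2, maximum: int = 20) -> int:
    """Scale out above 80% average CPU, scale in below 30%."""
    if cpu_utilization > 0.80:
        return min(current + 1, maximum)
    if cpu_utilization < 0.30:
        return max(current - 1, minimum)
    return current

print(desired_nodes(4, 0.92))  # -> 5, scale out
print(desired_nodes(4, 0.15))  # -> 3, scale in
```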

This brings me to option number 2 and OCI Dedicated Region, because Oracle is the only public cloud provider that can bring the same set of public cloud services to an enterprise data center.

What is OCI Dedicated Region?

OCI Dedicated Region (previously known as Oracle Dedicated Region Cloud@Customer aka DRCC) provides the full suite of Oracle cloud services (IaaS, PaaS, and SaaS) for deployment in one or more customer-specified physical locations. This solution allows customers to maintain complete control over their data and applications, addressing the strictest security, regulatory, low latency, and data residency requirements. It is ideal for mission-critical workloads that may not move to the public cloud.

[Diagram: OCI in a dedicated region]

OCI Dedicated Region provides the same services available in Oracle’s public cloud regions. It is also certified to run Oracle SaaS applications, including ERP, Financials, HCM, and SCM, making it the only solution that delivers a fully integrated cloud experience for IaaS, PaaS, and SaaS directly on-premises.

Key features of DRCC:

  • Full Public Cloud Parity: DRCC offers the same services, APIs, and operational experience as Oracle’s public cloud. This includes Oracle Autonomous Database, Exadata, high-performance computing (HPC), Kubernetes, and more (see the sketch after this list).
  • Private Cloud: The infrastructure is deployed within the customer’s data center, meaning all data stays on-premises, which is ideal for industries with strict data privacy or residency requirements.
  • Managed by Oracle: Oracle is responsible for managing, monitoring, updating, and securing the infrastructure, ensuring it operates with the same level of service as Oracle’s public cloud.
  • Pay-as-you-go: DRCC operates under a consumption-based pricing model, similar to public cloud services, where customers pay based on the resources they use.
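
Because a Dedicated Region exposes the same APIs as Oracle’s public regions, the same tooling works against both. Here is a minimal sketch using the OCI Python SDK, assuming the `oci` package is installed and a valid config file exists; the compartment OCID is a placeholder:

```python
# Sketch: identical OCI SDK code works against a public region and an
# OCI Dedicated Region, because the APIs are the same. Only the
# region/endpoint in the config changes. The OCID is a placeholder.
import oci

config = oci.config.from_file()           # reads ~/.oci/config
compute = oci.core.ComputeClient(config)

response = compute.list_instances(
    compartment_id="ocid1.compartment.oc1..example"  # placeholder
)
for instance in response.data:
    print(instance.display_name, instance.lifecycle_state)
```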

Oracle Alloy

Oracle Alloy is a cloud infrastructure platform designed to allow service providers, independent software vendors (ISVs), and enterprises to build and operate their own customized cloud environments based on Oracle Cloud Infrastructure.

[Diagram: Becoming an Oracle Alloy partner]

Some key features of Oracle Alloy:

  • Customizable Cloud: Oracle Alloy allows organizations to brand, customize, and offer their own cloud services to customers using Oracle’s OCI technology. This enables service providers and enterprises to create tailored cloud environments for specific industries or regional needs.
  • Full Control: Unlike DRCC, which is managed entirely by Oracle, Alloy provides organizations with full control (of operations) over the infrastructure. They can operate, manage, and upgrade the environment as they see fit.
  • White-label Cloud Services: Oracle Alloy allows organizations to build and offer cloud services under their own brand. This is especially useful for telcos, financial institutions, governments or regional service providers who want to become cloud providers themselves.

In addition, partners can set their own pricing, rate cards, account types, and discount schedules. They can also define support structure and service levels. With embedded financial management capabilities from the Oracle Fusion Cloud ERP offering, Oracle Alloy enables partners to manage the customer lifecycle, including invoicing and billing their customers.

Final Words

Just because organizations call the combination of their data center solutions (even if the components come from the same vendor) a private cloud does not mean that they have the right capabilities (people, processes, technology – not only technology!) and the private cloud maturity to enable business transformations.

So, if you want to bring your on-premises environment to the next level with a true private cloud and a cloud operating model, why don’t you bring a complete public cloud region into your data center? 🙂