OpenStack Architecture in Cloud Computing: A Practical, Modern Reference

OpenStack architecture in cloud computing is the blueprint of how OpenStack’s control plane services (identity, scheduling, networking, storage APIs) coordinate the data plane (hypervisors, network forwarding, storage backends) to deliver IaaS.

In this guide, you will see the reference architecture, the core components, and three real request flows (VM launch, tenant networking, volumes and images) plus the design decisions that make deployments reliable. Let’s walk through it together and make the architecture feel obvious by the end.

What OpenStack Is (In Cloud Computing Terms): Control Plane vs Data Plane

In cloud computing, OpenStack is an Infrastructure-as-a-Service (IaaS) platform: it exposes APIs for compute, networking, and storage so you can deliver a self-service private cloud (or a hybrid design) with policy, quotas, and multi-tenancy.

If you want a quick refresher on why IaaS is still the foundation layer for many enterprise cloud strategies, see this explainer on the advantages of Infrastructure as a Service, and how it compares to higher layers in the difference between PaaS and Infrastructure as a Service.

Architecturally, OpenStack is easiest to understand as two big halves:

  • Control plane: APIs, identity, scheduling, policy, orchestration, everything that decides what should happen
  • Data plane: hypervisors actually running VMs, network forwarding, and storage I/O, everything that makes the workload actually happen

If you remember only one thing: OpenStack’s power comes from how the control plane coordinates work across many agents and backends in the data plane.

The OpenStack Reference Architecture (Conceptual View)

Here’s a simple four-layer mental model you can use to explain OpenStack to anyone, from IT leadership to your network team.

  1. Physical layer
    Servers, NICs, switches, storage arrays (or SDS like Ceph), plus power, cooling, and racks.
  2. Virtualization layer
    Hypervisor (often KVM), virtual switching, and host networking. This is where the data plane executes.
  3. Control plane services
    API endpoints, databases, a message bus, schedulers, and the controllers that coordinate work.
  4. Data plane agents and backends
    Compute services on each host, network agents, storage backends, and the physical network and storage they operate on.

Diagram callout: build a single-page reference diagram showing User/API → Keystone → Nova/Neutron/Cinder/Glance → message queue/DB → agents on compute and network nodes → backends (hypervisor, SDN, storage). This diagram becomes your map, so readers can place every flow and decision you explain.

Core Components and What They Do

OpenStack has many projects, but most architectures revolve around a consistent core.

Identity and access: Keystone

  • Authenticates users and services and issues tokens
  • Publishes the service catalog (where Nova and Neutron endpoints live)
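
To make the token step concrete, here is a minimal sketch of the request body for Keystone's v3 password-scoped authentication (POSTed to `{keystone_url}/v3/auth/tokens`; the token comes back in the `X-Subject-Token` response header, alongside the service catalog in the body). The user, project, and domain names are placeholders:

```python
import json

def keystone_auth_body(user, password, project, domain="Default"):
    """Build a Keystone v3 password-scoped auth request body.

    POSTed to {keystone_url}/v3/auth/tokens; Keystone returns the token
    in the X-Subject-Token header and the service catalog in the body.
    """
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": user,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Scoping to a project is what yields a catalog with usable
            # Nova/Neutron/Cinder endpoints for that tenant.
            "scope": {"project": {"name": project, "domain": {"name": domain}}},
        }
    }

print(json.dumps(keystone_auth_body("demo", "s3cret", "demo-project"), indent=2))
```

Every subsequent API call in the flows below carries this token, which is why Keystone availability sits on the critical path of everything else.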

Compute and resource placement: Nova and Placement

  • Nova manages the VM lifecycle: boot, stop, migrate, resize
  • Placement tracks resource inventory and supports scheduling decisions

Networking: Neutron

  • Creates tenant networks, subnets, routers, security groups, floating IPs
  • Delegates execution to agents and backends (for example L2 and L3, DHCP, metadata, or an OVN-driven model depending on design)

Storage and images: Cinder and Glance (Swift optional)

  • Cinder: block storage volumes (attach and detach to VMs)
  • Glance: VM images (boot sources)
  • Swift (optional): object storage for unstructured data, backups, artifacts

UI and orchestration (optional but common)

  • Horizon for human administration
  • Heat for OpenStack-native orchestration templates (many teams also use Terraform or Ansible)

If you’re specifically designing a private cloud footprint, this guide to OpenStack private cloud complements this article as an operating model view.

Node Roles and Real-World Topologies

When people say OpenStack is complicated, they often mean: I understand the services, but how do I place them on real machines?

A practical deployment uses these roles:

  • Controller nodes: host API services, schedulers, identity, and often the database and message queue (clustered in production)
  • Compute nodes: run the hypervisor and VM workloads
  • Network nodes (sometimes separate): host routing, NAT, or L3 services depending on Neutron design
  • Storage nodes (optional): if using a distributed storage backend

Single-AZ vs multi-AZ

  • Single AZ: simplest, good for small and mid-size private clouds
  • Multi-AZ or multi-site: stronger fault isolation but higher network, storage, and operational complexity

A good architecture starts with capacity and failure-domain clarity. If you’re planning growth or consolidation, tie this back to IT infrastructure capacity planning because OpenStack scales when you design around real limits: compute oversubscription, network throughput, storage IOPS, and control plane bottlenecks.
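
As a back-of-the-envelope example of the oversubscription limit, the schedulable capacity of one compute host can be sketched like this; the ratios mirror Nova's `cpu_allocation_ratio` and `ram_allocation_ratio` settings, and the reservation values are illustrative, not defaults:

```python
def schedulable_capacity(physical_cores, ram_gb, cpu_ratio=4.0, ram_ratio=1.0,
                         reserved_cores=2, reserved_ram_gb=8):
    """Rough vCPU/RAM capacity of one compute host after host reservations.

    cpu_ratio and ram_ratio mirror Nova's cpu_allocation_ratio and
    ram_allocation_ratio; the reservations cover the host OS and agents.
    """
    vcpus = int((physical_cores - reserved_cores) * cpu_ratio)
    ram = (ram_gb - reserved_ram_gb) * ram_ratio
    return vcpus, ram

# A 64-core, 512 GB host at 4:1 CPU oversubscription:
print(schedulable_capacity(64, 512))  # (248, 504.0)
```

Running this for your actual fleet, flavor mix, and failure-domain assumptions (can you lose a host and still reschedule?) is the cheap version of capacity planning that prevents expensive surprises later.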

Request Flow #1: What Happens When You Launch a VM (Step-by-Step Trace)

This is the flow that reveals whether an OpenStack architecture is designed well.

VM launch sequence

  1. User authenticates
    The user (or automation) requests a token from Keystone.
  2. User calls Nova API
    Nova receives a boot instance request with flavor, image, network, and optional volume requirements.
  3. Scheduling and placement
    Nova consults Placement and its own scheduling logic to pick a suitable compute host.
  4. Control plane coordination via message bus
    Nova uses the message bus and DB coordination so the right compute service receives the request.
  5. Compute node spawns the instance
    On the compute node, the hypervisor is instructed to create the VM and allocate CPU, RAM, and disk.
  6. Networking is wired
    Neutron ensures the VM gets ports, security groups, DHCP and metadata access, and routing.
  7. Optional storage is attached
    If boot-from-volume or extra volumes are requested, Cinder attaches block volumes to the VM.

VM launch request trace table

| Step | Service/API touchpoint | Common dependency | Agent/data plane action | Outcome |
|---|---|---|---|---|
| 1 | Keystone | Identity backend | None | Token issued |
| 2 | Nova API | DB | None | Instance request created |
| 3 | Nova Scheduler + Placement | Placement DB | None | Compute host selected |
| 4 | Nova Conductor / MQ | Message queue + DB | None | Instruction dispatched |
| 5 | Nova Compute (on host) | Hypervisor + local resources | Compute service spawns VM | VM created |
| 6 | Neutron | Neutron DB / MQ | L2/L3/DHCP/metadata agents apply config | Ports, IP, routes, security groups applied |
| 7 | Cinder (optional) | Cinder DB / MQ | Storage backend maps volume | Volume attached |

What architects should notice:

  • Control plane dependencies (DB and message bus) are critical. If you don’t design them for HA, you don’t have a resilient cloud.
  • Networking is not a checkbox. It is a system of policy plus agents plus physical design.

Request Flow #2: Creating a Tenant Network (and Why Neutron Design Choices Matter)

Tenant networking is where OpenStack architecture becomes very real.

Tenant network creation sequence

  1. User requests a network and subnet (Neutron API)
  2. Neutron records intent and allocates resources (DB)
  3. Neutron triggers the backend and agents
  4. DHCP and metadata are prepared so instances can boot cleanly
  5. If routers or floating IPs are involved, routing and NAT rules are applied
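
The pattern behind this sequence is "record intent synchronously, realize it asynchronously": the API writes desired state to the database, and agents (or an OVN-style backend) converge the data plane toward it. A toy sketch of that pattern, with the lists standing in for the Neutron DB and agent-side state:

```python
db = []       # stands in for the Neutron database (desired state)
applied = []  # stands in for data-plane state realized by agents

def create_network(name, cidr):
    """Steps 1-2: the API call records intent; nothing is wired yet."""
    intent = {"network": name, "subnet": cidr, "status": "DOWN"}
    db.append(intent)
    return intent

def agent_sync():
    """Steps 3-5: agents receive intent and apply it (bridges, DHCP, routes)."""
    for intent in db:
        if intent["status"] == "DOWN":
            applied.append((intent["network"], intent["subnet"]))
            intent["status"] = "ACTIVE"  # data plane now matches intent

net = create_network("tenant-a", "10.0.0.0/24")
agent_sync()
print(net["status"])  # ACTIVE
```

The gap between "recorded" and "realized" is where real Neutron troubleshooting lives: a network that is ACTIVE in the API but broken on a host usually means an agent failed to converge.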

Tenant networking request trace table

| Step | Service/API touchpoint | Common dependency | Agent/data plane action | Outcome |
|---|---|---|---|---|
| 1 | Neutron API | Neutron DB | None | Network/subnet/router intent recorded |
| 2 | Neutron plugin/backend | MQ + DB | None | Backend workflow selected |
| 3 | L2 mechanism | None | L2 config applied (bridges/ports/overlays) | Tenant network created |
| 4 | DHCP/metadata | None | DHCP namespaces + metadata path prepared | Instances can get IP and metadata |
| 5 | L3/routing (if used) | None | Routing and NAT policies enforced | North-south connectivity works |

Why Neutron design choices matter

Two architectures can both work, but behave very differently under load, during failures, or when you add hybrid connectivity.

  • Performance: east-west traffic depends on how switching and overlays are implemented
  • Operability: troubleshooting becomes easier when your backend model is consistent and well-instrumented
  • Security: security groups and segmentation must align with how your network is built

If your OpenStack needs to connect cleanly to other environments (public cloud, branch sites, regional systems), it helps to think in hybrid patterns early. This guide to hybrid cloud providers in Singapore is framed for a specific market, but the architectural lessons generalize well: connectivity design, latency expectations, and compliance-driven placement. For broader platform-to-platform alignment, you may also find value in understanding the interoperability of inter-cloud services across different platforms.

Request Flow #3: Volumes and Images

Storage flows are where user experience meets architecture reality: boot speed, attach reliability, performance, and recovery.

Volume attach sequence

  1. User creates a volume (Cinder API)
  2. Storage backend allocates the block device
  3. User attaches volume to a server
  4. Compute host maps the device and the VM sees it as a disk

Image-to-boot path

  1. User selects an image (Glance)
  2. Nova requests the image and prepares boot artifacts
  3. The compute host retrieves and caches the image as needed
  4. VM boots from the image (or from a volume created from an image)
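
The three common boot paths (ephemeral boot from an image, boot from an existing volume, boot from a new volume that Cinder creates from a Glance image) can be summarized in a small decision sketch; the function and its tuple return shape are illustrative, not a Nova API:

```python
def boot_source(image=None, volume=None, create_volume_from_image=False):
    """Decide the boot path for a new instance (simplified).

    Mirrors the three common paths: ephemeral boot from an image,
    boot from an existing Cinder volume, or boot from a new volume
    that Cinder creates from a Glance image.
    """
    if volume:
        return ("volume", volume)            # Cinder volume is the root disk
    if image and create_volume_from_image:
        return ("volume-from-image", image)  # Cinder clones the image first
    if image:
        return ("image", image)              # compute host caches + boots image
    raise ValueError("need an image or a volume to boot from")

print(boot_source(image="ubuntu-22.04"))  # ('image', 'ubuntu-22.04')
print(boot_source(image="ubuntu-22.04", create_volume_from_image=True))
```

The choice matters architecturally: ephemeral boots lean on image caching and local disk, while boot-from-volume shifts both boot time and resilience onto the Cinder backend.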

Storage request trace table

| Step | Service/API touchpoint | Common dependency | Agent/data plane action | Outcome |
|---|---|---|---|---|
| 1 | Cinder API (create volume) | Cinder DB / MQ | Backend allocates block storage | Volume exists |
| 2 | Glance API (select image) | Glance DB | Image served to Nova/compute | Image available |
| 3 | Nova + Cinder (attach) | MQ + DB | Host maps volume to VM | Disk attached |
| 4 | Data plane I/O | Storage backend | Reads and writes occur | Workload runs |

If your audience mixes “cloud storage” and “cloud computing” terms, a quick clarifier reduces confusion: cloud computing and cloud storage are not the same thing, and OpenStack touches both (compute lifecycle and persistent data).

Architecture Decision Points: The Design It Right Section

This is the part many blog posts skip, yet it’s what practitioners actually need.

Minimum viable HA (high availability)

A resilient OpenStack architecture typically includes:

  • Multiple controller nodes to avoid single points of failure
  • Clustered database because almost everything depends on state
  • Resilient message queue because service coordination depends on it
  • Load balancing for API endpoints
  • Operational discipline: backups, upgrades, monitoring
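
The API load-balancing piece can be sketched as an HAProxy fragment. Addresses, ports, and server names below are placeholders, and production configs typically add TLS termination and health checks tuned per API; this only shows the shape of fronting one endpoint (Keystone) across three controllers:

```
# Minimal sketch: load-balance the Keystone public API across three
# controllers. Addresses are placeholders (192.0.2.0/24 is a
# documentation range); real deployments add TLS and tuned checks.
frontend keystone_public
    bind 192.0.2.10:5000
    default_backend keystone_nodes

backend keystone_nodes
    balance roundrobin
    option httpchk GET /v3
    server ctl1 10.0.0.11:5000 check
    server ctl2 10.0.0.12:5000 check
    server ctl3 10.0.0.13:5000 check
```

The same pattern repeats for every API endpoint in the catalog, which is why a load balancer (or a pair of them) sits in front of the controllers in virtually every production topology.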

Networking backend choices

Your Neutron backend choice affects:

  • Day-2 ops (troubleshooting, upgrades)
  • Feature availability (routing, security behavior)
  • Performance at scale

Storage backend choices

Pick based on workload patterns:

  • VM-heavy transactional workloads: prioritize IOPS consistency and recovery
  • Large unstructured data: object storage patterns
  • Backup and DR expectations: define RPO and RTO early

Automation is not optional

OpenStack rewards teams who treat infrastructure as software. If you’re aligning your platform build with repeatable operations, it helps to revisit Infrastructure as Code vs Infrastructure as a Service so the build becomes reproducible and upgradeable, not a one-time configuration.

Because architecture is inseparable from controls, plan security into the design early (identity flows, network segmentation, secrets management, auditability). If you want a structured approach, this overview of cloud security consulting services in Southeast Asia includes frameworks that translate well globally.

OpenStack vs VMware (Why This Comparison Comes Up So Often)

Many teams don’t choose OpenStack from scratch. They arrive here because they’re reassessing virtualization strategy, cost structures, or lock-in risk.

At a high level:

  • VMware is a tightly integrated virtualization ecosystem
  • OpenStack is a service-oriented control plane that can orchestrate a virtualized private cloud across many nodes and backends

If you’re exploring options broadly, the comparison set often starts with VMware alternatives. And if storage migration is part of your migration story, this guide on VMware storage migration helps you think through data movement implications without treating storage as an afterthought.

Singapore-Based, Global-Ready Implementation Notes

Even for a global audience, a Singapore-based delivery lens can be valuable because it forces clarity around real-world constraints: compliance, latency, data residency, and regional availability.

For teams operating across Southeast Asia, workload placement and data center tier decisions can matter more than people expect. This is where guidance like when U.S. companies should choose Singapore-based Tier 2 data centers in Southeast Asia can be used as a thought model: define decision criteria, then validate against your own constraints.

If you’re prototyping quickly, the pilot environment question comes up fast. This field guide on Singapore cloud VPS speed, cost, and compliance is a useful template for evaluating early-stage environments without locking yourself into poor architecture later.

How Accrets Helps

If you’ve read this far, you’re likely in one of these positions:

  • You’re designing a private cloud and want a reference architecture validated
  • You’re planning a migration from a virtualization stack and need a realistic path
  • You want OpenStack but don’t want the operational risk of learning in production

Accrets supports OpenStack journeys end-to-end, from architecture and implementation to modernization and managed operations. If your driver is a VMware exit strategy, you might start with a focused migration route like escaping VMware lock-in with an Accrets OpenStack migration approach. If your priority is steady day-2 operations (monitoring, patching, reliability, cost control), consider what a managed cloud service provider operating model looks like for your environment.

Get a Reference Architecture Review

If you want a second set of expert eyes on your design, whether it’s HA, networking backend choices, storage architecture, or a migration plan, fill in the form below for a free consultation with an Accrets cloud expert on OpenStack architecture in cloud computing, or reach out through the Accrets contact page.

Frequently Asked Questions About OpenStack Architecture in Cloud Computing

Is OpenStack still relevant in 2026?

Yes. OpenStack remains a strong private cloud IaaS option when you need multi-tenant infrastructure with control over data residency, networking, and storage backends. Many teams pair it with modern automation and platform layers, but the architectural value is still the same: an API-driven control plane coordinating a scalable data plane.

Do I need dedicated network nodes?

Not always. Some architectures use dedicated network nodes for routing and NAT, while others distribute networking functions differently depending on the backend model and scale goals. The right choice depends on throughput needs, failure domains, and operational preference.

What is the minimum HA setup for production?

Most production-ready designs start with multiple controller nodes, a clustered database, a resilient message queue, and load-balanced API endpoints. Then you add monitoring, backups, and an upgrade plan so the control plane stays healthy during change.

What’s the simplest way to explain OpenStack architecture to a stakeholder?

Use the control plane vs data plane model. The control plane is the decision and coordination layer (APIs, identity, scheduling), while the data plane is the execution layer (hypervisors, networking, storage). The three request flows in this article show how they interact.
