If you are searching for “openstack virtual machine”, here is the short answer. An OpenStack virtual machine is a running instance created from an image, flavor, network, and optional volumes inside an OpenStack cloud. In this guide we will walk together from the core concepts to step by step creation in Horizon and the CLI, then move into security, backup, and how this fits into a Singapore and Southeast Asia strategy. Stay with me until the end so you can confidently design, launch, and operate OpenStack virtual machines in production.
If you are looking at OpenStack and wondering, “So how do I actually run a virtual machine on this thing?”, you are not alone.
Most documentation jumps straight into images, flavors, networks, and security groups without clearly connecting the dots. In this guide, we will walk through what an OpenStack virtual machine (VM) really is, how OpenStack actually runs it under the hood, and how to launch, secure, and operate those VMs in a production-ready way.
You will see both Horizon (GUI) and CLI flows, plus what it takes to scale from a single test VM to a reliable service, especially if you are running workloads in Singapore and Southeast Asia.
What Is an OpenStack Virtual Machine, Really?
Let us start with a small but important clarification: in OpenStack, you will not see many buttons labeled “Create VM”. Instead, you will see “Create Instance”.
For practical purposes, an instance in OpenStack is your virtual machine:
- It has virtual CPU, RAM, and disk (defined by a flavor).
- It boots from an image (your OS template, such as Ubuntu, CentOS, Windows).
- It connects to one or more networks using Neutron.
- It may have additional volumes attached for persistent storage.
So the lifecycle looks like this:
- You prepare or select an image (OS template).
- You select a flavor (size and shape of the VM).
- You attach it to a network.
- OpenStack schedules and boots that image as a running instance on a hypervisor.
If you are new to cloud concepts overall, it can help to review the broader building blocks of cloud platforms. Resources like a guide to the fundamentals of cloud computing or cloud computing business applications can give your team a shared vocabulary before you dive into OpenStack specifics.
How OpenStack Actually Runs a Virtual Machine (Architecture in Plain English)
Under the covers, OpenStack is a collection of services working together to make your VM come to life. At a high level:
- Nova (Compute) decides where your VM runs and instructs the hypervisor to boot it.
- Glance (Image) stores and serves the VM images your instances boot from.
- Neutron (Networking) sets up virtual networks, subnets, routers, and IP addressing.
- Cinder (Block Storage) provides volumes, which are persistent virtual disks you can attach to VMs.
- Keystone (Identity) handles authentication and authorization.
- Horizon (Dashboard) is the web UI that sits in front of all of this.
A simplified flow when you hit “Launch Instance”:
- You submit a request via Horizon or the CLI.
- Nova receives the request and asks Glance for the image, Neutron for networking, and Cinder for any volumes.
- Nova scheduler picks a compute node (a physical server running a hypervisor such as KVM).
- On that compute node, the hypervisor boots the image as a virtual machine, attaches virtual NICs, connects to the virtual network, and mounts any volumes.
- Once the VM passes basic checks, the instance shows as ACTIVE in Horizon or the CLI.
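If you want to watch this flow happen from the CLI, here is a small hedged sketch; the instance name is an example, and the full CLI walk-through comes later in this guide:

# Show the current state of an instance (BUILD, ACTIVE, ERROR, and so on)
openstack server show my-first-openstack-vm -c status

# List the lifecycle events Nova has recorded for it (create, start, reboot, ...)
openstack server event list my-first-openstack-vm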
If you are running this in your own data centers, the physical environment matters as well. Many organisations deploying OpenStack as a private or hybrid cloud in the region will look at Singapore’s position as a hub with strong connectivity and resilient facilities, topics covered in resources like tier 2 data centers in Southeast Asia and when US companies should choose Singapore or this practical overview of OpenStack private cloud.
When Does an OpenStack Virtual Machine Make Sense for Your Workloads?
An OpenStack VM is just one way to run workloads. You could go with AWS EC2, GCP, Azure, or stay on VMware. So where does OpenStack shine?
Common scenarios where OpenStack VMs make sense:
- Regulated or sensitive workloads where you need full control over location, compliance, and tenancy, such as finance, healthcare, or government.
- Latency-sensitive applications for users in Southeast Asia, where hosting in a Singapore data center provides better performance and a more suitable jurisdiction than hosting in the US or EU.
- VMware exit or diversification strategies, where OpenStack is a common part of the mix when exploring VMware alternatives.
- Hybrid or cloud-adjacent architectures, where OpenStack acts as your on-prem or colocation cloud layer, complemented by public cloud services, as discussed in hybrid cloud providers in Singapore for US-based teams.
If you want the flexibility and economics of cloud but need higher control over where and how your workloads run, OpenStack virtual machines are a strong option, especially when deployed in a well-designed, certified facility such as a tier 3 data center or higher.
Prerequisites: What You Need Before Launching an OpenStack VM
Before you can click “Launch Instance” and expect success, a few things must already be in place:
- A working OpenStack cloud with Nova, Glance, Neutron and optionally Cinder configured.
- At least one image uploaded, for example an Ubuntu 22.04 cloud image with cloud-init.
- A network and subnet, often one internal network plus a router providing external connectivity.
- At least one flavor defined, such as m1.small or m1.medium.
- A project or tenant and user with access to Horizon or the CLI.
- A key pair for SSH (for Linux) or RDP credentials (for Windows).
In many organisations, putting this foundation in place is not just a technical exercise; it is also an infrastructure and governance question. If your team does not want to own everything end to end, you might look at models like IT infrastructure management services or IT outsourcing services to delegate the care and feeding of the underlying OpenStack platform while your engineers focus on the workloads.
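If you prefer to prepare the SSH key pair from the CLI rather than Horizon, here is a minimal sketch; it assumes you have already sourced your openrc file (covered in the CLI section below), and the key name my-key is only an example:

# Import an existing public key so Nova can inject it at boot
openstack keypair create --public-key ~/.ssh/id_ed25519.pub my-key

# Or let OpenStack generate a key pair and save the private key locally
openstack keypair create my-key > my-key.pem
chmod 600 my-key.pem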
Step-by-Step: Launching Your First OpenStack Virtual Machine in Horizon (GUI)
Let us walk through launching a simple Linux VM via Horizon, the OpenStack dashboard.
6.1 Prepare or Choose an Image
From the Project → Compute → Images section, make sure you have at least one image available. For a first test:
- A cloud image such as Ubuntu, CentOS, or Rocky Linux with cloud-init support is ideal.
- Check that the image is in a format your environment supports, commonly QCOW2 or RAW.
If you upload your own image, verify:
- It boots correctly in a test environment.
- SSH works via a key, not hard coded passwords.
- Basic security hardening is in place.
Treat images as “golden templates”. You will eventually want to standardise them as part of your broader infrastructure security in cloud computing approach.
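If you would rather upload the image from the CLI, here is a hedged sketch; it assumes you have already downloaded an Ubuntu 22.04 cloud image locally, and the file and image names are examples:

openstack image create \
  --disk-format qcow2 \
  --container-format bare \
  --file jammy-server-cloudimg-amd64.img \
  ubuntu-22-04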
6.2 Choose the Right Flavor (CPU, RAM, Disk)
Next, pick a flavor that defines your VM’s size:
- Small flavors, for example 1 vCPU and 1 to 2 GB RAM, are good for test apps or jump hosts.
- Medium flavors, such as 2 to 4 vCPU and 4 to 8 GB RAM, are typical for web or application servers.
- Large flavors are for databases, analytics workloads, or high traffic services.
Choosing too small a flavor means your VM will be constantly under memory or CPU pressure. Choosing something large for every workload wastes capacity and can affect consolidation ratios, a topic that often comes up in IT infrastructure capacity planning.
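To see what sizes are available, or (with admin rights) to define a new one, here is a quick hedged example; the m1.medium specification below is only an illustration:

# List the flavors visible to your project
openstack flavor list

# Admin only: define a 2 vCPU, 4 GB RAM, 40 GB disk flavor
openstack flavor create --vcpus 2 --ram 4096 --disk 40 m1.medium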
6.3 Network, Security Groups, and Key Pairs
Under the Networking tab:
- Attach the VM to the appropriate internal network.
- If your cloud uses floating IPs for external access, you will later associate one from the public pool.
Under Security Groups:
- Use a security group that allows SSH on port 22 or RDP on port 3389 from your management network.
- Start restrictive and allow only what you need, from where you need it.
Under Key Pair:
- Select an existing SSH key pair or create a new one.
- This will inject your public key into the VM so you can log in securely.
Security groups and OS level hardening are your first line of defense, and they should align with any cloud security policies you have adopted or developed with partners offering cloud security consulting services in Southeast Asia.
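As a hedged CLI sketch of the "start restrictive" approach, where the group name and management network range are examples:

# Create a dedicated security group for management SSH access
openstack security group create mgmt-ssh --description "SSH from management network"

# Allow SSH only from the management range
openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 203.0.113.0/24 mgmt-ssh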
6.4 Boot the Instance and Log In
Finally, review the settings and click Launch Instance.
- Watch the instance status. It should move from Build to Active.
- Once ACTIVE, if your environment uses floating IPs, allocate a floating IP from the public pool and associate it with the instance.
Use your SSH client to connect:
ssh -i your-key.pem ubuntu@<floating-ip>

From here, you can install your application stack, configure services, and treat this VM as you would any other Linux or Windows server, with OpenStack managing the infrastructure beneath it.
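Once you are in, a couple of quick first-boot checks are worth running; a hedged sketch for an Ubuntu cloud image:

# Wait until cloud-init has finished first-boot configuration
cloud-init status --wait

# Confirm which services are listening before you open more ports
sudo ss -tlnp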
Launching the Same VM with the OpenStack CLI
For repeatable operations and automation, you will want to know the CLI equivalent of what you just did in Horizon.
7.1 Authenticate and Select a Project
First, source your OpenStack RC file:
source project-openrc.sh
openstack token issue

Sourcing the RC file sets the environment variables the openstack CLI needs to know which project and credentials to use, and openstack token issue is a quick check that authentication is working.
7.2 Core Commands to Create an Instance
List available images, flavors, and networks:
openstack image list
openstack flavor list
openstack network list

Then create a VM:
openstack server create \
--image ubuntu-22-04 \
--flavor m1.small \
--network internal-net \
--key-name my-key \
my-first-openstack-vm

You can then check status:
openstack server list
openstack server show my-first-openstack-vm

7.3 Allocate and Associate a Floating IP
If you need external access:
openstack floating ip create public-net
openstack server add floating ip my-first-openstack-vm <floating-ip>

At this point, you have reproduced the Horizon workflow using the CLI. It is a short step from here to incorporating these commands into scripts or pipelines, or moving toward infrastructure as code. If you are still clarifying operating models internally, resources like infrastructure as code vs infrastructure as a service and advantages of infrastructure as a service can help align your architecture and operations teams.
From Single VM to Reliable Service: Storage, Backup, and Disaster Recovery
A single VM is easy. A reliable service is harder.
Key considerations:
- Use Cinder volumes for any data you care about, not just the root disk.
- Take snapshots before major changes or updates.
- Design backup policies per workload, for example daily, weekly, or more frequent depending on RPO and RTO.
- Test restores, not just backups.
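Here is a hedged CLI sketch of that volume and snapshot workflow; the volume and instance names are examples:

# Create a 20 GB persistent volume and attach it to the instance
openstack volume create --size 20 app-data
openstack server add volume my-first-openstack-vm app-data

# Take point-in-time copies before a risky change
openstack volume snapshot create --volume app-data app-data-pre-change
openstack server image create --name my-first-vm-pre-change my-first-openstack-vm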
When OpenStack is part of a larger continuity strategy, you will usually store backup copies in a different availability zone, data center, or even cloud. That is where solutions such as cloud computing service providers in Singapore for backup and disaster recovery and platform offerings like managed backup services or IT DR-as-a-service come into play.
Hardening and Operating OpenStack VMs in Production
Once the VM is running, the work is not over. For production workloads, you should think about:
- Security
  - Lock down security groups to only required ports and source ranges.
  - Harden the OS, including SSH key only authentication, patching, CIS baselines, and application firewalls where needed.
- Monitoring and logging
  - Collect system metrics such as CPU, memory, disk, and network.
  - Forward logs to central platforms for analysis and audit.
- Capacity and lifecycle
  - Regularly review VM utilisation.
  - Clean up unused instances, volumes, and images.
These topics connect directly with broader infrastructure security and capacity planning practices. Many organisations in Singapore and the region pair OpenStack with guidance like an infrastructure security playbook for multicloud or an IT infrastructure capacity planning framework to avoid over or under provisioning.
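For the capacity and lifecycle side, a few read-only commands give you a quick picture; a hedged sketch using standard openstackclient options:

# Instances with status, image, and flavor details
openstack server list --long

# Your project's quota usage versus limits
openstack limits show --absolute

# Volumes that exist but are not attached to anything
openstack volume list --status available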
Scaling and Automating OpenStack Virtual Machines (Heat, Terraform, and Beyond)
Once you know how to spin up a single VM, it is natural to ask: how do I do this for dozens or hundreds of servers without endless clicking?
OpenStack has a native orchestration service, Heat, and is also well supported by tools like Terraform.
You can, for example:
- Use Heat templates to define stacks with multiple servers, networks, and load balancers.
- Use Terraform to codify your instances, volumes, networks, and security groups as code in Git.
This moves you into a more modern operating model, where your OpenStack VMs participate in the same infrastructure as code and CI or CD ecosystem as your public cloud workloads. If you are evaluating where this fits against other options, you may find it useful to compare infrastructure as a service vendors and read about how to build a cloud computing infrastructure without wrecking your budget or security.
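To give a feel for Heat, here is a minimal hedged sketch; it assumes the Heat service is deployed and the heat plugin for the openstack CLI is installed, and it reuses the example names from earlier sections:

# Write a minimal HOT template that defines one server
cat > web-stack.yaml <<'EOF'
heat_template_version: 2018-08-31
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-22-04
      flavor: m1.small
      key_name: my-key
      networks:
        - network: internal-net
EOF

# Create and inspect the stack
openstack stack create -t web-stack.yaml web-stack
openstack stack list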
Where OpenStack VMs Fit in a Singapore and Southeast Asia Strategy
For global teams, location and latency are not theoretical concerns.
- If your users and data are in Southeast Asia, placing OpenStack workloads in Singapore can significantly reduce latency while keeping you in a stable regulatory environment.
- For US based companies expanding into the region, it is often more efficient to host in a Singapore data center than to backhaul all the way to the US.
If you are comparing options, you might look at a Singapore cloud VPS field guide or strategies for tier 2 data centers in Southeast Asia to decide where your OpenStack platform should live.
Sectors like banking and government are especially sensitive to jurisdiction and control. That is why you will frequently see OpenStack and related technologies mentioned in discussions around cloud banking solutions in Singapore and Southeast Asia and government cloud in Singapore as part of larger digital transformation programs, such as those highlighted in Singapore’s government digital transformation and broader ASEAN digital transformation.
When to Bring in a Partner: How Accrets Helps You Design, Run, and Scale OpenStack VMs
Running a handful of test VMs is one thing. Running mission critical workloads on OpenStack across multiple data centers and regions is something else entirely.
A partner like Accrets, as a managed cloud service provider and IT infrastructure specialist based in Singapore, can help you:
- Design and implement an OpenStack based private or hybrid cloud.
- Migrate workloads from legacy infrastructure or VMware, aligning with your digital transformation roadmap.
- Operate the platform day to day with managed IT services and managed cloud services.
- Integrate OpenStack with DR, backup, connectivity, and security solutions across Southeast Asia.
If you are still deciding whether to build in house capability or partner, it is worth understanding the nuances of managed vs cloud services and which you actually need and why partnering with a managed cloud services provider matters in 2025. These considerations tie directly into why many enterprises work with managed service providers in Singapore and broader IT companies in Singapore to accelerate outcomes.
Dandy Pradana is a Digital Marketer and tech enthusiast focused on driving digital growth through smart infrastructure and automation. Aligned with Accrets’ mission, he bridges marketing strategy and cloud technology to help businesses scale securely and efficiently.




