Once you know which workloads belong in VMs, the next question is “what does a standard Linux VM look like?” With a template in place, you no longer rethink CPU types, disk buses, or Cloud-Init settings every time. This article targets Ubuntu/Debian workloads and walks through each option that new Proxmox users most often misconfigure—plus the reasoning behind every “safe default.”
How this post flows
- CPU type and core mapping
- Memory, ballooning, and HugePages defaults
- Disk bus, cache, and VirtIO parameters
- Network devices and the QEMU Guest Agent
- Cloud-Init plus the template workflow
- Common mistakes and a verification checklist
Terms introduced here
- x86-64-v2/v3: Virtual CPU profiles that bundle modern instruction sets.
- Ballooning: A feature that dynamically adjusts guest memory, useful for overcommit.
- VirtIO SCSI single: The default SCSI controller choice that combines good performance with TRIM support.
- Discard/Trim: Lets guests return unused blocks to LVM-Thin so space can be reclaimed.
- Cloud-Init: The initialization layer that applies users, SSH keys, and network config on first boot.
Reading card
- Estimated time: 24 minutes
- Prereqs: Ubuntu or Debian ISO plus a Proxmox 8.x host with ISO storage
- After reading: you can create and reuse a standardized Linux VM template.
CPU type and core mapping
| CPU type | What it exposes | When to pick it |
|---|---|---|
| kvm64 | Minimal instruction set, maximum compatibility | Legacy guests that refuse newer flags |
| x86-64-v2-AES | SSE4.2/SSSE3 plus AES-NI, Proxmox 8 default | Baseline for mixed or older hardware |
| x86-64-v3 | Adds AVX/AVX2/FMA | Only if all hosts are Zen 3 / 11th-gen Core or newer |
| host | Copies the physical CPU flags | Highest performance, zero migration safety |
- CPU type decision: Run `lscpu | grep Flags` on every Proxmox host. If any node lacks `avx2`, stick to `x86-64-v2-AES`. Picking `x86-64-v3` on the template and later migrating to an older CPU causes subtle segfaults. Document the chosen profile in your runbook.
- Core count template: Keep templates at 2 vCPUs (1 socket, 2 cores). After cloning, scale cores and sockets to match the host's core ratio so NUMA pinning stays predictable. If `numactl -H` shows multiple NUMA nodes on the host, enable awareness with `qm set <ID> --numa 1` and pin CPUs (`qm set <ID> --cpulimit 4 --affinity 0-3`) so critical workloads stay on a single socket. Single-socket homelabs can skip this entirely.
- Changing CPU types: When you switch CPU types on an existing VM, Linux detects "new hardware," and Windows often demands reactivation. Make the decision before running `qm template` so every clone inherits a known-good profile.
- Nested virtualization caveat: If you intend to run nested KVM/Docker, confirm the guest sees `vmx`/`svm` flags (`grep -E 'vmx|svm' /proc/cpuinfo`). Some CPU types mask these flags; switch to `host` only when you are certain migration is irrelevant.
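The flag check above is worth scripting once so it is not redone by hand per host. A minimal sketch: reduce one flag line per host (collected with `lscpu | grep Flags`, locally or over SSH) to the safest shared profile. The flag strings below are illustrative, not real host output.

```shell
# pick_cpu_type "FLAGS"... -> echoes the safest CPU type shared by all hosts.
pick_cpu_type() {
  local cpu_type="x86-64-v3" flags
  for flags in "$@"; do
    # Any host lacking avx2 forces the whole cluster down to the v2 profile.
    echo "$flags" | grep -qw avx2 || cpu_type="x86-64-v2-AES"
  done
  echo "$cpu_type"
}

# Feed it one flag string per host, e.g. gathered with:
#   ssh $host "lscpu | grep Flags"
pick_cpu_type "sse4_2 aes avx avx2" "sse4_2 aes"   # → x86-64-v2-AES
```

The result is what you record in the runbook and set on the template.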
Memory and ballooning
- Static allocation: Give the template 2 GB so clones boot on any host. Immediately resize per workload (e.g., nginx reverse proxy → 2 GB, PostgreSQL → 6 GB). Document both “minimum” and “burst” values.
- Ballooning: Only enable after verifying the guest has `virtio_balloon` loaded (`lsmod | grep virtio_balloon`) and swap sized to at least 50% of the balloon maximum (`swapon --show`). Without monitoring, ballooning delays OOM symptoms and confuses beginners, so the default should be off (balloon minimum = maximum). When you eventually turn it on, send alerts when `balloon` drops below 70% of the maximum so you notice pressure.
- HugePages: Leave disabled on the template. If a workload needs it, reserve huge pages on the host (GRUB `default_hugepagesz=1G hugepagesz=1G hugepages=8`) and set `qm set <ID> --hugepages 1024`. Also disable transparent huge pages (`echo never > /sys/kernel/mm/transparent_hugepage/enabled`) to avoid split TLB behavior.
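The 50% swap rule above can be checked inside a guest with a few lines of shell. This is a sketch: the 6144 MiB balloon maximum is a made-up example value, not a recommendation.

```shell
# required_swap_mib BALLOON_MAX_MIB -> minimum swap per the 50% rule above.
required_swap_mib() { echo $(( $1 / 2 )); }

# Compare against the guest's actual swap from /proc/meminfo (in MiB).
swap_mib=$(awk '/SwapTotal/ {print int($2/1024)}' /proc/meminfo)
need=$(required_swap_mib 6144)   # hypothetical 6 GiB balloon maximum
if [ "$swap_mib" -lt "$need" ]; then
  echo "WARN: swap ${swap_mib}MiB below recommended ${need}MiB"
fi
```

Run it as part of the same check that confirms `virtio_balloon` is loaded, before flipping ballooning on.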
Disk bus, cache, and VirtIO
| Disk bus | Pros | Cons |
|---|---|---|
| SATA | Works with anything | Lower throughput, no discard |
| VirtIO Block | Simple single-disk setup | No multi-queue support |
| VirtIO SCSI | Best performance + TRIM + multi-queue | Slightly more initial clicks |
- Controller configuration: Open Hardware → SCSI Controller and pick `VirtIO SCSI single` (one controller per disk, which is what `iothread=1` requires; enable iothreads only when a VM has >4 vCPUs or sustained 100k+ IOPS). Attach every disk via SCSI, then enable Discard and SSD emulation per disk.
- Cache mode per storage:
  - Local SSD or Ceph RBD: `Write back` is safe only with a UPS or battery-backed RAID cache.
  - Local HDD or NFS: `Write through` avoids corruption at the cost of lower throughput.
  - Benchmark-only: `Write back (unsafe)` ignores flush requests entirely—expect data loss if power fails, so never use it on production data. `Directsync` is the opposite trade-off: safe but slow, useful mainly for comparison runs.
- Host requirements for discard: Enabling guest-side discard only works if the host LVM-Thin pool issues discards. In the UI, open Datacenter → Storage → (your thin pool) → Edit → Advanced and confirm Issue discard is checked. After enabling, test inside a guest:

  ```
  fallocate -l 100M /tmp/trim-test && rm /tmp/trim-test
  sudo fstrim -v /
  ```

  On the host, watch `lvs -o lv_name,data_percent <vg>` to confirm the thin pool's data% drops. If it never decreases, discard is not working.
- VirtIO drivers: Ubuntu/Debian ship them by default, but keep `virtio-win.iso` on shared storage for the occasional Windows guest. If you later mix OSes, attach the ISO before templating so clones boot with the driver handy.
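On the CLI, the controller and per-disk flags above come together as a `qm set` option string. The helper below is a sketch for clone scripts; VM id `9000` and storage `local-lvm` are placeholders.

```shell
# build_scsi_opts STORAGE VMID N -> Proxmox disk option string with
# discard, SSD emulation, and a dedicated iothread enabled.
build_scsi_opts() {
  printf '%s:vm-%s-disk-%s,discard=on,ssd=1,iothread=1\n' "$1" "$2" "$3"
}

build_scsi_opts local-lvm 9000 0   # → local-lvm:vm-9000-disk-0,discard=on,ssd=1,iothread=1

# On the host you would then run (placeholder id and storage):
#   qm set 9000 --scsihw virtio-scsi-single
#   qm set 9000 --scsi0 "$(build_scsi_opts local-lvm 9000 0)"
```

Keeping the string in one function means every clone gets identical discard and iothread settings.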
Network and QEMU Guest Agent
Network devices
- Bridge choice: Most labs use `vmbr0` for LAN traffic. If you segment VLANs, create tagged interfaces (`vmbr0.10`) and note the default in your template doc so future clones land on the intended network.
- NIC model: Pick `VirtIO (paravirtualized)` for every template. Set Multiqueue to min(vCPU count, 4); more queues than CPUs provide no benefit and add overhead. Validate inside the guest with `ethtool -l ens18`.
- Firewall defaults: Decide whether Proxmox's VM firewall should be on by default. A template that leaves it disabled forces you to remember to enable it on every clone.
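The min(vCPU count, 4) rule is simple enough to wrap in a helper when you script clone creation. A sketch; the `qm` line in the comment uses a hypothetical VM id.

```shell
# nic_queues VCPUS -> multiqueue count per the min(vCPUs, 4) rule above.
nic_queues() {
  local v=$1
  if [ "$v" -lt 4 ]; then echo "$v"; else echo 4; fi
}

nic_queues 2   # → 2
nic_queues 8   # → 4 (capped)

# Example application on the host (placeholder VM id):
#   qm set 9000 --net0 "virtio,bridge=vmbr0,queues=$(nic_queues 8)"
```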
QEMU Guest Agent
- Install inside the guest first:

  ```
  sudo apt install -y qemu-guest-agent
  sudo systemctl enable --now qemu-guest-agent
  ```

- Then enable on the host: `qm set <ID> --agent enabled=1`. Verify connectivity with `qm agent <ID> ping`. Without the running guest service, Proxmox silently falls back to ACPI power commands.
- What you gain: correct IP reporting, reliable `qm shutdown`, and guest file access through the agent's `file-read` API. Templates missing the agent leave those features broken until you log in manually.
Cloud-Init and the template workflow
- Prefer official cloud images: Download the Ubuntu Cloud image (`ubuntu-22.04-server-cloudimg-amd64.img`), import it with `qm importdisk`, and attach it. Cloud images already include `cloud-init` and netplan hooks. If you do a manual ISO install, remove the installer's netplan config (`sudo rm /etc/netplan/00-installer-config.yaml`) so Cloud-Init networking takes over cleanly.
- Update and sanitize:

  ```
  sudo apt update && sudo apt dist-upgrade
  sudo truncate -s0 /etc/machine-id
  sudo rm -f /var/lib/dbus/machine-id
  sudo rm -rf /var/lib/cloud/instances/* /var/lib/cloud/instance
  sudo rm /etc/ssh/ssh_host_*
  sudo cloud-init clean --seed --logs
  ```

  Optionally run `sudo systemd-firstboot --setup-machine-id` on first boot via user-data so each clone gets a unique ID.
- Attach the Cloud-Init drive: Hardware → Add → Cloud-Init Drive. Always prefer shared storage (`cephfs`, `nfs-templates`, shared LVM-Thin): if the drive lives on `local-lvm`, `qm migrate` fails because the new node cannot see the drive. Only single-node homelabs that accept this risk should use local storage. Inside the guest, confirm `cloud-init query v1.datasource` reports the Proxmox drive as its datasource; if it does not, check that `/dev/sr0` exists and the drive is mounted.
- Convert to a template:

  ```
  qm shutdown <ID>
  qm template <ID>
  ```

  Remember: template disks become read-only. Clone the template if you ever need to edit it later.
- Clone and customize:

  ```
  qm clone <TEMPLATE-ID> <NEW-ID> --name web-01
  qm set <NEW-ID> --ipconfig0 "ip=192.168.10.20/24,gw=192.168.10.1"
  ```

  Add SSH keys and users via the Cloud-Init tab or a user-data snippet rather than the CLI, which logs arguments. After filling in hostname, DNS, user, and keys, boot the VM and check `cloud-init status --wait`, `hostnamectl`, `ip addr`, and `/var/log/cloud-init-output.log` to confirm successful customization. If `cloud-init status --wait` hangs, read `/var/log/cloud-init.log` to find the stuck stage instead of waiting indefinitely.
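As a final gate before `qm template`, the sanitize list above can be spot-checked with a small helper. This is a sketch: it takes the cleaned paths as arguments (so it is testable anywhere) rather than hard-coding your list.

```shell
# check_sanitized PATH... -> reports leftovers; machine-id must exist but be
# empty, every other listed path must be gone. Returns nonzero on problems.
check_sanitized() {
  local bad=0 f
  for f in "$@"; do
    case "$f" in
      */machine-id) [ -s "$f" ] && { echo "non-empty: $f"; bad=1; } ;;
      *)            [ -e "$f" ] && { echo "still present: $f"; bad=1; } ;;
    esac
  done
  return "$bad"
}

# Typical call inside the almost-template VM:
#   check_sanitized /etc/machine-id /etc/ssh/ssh_host_rsa_key \
#     /var/lib/cloud/instance && echo "ready for qm template"
```

A clone that ships a stale `machine-id` or old SSH host keys is the hardest template bug to notice later, so the check pays for itself quickly.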
Common mistakes and checklist
- CPU type: Setting the template to `host` pins you to the current hardware. Verify the oldest host before you pick `x86-64-v3`.
- Ballooning: Enabling without swap or monitoring masks memory pressure, then everything OOMs at once. Keep it off until you measure workloads.
- Guest Agent: Installing on some clones but not others means `qm shutdown` behaves inconsistently. Bake it into the template instead.
- Cloud-Init drive: Forgetting it forces you to hand-edit `/etc/netplan` every time. Attach before templating.
- Discard: Checking the box but skipping host-side `discard` or `fstrim` lets the thin pool fill silently.
Checklist (tick every time you cut a template):
- Chosen CPU type documented and validated on all hosts
- Template memory size, ballooning policy, and NUMA flag recorded
- Disk controller, cache mode, discard, and `fstrim` schedule noted
- vmbr/VLAN choice, NIC queues, firewall default, and Guest Agent install script included
- Cloud-Init drive lives on the correct storage; `machine-id`, SSH keys, and cloud caches cleared
- Post-clone validation steps (`cloud-init status`, `hostnamectl`, new SSH fingerprints) performed
Wrap-up
A standardized Linux VM accelerates every future workload and keeps multiple operators aligned. Once CPU, memory, disk, and network defaults are in place, each clone only needs workload-specific tweaks. Templates also reduce the chance of forgetting VirtIO drivers or Cloud-Init settings.
Next, we will connect these VM and LXC setups to a backup-and-restore routine so they remain recoverable.