[Proxmox Series Part 9] Building a Complete Self-Hosted Web Infrastructure on Spare Hardware


Proxmox often gets introduced as a “virtualization tool,” but its real value is turning spare hardware into a coherent operating model. To close this series, we will walk through a practical scenario: building a self-hosted web infrastructure on Proxmox using a mini PC (or two), with an application server, reverse proxy, monitoring stack, optional Windows box, and an OPNsense-powered private network plus VPN expansion.

How this post flows

  1. Hardware layout and resource allocation
  2. Role-based VM/LXC design (app, proxy, monitoring, Windows)
  3. Network design with bridges, OPNsense, private subnets, and VPN
  4. Creating repeatable deployment steps without heavy IaC
  5. Operating routines for homelabs and small internal teams

Terms introduced here

  1. OPNsense: A FreeBSD-based firewall/router distribution that works well as a Proxmox VM for private network control.
  2. Jump VM: A bastion-like entry point that funnels SSH or VPN traffic into the internal environment.
  3. Provisioning template: A Proxmox VM or LXC converted into a template for rapid, repeatable deployments.

Architecture overview

Required vs. optional pieces

  • Required: Proxmox host, reverse proxy, application server, monitoring stack, OPNsense-based internal LAN (so production workloads are never bridged directly to WAN)
  • Optional: Windows test VM, Jump VM, second Proxmox host, NAS/PBS, VLAN segmentation, dedicated OPNsense hardware for failover

Topology diagram

                 ┌───────────── Internet / ISP router ─────────────┐
                 │                                                 │
            (physical NIC 1)                                   (physical NIC 2)
                 │                                                 │
             [vmbr0 - WAN]                                  [vmbr1 - LAN]
                 │                     ┌──────────────────────────┴──────────────┐
                 │                     │  OPNsense VM (WAN=vmbr0, LAN=vmbr1)      │
                 │                     └──────────────┬──────────────────────────┘
                 │                                    │  DHCP / DNS / WireGuard
   Proxmox UI, SSH, PBS                               │
                 │          ┌──────────────┬──────────┴──────────┬─────────┐
                 │          │              │                     │         │
             External     Reverse       App Server         Monitoring   Windows / Jump VM
             exposure     Proxy (80/443) (vmbr1)           (vmbr1)      (vmbr1)

Reading card

  • Estimated time: 22 minutes
  • Prereqs: familiarity with creating Proxmox VMs/LXCs and configuring bridges
  • After reading: you can explain how to assemble and operate a self-hosted web stack on spare hardware.

Hardware example

| Role | Hardware | Key resources |
|---|---|---|
| Proxmox host A | 11th-gen i5 mini PC, 6–8 cores, 32 GB RAM, 1 TB NVMe | Main apps/proxy/monitoring |
| Proxmox host B (optional) | Used desktop, 16 GB RAM, 512 GB SSD | PBS or test/HA experiments |
| NAS/PBS (optional) | Low-power mini PC + 4 TB HDD | Backups and archives |

One host is enough to start, but a dedicated NAS or PBS makes recovery dramatically simpler. The biggest prerequisite is having at least two NICs.

  • NIC 1 → vmbr0 (WAN/management) → household router, Proxmox UI/SSH exposure
  • NIC 2 → vmbr1 (internal LAN) → OPNsense LAN and all private workloads

If you only have one NIC, add a USB NIC or start with VLAN tagging on the single interface, then migrate once more hardware arrives.
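
On the Proxmox host, that two-bridge split can be sketched in `/etc/network/interfaces`. This is only a sketch: the NIC names (`enp1s0`, `enp2s0`) and the management address are assumptions you should replace with your own values.

```
# /etc/network/interfaces — two-bridge layout sketch
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24      # Proxmox management IP on the household LAN
    gateway 192.168.1.1
    bridge-ports enp1s0          # NIC 1 → WAN/management
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual          # no host IP needed; OPNsense owns this subnet
    bridge-ports enp2s0          # NIC 2 → internal LAN
    bridge-stp off
    bridge-fd 0
```

Note that `vmbr1` deliberately gets no host address: internal workloads route through OPNsense, not through the Proxmox host.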

Suggested resource split (32 GB RAM baseline):

| Workload | vCPU | RAM | Disk | Notes |
|---|---|---|---|---|
| OPNsense | 2 | 4 GB | 20 GB | Increase to 6 GB if you terminate dozens of VPN clients |
| Reverse proxy | 2 | 2 GB | 20 GB | TLS termination, ACME hooks |
| App server | 4 | 8 GB | 120 GB | Docker/Podman, 2–3 services, expand RAM as traffic grows |
| Monitoring | 2 | 2 GB | 40 GB | Prometheus + Grafana + logs |
| Windows test (optional) | 2 | 6 GB | 80 GB | Keep powered off when not in use |
| Jump VM (optional) | 1 | 1 GB | 10 GB | Tailscale/WireGuard client |
| Free headroom | – | ~9 GB | – | Proxmox services + temporary restore buffers |

Role-based workloads

1. Application server (VM)

  • Ubuntu 24.04 LTS, 4 vCPU / 8 GB RAM / 120 GB disk (assumes ~50 concurrent users and 2–3 containers)
  • Runs Docker or Podman for web apps and backing services
  • Capture baseline provisioning (packages, users, services) in a lightweight Ansible playbook or shell script to keep redeployments consistent.
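
That baseline provisioning can live in a script as small as the sketch below. The package list and the `deploy` user are assumptions for illustration, not a prescription; the `--dry-run` mode just prints the plan so you can review it before running it on a fresh clone.

```shell
#!/bin/sh
# Baseline provisioning sketch for the app server VM.
set -eu

bootstrap_app() {
    PACKAGES="ca-certificates curl git docker.io"
    DEPLOY_USER="deploy"

    if [ "${1:-}" = "--dry-run" ]; then
        # Preview mode: print the plan without touching the system.
        echo "install: $PACKAGES"
        echo "create user: $DEPLOY_USER (docker group)"
        return 0
    fi

    apt-get update -y
    apt-get install -y $PACKAGES
    # Create the deploy user only if it does not exist yet (idempotent).
    id "$DEPLOY_USER" >/dev/null 2>&1 \
        || adduser --disabled-password --gecos "" "$DEPLOY_USER"
    usermod -aG docker "$DEPLOY_USER"
    systemctl enable --now docker
}

bootstrap_app --dry-run
```

Because every step is idempotent, you can rerun the script after restoring a VM from backup without side effects — the same property a small Ansible playbook would give you.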

2. Reverse proxy (LXC or VM)

  • Debian LXC, 2 vCPU / 2 GB RAM / 20 GB disk (good for ≤1 Gbps HTTPS traffic)
  • Runs Caddy or Nginx Proxy Manager for TLS termination, HTTP→HTTPS redirects, WebSocket pass-through
  • ACME DNS-01 lets you renew wildcard certificates without exposing HTTP-01 ports, but secure the DNS API credentials stored on the proxy.
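
As one concrete sketch, a Caddyfile for this proxy could look like the following. It assumes a Caddy build that bundles a DNS provider module (the Cloudflare module here); the domain, hostnames, and upstream addresses are example values.

```
# Caddyfile sketch — wildcard cert via ACME DNS-01
*.home.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}   # keep this token tightly scoped
	}

	@app host app.home.example.com
	handle @app {
		reverse_proxy 10.0.10.20:8080       # app server on vmbr1
	}

	@grafana host grafana.home.example.com
	handle @grafana {
		reverse_proxy 10.0.10.30:3000       # monitoring LXC on vmbr1
	}
}
```

Since DNS-01 never needs an inbound HTTP challenge, the only port you must forward from OPNsense is 443 to this proxy.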

3. Monitoring + logging (LXC)

  • 2 vCPU / 2 GB RAM / 40 GB disk
  • Prometheus + Grafana plus Loki or Vector for logs
  • Deploy node exporters and remote journald shipping from hosts and guests
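
A minimal Prometheus scrape configuration for those node exporters might look like this; the target addresses are example IPs on the vmbr1 subnet.

```yaml
# prometheus.yml sketch — scrape node_exporter on internal hosts
scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - 10.0.10.10:9100   # reverse proxy
          - 10.0.10.20:9100   # app server
          - 10.0.10.30:9100   # monitoring host itself
```

Adding a new VM to monitoring is then a one-line change, which is easy to fold into the provisioning script or playbook.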

4. Windows test box (VM)

  • Windows 11 Pro, 2 vCPU / 6 GB RAM / 80 GB disk
  • Used for browser testing or RDP-only internal tools
  • Powered off when unused; rely on snapshots to keep a clean baseline

5. OPNsense firewall (VM)

  • 2 vCPU / 4 GB RAM / 20 GB disk (bump to 6 GB if you push a lot of VPN traffic)
  • Needs at least two NICs: vmbr0 (WAN) and vmbr1 (LAN). The LAN interface hands out DHCP/DNS/WireGuard configuration to every internal VM.
  • Because OPNsense runs as a VM, double-check bridge assignments—miswiring vmbr1 back to WAN defeats the isolation.
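
A quick way to verify that wiring is to inspect the VM's NIC configuration from the Proxmox host (VM ID 100 is an example):

```shell
# Print only the network devices of the OPNsense VM.
# net0 should reference bridge=vmbr0 (WAN) and net1 bridge=vmbr1 (LAN).
qm config 100 | grep '^net'
```

Running this after any hardware change takes seconds and catches the miswiring described above before it becomes an exposure.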

Every workload should be rebuildable from a template. App and proxy VMs benefit from VM templates, while monitoring and helper services fit well into LXC templates.

Network design

  1. vmbr0 (WAN/management): Bridge the NIC that faces your household router. Restrict Proxmox UI/SSH exposure at the router’s firewall and treat this as the only path to PBS or remote management.
  2. vmbr1 (LAN): Bridge the NIC that connects to OPNsense’s LAN interface. All internal workloads attach here and use OPNsense as their gateway. If you only have one NIC, add a USB NIC or use VLAN tagging (e.g., vmbr0 untagged for WAN, tag 10 for LAN) until you can add real hardware.
  3. OPNsense policies: Explicitly configure NAT, firewall rules, and WireGuard. Example: forward WAN 443 → reverse proxy 443; allow LAN → WAN for specific ports only; expose WireGuard on 51820 and force all admin access through the VPN. Without these explicit policies, you may still be exposing Proxmox or other services inadvertently.
  4. Jump VM (optional): Keep WireGuard/Tailscale clients on a dedicated VM so administrators can reach internal services without poking additional holes in the firewall. The Jump VM lives on vmbr1 and connects outward to OPNsense’s VPN.

Document the desired flows (e.g., “only reverse proxy sees WAN 80/443”) so you can verify compliance during audits.

Repeatable deployment without heavy IaC

  1. Day 0 – networking: Create vmbr0/vmbr1, deploy OPNsense first, and confirm DHCP/DNS/WireGuard works on the LAN bridge.
  2. Day 1 – build templates: Install OS, patch, harden, then Convert to Template. Keep one template per OS version to avoid drift.
  3. Day 1 – add cloud-init: Use Proxmox’s cloud-init integration (Hardware > Cloud-Init) to inject hostnames, SSH keys, and IPs so new VMs boot in minutes.
  4. Day 2 – tagging + naming: Set tag conventions (role=proxy, env=prod) and VM ID ranges so the Datacenter view stays manageable.
  5. Day 2 – automate provisioning: Capture Docker installs, user creation, monitoring agents, etc., in small Ansible playbooks or scripts.
  6. Day 3 – automate protection: Configure Datacenter > Backup jobs targeting PBS (daily for app/proxy/OPNsense, weekly for monitoring/Windows) and wire the job notifications into email or ChatOps.
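
Steps 2–3 above map onto a handful of `qm` commands. The sketch below uses example values throughout — VM IDs 9000/120, the name, addresses, and key path are assumptions to adapt:

```shell
# One-time: turn the patched, hardened base VM into a template.
qm template 9000

# Per deployment: full-clone the template and inject cloud-init settings.
qm clone 9000 120 --name app-prod --full
qm set 120 --net0 virtio,bridge=vmbr1
qm set 120 --ipconfig0 ip=10.0.10.20/24,gw=10.0.10.1
qm set 120 --sshkeys ~/.ssh/id_ed25519.pub
qm set 120 --tags "role=app,env=prod"
qm start 120
```

Because the template already carries the OS hardening, a new VM is ready for the provisioning script within minutes of `qm start`.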

Operating routines for small teams

  • Snapshot before risky changes: Grab a manual snapshot before major updates on the app server or proxy.
  • Weekly report: Summarize PBS backup logs, Proxmox task logs, and monitoring screenshots to keep visibility high.
  • Monthly restore drill: Follow the exact steps below and target a 30-minute completion time.
    1. Pause the current app server and select the latest PBS backup.
    2. Restore into a new VM ID (app-restore-YYYYMM), connect it only to vmbr1.
    3. Start services, run synthetic health checks (HTTP 200, DB migrations), and log the timestamps.
    4. Update the operations log with total restore time and any blockers.
  • Expansion review: Decide when to add a second host for HA or move OPNsense onto dedicated hardware for deeper network segmentation.
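
The monthly restore drill can be scripted against PBS roughly as follows. Storage names (`pbs`, `local-lvm`), VM IDs, the snapshot timestamp, and the health-check URL are all example values, and the exact backup volume ID comes from the listing step:

```shell
# 1. Pause the current app server.
qm shutdown 120

# List available backups on the PBS storage to pick the latest one.
pvesm list pbs --content backup

# 2. Restore into a fresh VM ID, attached only to vmbr1.
qmrestore 'pbs:backup/vm/120/2025-06-01T02:00:00Z' 220 --storage local-lvm
qm set 220 --name app-restore-202506 --net0 virtio,bridge=vmbr1
qm start 220

# 3. Synthetic health check from inside the LAN; log the timestamp.
curl -fsS -o /dev/null -w '%{http_code}\n' http://10.0.10.21/healthz
```

Timing the whole sequence end to end is what tells you whether the 30-minute target is realistic for your hardware.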

These routines build resilience without needing enterprise-grade IaC.

Expansion ideas: private network and VPN

  1. OPNsense + WireGuard: Provide remote access via mobile/desktop WireGuard clients.
  2. VLAN separation: Segment apps, management, IoT, and guest traffic using VLANs on Proxmox bridges and assign tagged VLANs per VM/LXC.
  3. Hardware firewall failover: Run OPNsense on a low-power x86 appliance as well as in Proxmox, sync configuration via CARP/HA, and fail over when the VM host reboots. This requires at least two WAN/LAN ports and adds operational complexity.
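
For the VLAN route, the internal bridge must be made VLAN-aware. A sketch of the relevant `/etc/network/interfaces` stanza, where the NIC name and VLAN ID range are examples:

```
# VLAN-aware internal bridge sketch
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094        # or restrict to the VLANs you actually use
```

A VM or LXC then joins a specific segment with a tagged NIC, e.g. `qm set 120 --net0 virtio,bridge=vmbr1,tag=10`, while OPNsense carries matching VLAN interfaces on its LAN side.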

Wrap-up

With these pieces in place, one Proxmox node becomes more than a lab—it turns into a self-hosted web platform with application, proxy, monitoring, Windows, OPNsense, and backup workflows that reinforce each other. Remember that a single node is still best suited for homelabs or small internal tools; true HA requires at least three Proxmox nodes (for quorum) plus shared storage. The lesson throughout this series is consistent: small hardware becomes practical infrastructure when you apply a clear operating model. Treat Proxmox as that model, combine it with disciplined storage, networking, backup, and deployment habits, and you have a repeatable stack ready for homelab or small internal use.
