[OPNsense Series Part 2] Building the OPNsense VM on Proxmox and Drawing the WAN/LAN Line


Running OPNsense as a VM on Proxmox lets you collapse firewalling, routing, and VPN entry points into a single control plane. The catch is deciding how vmbr bridges split WAN and LAN, which guest NIC types to expose, and which installer switches keep the network boundary intact. Skip those, and you end up with workloads that hop around the firewall entirely.

This chapter starts with the single-NIC baseline, extends the mental model to dual-NIC and VLAN trunks, and then walks through the exact VM creation flow so that every internal hop must traverse OPNsense. Merely creating bridges is not enough—you must also point every guest’s default gateway at the OPNsense LAN interface so that all Layer 3 traffic flows through the firewall. We assume vmbr0 and vmbr1 already exist; if you have not created them, finish Part 1 or Proxmox’s networking guide first.

Before You Continue: Prerequisite Checklist

  1. The Proxmox host has at least one physical NIC and already exposes vmbr0 (WAN) and vmbr1 (LAN) in /etc/network/interfaces. Otherwise, follow the bridge section in Part 1 first. How to verify: open Datacenter > Node > System > Network and confirm you have a vmbr0 with a physical uplink in Bridge ports and a vmbr1 with bridge_ports none for the internal segment.
  2. You have uploaded an ISO into a Proxmox storage (for example local (iso)) before. If not, skim the ISO images docs.
  3. You are comfortable with the FreeBSD/OPNsense console (arrow keys, Tab, Space). No mouse input is needed.

Reading Card

  • Estimated time: 18 minutes
  • Requires: vmbr creation experience, FreeBSD console basics, ISO upload experience
  • Outcome: You can map WAN/LAN bridges, create the VM with the right NICs, and validate that no guest skips the firewall.

Terms You Will See

  1. WAN bridge: The Linux bridge (vmbr0) that uplinks to the ISP router and feeds OPNsense’s WAN interface.
  2. LAN bridge: Another Linux bridge (vmbr1) that carries only internal VMs/LXCs and points to OPNsense’s LAN interface.
  3. VirtIO NIC: Paravirtualized NIC type in Proxmox. FreeBSD ships the vtnet driver in its base kernel, so OPNsense supports it out of the box with minimal CPU overhead.
  4. Serial installer: The -serial.iso image that lets Proxmox’s serial0 console drive the full install without VNC.
  5. VLAN-aware bridge: A bridge configured to pass VLAN tags so a single NIC can expose multiple logical networks.

Flow of This Article

  1. Preconditions and why WAN/LAN separation matters
  2. Single-NIC baseline vs. dual-NIC/VLAN expansion
  3. VM creation and installer choices
  4. Virtual NIC, MTU, and storage considerations
  5. Post-install validation and common mistakes

Inspect vmbr0 and vmbr1

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge_ports enp6s0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 10.10.0.1/24
    bridge_ports none
    bridge_stp off
    bridge_fd 0

vmbr0 uplinks to the ISP edge; vmbr1 is an internal-only bridge. Confirm both exist before touching the VM. From any LAN guest, run ip route show default (Linux) or netstat -rn | grep default (BSD) and make sure the default gateway is the OPNsense LAN IP; if it still points at the ISP router, fix your DHCP scope or static settings before proceeding.
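The gateway check above can be scripted. This is a minimal sketch that assumes 10.10.0.1 as the OPNsense LAN IP and runs against a captured `ip route` sample in place of live output:

```shell
# Hypothetical no-bypass check: does the guest's default gateway
# equal the OPNsense LAN IP? (10.10.0.1 is an assumed value.)
expected_gw="10.10.0.1"
# Sample `ip route` output captured from a LAN guest, for illustration:
route_output="default via 10.10.0.1 dev eth0 proto dhcp
10.10.0.0/24 dev eth0 proto kernel scope link src 10.10.0.42"
actual_gw=$(printf '%s\n' "$route_output" | awk '/^default/ {print $3; exit}')
if [ "$actual_gw" = "$expected_gw" ]; then
  echo "gateway OK: $actual_gw"        # traffic traverses OPNsense
else
  echo "gateway BYPASSES firewall: $actual_gw" >&2
fi
```

On a live guest you would replace the sample text with the real `ip route` output.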

Why Split WAN and LAN

OPNsense applies firewall policies, NAT, DHCP, VPN, and MTU tweaks per interface. If WAN and LAN live on the same bridge—or if guests keep the ISP router as their gateway—then:

  1. The firewall loses directionality. Rules can’t tell inbound from outbound.
  2. Guests may pick the ISP router as their default gateway and run outside the firewall. This destroys the Layer 3 boundary you are trying to enforce.
  3. PPPoE, VLAN tags, or WAN MTU changes bleed into the LAN and drop sessions.

So we dedicate vmbr0 to WAN uplinks and vmbr1 to internal guests.

Architecture: Single NIC as the Baseline

This entire chapter assumes the following layout. Proxmox itself still rides vmbr0 to reach the internet, while every guest hits OPNsense first. That’s what we call “no-bypass wiring,” meaning OPNsense is the only Layer 3 router between LAN guests and the outside world.

[Diagram] ISP Router / ONT → physical NIC 1 → vmbr0 (WAN, uplink only; Proxmox UI / SSH also exits here) → OPNsense VM (WAN = vmbr0, LAN = vmbr1) → vmbr1 (internal-only LAN bridge) → reverse proxy, internal apps / LXC.

Key points:

  • The physical NIC only belongs to vmbr0; Proxmox can keep the ISP router as its default gateway.
  • vmbr1 has no physical port. Every internal guest connects there and uses the OPNsense LAN IP as the gateway.
  • The OPNsense VM carries two VirtIO NICs, one per bridge.

Host path note: In this baseline the Proxmox host (UI, SSH, backups) still exits via vmbr0, so it is not protected by OPNsense. To shield it as well, add a management NIC/VLAN on vmbr1 and point the host’s gateway at the OPNsense LAN IP. Choose based on operational recovery vs. security: staying on vmbr0 keeps out-of-band access simple but requires hardening (VPN or IPMI), while moving to vmbr1 increases safety at the cost of relying on console access when the firewall misbehaves. The rest of this guide keeps host/guest paths separate.

Dual NIC or VLAN Expansion

[Diagram] WAN uplink → NIC 1 (WAN) → vmbr0 (WAN) → OPNsense VM → vmbr1 (LAN, VLAN-aware) → NIC 2 or VLAN trunk → managed switch / AP → internal servers.
| Profile | Physical NICs | VLAN needed | Use case |
| --- | --- | --- | --- |
| Single NIC baseline | 1 | No | Homelab, lab-only guests |
| Dual NIC | 2 | No | Pull a physical NAS, AP, or servers behind OPNsense |
| VLAN trunk | 1 (tagged) | Yes | Managed switch, multiple subnets needed |

Regardless of the profile, the rule stays: all internal traffic must exit through OPNsense. Even with VLANs, don’t let guests keep the ISP router as their default gateway.
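For the VLAN trunk profile, a VLAN-aware vmbr1 might look like the fragment below. This is a sketch only: the port name enp7s0 and the VLAN ID range are placeholders to adjust for your hardware.

```shell
# Hypothetical VLAN-aware variant of vmbr1 in /etc/network/interfaces.
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp7s0        # trunk port toward the managed switch
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094         # VLAN IDs allowed on the trunk
```

OPNsense then creates its VLAN subinterfaces on top of the single vtnet NIC attached to this bridge.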

Building the VM

Step 1: Upload the Serial ISO

  1. Download the latest OPNsense serial installer image (OPNsense-XX.X-serial-amd64).
    • -serial.iso: Runs entirely over the Proxmox serial0 console, which is the workflow assumed here.
    • -vga.iso: Requires opening a separate VNC window; skip it unless you truly need graphical output.
    • -dvd.iso: Intended for physical installs with a keyboard/monitor.
  2. Proxmox UI → local (iso) → Upload.

Step 2: BIOS, Machine, Disk

| Setting | Value | Rationale |
| --- | --- | --- |
| BIOS | SeaBIOS | Best FreeBSD compatibility unless you need ZFS boot mirrors |
| Machine | q35 | Proper VirtIO/NVMe enumeration |
| SCSI controller | VirtIO SCSI | Reliable throughput, TRIM support |
| Disk | 8–16 GB on VirtIO SCSI | Enough room for logs; thin provisioning works |

Give it 2 vCPUs and at least 4GB RAM, and disable ballooning: FreeBSD handles the balloon driver far less gracefully than Linux, and reclaiming memory under load can starve OPNsense’s state tables at the worst possible moment.

Step 3: Wire the NICs and Record MACs

  1. Net0: VirtIO (paravirtualized) + vmbr0
  2. Net1: VirtIO (paravirtualized) + vmbr1
  3. After the VM is created, open the Hardware tab, click each NIC, and copy its MAC address value. Paste them into a scratchpad so you can match vtnet0/vtnet1 during install. If you must troubleshoot an older OPNsense image that lacks VirtIO support, temporarily pick E1000 and plan to return to VirtIO afterward.
  4. The installer shows them as vtnet0/vtnet1 as follows:
Available network interfaces:
  vtnet0 - 52:54:00:12:34:00 (vmbr0) ← use this for WAN
  vtnet1 - 52:54:00:12:34:01 (vmbr1) ← use this for LAN

VirtIO is the default recommendation (built-in driver, lower CPU use). Only fall back to E1000 if you troubleshoot an old guest OS.
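To double-check the MAC-to-bridge mapping before install, you can parse `qm config <vmid>` on the host. The sketch below runs awk over a captured sample; the VM ID and MAC values are illustrative:

```shell
# Map Proxmox NIC entries to their MACs and bridges. The text below is a
# sample of `qm config 105` output (VM ID 105 and the MACs are assumed).
qm_config="net0: virtio=52:54:00:12:34:00,bridge=vmbr0
net1: virtio=52:54:00:12:34:01,bridge=vmbr1"
printf '%s\n' "$qm_config" | awk -F'[ =,]' '/^net/ {print $1, $3, "on", $5}'
# prints:
#   net0: 52:54:00:12:34:00 on vmbr0
#   net1: 52:54:00:12:34:01 on vmbr1
```

During the installer’s interface assignment, match these MACs against vtnet0/vtnet1.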

Step 4: Boot Order and Console

ISO first, disk second. Ensure serial0 exists (Add → Serial port) so the Proxmox console mirrors the installer.
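Steps 2–4 can also be collapsed into a single `qm create` call in the host shell. Treat this as a sketch, not a drop-in command: VM ID 105, the storage names, and the ISO filename are assumptions to replace with your own values.

```shell
# Hypothetical one-shot VM creation matching the settings above.
qm create 105 \
  --name opnsense --ostype other \
  --machine q35 --bios seabios \
  --cores 2 --memory 4096 --balloon 0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:16 \
  --net0 virtio,bridge=vmbr0 \
  --net1 virtio,bridge=vmbr1 \
  --ide2 local:iso/OPNsense-XX.X-serial-amd64.iso,media=cdrom \
  --serial0 socket \
  --boot 'order=ide2;scsi0'
```

The `--balloon 0` flag disables ballooning, and `--serial0 socket` adds the serial port the installer needs.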

Step 5: Run the Installer

Boot the VM, open the Proxmox console, and you should see the serial UI. If it is blank, you likely uploaded the non-serial ISO.

Installer Choices

  1. Keymap: Stay on US; handle locale input via the web UI later.
  2. Disk layout: Guided Installation > UFS unless you explicitly plan ZFS mirrors.
  3. Interface assignment: vtnet0 = WAN, vtnet1 = LAN. Match the MAC addresses above.
  4. Admin password: Save it immediately to your secrets manager.
  5. DHCP: Enable it during install, but leave the range blank for now; you’ll define 10.10.0.100–200 later in the UI.

Post-Install Tuning (Optional)

| Driver | Pros | Cons | Use case |
| --- | --- | --- | --- |
| VirtIO | Native driver, low CPU, high throughput | Needs a modern guest OS | Default choice |
| E1000 | Works on older OSes | High CPU, 1Gbps bottleneck | Legacy compatibility |
| Realtek 8139 | Almost universal | 100Mbps cap, even higher CPU | Only for diagnostics |
  • Multiqueue: On 1Gbps+ WAN links, set the Multiqueue value (for example 4) on each VirtIO NIC in the Proxmox Hardware tab; FreeBSD’s vtnet driver picks the extra queues up automatically, and you can verify them with sysctl dev.vtnet.0 inside OPNsense.
  • MTU: VLAN tags eat four bytes. Keep MTU equal across Proxmox bridges and OPNsense interfaces (1500 or a deliberately lowered value).
  • Storage: VirtIO SCSI + SSD-backed storage keeps logs fast; schedule TRIM.
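The four-byte VLAN cost is easy to budget explicitly. The sizes below are the standard Ethernet/802.1Q values, nothing OPNsense-specific:

```shell
# Why VLAN tags matter for MTU budgeting (standard Ethernet sizes).
mtu=1500          # Layer 3 MTU that bridges and interfaces must agree on
vlan_tag=4        # 802.1Q tag inserted on trunk ports
eth_header=14     # untagged Ethernet header (dst MAC, src MAC, EtherType)
frame=$((eth_header + vlan_tag + mtu))
echo "tagged frame on the wire: $frame bytes"   # prints 1518 (plus 4-byte FCS)
```

If any hop in the path cannot carry the tagged frame, either enable jumbo/baby-giant frames there or lower the MTU everywhere consistently.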

Quick Post-Install Validation

  1. In the console, run ifconfig vtnet0 and ifconfig vtnet1 to confirm IPs.
  2. In the web UI, Interfaces > Assignments must read WAN = vtnet0, LAN = vtnet1.
  3. From a LAN guest, traceroute 1.1.1.1 should show OPNsense as the first hop.
  4. Diagnostics > Ping inside OPNsense should hit both LAN guests and external DNS.
  5. Finally, set the LAN DHCP range (for example 10.10.0.100–200) under Services > DHCPv4 > LAN.

Validation checklist

  • Serial console shows the OPNsense login: prompt with WAN/LAN addresses on vtnet0/vtnet1.
  • https://<LAN IP> loads the UI with Interfaces > Assignments set to WAN = vtnet0, LAN = vtnet1.
  • A test LAN VM sees OPNsense as hop #1 in traceroute 1.1.1.1 and retrieves the WAN IP when running curl https://ifconfig.me.
  • If any item fails, use the troubleshooting section below before continuing.
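The hop-#1 check lends itself to scripting too. A minimal sketch, run here against a captured traceroute sample and assuming 10.10.0.1 as the OPNsense LAN IP:

```shell
# Confirm OPNsense is the first hop from a LAN guest. The traceroute
# text is a captured sample; 10.10.0.1 is an assumed LAN IP.
opnsense_lan="10.10.0.1"
trace="traceroute to 1.1.1.1 (1.1.1.1), 30 hops max
 1  10.10.0.1  0.412 ms
 2  203.0.113.1  2.915 ms"
first_hop=$(printf '%s\n' "$trace" | awk '$1 == 1 {print $2; exit}')
if [ "$first_hop" = "$opnsense_lan" ]; then
  echo "no-bypass OK"
else
  echo "guest bypasses OPNsense via $first_hop" >&2
fi
```

On a real guest, substitute `traceroute -n 1.1.1.1` output for the sample text.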

Installation Troubleshooting

  1. Blank serial console: You uploaded the non-serial ISO or forgot to add Serial port 0. Replace the ISO and add the device, then reboot.
  2. Interfaces swapped: If vtnet1 received the WAN address, power off, swap Net0/Net1 in Proxmox, and boot again, using the recorded MACs to confirm.
  3. Installer freezes during disk prep: Verify the disk bus is VirtIO SCSI and that the target storage is writable (no readonly mounts).
  4. LAN hosts fail DHCP: The installer enables DHCP but leaves the pool empty. Go to Services > DHCPv4 > LAN, define a range such as 10.10.0.100-200, enable it, and restart the service.

Keep Everyone Going Through the Firewall

  1. Decide where the Proxmox management IP should live: keep it on vmbr0 if you rely on out-of-band recovery, or move it to vmbr1/a management VLAN once you have console or IPMI access through OPNsense.
  2. Force every VM/LXC to use the OPNsense LAN IP as the default gateway.
  3. Double-check that no VM still sits on vmbr0 unless it is part of the WAN DMZ.
  4. Review Firewall > Rules > WAN and only expose ports you truly need.
  5. Use Firewall > NAT > Outbound Hybrid mode and only add manual rules when required.
  6. If the Proxmox host ever keeps the ISP router as its gateway (for out-of-band recovery), make sure you have BMC/IPMI access or another path to recover when the OPNsense VM is down.

Common Mistakes

  1. LAN bridge exists but DHCP is off: Guests grab IPs from the ISP router and bypass the firewall.
  2. Proxmox management IP left on vmbr0: The host UI ends up exposed to the public internet.
  3. Uploaded the graphical ISO: The serial console stays blank; switch to the -serial.iso.
  4. Mixed up VirtIO NIC order: If WAN/LAN assignments are swapped, you lose access on first boot. Match MAC addresses carefully.

Wrap-Up and What’s Next

You now have OPNsense running as a Proxmox VM with a clear WAN/LAN split, installer choices that stay within FreeBSD’s comfort zone, and validation steps that prove no guest bypasses the firewall. Part 3 builds on this to configure NAT, port forwarding, and reverse proxy patterns.
