[OPNsense Series Part 5] Using VLAN Segments to Expand Service, Management, and Guest LANs


Parts 1–4 covered separating WAN/LAN inside Proxmox and isolating guests with additional bridges. The next challenge is extending that policy to physical switches, APs, and NAS boxes so every device follows the same rules. To do that safely, you must know exactly where VLAN segments begin and end, how Proxmox bridges feed OPNsense interfaces, and how the managed switch tags each port. This article focuses on defining service/management/guest LANs and wiring OPNsense plus the switch so those segments behave predictably.

What to prepare first

  • The IP range your ISP router or gateway currently uses (for example 192.168.0.0/24)
  • How many physical NICs your Proxmox host has and which NIC will become the trunk (plan for at least two; single-NIC setups keep WAN and LAN bound together)
  • A managed switch that supports IEEE 802.1Q tagging and exposes per-port tagged/untagged controls
  • OPNsense ISO or backup image for the current stable release (24.x at the time of writing)
  • A VLAN-ID plan (e.g., 10 = service, 20 = management, 30 = guest) written down with matching names
  • Confirmation from your switch UI that VLAN IDs 10/20/30/99 are free (adjust the plan now if they already exist)
  • Planned IP ranges and DHCP pools for each segment (a quick spreadsheet helps)
  • A recent OPNsense configuration backup (download via System > Configuration > Backups) so you can restore quickly if a mistake locks you out

How this post flows

  1. When VLANs and internal segments become necessary
  2. Planning IP/DHCP/DNS for each segment
  3. Rolling out Proxmox bridges, OPNsense interfaces, and firewall rules in order
  4. Tagging switch ports and locking down management paths
  5. Verifying, troubleshooting, and reviewing common mistakes

Terms introduced here

  1. VLAN Trunk: An IEEE 802.1Q uplink that keeps the 4-byte tag in place so multiple VLANs can ride one physical link.
  2. Service LAN: The zone where public-facing services and internal backends meet; OPNsense rules limit exposed ports.
  3. Management LAN: A narrow zone for Proxmox, OPNsense, backups, monitoring, and admin laptops only.
  4. Guest LAN: A quarantine zone for visitors or IoT devices so they cannot reach other segments.
  5. Tagged / Untagged ports: Switch behavior that keeps a VLAN tag intact or strips it before handing traffic to the next device.
  6. Trunk port: A switch port that preserves VLAN tags and carries multiple VLANs to another tag-aware device such as OPNsense or an upstream switch.
  7. Access port: A switch port that strips the tag and delivers a single VLAN to devices that do not speak VLANs (laptops, NAS, APs).

Reading card

  • Estimated time: 18 minutes
  • Prereqs: experience creating Proxmox vmbr bridges, assigning basic OPNsense interfaces, and touching VLAN settings on a managed switch at least once
  • Outcome: you can split service, management, and guest networks while coordinating Proxmox and a physical switch

When VLANs and internal segments become necessary

Running every guest on Proxmox vmbr1 is fine at the start. Things change when:

  1. Reverse proxies, app servers, databases, monitoring, and backups each demand different exposure policies.
  2. You want Proxmox UI, PBS, OPNsense UI, and DNS admin traffic away from external users.
  3. NAS boxes, APs, desktops, or IoT devices need to live behind OPNsense as well.

At that point, a single internal bridge no longer draws the boundary. Without tighter segmentation, a guest Wi-Fi client can wander into your Proxmox UI. You could make extra Proxmox bridges for VM-only segregation, but physical devices cannot hop between those bridges. A managed switch that understands VLAN tags extends the boundary to every copper port while OPNsense remains the policy brain.

The D2 diagram below shows three logical segments between an OPNsense VM and a managed switch. The table maps each diagram node to a tangible component so you can verify the wiring quickly.

| Diagram node | Meaning | How to confirm |
| --- | --- | --- |
| vmbr0 | Proxmox WAN bridge | Proxmox UI → Datacenter → Node → Network → vmbr0 bound to physical NIC 1 |
| vmbr2 | VLAN trunk bridge | Run bridge vlan show dev vmbr2 to confirm VLAN awareness and allowed VLANs |
| nic_trunk | Physical NIC 2 heading to the switch | Blink LEDs with ethtool -p <iface> to match the cable |
| svc/mgmt/guest_vlan | VLAN 10/20/30 interfaces inside OPNsense | Interfaces > Assignments shows vmx1_vlan10 etc. |
| svc/mgmt/guest_hosts | Devices attached to each segment | Check Proxmox VM NIC VLAN tags and switch access-port settings |
[Diagram: ISP/ONT → Physical NIC 1 (WAN) → vmbr0/WAN → OPNsense VM → vmbr2/VLAN trunk → Physical NIC 2 → Managed switch (VLAN-aware) → VLAN 10 Service LAN (reverse proxy / app VMs), VLAN 20 Management LAN (Proxmox UI / PBS / admin workstation), VLAN 30 Guest LAN (guest Wi-Fi / IoT)]

The idea is to turn vmbr2 into a VLAN trunk, attach the LAN-facing NIC of OPNsense to that tagged interface, and map the second physical NIC to a managed switch. In this diagram, vmbr2 is the software bridge inside Proxmox and nic_trunk is the actual copper port heading to the switch. Proxmox guests can also attach to vmbr2, set their VLAN tag, and share the same policies as physical devices.

Keep the NIC layout straight with these rules:

  • Proxmox physical NIC 1: tied to vmbr0, handles WAN only.
  • Proxmox physical NIC 2: tied to vmbr2, dedicated to the VLAN trunk.
  • OPNsense VM NIC 1: vmbr0 (WAN). NIC 2: vmbr2 (VLAN trunk). A third NIC for a separate management network is optional but out of scope here.

Criteria for service, management, and guest LANs

Define segments first so the policies remain readable. A practical minimum:

  • Service LAN (VLAN 10, 10.20.10.0/24): reverse proxy, app servers, cache layers, deployment helpers. Only the reverse proxy should expose ports; SSH/DB ports stay closed by default.
  • Management LAN (VLAN 20, 10.20.20.0/24): Proxmox management IP, OPNsense UI, backup servers, monitoring, admin laptops. Keep outbound rules tight and place the VPN client pool closest to this zone.
  • Guest LAN (VLAN 30, 10.20.30.0/24): guest Wi-Fi, IoT, lab devices. Allow outbound internet only and block everything else, or force traffic through a proxy.

For a real-world mapping: plug visitor Wi-Fi APs into VLAN 30, run reverse proxies and app servers on VLAN 10, and limit VLAN 20 to Proxmox UI, PBS, OPNsense UI, and trusted laptops. Treat IoT as guest-level to reduce lateral movement.

When planning:

  1. IP plan: give each VLAN its own /24 and keep the OPNsense interface IP consistent (e.g., 10.20.X.1).
  2. DHCP range: service VLAN can use a short pool since many nodes are static; management VLAN benefits from MAC reservations; guest VLAN should use short leases for easier tracking.
  3. DNS policy: confine internal domain lookups to service or management VLANs; send guest queries to public DNS or through a filtering proxy so they can’t resolve internal hostnames.

If your ISP router already uses 192.168.0.0/24, pick a totally different range such as 10.20.X.0/24 for OPNsense. That prevents overlap and keeps the address plan stable if you migrate to dedicated hardware later.
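The address plan can be sanity-checked before touching any hardware. This short sketch uses the example ranges from this article (adjust the dictionary to your own plan); it verifies that no segment overlaps the ISP range or another segment, then prints the 10.20.X.1 gateway convention per VLAN:

```python
import ipaddress

# Example plan from this article; replace with your own VLANs and CIDRs.
plan = {
    "SVC_LAN (VLAN 10)":   "10.20.10.0/24",
    "MGMT_LAN (VLAN 20)":  "10.20.20.0/24",
    "GUEST_LAN (VLAN 30)": "10.20.30.0/24",
}
# Range already occupied by the ISP router or gateway.
isp_net = ipaddress.ip_network("192.168.0.0/24")

nets = {name: ipaddress.ip_network(cidr) for name, cidr in plan.items()}

# No segment may overlap the ISP range or any other segment.
for name, net in nets.items():
    assert not net.overlaps(isp_net), f"{name} overlaps the ISP range"
for a in nets:
    for b in nets:
        if a < b:
            assert not nets[a].overlaps(nets[b]), f"{a} overlaps {b}"

# Gateway convention used throughout this series: the .1 of each /24.
for name, net in nets.items():
    print(f"{name}: network {net}, OPNsense interface {net.network_address + 1}")
```

Running it with an overlapping range (say, a guest VLAN accidentally set to 192.168.0.0/24) fails immediately, which is much cheaper than discovering the overlap after the DHCP servers go live.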

Laying out Proxmox + OPNsense + the switch

Before the five implementation steps, complete this preflight checklist:

  • Open Datacenter > <node> > Network and confirm vmbr0 (WAN) and vmbr2 (trunk) each map to the correct physical NIC. Ensure the VLAN aware checkbox is enabled on vmbr2. From the CLI, bridge vlan show dev vmbr2 should list the current VLANs (only 1 before configuration).
  • Edit the OPNsense VM in Proxmox so the second NIC uses a VLAN-capable model (VirtIO or VMXNET3). Make the change while the VM is powered off; a NIC model swap only takes effect after a full stop and start.
  • In OPNsense, visit Interfaces > Assignments and verify the legacy LAN interface does not conflict with future VLAN interfaces. Disable the legacy LAN DHCP server before enabling the new VLAN DHCP servers.

Now revisit what the VLAN-aware bridge and OPNsense parent interface mean:

  • A VLAN-aware bridge keeps tags intact. If you forget to enable it, Proxmox strips every tag, all guests land on VLAN 1 (the bridge’s native VLAN), and inter-VLAN rules never trigger.
  • The parent interface is the NIC name inside the OPNsense VM (often vmx1 or vtnet1). That NIC maps to Proxmox vmbr2 and carries every tagged VLAN.
  • MTU note: VLAN headers add 4 bytes between the MAC header and EtherType, so mismatched MTUs silently drop frames. Keep MTU at 1500 end to end unless all devices support jumbo frames.
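The MTU arithmetic in the last bullet is worth making concrete: the 802.1Q tag pushes a full-size frame past the classic Ethernet maximum, which is why mismatched limits surface only on large packets.

```python
# 802.1Q frame-size arithmetic: why a VLAN tag can silently break mismatched MTUs.
MTU = 1500         # L3 payload carried in an Ethernet frame
ETH_HEADER = 14    # dst MAC (6) + src MAC (6) + EtherType (2)
FCS = 4            # frame check sequence (trailer)
DOT1Q_TAG = 4      # tag inserted between source MAC and EtherType

untagged_frame = MTU + ETH_HEADER + FCS        # classic maximum: 1518 bytes
tagged_frame = untagged_frame + DOT1Q_TAG      # tagged maximum: 1522 bytes

print(untagged_frame, tagged_frame)
# A device whose hardware caps frames at 1518 bytes drops full-size tagged
# frames, which looks like random loss that only affects large transfers.
```

Small packets (SSH keystrokes, pings) fit either way, so the symptom is usually "the connection works until a big download starts".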

This walkthrough uses Proxmox bridges (vmbr) rather than PCI passthrough. If you passed NICs directly through to OPNsense earlier in the series, map the VLAN IDs to those interfaces instead of vmbr2.

If you only have one physical NIC, you can technically run both vmbr0 and vmbr2 over it with tagged sub-interfaces, but WAN and LAN will drop together when the host reboots and recovery requires console/IPMI access. This guide focuses on the safer two-NIC layout.

Step 1: Create a VLAN-aware bridge in Proxmox

Create vmbr2, enable VLAN aware, and add physical NIC #2 as its port. That NIC connects to the managed switch’s tagged trunk. Without the VLAN-aware toggle, the switch receives untagged traffic regardless of later settings.

Proxmox UI steps:

  1. Go to Datacenter > Node > Network > Create > Linux Bridge and create vmbr2.
  2. Enter the physical NIC name (e.g., eno2) under Bridge ports, enable VLAN aware, and save.
  3. Click Apply Configuration or run ifreload -a via SSH to reload the network stack.
  4. Run bridge vlan show dev vmbr2; seeing only VLAN 1 at this stage is expected.
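If you prefer to verify the result on disk, the UI steps above produce a stanza in /etc/network/interfaces on the Proxmox host roughly like the following (eno2 is this article's example NIC name; the bridge-vids range is the Proxmox default and can be narrowed to 10,20,30 if you want the bridge to carry only those VLANs):

```
# /etc/network/interfaces (Proxmox host) — example NIC name eno2
auto vmbr2
iface vmbr2 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

Apply changes with ifreload -a (or the UI's Apply Configuration button) rather than rebooting.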

Identify your OPNsense parent interface

Before adding VLANs, confirm which NIC name inside OPNsense maps to vmbr2:

  1. In Proxmox, open the OPNsense VM → Hardware. NIC #1 should reference vmbr0 (WAN) and NIC #2 should reference vmbr2 (LAN trunk).
  2. In OPNsense, go to Interfaces > Overview and confirm the LAN interface reports the same driver (e.g., vmx1 or vtnet1).
  3. If you are still unsure, SSH into OPNsense and run ifconfig -l to list interface names, then ifconfig vmx1 (substitute your name) to confirm it holds the LAN IP. Use that name as the Parent Interface below.

Step 2: Add VLAN interfaces inside OPNsense

Take these steps in the UI:

  1. Go to Interfaces > Other Types > VLAN and click + Add.
    • Parent Interface: vmx1 (or vtnet1) — the OPNsense NIC that maps to Proxmox vmbr2. If you are unsure, open Interfaces > Overview and look for the interface currently holding your LAN IP, or check the Proxmox VM hardware list: NIC #2 typically maps to vmx1/vtnet1.
    • VLAN Tag: enter 10, Description: SVC_LAN
    • Save, then repeat for tags 20 and 30.
  2. Open Interfaces > Assignments, click +, and add vmx1_vlan10, vmx1_vlan20, and vmx1_vlan30.
  3. Click each interface, check Enable, and set:
    • IPv4 Configuration Type: Static IPv4
    • IPv4 Address: 10.20.10.1/24, 10.20.20.1/24, 10.20.30.1/24
    • Description: SVC_LAN, MGMT_LAN, GUEST_LAN for quick filtering later
  4. Go to Services > DHCPv4 > [Interface], enable the DHCP server per VLAN, and set the pool range, default gateway (the interface IP), DNS server list, and lease time.

Avoid VLAN ID 1; many switches reserve it for internal use. Keeping IDs aligned with segment names (10/20/30) reduces mistakes later.

Step 3: Define DHCP scopes and firewall rules

After enabling the DHCP servers, create reusable network aliases under Firewall > Aliases (SVC_LAN, MGMT_LAN, GUEST_LAN, VPN_NET, etc.). OPNsense applies rules to traffic entering an interface, so blocking service → management traffic happens on the management interface.

| Segment | DHCP range | DNS source | Default firewall policy (rule location) |
| --- | --- | --- | --- |
| Service VLAN 10 | 10.20.10.50-150 | OPNsense or internal DNS | Firewall > Rules > SVC_LAN: allow SVC_LAN net -> WAN ports 80/443, block SVC_LAN net -> MGMT_LAN net |
| Management VLAN 20 | 10.20.20.10-30 (mostly static mappings) | Internal DNS | Firewall > Rules > MGMT_LAN: allow VPN sources only, deny all else by default |
| Guest VLAN 30 | 10.20.30.50-254 (1-hour leases) | Public DNS (1.1.1.1, 9.9.9.9) | Firewall > Rules > GUEST_LAN: allow GUEST_LAN net -> WAN ports 80/443/DNS, block GUEST_LAN net -> This Firewall |

If Firewall > Settings > Advanced is set to Allow, add explicit deny rules first, then allow what you need. Enabling logging per rule helps verify packets in Firewall > Live View. For DNS isolation, add a rule on GUEST_LAN blocking Destination: This Firewall, Port: DNS, and follow with a pass rule to public resolvers.

Step 4: Tag managed switch ports

Keep the trunk port (e.g., port 1) tagged for VLANs 10/20/30. Change its native VLAN/PVID to an unused value (e.g., 99) so stray untagged frames never land in a production segment. The port numbers below are examples—replace them with the actual switch ports your devices use. Configure access ports like this:

| Port | Role | VLAN config |
| --- | --- | --- |
| 2 | Reverse proxy server | Untagged VLAN 10 |
| 3 | NAS (management only) | Untagged VLAN 20 |
| 4 | Guest AP | Untagged VLAN 30 |
| 5 | Additional Proxmox VM NIC | Tagged 10/20 (trunk) |

Now Proxmox VMs connected to vmbr2 can set their VLAN tags and land in the right segment immediately.

Security note: move the native VLAN to an unused ID such as 99 so it does not overlap with management traffic, and enable STP/RSTP to prevent accidental Layer 2 loops.

A typical web UI flow looks like this:

  1. VLAN Management > VLAN Settings: create VLAN 10/20/30/99.
  2. VLAN Membership: mark each port as Tagged or Untagged per the table above.
  3. Port PVID: assign each access port’s PVID to match its untagged VLAN (e.g., port 2 → PVID 10).

If your switch uses a CLI, the equivalent commands look like switchport trunk allowed vlan 10,20,30 and switchport trunk native vlan 99. Always pair an access port with the correct PVID; otherwise the connected device falls into the wrong DHCP pool.
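Spelled out in full, an IOS-style sketch for the trunk and one access port looks like this (interface names and exact syntax vary by vendor, and many web-managed switches expose only the equivalent UI controls from the numbered list above):

```
! Trunk to Proxmox physical NIC 2 (port 1): carry 10/20/30 tagged,
! park stray untagged frames on unused VLAN 99
interface GigabitEthernet0/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 switchport trunk native vlan 99
!
! Access port for the reverse proxy (port 2): untagged VLAN 10
! (access mode sets the PVID implicitly on IOS-style switches)
interface GigabitEthernet0/2
 switchport mode access
 switchport access vlan 10
```

On web-managed switches the access-port PVID is a separate field, which is exactly where the "wrong DHCP pool" mistake tends to happen.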

Step 5: Verify inter-VLAN routing

After writing the base firewall rules, confirm:

  1. Service LAN → Internet: allowed.
  2. Service LAN → Management LAN: denied except for specific monitoring ports.
  3. Management LAN → Service LAN: only SSH (22), HTTPS (443), and deployment automation ports.
  4. Guest LAN → Service/Management LAN: fully blocked.

Test in this order (stop if any step fails):

  1. Service → Internet: on a service VM run curl -I https://example.com (expect HTTP 200).
  2. Service → Management (blocked): on the same VM run nc -zv 10.20.20.1 22 (expect timeout/refused).
  3. Management → Service (allowed): from a management laptop/VM open an SSH session to a service host in 10.20.10.0/24 (expect a login prompt).
  4. Management → Internet: curl -I https://example.com (expect HTTP 200).
  5. Guest → Service (blocked): connect to the guest SSID and ping 10.20.10.1 (expect timeout).
  6. Guest → Internet: from the guest client run curl -I https://example.com (expect HTTP 200).

While running these, watch Firewall > Live View for ACCEPT/DROP logs and use Interfaces > Diagnostics > Packet Capture to ensure tags arrive as expected (captured packets should show the VLAN ID you configured).

Segment-specific rules and common traps

The most frequent problem is opening an unintended bypass that completely misses OPNsense. If the Proxmox host or any server still lives on vmbr0, it sits outside the firewall no matter how many VLANs exist.

A second trap is merging VPN clients directly into the management LAN. If VPN credentials leak, attackers land directly on your critical interfaces. Keep a dedicated VPN subnet and allow management access explicitly through firewall rules.

Remember that the moment you assign an IP address to a VLAN interface, OPNsense will happily route between VLANs unless a firewall rule blocks it. Default each VLAN interface to “deny all” and enable only the required directions. A permissive default undermines the entire segmentation effort the moment you add a new VLAN.

⚠️ VLANs are not a cryptographic security boundary. A compromised device on the same physical switch can spoof MAC addresses or inject 802.1Q tags. For highly sensitive networks, pair VLANs with features such as switch port security, DHCP snooping, or even physical separation.

The policy diagram below sums up the intended flows.

[Diagram: WireGuard VPN (10.30.0.0/24) and all three segments terminate at the OPNsense policy engine — Management LAN (VLAN 20) reachable on admin ports 22/443 only, lateral traffic between Service LAN (VLAN 10) and the other segments blocked, outbound internet from each segment (including Guest LAN, VLAN 30) allowed per policy]

Every segment routes through OPNsense, VPN users included.

Packet-flow example

Trace a guest smartphone browsing HTTPS:

  1. The phone joins the guest SSID; the AP provides untagged VLAN 30 and hands out a 10.20.30.x address.
  2. The AP plugs into switch port 4, whose PVID is 30, so traffic stays untagged.
    • ⚠️ If the PVID stays at 1 (the default on many switches), the phone falls into VLAN 1 and grabs the wrong DHCP pool.
  3. The switch forwards traffic to trunk port 1, tagging it as VLAN 30 before sending it to Proxmox physical NIC 2.
  4. Proxmox vmbr2 is VLAN-aware and passes the tag to the OPNsense VM’s NIC.
  5. OPNsense interface vmx1_vlan30 evaluates the firewall rule, and approved packets undergo NAT out of vmbr0.
  6. Replies follow the exact path back to the phone.

With that mental model, it becomes far easier to find where a tag disappeared.

Common mistakes

  1. Not tagging the Proxmox VM NIC: the VM sits on vmbr2 but without a tag it defaults to VLAN 1, often used for management or guest traffic, and receives the wrong DHCP range.
  2. Plugging the management NIC into the wrong switch: if Proxmox is still on a WAN-facing switch, it bypasses OPNsense entirely.
  3. Leaving the ISP router’s DHCP enabled: clients grab the ISP gateway as their default route instead of OPNsense.
  4. Only tagging one AP: if other APs or mesh nodes ignore VLANs, traffic blends back together. Confirm SSID-to-VLAN mapping on every radio.
  5. Changing the trunk to untagged during an outage: flipping the trunk port to an access port drops the other VLANs completely. Use Packet Capture to inspect tags before reconfiguring.
    • Recovery tip: connect through a dedicated management port (or console cable) and restore the port to tagged mode for VLANs 10/20/30, then reapply the correct PVID.

Managed-switch and Proxmox management paths

Give the switch itself a management IP on the management LAN. For example, reserve 10.20.20.254/24 and set its management port to receive VLAN 20 untagged or use a trunk port with PVID 20.

The Proxmox host should only be reachable from the management LAN as well. Two options:

  1. Add a logical interface on vmbr2 with VLAN tag 20 and set it to 10.20.20.5/24.
  2. Use a dedicated physical NIC tied to a management VLAN access port.

Either way, pin Proxmox UI (8006), SSH, and PBS behind the management LAN or VPN through firewall rules.
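Option 1 corresponds to a VLAN sub-interface of vmbr2 in /etc/network/interfaces; a rough sketch with this article's example addresses follows. A host carries only one default gateway, so if the Proxmox management IP and gateway currently live on vmbr0, remove them in the same edit and keep console access handy in case the change locks you out:

```
# /etc/network/interfaces (Proxmox host) — management IP on VLAN 20
# Example addresses; the parent bridge vmbr2 must be VLAN-aware.
auto vmbr2.20
iface vmbr2.20 inet static
        address 10.20.20.5/24
        gateway 10.20.20.1
```

After ifreload -a, the Proxmox UI answers on https://10.20.20.5:8006 from the management LAN only (assuming the OPNsense rules from Step 3 are in place).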

Troubleshooting checklist

Symptom A: a service VM receives an address from the wrong subnet

  1. Switch port: confirm its untagged VLAN is 10.
  2. Proxmox VM NIC: confirm it connects to vmbr2 and has VLAN tag 10 configured.
  3. OPNsense DHCP: check Services > DHCPv4 > VLAN10 and confirm the server is enabled.

Symptom B: VPN clients cannot reach the management LAN

  1. Ensure the VPN interface firewall rules allow traffic to MGMT_LAN net.
  2. Confirm the VPN client receives a route for 10.20.20.0/24.
  3. If you rely on Firewall > Aliases, make sure the management LAN CIDR is up to date.

Symptom C: a VLAN cannot reach the internet

  1. Verify the outbound NAT rule exists for that VLAN (automatic mode should cover it).
  2. Check that the VLAN’s LAN rules allow LAN net -> WAN.
  3. Capture packets to ensure the VLAN tag reaches OPNsense.
  4. Inspect the switch’s STP logs to confirm the port is not blocked.

Wrap-up

VLAN segments are not just for enterprise networks. With OPNsense running inside Proxmox and one managed switch, you can carve independent service, management, and guest LANs. The three pillars are a VLAN-aware bridge, OPNsense VLAN interfaces, and properly tagged switch ports.

Once those are in place, adding physical equipment no longer requires rethinking the policy. Decide which segment a device belongs to, plug it into a port with the right untagged VLAN (or tag the VM NIC), and you’re done. In Part 6 we will protect this layout with backups, updates, and recovery plans, and discuss when to move OPNsense to dedicated hardware.
