Once you start splitting workloads across Proxmox VMs and LXC guests, another question appears quickly: what defines the boundary between public services, internal-only services, and administrator access over VPN?
That is where OPNsense enters the picture. OPNsense is not just a firewall that blocks a few ports. It is a network operating layer that helps define boundaries, control traffic, and organize remote access paths.
The pairing with Proxmox works especially well because the responsibilities stay clear. Proxmox acts as the compute host that runs multiple workloads. OPNsense becomes the control point that decides how those workloads connect, which paths are exposed, and which policies protect them. That is why the combination is so strong when you want to turn a mini PC or spare computer into a practical self-hosted web infrastructure.
This article introduces OPNsense not as “a firewall VM,” but as the network boundary layer that complements a Proxmox-based infrastructure.
How this post flows
- What OPNsense actually is
- Why it pairs well with Proxmox
- How a single-host layout usually looks
- How VPN and remote administration fit in
- When a virtual firewall is enough and where its limits begin
Terms introduced here
- Firewall: The policy layer that decides which traffic is allowed or blocked.
- NAT: Address translation that lets a private network talk to external networks.
- Port forwarding: A rule that sends inbound traffic on a specific port to an internal service.
- Bridge: A software switch in Proxmox that connects physical NICs and VM/LXC virtual NICs so guests can behave as if they are on the same Layer 2 network.
- VPN: An encrypted tunnel that lets remote users enter an internal network safely.
Reading card
- Estimated time: 16 minutes
- Prereqs: a basic grasp of Proxmox vmbr bridges, private IP ranges, and home-router port forwarding
- After reading: you can explain why OPNsense fits naturally beside Proxmox, and where a virtual firewall is practical versus where dedicated hardware becomes more appropriate.
What OPNsense Is
OPNsense is an open-source firewall and router distribution built on FreeBSD. In practice, though, its role is broader than “just a firewall.” It can manage per-interface policies, NAT, port forwarding, DHCP, DNS forwarding, VPNs such as WireGuard or OpenVPN, and traffic visibility from one operating model.
So using OPNsense is less about “blocking traffic” and more about “defining boundaries.” You can decide which services are public, which stay internal, and which management interfaces should only be reachable after entering through VPN.
That makes OPNsense especially useful in self-hosted environments and small internal infrastructures. As the number of services grows, exposure rules tend to become scattered. OPNsense gives that system a single policy anchor.
Why It Pairs Well with Proxmox
Proxmox is strong at dividing workloads. OPNsense is strong at dividing boundaries. Those roles fit together cleanly.
Imagine a Proxmox host running an application VM, a reverse proxy LXC guest, a monitoring VM, and a Windows test VM. If all of them are attached directly to the same bridge such as vmbr0, the network is still basically one flat space. The services may look separated, but exposure policy and access control remain blurry. In practice, the reverse proxy, the internal Grafana dashboard, and the Proxmox management plane can end up sharing the same entrance.
Once OPNsense is added, the structure changes.
- You can reduce the number of paths that talk directly to the outside world.
- You can separate internal services from public ones more clearly.
- You can place administrator access behind VPN.
- NAT, port forwarding, DNS, and DHCP policies can live in one place.
That is why the Proxmox + OPNsense combination goes beyond “running many VMs.” It is closer to turning spare hardware into an infrastructure that can actually be operated with intention. Proxmox becomes the compute layer, and OPNsense becomes the network boundary layer.
What the Layout Looks Like on a Single Host
In homelabs and small self-hosted setups, two layouts appear most often. The default one uses a single physical NIC. The expanded one adds another NIC or uses VLANs to extend the LAN beyond the Proxmox host.
Default layout: one physical NIC plus an internal-only vmbr1
This is the most common starting point.
The core logic is straightforward.
- The single physical NIC attaches only to vmbr0 and acts as the outside uplink.
- vmbr1 is created as an internal-only bridge with no physical NIC behind it.
- The OPNsense VM has two virtual NICs, with WAN on vmbr0 and LAN on vmbr1.
- Internal VMs, LXC guests, and management services connect to vmbr1 and use the OPNsense LAN IP as their default gateway.
This default pattern works well because it is simple. If all internal guests live on vmbr1, and vmbr1 reaches the outside world only through OPNsense, the network boundary becomes very clear. It is especially practical when one spare mini PC holds the web stack, internal tools, monitoring, and an optional jump VM.
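On the Proxmox side, this default layout comes down to a few lines of bridge configuration. The sketch below assumes the physical NIC is named enp1s0 and that the host takes its WAN-side address over DHCP; both are placeholders, not values from this article.

```text
# /etc/network/interfaces on the Proxmox host (sketch, ifupdown2 syntax)
auto vmbr0
iface vmbr0 inet dhcp
    bridge-ports enp1s0     # the single physical NIC (hypothetical name)
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual     # internal-only: no physical port behind it
    bridge-ports none
    bridge-stp off
    bridge-fd 0
```

The OPNsense VM then gets one virtual NIC on each bridge, and every internal guest attaches only to vmbr1.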
Expanded layout: two physical NICs or VLAN-based LAN expansion
Once you want real physical devices behind the OPNsense LAN, not just virtual guests, the design changes.
In this expanded version, vmbr1 does not stop inside Proxmox. It reaches a real switch through a second NIC or through a VLAN trunk. That means the OPNsense LAN now includes physical devices as well as virtual workloads.
So the practical rule is:
- If only internal guests need to sit behind OPNsense, 1 NIC + internal vmbr1 is the default pattern.
- If physical PCs, NAS systems, APs, or other servers should also sit behind the OPNsense LAN, you need 2 NICs or a VLAN-aware bridge + managed switch.
- In either case, internal guests and management services only pass through firewall policy if they use the OPNsense LAN IP as their default gateway.
That third point is critical. Splitting guests across two bridges is not enough by itself. If an internal VM remains on vmbr1 but still points its default route at the home router, it may bypass the firewall design entirely.
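For the expanded layout, the LAN bridge gains a physical port and, optionally, VLAN awareness. This is a minimal sketch; the second NIC name enp2s0 and the VLAN range are assumptions, and the matching switch port would need to be configured as a trunk.

```text
# Sketch: vmbr1 extended to a real switch through a second NIC
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0         # second physical NIC toward the managed switch
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes       # only needed for the VLAN-trunk variant
    bridge-vids 2-4094
```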
This is especially useful for things like reverse proxies, backup systems, and internal dashboards. The reverse proxy can sit behind OPNsense and expose only the intended public paths, while the rest of the internal services stay private on vmbr1.
The Proxmox management UI should be treated the same way. Ideally Proxmox UI, SSH, and PBS stay reachable only from vmbr1 or a separate management VLAN, not directly from a public IP.
This distinction is about default versus expanded topology, not about HA yet. Real redundancy still requires separate design for OPNsense failover, uplinks, switching, and usually multiple Proxmox nodes.
How OPNsense Actually Controls Traffic
The real traffic flow is simpler than it first sounds.
- Internal guests on vmbr1 receive an IP configuration manually or through DHCP, with the OPNsense LAN IP set as the default gateway.
- When those guests reach the internet, OPNsense performs outbound NAT.
- When outside users need to reach a public web service, OPNsense uses port-forwarding rules (DNAT) to send the traffic to an internal reverse proxy or service.
- VPN users terminate on OPNsense, then enter the private network through a controlled VPN subnet or LAN route.
For example, if an internal application server lives at 10.10.0.20 and only HTTPS should be public, OPNsense can receive WAN traffic on 443, forward it to an internal reverse proxy, and let the proxy talk to the app server. Meanwhile the Proxmox UI and internal monitoring remain invisible from the outside because no public forwarding rule exists for them.
DHCP and DNS also become easier to reason about when OPNsense handles them for the LAN side. Internal guests receive an IP address, the correct default gateway, and a DNS server from the same place.
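The HTTPS example above reduces to a single port-forward entry on OPNsense. Summarized conceptually (this is not a config file format), with a hypothetical reverse-proxy address of 10.10.0.10:

```text
# OPNsense > Firewall > NAT > Port Forward (conceptual summary)
Interface:        WAN
Protocol:         TCP
Destination:      WAN address, port 443
Redirect target:  10.10.0.10 (reverse proxy on vmbr1), port 443
# No forwarding rule exists for the Proxmox UI or monitoring dashboards,
# so they remain unreachable from the WAN side.
```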
How VPN and Remote Management Fit In
One of the best reasons to use OPNsense is that it makes the remote-management path cleaner. If Proxmox or internal services are reachable directly from the internet, the attack surface grows quickly.
That is why many setups follow a pattern like this:
- Only expose the public HTTPS path or a very small number of service ports.
- Keep the Proxmox UI, OPNsense UI, backup dashboard, and internal Git services behind VPN.
- Let administrators enter the internal network first through WireGuard or another VPN, then access the systems they need.
The benefit is not only that fewer ports remain open. The management path itself becomes separate from the user-facing path. Administrator traffic stops arriving through the same door as public traffic. If a management UI is published directly, scanners and login attempts can reach it immediately. Putting it behind VPN forces one more authentication boundary before the UI is even visible.
In practice, the following rules are common:
- expose 443 and hide most other paths behind VPN
- never publish the Proxmox or OPNsense management UI directly on a public IP
- keep internal DNS, monitoring, and backup systems reachable only on the private network
- use a dedicated VPN subnet so administrator access logs remain easy to track
That makes remote operations simpler. Administrators enter through VPN first, then reach internal services. You no longer need to poke SSH holes everywhere just to manage the environment. Running the VPN endpoint directly on OPNsense often makes policy and subnet management easier because the firewall and the tunnel terminate in the same place.
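An administrator's WireGuard client config for this pattern might look like the sketch below. Every value here is a placeholder (keys, endpoint, subnets); the point is the AllowedIPs line, which restricts the tunnel to the internal LAN and the dedicated VPN subnet instead of routing all traffic.

```text
# WireGuard client config sketch (all values hypothetical)
[Interface]
PrivateKey = <admin-private-key>
Address = 10.20.0.2/32                    # dedicated VPN subnet for admins

[Peer]
PublicKey = <opnsense-wg-public-key>
Endpoint = vpn.example.net:51820
AllowedIPs = 10.10.0.0/24, 10.20.0.1/32   # internal LAN + VPN gateway only
PersistentKeepalive = 25
```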
When a Virtual Firewall Is Enough and Where Its Limits Are
Running OPNsense as a Proxmox VM is very practical, but it is not always the right final answer. The strengths and limits both matter.
When virtual OPNsense fits well
An OPNsense VM on Proxmox works very well when:
- the environment is a homelab or a small internal network that can tolerate a single-host outage
- you control maintenance windows and occasional reboots
- you want to learn and iterate on firewall, VPN, and private-network design
- you want one spare mini PC to hold both application workloads and a structured network boundary
The learning value is especially strong. If rules become messy, you can restore from configuration backup or use a VM snapshot carefully. You can also repeat layout experiments much faster than on separate hardware. It also helps to remember that OPNsense is FreeBSD-based, so driver support and troubleshooting habits can differ from Linux.
When dedicated firewall hardware deserves consideration
You should start thinking about moving OPNsense onto separate hardware when:
- the network must stay up even while the Proxmox host reboots
- the firewall is the single mandatory gateway for the whole environment and downtime is expensive
- NIC design, VLAN layout, and availability requirements become more complex
- multiple Proxmox nodes and the wider physical network all need centralized control
The key idea is this: putting OPNsense in a VM improves convenience and learning speed, but it also ties the firewall to the hypervisor. So “is a virtual firewall practical?” and “is a virtual firewall sufficient?” are not the same question.
For example, when the Proxmox host reboots, the OPNsense VM goes down with it, and internal guests lose internet access plus their VPN gateway until the VM returns. In the other direction, if OPNsense is misconfigured, the Proxmox host may still be alive while the internal network loses its path outward. That is why recovery paths and console access matter even in a small homelab.
Common Misconceptions
“One firewall VM means security is solved”
No. OPNsense manages boundaries, but it does not replace guest OS patching, authentication policy, backups, or admin-account protection.
“If OPNsense runs on Proxmox, all traffic automatically goes through it”
Not at all. Traffic paths still depend on which bridge each VM or LXC guest uses and which default gateway it points to. If an important VM remains attached directly to vmbr0, or if a guest on vmbr1 still uses the home router as its gateway, it may bypass the firewall entirely.
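The gateway condition is easy to check mechanically. This toy shell snippet parses the kind of line that `ip route show default` prints inside a guest and compares it against the OPNsense LAN IP; both addresses are hypothetical examples, not values from a real setup.

```shell
# Toy check: does the guest's default gateway point at OPNsense?
route_table="default via 192.168.1.1 dev eth0"   # as printed by: ip route show default
opnsense_lan="10.10.0.1"

actual_gw=$(echo "$route_table" | awk '{print $3}')
if [ "$actual_gw" = "$opnsense_lan" ]; then
    echo "OK: traffic crosses OPNsense"
else
    echo "BYPASS: default gateway is $actual_gw, not $opnsense_lan"
fi
```

In this example the guest still points at the home router, so the check reports a bypass even though the guest sits on vmbr1.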
“Once VPN is enabled, network design can wait”
VPN still needs structure. You still have to decide where the tunnel terminates, which subnet it reaches, and which systems it can access. Otherwise you only reduce exposed ports while leaving internal boundaries vague.
Wrap-up
So yes, OPNsense is an especially good firewall companion for Proxmox. Proxmox divides workloads, while OPNsense defines the boundaries and exposure rules around them. When you are building a self-hosted web infrastructure from a mini PC or spare computer, that combination ties virtualization, security, and remote administration into one workable model.
The main takeaways are straightforward. First, Proxmox and OPNsense work well together because they solve different problems: compute separation and network boundary control. Second, bridge separation alone is not enough; default gateways, NAT, and VPN termination points must be planned too. Third, a virtual firewall is very practical for homelabs and small internal use, but its fate remains tied to the host that runs it.
In the next article, we will move from concept to layout and walk through how to create an OPNsense VM on Proxmox, split WAN and LAN interfaces, and connect bridges without accidentally bypassing the firewall.