[OPNsense Series Part 3] Designing NAT, Port Forwarding, and Reverse Proxy Paths

With Part 2 you now have an OPNsense VM on Proxmox with clean WAN/LAN separation. In this chapter “clean” specifically means OPNsense is the sole Layer 3 router between vmbr0 (WAN) and vmbr1 (LAN), and every guest uses the OPNsense LAN IP as its default gateway. The next question is “how does traffic actually flow?” Every homelab or small service ends up deciding what goes public, what stays private, and which path administrators must take. This article turns OPNsense’s NAT, port-forwarding, and reverse proxy knobs into concrete policy.

Prerequisites Checklist

Confirm these items before touching NAT rules. If any fails, revisit Part 2 first.

  • The OPNsense WAN interface reaches the public side of the ISP modem/router or a private upstream network and has a working default route.
  • The OPNsense LAN interface serves a private subnet (for example 10.10.0.0/24) with DHCP handing out the LAN IP as the gateway.
  • Any LAN guest running ip route show default (or netstat -rn on BSD guests) reports the OPNsense LAN IP as the default gateway.
  • The Proxmox host itself still keeps management access via the WAN bridge (or an out-of-band network) so you can recover if the firewall rules lock you out.
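The gateway check above can be scripted. This is a minimal sketch: the `gateway_of` helper parses a `default via …` routing line, and `EXPECTED_GW` plus the sample `ROUTE_LINE` are assumptions taken from this series' example addressing (on a real guest you would feed it `$(ip route show default)`).

```shell
#!/bin/sh
# Verify that a guest's default gateway is the OPNsense LAN IP.
EXPECTED_GW="10.10.0.1"   # assumed OPNsense LAN IP from this series

gateway_of() {
  # $1: output of `ip route show default`; prints the gateway address.
  printf '%s\n' "$1" | awk '/^default via/ { print $3; exit }'
}

# Sample line; replace with "$(ip route show default)" on a real guest.
ROUTE_LINE="default via 10.10.0.1 dev eth0 proto dhcp"
GW="$(gateway_of "$ROUTE_LINE")"

if [ "$GW" = "$EXPECTED_GW" ]; then
  echo "OK: default gateway is $GW"
else
  echo "WARN: default gateway is '$GW', expected $EXPECTED_GW"
fi
```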

How This Post Flows

  1. Outbound NAT fundamentals and why Hybrid mode helps
  2. Minimal inbound port-forwarding patterns for HTTPS
  3. Reverse proxy routes and exposure guidelines
  4. Common mistakes around gateways, bridges, and policy order
  5. Verification routines and troubleshooting tips

Terms Covered Here

  1. Outbound NAT: The policy that translates private addresses (10.10.0.0/24) into a public IP when traffic leaves.
  2. Inbound NAT (DNAT): Rules that take inbound ports and redirect them to internal services.
  3. Reverse proxy: A VM or LXC guest that terminates TLS and routes to internal services.
  4. Policy install order: The relationship between Firewall > NAT > Port Forward and the auto-generated Firewall > Rules > WAN entries.
  5. Gateway policy path: Which default gateway each guest uses—critical to avoid bypassing OPNsense.

Reading Card

  • Estimated time: 20 minutes
  • Prereqs: Finished Part 2 with WAN/LAN split and guests using OPNsense as the gateway
  • Outcome: Confidently set outbound NAT modes, HTTPS port forwarding, reverse proxy exposure, and the validation steps that keep them honest.

Outbound NAT: Keep Hybrid as the Baseline

OPNsense ships with three outbound modes under Firewall > NAT > Outbound.

  1. Automatic: Hands-off SNAT for every local subnet. Ideal for a single LAN + single WAN homelab.
  2. Manual: You define every rule. Use it only when you must control multi-WAN load sharing or maintain completely custom egress policies.
  3. Hybrid: Automatic rules remain, but you can append manual entries for select subnets. Manual rules are evaluated before the automatic ones, and the first match wins.

Start in Automatic if you truly have one LAN and no plans for growth. Switch to Hybrid as soon as you need WireGuard subnets, VLANs, or multiple WAN IPs. Because Hybrid still installs auto-rules, keep manual entries scoped to narrow CIDRs (for example /32 VPN peers) and remember that pf evaluates NAT rules top-down: in Hybrid mode your manual entries sit above the automatic ones, so the first matching rule decides the translation. If you eventually outgrow the auto-rules entirely, plan the cutover to Manual mode and recreate each rule explicitly.
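Conceptually, the rule set that Hybrid mode installs looks like the pf fragment below. This is an illustration only — OPNsense generates these rules from the GUI, you never edit them by hand — and the interface name (em0) and the second WAN address are assumptions layered on this article's example IPs.

```
# Manual hybrid entry, evaluated first: pin a single WireGuard peer
# (assumed 10.20.0.5) to a secondary WAN address (assumed 203.0.113.6).
nat on em0 inet from 10.20.0.5/32 to any -> 203.0.113.6

# Automatic rule, evaluated afterwards: the rest of the LAN
# translates to the primary WAN IP.
nat on em0 inet from 10.10.0.0/24 to any -> 203.0.113.5
```

Because pf stops at the first matching NAT rule, the /32 entry overrides the broader /24 rule only because it is listed first, which is exactly why manual entries should stay narrowly scoped.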

Here is the outbound flow.

LAN guests (10.10.0.0/24) → OPNsense outbound NAT → WAN IP (203.0.113.5) → Internet. Outbound packets have their source 10.10.0.x rewritten (SNAT) to 203.0.113.5; replies are translated back to destination 10.10.0.x.

In Hybrid mode simply:

  • Leave automatic rule generation on.
  • Add manual rules only when a subnet needs a different translation target. Set Source to the subnet and Translation to the WAN IP or interface alias you want.

Port Forwarding: Expose the Bare Minimum

Inbound traffic reaches internal services through port-forwarding rules under Firewall > NAT > Port Forward. The examples in this post focus on TCP web workloads (HTTP/HTTPS). UDP services, real-time protocols, or SSH/RDP ports that cannot sit behind a reverse proxy require explicit, separate DNAT rules or, preferably, VPN-only exposure. If you only host a couple of HTTPS apps, direct port forwards are fine; a reverse proxy simply centralizes certificates, WAF features, and logging once your surface grows. A typical reverse-proxy pattern looks like this:

| Requirement | Configuration |
| --- | --- |
| Publish HTTPS (443) to an internal proxy at 10.10.0.10 | Port Forward: Interface=WAN, Destination Port=443, Redirect Target IP=10.10.0.10, Redirect Target Port=443 |
| Redirect HTTP (80) to HTTPS | Handle inside the proxy or with OPNsense's HAProxy/NGINX plugin |
| Keep SSH or the Proxmox UI private | Do not create a port-forward rule; expose them only through VPN |
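The HTTP-to-HTTPS redirect, when handled inside the proxy, is a one-block job. A minimal NGINX sketch, assuming a hypothetical domain (swap in your own server_name; HAProxy has an equivalent `redirect scheme https` directive):

```nginx
# Redirect all plain-HTTP requests to HTTPS at the proxy.
server {
    listen 80;
    server_name example.home.arpa;          # assumed domain, replace with yours
    return 301 https://$host$request_uri;   # permanent redirect, preserves path
}
```

With this in place, only port 443 needs real routing logic; port 80 exists solely to bounce clients upward.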

Whenever you add a port-forward rule, OPNsense automatically creates the matching WAN firewall entry—as long as Filter rule association = Add associated filter rule stays at its default. If you set it to “None,” you must build the WAN rule yourself. If traffic fails, the culprit is usually a wrong interface/port match, a missing filter rule, or a proxy that is not actually listening. Remember that reverse proxies only help with HTTP(S); expose SSH/RDP or any custom TCP service either through VPN or with highly constrained DNAT rules.

The flow stays simple.

Client → WAN IP (203.0.113.5) → OPNsense port forward (DNAT) → reverse proxy (10.10.0.10) → app service (10.10.0.20). The HTTPS 443 request is DNATed to the proxy and routed to the app; each response retraces the same path back to the client.

Reverse Proxy Exposure Guidelines

Even in a homelab you do not need to publish everything. Use these guardrails:

  1. One HTTPS front door: Open only 443, let the reverse proxy route to internal services, and lean on wildcard certificates when possible.
  2. Separate admin paths: Keep the Proxmox UI, OPNsense UI, backups, and Grafana behind WireGuard/OpenVPN. Administrators enter via VPN first.
  3. Internal DNS stays internal: Resolve private domains inside OPNsense or the proxy. Only publish the bare minimum records externally.
  4. Log aggressively: Capture TLS SNI, headers, and upstream timing in the proxy logs so you can tell whether a new rule actually receives traffic.
  5. Treat the proxy as a SPOF: If it dies, every HTTPS service fails. Provide at least two proxies with HAProxy/Keepalived, or pair the OPNsense HAProxy plugin with an external proxy so you have a fallback.
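Guidelines 1 and 4 can be sketched in a single NGINX vhost: one HTTPS front door that injects a request ID for log correlation. Hostnames, certificate paths, and the upstream IP are assumptions built on this article's examples (`$request_id` is a built-in NGINX variable available since 1.11.0):

```nginx
# Single HTTPS front door with a shared request ID for proxy/app log correlation.
server {
    listen 443 ssl;
    server_name app.example.home.arpa;            # assumed domain
    ssl_certificate     /etc/ssl/wildcard.crt;    # assumed wildcard cert path
    ssl_certificate_key /etc/ssl/wildcard.key;

    location / {
        proxy_set_header X-Request-ID $request_id;  # same ID appears upstream
        proxy_set_header Host $host;
        proxy_pass http://10.10.0.20;               # internal app service
    }
}
```

If the upstream app echoes X-Request-ID into its own logs, a single grep ties a client request to its backend handling end to end.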

Gateway, Bridge, and Policy Pitfalls

  1. Gateway bypass: A VM set with a static IP might still use the ISP router as its default gateway. Fix it by handing out the OPNsense LAN IP via DHCP (Services > DHCPv4 > LAN) and hard-coding the default route (for example ip route add default via <LAN IP>) in cloud-init or config management templates.
  2. Overly permissive WAN rules: Port forwarding creates the minimal pass rule automatically. Adding a broad “allow any” on WAN defeats the effort.
  3. Dual-homed guests: If a VM connects to both vmbr0 and vmbr1, the wrong NIC can become the default route. Avoid attaching internal workloads to the WAN bridge altogether.
  4. No DMZ for the proxy: Putting the reverse proxy and app servers on the same flat subnet means a compromised proxy sees everything. At minimum use a dedicated VLAN or alias for the proxy, enforce TLS/mTLS to the backends, and monitor that segment aggressively.
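Pitfall 1's cloud-init fix can look like the netplan snippet below. This is a sketch for a static-IP Ubuntu guest using this series' example addresses; interface name and DNS choice are assumptions (netplan's `to: default` route syntax requires netplan 0.103 or newer):

```yaml
# cloud-init netplan: pin the default route to OPNsense, not the ISP router.
network:
  version: 2
  ethernets:
    eth0:                        # assumed interface name
      addresses: [10.10.0.20/24]
      routes:
        - to: default
          via: 10.10.0.1         # OPNsense LAN IP — never the ISP router
      nameservers:
        addresses: [10.10.0.1]   # resolve via OPNsense as well
```

Baking this into the template means a cloned VM can never silently bypass the firewall, even if DHCP is misconfigured.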

Validation Routine

  1. Outbound: From a LAN guest, run curl https://ifconfig.me and confirm it matches the OPNsense WAN IP. If not, inspect Hybrid rule priorities.
  2. Inbound: From an external network (cellular), run curl -I https://domain and confirm the proxy logs an entry. When there is no response, compare OPNsense’s WAN IP with the ISP modem’s public IP. A mismatch signals double NAT; add matching forwards on the ISP router or switch it to bridge mode.
  3. Firewall logs: Firewall > Log Files > Live View with filters interface = WAN, rule = NAT shows whether a rule matches. Add a shared request ID header in your reverse proxy and upstream apps so you can correlate the entries.
  4. Rule ordering: When juggling multiple port forwards, re-check Firewall > Rules > WAN so broad CIDR rules do not sit above the specific ones. Do this again any time you change DHCP scopes or add new subnets.
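Step 2's double-NAT comparison is easy to wrap in a helper. A minimal sketch — the hard-coded sample values stand in for what you would really fetch: `WAN_IP` from the OPNsense dashboard, `PUBLIC_IP` from `curl -s https://ifconfig.me` on a LAN guest:

```shell
#!/bin/sh
# Detect double NAT: if OPNsense's WAN IP differs from the address the
# outside world sees, an upstream router is translating a second time.
is_double_nat() {
  # $1: OPNsense WAN IP, $2: externally observed public IP
  [ "$1" != "$2" ]
}

WAN_IP="203.0.113.5"     # sample; take from the OPNsense dashboard
PUBLIC_IP="203.0.113.5"  # sample; take from an external lookup on a LAN guest

if is_double_nat "$WAN_IP" "$PUBLIC_IP"; then
  echo "Double NAT suspected: WAN=$WAN_IP, public=$PUBLIC_IP"
else
  echo "No double NAT: $WAN_IP"
fi
```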

Troubleshooting Tips

  1. HTTPS handshake fails: The reverse proxy may be validating backend certificates. Either deploy your internal CA (ensure the SAN covers the backend hostname) or disable backend verification temporarily.
  2. Port forwarding fails entirely: You might still be behind the ISP router’s NAT (double NAT). If a what-is-my-IP lookup disagrees with the WAN IP shown in OPNsense, add matching forwards on the ISP router or enable bridge mode.
  3. VPN conflicts with port forwarding: WireGuard peers that route 0.0.0.0/0 can pull return traffic into the tunnel. Narrow each peer’s AllowedIPs so reverse-proxy responses exit over WAN.
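Tip 3's AllowedIPs narrowing looks like this on the OPNsense side of the tunnel. A sketch only — the key, peer tunnel address, and reachable subnet are placeholders:

```ini
# WireGuard peer on OPNsense: accept only the peer's tunnel address and
# route only the LAN subnet toward it. Avoid 0.0.0.0/0 here, which would
# pull all return traffic (including reverse-proxy replies) into the tunnel.
[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.20.0.2/32, 10.10.0.0/24
```

With the scope narrowed, responses to Internet clients leave via the WAN default route while VPN peers still reach the LAN.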

Wrap-Up

Part 3 turned the OPNsense VM into a working policy engine. Remember these three anchors. First, keep outbound NAT in Hybrid mode so you get automatic coverage with room for precise overrides. Second, publish as little as possible—ideally one HTTPS entry point with a reverse proxy handling the rest. Third, keep gateways, bridges, and rule order aligned so nothing can slip around the firewall. Up next we will take these patterns into VPN and VLAN segmentation for deeper control.
