Simple question, difficult solution. I can’t work it out. I have a server at home with a site-to-site VPN to a server in the cloud. The server in the cloud has a public IP.

I want people to access the server in the cloud, and it should forward the traffic through the VPN to the server at home. I've tried this with nginx streams, frp and also HAProxy, and they all work. But in the home server's logs I can only see that people are connecting from the site-to-site VPN, not their actual source IP.

Is there any solution (program/Docker image) that will take a port and forward it to another host (or maybe to another program listening on that host) while keeping the real source IP in the traffic? The whole idea is that in the server logs I want to see people's real IP addresses, not the cloud server's private VPN IP.

  • Admiral Patrick

    I’ve no experience with ZeroTier, but I use a combo of WireGuard and OpenVPN. I use OpenVPN inside the Docker containers since it’s easier to containerize than WireGuard.

    Inside the Docker container, I have the following logic:

    1. supervisord starts openvpn along with the other services in the container (yeah, yeah, it’s not “the docker way” and I don’t care)
    2. OpenVPN is configured with an “up” and “down” script
    3. When OpenVPN completes the tunnel setup, it runs the up script which does the following:
    # Get the current default route / Docker gateway IP
    export DOCKER_GW=$(ip route | grep default | cut -d' ' -f 3)
    
    # Delete the default route so the VPN can replace it.
    ip route del default via $DOCKER_GW;
    
    # Add a static route through the Docker gateway only for the VPN server IP address
    ip route add $VPN_SERVER_IP via $DOCKER_GW; true
    # Likewise route the local LAN through the Docker gateway so that traffic stays off the tunnel
    ip route add $LAN_SUBNET via $DOCKER_GW; true
    
    
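    The matching “down” script just needs to undo this when the tunnel drops. A minimal sketch is below; it assumes the up script saved the gateway IP to a file (the two scripts run as separate processes, so the exported variable isn't visible to the down script), which isn't shown above.

    # Hypothetical "down" script: restore the original default route once the tunnel is gone.
    # Assumes the up script wrote $DOCKER_GW to a file such as /tmp/docker_gw.
    DOCKER_GW=$(cat /tmp/docker_gw)
    ip route add default via $DOCKER_GW; true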

    LAN_SUBNET is my local network (e.g. 192.168.0.0/24) and VPN_SERVER_IP is the public IP of the VPS (1.2.3.4/32). I pass those in as environment variables via docker-compose.
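
    For reference, a minimal docker-compose sketch of how those variables could be passed in (the service and image names here are placeholders, not the actual setup):

    services:
      home-gateway:
        image: example/openvpn-service      # placeholder image name
        cap_add:
          - NET_ADMIN                       # needed so the up script can change routes
        devices:
          - /dev/net/tun                    # needed for OpenVPN to create its tun device
        environment:
          - VPN_SERVER_IP=1.2.3.4/32        # public IP of the VPS
          - LAN_SUBNET=192.168.0.0/24       # local home network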

    The VPN server pushes the default routes to the client (0.0.0.0/1 via <VPS VPN IP> and 128.0.0.0/1 via <VPS VPN IP>).

    Again, sorry this is all generic, but since you’re using different mechanisms, you’ll need to adapt the basic logic.

    • @nickshanks@lemmy.worldOP

      Thanks, this helps a lot. So in your OpenVPN config, on the client, do you have it set to send all traffic back through the VPN?

      • Admiral Patrick

        You may be able to do it through the client, yes, but I have it pushed from the server:
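
        For illustration, the usual way to do that on the server side is OpenVPN's redirect-gateway push, which is what installs the 0.0.0.0/1 and 128.0.0.0/1 routes mentioned above (a sketch of the directive, not the exact config):

        # server.conf: tell connecting clients to route everything through the tunnel
        push "redirect-gateway def1"
        # equivalently, the two half-default routes can be pushed explicitly:
        # push "route 0.0.0.0 128.0.0.0"
        # push "route 128.0.0.0 128.0.0.0"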

        • @nickshanks@lemmy.worldOP

          Okay, can we go back to those iptables commands?

          # On the VPS: rewrite the destination of traffic hitting {PORT} on the public IP
          # so it is forwarded to the VPN client (the home server)
          iptables -t nat -A PREROUTING -d {VPS_PUBLIC_IP}/32 -p tcp -m tcp --dport {PORT} -j DNAT --to-destination {VPN_CLIENT_ADDRESS}
          # Masquerade traffic from the VPN subnet as it leaves the VPS's public interface
          iptables -t nat -A POSTROUTING -s {VPN_SUBNET}/24 -o eth0 -j MASQUERADE
          

          Just to confirm, is the -o eth0 in the second command essentially the interface where all the traffic is coming in? I’ve set up a quick WireGuard VPN with Docker and configured the client so that it routes ALL traffic through the VPN. Doing something like curl ifconfig.me now shows the public IP of the VPS… this is good. But it seems like the iptables commands aren’t working for me.

          • Admiral Patrick

            Just to confirm, is the -o eth0 in the second command essentially the interface where all the traffic is coming in?

            That is the interface the masqueraded traffic should exit through: on the VPS, eth0 is the public-facing interface the traffic goes out of, not the one it comes in on.
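
            If it still doesn't pass traffic, here is a rough sketch of the same idea adapted to a WireGuard setup on the VPS. The interface names (eth0, wg0), the home peer's tunnel address (10.8.0.2), the tunnel subnet and the port are all assumptions to adjust; packet forwarding also has to be allowed:

            # Allow the VPS to route packets between interfaces (assumption: not already enabled)
            sysctl -w net.ipv4.ip_forward=1

            # Rewrite the destination of traffic arriving on the public interface for port 443
            # so it is sent to the home peer over the tunnel
            iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2

            # Let the forwarded traffic through the FORWARD chain in both directions
            # (if Docker manages the firewall, the default FORWARD policy is DROP)
            iptables -A FORWARD -i eth0 -o wg0 -p tcp -d 10.8.0.2 --dport 443 -j ACCEPT
            iptables -A FORWARD -i wg0 -o eth0 -j ACCEPT

            # Masquerade only the tunnel subnet's own outbound traffic as it leaves eth0.
            # The DNAT'd traffic above keeps the client's real source IP, since replies
            # come back through the tunnel and are un-NATed on the way out.
            iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE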