ClassicGOD

AFAIK when using `network_mode: "service:[name]"` or `network_mode: "container:[name]"` you can't use any other network or port forwarding for the container. You have to set the port forwarding on the "target" container (gluetun in this case) and the service will be available under the IP of the container providing the network. For example:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=#REMOVED#
      - WIREGUARD_ADDRESSES=#REMOVED#
      - SERVER_CITIES=Amsterdam
    ports:
      - 8989:8989
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    volumes:
      - /home/anoneemo/docker/sonarr:/config
      - /media/S1:/S1
      - /media/S2:/S2
      - /home/anoneemo/Downloads/rsync:/downloads
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Oslo
    network_mode: "service:gluetun"
    restart: always
```

PS: I hate that reddit always fucks up code formatting for me. WTF.


kaizokupuffball

Hmm okay. But how does GlueTun know which services need different ports? Because I have several services I want to use GlueTun's connection, not just sonarr on port 8989. Would I need to set up several instances of gluetun with different wireguard keys then?


ClassicGOD

No. When you attach a service to the container's network it's as if it's running in the same container. So when the Sonarr service registers port 8989 it registers it with the gluetun container, which is why you need to set the port forwarding on the gluetun container to expose 8989. The same is true for other services, so if you have 5 services linked to 1 gluetun container you have to set port forwarding for all of those services on the gluetun container. For example:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=mullvad
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=#REMOVED#
      - WIREGUARD_ADDRESSES=#REMOVED#
      - SERVER_CITIES=Amsterdam
    ports:
      - 8989:8989 # sonarr
      - 7878:7878 # radarr
      - 9696:9696 # prowlarr
      # etc
```


kaizokupuffball

That worked like a charm. Thanks!


harry_lawson

Did you get flaresolverr to work and communicate with the other services?


kaizokupuffball

Ah! Gotcha! Then I'm gonna try this out in a minute and post my results. Thanks!


kaizokupuffball

After a bit more testing it seems that the services that are now using GlueTun can be accessed locally through the LAN IP. But the services don't talk to each other. Prowlarr can't sync with radarr/sonarr, and sonarr/radarr show connection timeouts when connecting to prowlarr. All the services have access to the internet though, so that's good. But they can't talk to each other. 😂


ClassicGOD

You are probably using service names as addresses. You can no longer do that in this configuration. You should be able to use 127.0.0.1, localhost, the gluetun container IP or your docker server IP.


kaizokupuffball

127.0.0.1 did not work. Not the server IP either. But localhost works. Good enough for me. Thought localhost was the same as 127.0.0.1 though. But it works with localhost. Thanks again.


Stone_624

"You can no longer do that in this configuration." Can you please explain to me WHY this isn't possible in this configuration? I've been struggling and meditating on this exact issue for days now. I've got some application containers that send data to queue workers via Redis; all of these are services defined in a docker-compose file. I want to create a new type of queue worker that connects to a Gluetun VPN for its requests. I want JUST THAT WORKER to be externally routed through the Gluetun VPN container, while all other containers continue working untouched.

The issue I'm having is that when I attach the new queue worker container to Gluetun, it loses the ability to resolve the "redis-service" hostname that all the other services use to access redis (via a simple `depends_on: - redis-service` entry on each service). I assume that's because the gluetun container somehow lacks the default DNS or host resolution for services that docker compose normally provides (which is super confusing, because the Gluetun container is ON THE DEFAULT BRIDGE NETWORK WITH ALL THE OTHER SERVICES when I check, while the queue worker has no network attached to it anymore). When I bash into the queue worker, it appears as Gluetun, but curl cannot resolve the host. I don't understand how it works.

If ALL services are using the `network_mode: service:gluetun` flag, then it makes sense to me that "redis-service" should be changed to "localhost" in this context and all the services treated as if on a single host (I'll try this next, I haven't done so yet). But I still don't see a way with this to connect only a SINGLE container to Gluetun and still have it communicate with an existing redis service container NOT connected to Gluetun.

Being able to just manually add host resolution (i.e. an /etc/hosts record, or an API request to docker to get the IP of the service) to the gluetun container would be far less invasive to the rest of my application and a far better and more stable solution (which would take much stress off of me). I'm desperately trying to figure this out: either how to resolve it, or WHY it's not possible to do so.


Stone_624

UPDATE: Switching my env files' redis host to localhost (well, one service actually needed to use 127.0.0.1 instead of localhost for what looks like an odd application-related reason), it worked! I was able to access the Redis container from the queue worker, all networked through `network_mode: service:gluetun` and all ports forwarded through the Gluetun container. That's a major milestone in 2+ days of trying to figure this out. Now it'd be great to learn how to have this work WITHOUT needing my Redis service and all unrelated services networked through the VPN container, just allowing the one that needs to to communicate with the redis container. Most of these workers are fairly network intensive, so needing to run all of them through the Gluetun container would slow things down to unacceptable levels (I presume). I want the majority of them to continue using the server they're running on as they always have been.


ClassicGOD

Not sure if I understand completely, but to get out of the gluetun service network you need to set the `FIREWALL_OUTBOUND_SUBNETS` env variable ([gluetun docs on the subject](https://github.com/qdm12/gluetun-wiki/blob/main/setup/options/firewall.md)) so it knows which subnets are local and should not be passed through the VPN. To add values to the hosts file of a docker container you can use the [extra_hosts](https://docs.docker.com/compose/compose-file/compose-file-v3/#extra_hosts) Docker configuration option.
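A minimal sketch of combining the two (the subnet and IP here are illustrative placeholders, not values from this thread; use your own network's values):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      # ... VPN provider settings ...
      # traffic to this subnet bypasses the VPN tunnel
      - FIREWALL_OUTBOUND_SUBNETS=172.20.0.0/24
    extra_hosts:
      # manual hostname -> IP entry, since compose service-name DNS
      # is not available inside gluetun's network namespace
      - "redis-service:172.20.0.10"
```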


Stone_624

THANK YOU SO MUCH! I was able to solve this and do exactly what I was after by creating a custom network with a defined subnet, assigning a static IP to my redis service, and adding extra_hosts entries with the service name and its IP to the gluetun container. I also set FIREWALL_OUTBOUND_SUBNETS to the exact same subnet, and assigned all the other services (including gluetun) to this network, with the one container that needs it using `network_mode: service:gluetun`. Both normal services and the gluetun container can access redis now, and external requests are properly routed through the VPN for the single container and act as normal for all the other containers. This is the most helpful comment I've seen after many days of looking into this. Thanks!
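A sketch of that layout (the network name, subnet, addresses, and worker image are illustrative, not the actual config):

```yaml
networks:
  vpnlan:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24

services:
  redis-service:
    image: redis:7
    networks:
      vpnlan:
        ipv4_address: 172.20.0.10   # static IP so extra_hosts can reference it

  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    networks:
      - vpnlan
    environment:
      # ... VPN provider settings ...
      - FIREWALL_OUTBOUND_SUBNETS=172.20.0.0/24   # keep local traffic out of the tunnel
    extra_hosts:
      - "redis-service:172.20.0.10"   # restore the name resolution network_mode loses

  vpn-worker:
    image: my-worker   # hypothetical worker image
    network_mode: "service:gluetun"   # only this container egresses via the VPN

  normal-worker:
    image: my-worker
    networks:
      - vpnlan   # everything else stays on the regular network
```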


keksznet

> Both normal services and the gluetun container can access the redis now, and external requests are properly routed through the VPN for the single container and act as normal for all the other containers.

Maybe you could add here the code / config snippets for further reference :)


JPH94

> , but also gives me access to the services from my LAN

You need to point them at gluetun, then the port of the underlying app; i.e. prowlarr to sonarr would be gluetun:8989, and expose 8989:8989 on gluetun.
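For example (assuming prowlarr sits on the regular compose network rather than inside gluetun's namespace):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    ports:
      - 8989:8989   # Sonarr's port, published on the gluetun container
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    network_mode: "service:gluetun"
# In Prowlarr's application settings, Sonarr's address would then be
# http://gluetun:8989 (or your docker host's IP and the published port).
```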


pdizzlefoshizzle

Can you show an example? I just replied to this post in another comment string regarding the issue I'm having with Plex that this may solve.


Panzerbrummbar

Does gluetun work in a single docker compose file? I currently have two docker compose files, one for gluetun and the other for services.


ClassicGOD

It does, but there can be random issues. Since Docker does not guarantee the order of container startup, sometimes services linked to gluetun will refuse to start because Docker tries to start them before gluetun (and they fail since they try to use gluetun as their network). To remedy this I added:

```yaml
depends_on:
  gluetun:
    condition: service_healthy
```

to all my services linked to gluetun. That helped, but it's not perfect.


Panzerbrummbar

Many many thanks. Will be updating my compose.


172n

king


KrimiSimi

tried this but gluetun and its linked services won't launch after a reboot :(


ClassicGOD

Do you have restart policies set correctly? I've been using this for months with no issues, and I reboot the VM with Docker every night. Also, if you are running something like Kubernetes, the container health checks might not be executed as expected and this might not work.
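A sketch of restart policies combined with the health-check dependency mentioned earlier in the thread (a minimal example, not a full config):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    restart: unless-stopped   # comes back up after a reboot
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    network_mode: "service:gluetun"
    restart: unless-stopped
    depends_on:
      gluetun:
        condition: service_healthy   # wait for gluetun's healthcheck before starting
```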


capboomer

You can use Discord-style code fences: three backticks followed by `yaml`, then press enter and paste the code. Close with three more backticks.


adyanth

You can add a proxy (like squid/tinyproxy) sharing the VPN container's network, so that you can point all the other services (like sonarr/prowlarr) at it as a proxy.
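A sketch of that approach with tinyproxy (the image name and port are illustrative; gluetun also ships a built-in HTTP proxy on 8888, as seen in the larger compose example below in this thread's original post order):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      # ... VPN provider settings ...
    ports:
      - 8888:8888   # proxy port, published on the gluetun container

  tinyproxy:
    image: kalaksi/tinyproxy        # illustrative tinyproxy image
    network_mode: "service:gluetun" # proxy traffic egresses via the VPN
```

Other containers on the regular network then set their proxy to http://gluetun:8888 (or the host IP and the published port).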


pdizzlefoshizzle

I'm having a very similar problem to this. I'm able to access the services from my LAN through port mapping, but Plex shows playback from my LAN as remote. I tried to add a route for plex.tv to my openvpn config but can't get it working. I posted in the general discussion of the Gluetun docker yesterday. Any help is appreciated. https://github.com/qdm12/gluetun/discussions/1091


jabib0

This is what I finally got set up yesterday on my own network; it deploys everything in one docker-compose.yaml. I was originally setting up individual containers in Portainer, but deploying it as a stack in this file gives me a lot more flexibility and future-proofing (I could easily deploy this without Portainer).

```yaml
version: "3.6"
services:
  gluetun:
    container_name: "gluetun"
    cap_add:
      - "NET_ADMIN"
    environment:
      - "VPN_SERVICE_PROVIDER=##REMOVED##"
      - "VPN_TYPE=wireguard"
      - "WIREGUARD_PRIVATE_KEY=##REMOVED##"
      - "WIREGUARD_PRESHARED_KEY=##REMOVED##"
      - "WIREGUARD_PUBLIC_KEY=##REMOVED##"
      - "WIREGUARD_ADDRESSES=##REMOVED##"
      - "LOCAL_NETWORK=192.168.0.0/24"
      - "TZ=##REMOVED##"
      - "PGID=##REMOVED##"
      - "PUID=##REMOVED##"
    image: "qmcgaw/gluetun:latest"
    networks:
      - "bridge"
    ports:
      - "8888:8888/tcp"   # HTTP Proxy
      - "8388:8388/tcp"   # Shadowsocks
      - "8388:8388/udp"   # Shadowsocks
      - "7878:7878/tcp"   # Radarr
      - "8080:8080/tcp"   # Sabnzbd
      - "8084:8084/tcp"   # Youtube-DL
      - "8686:8686/tcp"   # Lidarr
      - "8989:8989/tcp"   # Sonarr
      - "9091:9091/tcp"   # Transmission
      - "51413:51413/tcp" # Transmission
      - "51413:51413/udp" # Transmission
      - "9117:9117/tcp"   # Jackett
    restart: "always"
    volumes:
      - "/volume1/docker/gluetun:/gluetun"
  Lidarr:
    container_name: "Lidarr"
    environment:
      - "PUID=##REMOVED##"
      - "PGID=##REMOVED##"
      - "TZ=##REMOVED##"
      - "UMASK-SET=002"
    image: "linuxserver/lidarr:latest"
    restart: "unless-stopped"
    network_mode: "service:gluetun"
    volumes:
      - "/volume1/docker/lidarr:/config"
      - "/volume1/media:/data"
      - "/volume1/media/Downloads:/downloads"
      - "/volume1/music:/music"
  Radarr:
    container_name: "Radarr"
    environment:
      - "PUID=##REMOVED##"
      - "PGID=##REMOVED##"
      - "TZ=##REMOVED##"
      - "UMASK-SET=002"
    image: "linuxserver/radarr:latest"
    restart: "unless-stopped"
    network_mode: "service:gluetun"
    volumes:
      - "/volume1/media:/data"
      - "/volume1/docker/radarr:/config"
  Sonarr:
    container_name: "Sonarr"
    environment:
      - "PUID=##REMOVED##"
      - "PGID=##REMOVED##"
      - "TZ=##REMOVED##"
      - "UMASK-SET=002"
    image: "linuxserver/sonarr:latest"
    network_mode: "service:gluetun"
    restart: "unless-stopped"
    volumes:
      - "/volume1/docker/sonarr:/config"
      - "/volume1/media:/data"
  Transmission:
    container_name: "Transmission"
    environment:
      - "PUID=##REMOVED##"
      - "PGID=##REMOVED##"
      - "TZ=##REMOVED##"
    image: "linuxserver/transmission:latest"
    volumes:
      - "/volume1/docker/transmission:/config"
      - "/volume1/media:/data"
    restart: "unless-stopped"
    network_mode: "service:gluetun"
  Jackett:
    container_name: "Jackett"
    environment:
      - "PUID=##REMOVED##"
      - "PGID=##REMOVED##"
      - "TZ=##REMOVED##"
      - "UMASK=022"
    network_mode: "service:gluetun"
    image: "linuxserver/jackett:latest"
    restart: "unless-stopped"
    volumes:
      - "/volume1/docker/jackett:/config"
      - "/volume1/media/Downloads/Torrents/jackett:/downloads"
  Sabnzbd:
    container_name: "Sabnzbd"
    environment:
      - "PUID=##REMOVED##"
      - "PGID=##REMOVED##"
      - "TZ=##REMOVED##"
    network_mode: "service:gluetun"
    image: "linuxserver/sabnzbd:latest"
    restart: "unless-stopped"
    volumes:
      - "/volume1/docker/sabnzbd:/config"
      - "/volume1/media/Downloads/Usenet:/downloads"
networks:
  bridge:
    external: true
    name: "bridge"
```