Welcome to Hurricane Electric's IPv6 Tunnel Broker Forums.

How do I pass IPv6 to my VPS using IPv4 as transit automatically?

Started by stevefan1999a, December 19, 2020, 09:47:19 AM



I've been assigned a /48 for my home network, and I want to distribute /64s to my VPSes and have them automatically routed through my home router (or through other peers, since over IPv4 some of my servers don't have a good route to he.net), but I don't know how. By "automated" I mean something like: install a piece of software that connects to a VPN, and boom, a /64 pops out for you.

Right now I use WireGuard to connect all my servers (over IPv4), manually assign IPv6 ranges, and route v6 back through my home router for all my VPSes. But IPv6 internet access from the VPSes is actually SNAT masqueraded, meaning the v6 connection has been masked all this time and I can't get a routable public v6 address on any of my VPSes. How do I sidestep this? Can I make my home router a v6 SIT server over v4?
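To make the question concrete, here's a minimal sketch of my current setup (the prefix 2001:db8:1234::/48, the interface name wg0, and the endpoint hostname are all placeholders, not my real values): the home router lists each VPS's delegated /64 in that peer's AllowedIPs, and a kernel route sends the /64 into the tunnel.

```shell
# Home router side: WireGuard peer entry for one VPS
# (keys, endpoint, and addresses are placeholders)
cat >> /etc/wireguard/wg0.conf <<'EOF'
[Peer]
PublicKey = <vps-public-key>
# route this VPS's delegated /64 into the tunnel
AllowedIPs = 2001:db8:1234:10::/64
Endpoint = vps.example.com:51820
PersistentKeepalive = 25
EOF

# Kernel route for the delegated /64 toward the WireGuard interface
ip -6 route add 2001:db8:1234:10::/64 dev wg0

# On the VPS: take an address from the /64 and default-route v6 back home
ip -6 addr add 2001:db8:1234:10::1/64 dev wg0
ip -6 route add default dev wg0
```

The part I'd like to automate is handing out those /64s and installing the routes without doing it by hand for every new VPS.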

Also, I want to deploy Calico with IPv6 and directly expose pod addresses in a public IPv6 range, maybe a /80 (just for fun; I want to learn IPAM and network policy, but not on a private Class B range), and I can't even get a /64 routed yet.

Kubernetes is really a match made in heaven for IPv6: there are tons of containers and pods, but private v4 ranges are pretty scarce, and sometimes you even hit address-range exhaustion in your Flannel/Calico/Kube-Router pool that halts pod creation entirely. That caused a catastrophic cluster failure for me once (the main worker node had fully exhausted a /24). This is why we should have gone for v6 in the first place, but Google didn't...

Now I know there is a limit of 5 /64s with the tunnel broker, and that is exactly why I went for a /48, but I don't know how router advertisement works and whether it has a role here. If possible I'd like to know how to make Calico use a public /80 range for pod deployment.
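In case it helps anyone answering, this is the kind of thing I'm imagining, though I haven't verified it end to end (the CIDR is a placeholder carved from a /48, and the pool name is made up): Calico lets you define a custom IPPool, so pointing it at a public /80 with NAT disabled might look like this.

```shell
# Hypothetical: carve a public /80 out of the delegated /48 for Calico pods
# (2001:db8:1234::/48 is a placeholder prefix, not my real allocation)
cat <<'EOF' | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: public-v6-pods
spec:
  cidr: 2001:db8:1234:42::/80
  natOutgoing: false   # no SNAT: pods keep their real public addresses
  disabled: false
EOF
```

What I don't know is whether the upstream routing (RA, or a static route for that /80 toward the node) works out of the box or needs extra configuration on my router.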

I have noticed that the HK PoP of he.net is performing in a volatile manner: my ping to some public v6 services (for example, dns.google) jumps around between 3 and 15 ms. This seemed unusual until I looked at it with mtr, which clearly shows the tunnel itself is not stable. What is the main cause of this? Could it be that the HK PoP has too many other clients, or is being used for nefarious purposes that damage QoS for everyone else?
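For reference, this is roughly the trace I ran (dns.google is just the example target; any public v6 host would do), which reports per-hop loss and latency so you can see where the jitter starts:

```shell
# Trace the v6 path and summarize loss/latency per hop over 30 cycles
mtr -6 --report --report-cycles 30 dns.google
```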


Cool, I discovered a little thing myself: if I remove my SNAT, I can actually see the v6 address correctly from "curl https://ipv6.icanhazip.com/". Then I had the upstream router add a route to my DMZ host and it's working like a charm! I shouldn't have gone for a SIT route in the first place, LOL. I had simply forgotten to add the route :D...
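For anyone who finds this thread later, the fix boiled down to something like the following (the /64, the wg0 interface name, and the ip6tables rule are placeholders matching my earlier sketch, not literal copies of my config):

```shell
# On the upstream (home) router: stop masquerading v6 for the DMZ host
ip6tables -t nat -D POSTROUTING -s 2001:db8:1234:10::/64 -j MASQUERADE

# ...and add the missing route for the VPS's /64 over the existing tunnel
ip -6 route add 2001:db8:1234:10::/64 dev wg0

# On the VPS: verify the public address is now visible end to end
curl https://ipv6.icanhazip.com/
```

No SIT tunnel needed; the existing WireGuard link carries the v6 traffic once the route exists and the SNAT rule is gone.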