• Welcome to Hurricane Electric's IPv6 Tunnel Broker Forums.

Setting up Own IPv6 tunnel server

Started by yiftachswr, March 27, 2013, 03:44:19 PM


yiftachswr

hey guys


I have been playing with TunnelBroker and it's working just fine :)
But the best v4 server to connect to is around 200ms away from here, so I got a VPS in my country with native IPv6. Was wondering if anyone can please point me in the right direction on how to set up the VPS to be able to tunnel my IPv6 traffic through it. I have been spending hours on Google, but all the info I found was how to set up the client side for a 6to4. I want to learn how to set up the server side.

Thanks :)

kasperd

Quote from: yiftachswr on March 27, 2013, 03:44:19 PMso I got a VPS in my country with native IPv6. Was wondering if anyone can please point me in the right direction how to set up the VPS to be able to tunnel my IPv6 traffic through it.
To do it properly, you need a /63 or shorter prefix routed to your VPS. Do you already have that? If you cannot get a prefix routed to your VPS, there are possible workarounds. But let's not discuss those workarounds, unless it becomes relevant.

Quote from: yiftachswr on March 27, 2013, 03:44:19 PMI have been spending hours on the google but all the info I found was how to set up client side for a 6to4. I want to learn how to set up the server side.
Setting up the server is not very different from setting up a client. The only thing that differs is the routing table.

Let's assume your VPS has addresses as follows:

IPv4 address: 198.51.100.42
IPv6 address: 2001:db8::2/64
Default gateway: 2001:db8::1
Routed prefix: 2001:db8:1::/48


Then from that you can use 2001:db8:1:1::/64 for the tunnel link and 2001:db8:1:100::/56 as a routed prefix for the client.

Assuming the client has IPv4 address 203.0.113.7 the configurations would be as follows:
Server:
Tunnel IPv4: 198.51.100.42
Peer IPv4: 203.0.113.7
Tunnel IPv6: 2001:db8:1:1::1/64
Routing table:
::/0 gateway 2001:db8::1
2001:db8:1:100::/56 gateway 2001:db8:1:1::2
Client:
Tunnel IPv4: 203.0.113.7
Peer IPv4: 198.51.100.42
Tunnel IPv6: 2001:db8:1:1::2/64
Routing table:
::/0 gateway 2001:db8:1:1::1
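On Linux, that configuration could be brought up with iproute2 roughly as follows. This is a sketch only, using the example addresses above; the interface name he6 is an arbitrary choice, and all commands need root:

```shell
# Server side (198.51.100.42): a 6in4 (protocol 41) tunnel to the client
ip tunnel add he6 mode sit local 198.51.100.42 remote 203.0.113.7 ttl 255
ip link set he6 up
ip -6 addr add 2001:db8:1:1::1/64 dev he6
# route the client's /56 over the tunnel
ip -6 route add 2001:db8:1:100::/56 via 2001:db8:1:1::2 dev he6
# the server must forward IPv6 between eth0 and the tunnel
sysctl -w net.ipv6.conf.all.forwarding=1

# Client side (203.0.113.7): mirror image, with a default route instead
ip tunnel add he6 mode sit local 203.0.113.7 remote 198.51.100.42 ttl 255
ip link set he6 up
ip -6 addr add 2001:db8:1:1::2/64 dev he6
ip -6 route add ::/0 via 2001:db8:1:1::1 dev he6
```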


If you tell us the actual addresses of your VPS, I can tell you what the configuration could look like using your addresses.

yiftachswr

Thank you for your descriptive reply!
sadly I am unable to get more than one IPv6 address for this box (I have been in communication with my provider; they refuse to give more IPs and they don't see the need).
And sadly they are also the only VPS provider I found in New Zealand with native IPv6, so I don't have many other options left.

Is it possible to set up such a thing with a single IPv4 + single IPv6 address? (Linux box)

kasperd

Quote from: yiftachswr on April 01, 2013, 12:28:05 AMI am unable to get more than one IPv6 address for this box
I am not convinced that really is true. And if it really was true, I might have told them I wouldn't pay because the product wasn't what was advertised. But I think it is a misunderstanding, and in reality they are providing you access to more than one IPv6 address.

Quote from: yiftachswr on April 01, 2013, 12:28:05 AMIs it possible to set up such a thing with a single IPv4 + single IPv6 address? (Linux box)
It will be a bit difficult to get working, and the result won't be great.

I think the next step is to figure out what you really got. First try running ip -6 addr to find out what addresses you have, and what prefix is assigned to the segment. Here is an example from a VPS I got myself:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 2a01:4f8:d16:701::2/126 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::21c:14ff:fe21:3ec7/64 scope link
       valid_lft forever preferred_lft forever
3: teredo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1280 qlen 500
    inet6 fe80::ffff:ffff:ffff/64 scope link
       valid_lft forever preferred_lft forever
The interesting part is: 2a01:4f8:d16:701::2/126 scope global. How many global addresses do you have, and what is the prefix length?

In my case the prefix length used to be /64, and I had that link prefix all to myself. However, I did not receive the routed prefix they had advertised. So I changed the link prefix from /64 to /126 and started a daemon to respond to all neighbor discovery on the link, effectively turning the rest of the /64 into a routed prefix from which I could use all addresses except the first four.
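kasperd doesn't name the daemon he used, but one tool that implements this trick today is ndppd (the NDP proxy daemon). A sketch of an /etc/ndppd.conf that answers neighbor solicitations for that whole /64 on eth0 (the prefix here is his example, not something to copy):

```
proxy eth0 {
    rule 2a01:4f8:d16:701::/64 {
        static
    }
}
```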

yiftachswr

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:3a:bf:70 brd ff:ff:ff:ff:ff:ff
    inet 103.246.250.15/24 brd 103.246.250.255 scope global eth0
    inet6 2401:f000:32:23::18/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe3a:bf70/64 scope link
       valid_lft forever preferred_lft forever
3: sit0: <NOARP> mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0



Would this mean I will be able to do the same as you and utilize the rest of the /64, even if those addresses haven't been announced?

kasperd

Quote from: yiftachswr on April 01, 2013, 02:20:57 AMWould this mean I will be able to do the same as you and utilize the rest of the /64, even if those addresses haven't been announced?
It depends on a few factors. First of all, it is important whether the segment is shared with other users.

What I did is something that should only be done if you have the link /64 to yourself. Doing the same on a shared /64 could break connectivity for all other users on that segment. A good way to prevent that sort of breakage is to simply allocate a separate /64 for each user.

Your IP address looks like the segment might be shared. Having an address ending in ::18 likely means they started from 1 and went through the numbers until reaching 18. So what are the other IP addresses used for?

I tried traceroute to them, but I found that only 2401:f000:32:23::1, 2401:f000:32:23::2, and 2401:f000:32:23::18 are responding. The route to 2401:f000:32:23::1 is one hop shorter, suggesting that one is your gateway.

A much better idea of what is on the segment can be gotten by running a few commands on the VPS itself. Try to ping the addresses I mentioned, plus a few of those I got no response from. Then use ip -6 neigh to figure out if any of them are responding to neighbor discovery, and whether multiple of those addresses are assigned to a single network interface (i.e. they have the same MAC address).
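Concretely, that probing could look like this on the VPS (a sketch; the addresses are taken from the traceroute above, and the loop is harmless to run since it only sends pings):

```shell
# ping each candidate once to force neighbor discovery
for a in 1 2 3 4 5 18; do
    ping6 -c 1 -w 2 2401:f000:32:23::$a > /dev/null 2>&1
done

# then read the neighbor cache: entries sharing a MAC (lladdr)
# belong to the same machine
ip -6 neigh show dev eth0
```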

You can also find out a bit by running a tcpdump command. I think tcpdump -pni eth0 ip6 will do.

Next you need to figure out if you can utilize other addresses. That can be done either by assigning another address manually, or by enabling one of the autoconfiguration methods.
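For the manual route, a sketch (the ::1234 suffix is an arbitrary pick of mine; on a shared segment it could collide with another customer's address, so treat it strictly as an experiment):

```shell
# claim an extra address out of the on-link /64 (requires root)
ip -6 addr add 2401:f000:32:23::1234/64 dev eth0

# verify locally that it was added
ip -6 addr show dev eth0

# then, from some other IPv6-connected host, test reachability:
#   ping6 2401:f000:32:23::1234
```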

yiftachswr

ip -6 neigh

fe80::230:48ff:feba:b194 dev eth0 lladdr 00:30:48:ba:b1:94 router STALE
2401:f000:32:23::1 dev eth0 lladdr 00:22:83:8d:70:01 router REACHABLE
2401:f000:32:23::2 dev eth0 lladdr 00:30:48:ba:b1:94 router REACHABLE
2401:f000:32:23::3 dev eth0  INCOMPLETE
2401:f000:32:23::4 dev eth0  FAILED
2401:f000:32:23::5 dev eth0  FAILED
2401:f000:32:23::7 dev eth0  FAILED
2401:f000:32:23::8 dev eth0  FAILED
2401:f000:32:23::14 dev eth0  FAILED
2401:f000:32:23::15 dev eth0  INCOMPLETE


tcpdump -pni eth0 ip6 (for about 20min)

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
16:34:04.902073 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:34:05.897713 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:34:07.897803 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:34:11.898120 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:34:19.893657 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:34:35.894682 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:34:56.679818 IP6 fe80::222:830b:c08d:7001 > fe80::216:3eff:fe3a:bf70: ICMP6, neighbor solicitation, who has fe80::216:3eff:fe3a:bf70, length 32
16:34:56.679867 IP6 fe80::216:3eff:fe3a:bf70 > fe80::222:830b:c08d:7001: ICMP6, neighbor advertisement, tgt is fe80::216:3eff:fe3a:bf70, length 24
16:34:56.680167 IP6 fe80::222:830b:c08d:7001 > fe80::216:3eff:fe3a:bf70: ICMP6, neighbor solicitation, who has fe80::216:3eff:fe3a:bf70, length 32
16:34:56.680176 IP6 fe80::216:3eff:fe3a:bf70 > fe80::222:830b:c08d:7001: ICMP6, neighbor advertisement, tgt is fe80::216:3eff:fe3a:bf70, length 24
16:35:01.678406 IP6 fe80::216:3eff:fe3a:bf70 > fe80::222:830b:c08d:7001: ICMP6, neighbor solicitation, who has fe80::222:830b:c08d:7001, length 32
16:35:01.681611 IP6 fe80::222:830b:c08d:7001 > fe80::216:3eff:fe3a:bf70: ICMP6, neighbor advertisement, tgt is fe80::222:830b:c08d:7001, length 24
16:35:01.681661 IP6 fe80::222:830b:c08d:7001 > fe80::216:3eff:fe3a:bf70: ICMP6, neighbor advertisement, tgt is fe80::222:830b:c08d:7001, length 24
16:35:01.682080 IP6 fe80::222:830b:c08d:7001 > fe80::216:3eff:fe3a:bf70: ICMP6, neighbor advertisement, tgt is fe80::222:830b:c08d:7001, length 24
16:35:01.682187 IP6 fe80::222:830b:c08d:7001 > fe80::216:3eff:fe3a:bf70: ICMP6, neighbor advertisement, tgt is fe80::222:830b:c08d:7001, length 24
16:35:07.896616 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:37:52.658329 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:37:52.659636 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:37:53.658693 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:37:53.660115 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:37:54.658245 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:37:54.658424 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:38:01.658028 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:38:01.658344 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:38:02.658236 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:38:02.658370 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:38:03.657889 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:38:03.662986 IP6 fe80::222:830b:c08d:7001 > ff02::1:ff00:5: ICMP6, neighbor solicitation, who has 2401:f000:32:23::5, length 32
16:41:11.894640 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:41:12.890187 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:41:14.890362 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:41:18.885714 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:41:26.886324 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:41:42.887322 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit
16:42:14.889318 IP6 fe80::8c8f:6618:954c:9c73.dhcpv6-client > ff02::1:2.dhcpv6-server: dhcp6 solicit

35 packets captured
35 packets received by filter
0 packets dropped by kernel


I have been in more contact with the provider, and they mentioned that they only have a single /64 routed to the actual node server, so I highly doubt they will be able to give me anything close to a /64, let alone a /48.

kasperd

Quote from: yiftachswr on April 01, 2013, 01:47:28 PM
fe80::230:48ff:feba:b194 dev eth0 lladdr 00:30:48:ba:b1:94 router STALE
2401:f000:32:23::1 dev eth0 lladdr 00:22:83:8d:70:01 router REACHABLE
2401:f000:32:23::2 dev eth0 lladdr 00:30:48:ba:b1:94 router REACHABLE
Both 2401:f000:32:23::1 and 2401:f000:32:23::2 are listed as routers. I'm not sure what to make of that.

Quote from: yiftachswr on April 01, 2013, 01:47:28 PMtcpdump -pni eth0 ip6 (for about 20min)
What that tells us is that there isn't much IPv6 activity on that link, but it does look like there are a few other nodes. Some are using privacy addresses even for link-local, so an output format showing MAC addresses as well (e.g. tcpdump -e) might have given slightly more information.

Quote from: yiftachswr on April 01, 2013, 01:47:28 PMI have been more in contact with the provider and they mentioned that they only have a single /64 routed to the actual node server so I highly doubt they will be able to give me anything close to a /64 let alone a /48.
You wouldn't have needed anything close to a /48 to do a nice and clean setup. I said a /63 was needed, but in reality a routed /64 would have worked just as well, only slightly less intuitively.

They have the /48 listed separately in the whois database, but they also have the entire /32. The descriptions in whois give no clue as to why the /48 is listed separately.

You are not out of options yet, but since the link does appear to be shared the nice/simple/clean solutions aren't possible.

What you can do is tunnel Ethernet over IP between your VPS and a router at home, then use the bridging support in the Linux kernel to forward packets between eth0 and the Ethernet-tunnel device.

There are a few caveats in such a setup. First of all you want to filter the traffic: you don't want to forward all Ethernet traffic, only IPv6. That just means filtering by ether-type, as all IPv6 traffic uses one distinct ether-type (0x86DD). Additional filtering of the IPv6 traffic for security and performance reasons may be a good idea as well, but that would get complicated. Also, since many more types of attacks are possible on-link than across a routed path, you want to ensure packets cannot be injected into your tunnel, so you need a bit more security on such a tunnel than on one which only carries routed IPv6 traffic. Integrity protection with an HMAC would be a good idea.
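A rough sketch of the VPS end of such a setup, assuming a gretap (Ethernet-over-GRE) tunnel and the example addresses from earlier in the thread. Note that plain gretap provides none of the HMAC integrity protection recommended above, and the interface names are made up:

```shell
# VPS end: an Ethernet-over-GRE tunnel towards the home router (requires root)
ip link add ethtun type gretap local 198.51.100.42 remote 203.0.113.7 ttl 255
ip link set ethtun up

# bridge it with eth0 so the home router appears on-link
ip link add br0 type bridge
ip link set eth0 master br0
ip link set ethtun master br0
ip link set br0 up
# note: the VPS's own addresses on eth0 would have to move to br0

# only bridge IPv6 frames (ether-type 0x86DD); drop everything else.
# Host-originated traffic traverses ebtables INPUT/OUTPUT, not FORWARD,
# so the VPS's own IPv4 connectivity is unaffected by this rule.
ebtables -A FORWARD -p ! IPv6 -j DROP
```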

I don't know if standard tunnel implementations exist which achieve all of the above, but it certainly is possible.

That approach also introduces some MTU issues. Nodes will rightfully believe the link has an MTU of 1500 bytes, since it is Ethernet. But with all the additional encapsulation, the final IPv4 packets will have many extra headers. They'll get fragmented between the endpoints, so 1500-byte packets will still work, but it would be more efficient and more reliable to keep the IPv4 packets within the MTU of the IPv4 link between your router and the VPS.

AFAIR, 14 bytes are needed for the Ethernet header. Then you need some integrity protection: IPsec with HMAC-SHA512 would be 80 bytes, but you are going to need the integrity protection at a different layer in the stack from where it would usually go, so another header format than IPsec may make more sense (that doesn't change the size much, though). Encapsulating the entire thing in UDP makes it less likely to encounter problems on the path, so that's another 8 bytes, and finally you need 20 bytes for an IPv4 header. If the IPv4 MTU is 1500 bytes, that leaves 1500-14-80-8-20 = 1378 bytes for the IPv6 MTU, which should be rounded down to a multiple of 8, so the final MTU would be 1376 bytes.
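The arithmetic above, restated as a tiny shell function so the result can be re-derived for other IPv4 path MTUs:

```shell
# Overheads from the post: 14 (Ethernet) + 80 (HMAC-SHA512 integrity)
# + 8 (UDP) + 20 (IPv4 header), subtracted from the IPv4 path MTU,
# then rounded down to a multiple of 8.
tunnel_mtu() {
    local payload=$(( $1 - 14 - 80 - 8 - 20 ))
    echo $(( payload - payload % 8 ))
}

tunnel_mtu 1500   # prints 1376
tunnel_mtu 1492   # prints 1368 (e.g. behind PPPoE)
```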

You could just let the tunnel clamp the MSS on TCP SYN packets such that at least those don't require fragmentation on the IPv4 link, and then let fragmentation on the IPv4 link deal with all other packets exceeding the MTU.
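With netfilter, that clamping could be done roughly like this. A sketch: 1316 is 1376 minus 40 bytes of IPv6 header and 20 bytes of TCP header, and in a bridged setup the br_netfilter sysctls would also be needed so bridged frames traverse ip6tables at all:

```shell
# clamp the MSS on forwarded IPv6 TCP SYNs to fit the 1376-byte tunnel MTU
ip6tables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --set-mss 1316
```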

yiftachswr

Sorry I have not updated my post.

Thank you for the feedback and advice, kasperd; sadly it was a bit beyond me.

What I ended up doing is setting up OpenVPN on the VPS with the single IPv6 and single IPv4 address,
and at the end I added a connect and a disconnect script.
/etc/openvpn/client-connect.sh has:

#!/bin/bash

# This is a script that is run each time a remote client connects
# to this openvpn server.
# it will setup the ipv6 tunnel depending on the ip address that was
# given to the client

BASERANGE="2401:67F6:FA0F"
# v6net is the last section of the ipv4 address that openvpn allocated
V6NET=$(echo ${ifconfig_pool_remote_ip} | awk -F. '{print $NF}')

SITID="sit${V6NET}"

# setup the sit between the local and remote openvpn addresses
sudo /sbin/ip tunnel add ${SITID} mode sit ttl 255 remote ${ifconfig_pool_remote_ip} local ${ifconfig_local}
sudo /sbin/ip link set dev ${SITID} up

# config routing for the new network
sudo /sbin/ip -6 addr add ${BASERANGE}:${V6NET}::1/64 dev ${SITID}
sudo /sbin/ip -6 route add ${BASERANGE}:${V6NET}::/64 via ${BASERANGE}:${V6NET}::2 dev ${SITID} metric 1

# log to syslog
echo "${script_type} client_ip:${trusted_ip} common_name:${common_name} local_ip:${ifconfig_local} \
remote_ip:${ifconfig_pool_remote_ip} sit:${SITID} ipv6net:${V6NET}" | /usr/bin/logger -t ovpn


while the /etc/openvpn/client-disconnect.sh has:


#!/bin/bash

# This is a script that is run each time a remote client disconnects
# from this openvpn server.

BASERANGE="2401:67F6:FA0F"
# v6net is the last section of the ipv4 address that openvpn allocated
V6NET=$(echo ${ifconfig_pool_remote_ip} | awk -F. '{print $NF}')

SITID="sit${V6NET}"

sudo /sbin/ip -6 addr del ${BASERANGE}:${V6NET}::1/64 dev ${SITID}

# remove the sit between the local and remote openvpn addresses
sudo /sbin/ip link set dev ${SITID} down
sudo /sbin/ip tunnel del ${SITID} mode sit ttl 255 remote ${ifconfig_pool_remote_ip} local ${ifconfig_local}

# log to syslog
echo "${script_type} client_ip:${trusted_ip} common_name:${common_name} local_ip:${ifconfig_local} \
remote_ip:${ifconfig_pool_remote_ip} sit:${SITID} ipv6net:${V6NET} duration:${time_duration} \
received:${bytes_received} sent:${bytes_sent}" | /usr/bin/logger -t ovpn
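For completeness, wiring these hooks into OpenVPN takes directives along these lines in the server config (a sketch; script-security 2 is required before OpenVPN will execute external scripts, and the scripts rely on environment variables such as ifconfig_pool_remote_ip that OpenVPN exports to them):

```
script-security 2
client-connect /etc/openvpn/client-connect.sh
client-disconnect /etc/openvpn/client-disconnect.sh
```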



And now on my Windows machines I simply added the VPN (10.18.0.X) and push the v6 traffic out through it.

Long story short: I now have v6 connectivity, like a proxy, but sadly my machines cannot be reached in the reverse direction the way they could with tunnelbroker. On the plus side, I can have multiple v4 clients connected, all sharing the one v6 address.