NAT64/DNS64 under linux

Started by jimb, September 18, 2011, 01:19:30 AM


jimb

I got NAT64/DNS64 working on my Gentoo Linux based router box.  The combination of NAT64 and DNS64 allows IPv6-only hosts to communicate with IPv4-only hosts on the internet.

I looked around for various solutions for Linux and found that the best-known one, from Ecdysis, wasn't compatible with the kernel I'm running on my router box (2.6.38-11), so I decided to give Tayga a try.

Tayga is a simple "stateless" 1:1 NAT64 translator daemon which runs in user space and uses a TUN interface to exchange packets with the kernel's network stack.

My goal was to allow IPv6-only hosts on my LAN to communicate with the IPv4 internet using a single public IPv4 address.  Since Tayga does simple one-to-one IPv6 -> IPv4 translation, with every IPv6 address requiring a corresponding IPv4 address, it essentially requires a double NAT (unless you have a large chunk of public IPv4 addresses to use for the NAT64 address pool).  So, to achieve this goal, the source translation goes something like this:

LAN IPv6 source address -> Private IPv4 pool address (NAT64 by Tayga) -> Public IPv4 address (NAT44 by iptables)

The return packets go through the reverse process.  This approach works very well: it keeps the NAT64 step a simple stateless operation and lets iptables/netfilter do what it does well, including keeping connection state, instead of having some NAT64 kernel module do everything.
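For example, the source of an outbound packet from one of my LAN hosts would be rewritten roughly like this (concrete values hypothetical; the pool address is picked by Tayga):

2001:db8:1234::8 -> 192.168.255.37 (NAT64 by Tayga) -> 198.51.100.1 (NAT44 by iptables)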

Here is the configuration for the NAT64 half (addresses obfuscated):

/etc/tayga.conf:

tun-device nat64
ipv4-addr 192.168.255.1
prefix 2001:db8:1234:ffff::/96
dynamic-pool 192.168.255.0/24


This uses "nat64" as the name of the TUN interface hooked up to Tayga; 192.168.255.1 is the IPv4 address Tayga itself uses; the NAT64 IPv6 prefix is part of my /48 and is used to represent IPv4 addresses as IPv6 addresses; and the dynamic-pool is the pool of IPv4 addresses which Tayga translates IPv6 addresses to, using the RFC-defined algorithm.

To initialize the TUN interface, one first issues the command tayga --mktun, which simply creates the TUN interface; then the interface must be assigned IPv4 and IPv6 addresses and brought up, like so:

ip link set nat64 up
ip addr add 192.168.0.1 dev nat64
ip addr add 2001:db8:1234::1 dev nat64
ip route add 192.168.255.0/24 dev nat64
ip route add 2001:db8:1234:ffff::/96 dev nat64


The IPv4 and IPv6 addresses put on the TUN interface are the same as the inside addresses on the ethernet interface, set as a /32 and /128 respectively, but this works just fine under Linux.  The routes direct traffic for the IPv4 pool and the IPv6 NAT64 range to the TUN interface and thus into the Tayga daemon.  The interface and routes look like this afterward:

ip addr:

8: nat64: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 500
   link/none
   inet 192.168.0.1/32 scope global nat64
   inet6 2001:db8:1234::1/128 scope global
      valid_lft forever preferred_lft forever

ip route:

192.168.255.0/24 dev nat64  scope link
2001:db8:1234:ffff::/96 dev nat64  metric 1024
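Note that both IPv4 and IPv6 forwarding must be enabled on the router for any of this to work, e.g. something like:

sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1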


At this point, Tayga will translate my LAN hosts' source IPv6 addresses to IPv4 addresses in the pool 192.168.255.0/24, and back.  But this won't get us to the internet yet.  The second part is setting up an iptables/netfilter NAT44 rule to translate from the NAT64 private IPv4 pool range to our public IPv4 address, plus a filter table rule to allow the traffic.  This is a simple two-liner (public IPs obfuscated):

iptables -t nat -A POSTROUTING -s 192.168.255.0/24 -j SNAT --to-source 198.51.100.1
iptables -A FORWARD -s 192.168.255.0/24 -i nat64 -j ACCEPT
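If your FORWARD chain has a default DROP policy, you'll also want a rule for the return traffic heading back toward the pool; a minimal sketch:

iptables -A FORWARD -d 192.168.255.0/24 -o nat64 -m state --state ESTABLISHED,RELATED -j ACCEPT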


Now I can actually send packets to the IPv4 internet from an IPv6 address on my LAN by using the NAT64 IPv6 prefix, like so (a Google IPv4 in this case):

{jimb@r51jimb/pts/1}~> ping6 -n 2001:db8:1234:ffff::74.125.224.116
PING 2001:db8:1234:ffff::74.125.224.116(2001:db8:1234:ffff::4a7d:e074) 56 data bytes
64 bytes from 2001:db8:1234:ffff::4a7d:e074: icmp_seq=1 ttl=52 time=64.2 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e074: icmp_seq=2 ttl=52 time=14.1 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e074: icmp_seq=3 ttl=52 time=13.7 ms
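The hex digits in the synthesized address are just the IPv4 address embedded in the low 32 bits of the prefix (per RFC 6052); you can compute them with something like:

{jimb@r51jimb/pts/1}~> printf '2001:db8:1234:ffff::%02x%02x:%02x%02x\n' 74 125 224 116
2001:db8:1234:ffff::4a7d:e074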


Now we need a DNS64 server to return our NAT64-prefixed IPv6 addresses in place of IPv4 addresses.  For this, I just turned on the functionality in BIND 9.8 with the following config section:

options {
    . . .

    dns64 2001:db8:1234:ffff::/96 {
        clients { 192.168.0.8/32; 2001:db8:1234:0:211:25ff:fe32:76; 2001:db8:1234::8; };
    };

    . . .
};


This directive tells BIND to return synthesized IPv6 addresses when a AAAA record is requested for an internet host which only has an IPv4 address, by placing the IPv4 address in the lower 32 bits of our NAT64 prefix.  The "clients" section restricts this behavior to my test clients.  The result is DNS answers such as this:

{jimb@r51jimb/pts/1}~> dig www.google.com aaaa
; <<>> DiG 9.7.3 <<>> www.google.com aaaa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38149
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 4, ADDITIONAL: 0

;; QUESTION SECTION:
;www.google.com.                        IN      AAAA

;; ANSWER SECTION:
www.google.com.         599918  IN      CNAME   www.l.google.com.
www.l.google.com.       296     IN      AAAA    2001:db8:1234:ffff::4a7d:e071
www.l.google.com.       296     IN      AAAA    2001:db8:1234:ffff::4a7d:e072
www.l.google.com.       296     IN      AAAA    2001:db8:1234:ffff::4a7d:e073
www.l.google.com.       296     IN      AAAA    2001:db8:1234:ffff::4a7d:e074
www.l.google.com.       296     IN      AAAA    2001:db8:1234:ffff::4a7d:e070


Putting it all together, I can now do things like this:

{jimb@r51jimb/pts/1}~> ping6 www.google.com
PING www.google.com(2001:db8:1234:ffff::4a7d:e072) 56 data bytes
64 bytes from 2001:db8:1234:ffff::4a7d:e072: icmp_seq=1 ttl=52 time=15.6 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e072: icmp_seq=2 ttl=52 time=14.9 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e072: icmp_seq=3 ttl=52 time=32.4 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e072: icmp_seq=4 ttl=52 time=15.6 ms

{jimb@r51jimb/pts/1}~> wget -6 www.google.com -O /dev/null
--2011-09-18 00:19:46--  http://www.google.com/
Resolving www.google.com... 2001:db8:1234:ffff::4a7d:e074, 2001:db8:1234:ffff::4a7d:e070, 2001:db8:1234:ffff::4a7d:e071, ...
Connecting to www.google.com|2001:db8:1234:ffff::4a7d:e074|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `/dev/null'

   [ <=>                                   ] 10,286      --.-K/s   in 0s      

2011-09-18 00:19:46 (167 MB/s) - `/dev/null' saved [10286]

{jimb@r51jimb/pts/1}~> lftp ftp.mozilla.org
lftp ftp.mozilla.org:~> debug
lftp ftp.mozilla.org:~> dir
---- Connecting to ftp.mozilla.org (2001:db8:1234:ffff::3ff5:d189) port 21
<--- 230 Login successful.
---> PWD
<--- 257 "/"
---> EPSV
<--- 229 Extended Passive Mode Entered (|||51300|)
---- Connecting data socket to (2001:db8:1234:ffff::3ff5:d189) port 51300
---- Data connection established
---> LIST
<--- 150 Here comes the directory listing.
---- Got EOF on data connection
---- Closing data socket
<--- 226 Directory send OK.
-rw-r--r--    1 ftp      ftp           528 Nov 01  2007 README
-rw-r--r--    1 ftp      ftp           560 Sep 28  2007 index.html
drwxr-xr-x   35 ftp      ftp          4096 Nov 10  2010 pub
lftp ftp.mozilla.org:~>


Firefox watching a YouTube video over NAT64 (you'll need IPv6 to see this):

[screenshot omitted]

(yes, I obfuscated the IPv6)

So far everything works as expected except for active (PORT) mode FTP.  I suppose that's because of the double NAT, but it could just be some iptables settings I need.  Oddly, I get DROP log entries from the FTP NAT helper module on outgoing packets originating from my real public IP and going to the FTP server.  Weird, eh?

Anyway, this was a lot easier than I thought it'd be.  Maybe it should be a new HE test.   :P

kriteknetworks

Yeah, after brief testing with Ecdysis I moved to Tayga as well; it seemed to work better.

jimb

Quote from: kriteknetworks on September 18, 2011, 09:58:46 AM
Yeah, after brief testing with Ecdysis I moved to Tayga as well; it seemed to work better.
Searching around, I think I spotted two other NAT64 implementations on SourceForge, but they seem very idle and have little to no documentation.

I also think there's a NAT64 target for netfilter out there, but it hasn't made it into the mainline kernel yet.  That'd probably ultimately be the ideal way to handle it, since it would be directly integrated with netfilter/iptables.  But for now, Tayga + netfilter seems to work well for me.

kr1zmo

This may sound extremely stupid, but I am new to the IPv6 world: how would I ensure my server would only respond to link-local addresses?  I want to ensure, if I set up a Linux DNS64/NAT64, that it wouldn't be available to the public.  On IPv4 you would ensure your router isn't forwarding requests on port 53 to the server; however, IPv6 is a whole other beast.  How do you ensure this is the case?

kasperd

Quote from: kr1zmo on May 30, 2013, 03:31:45 AM
I want to ensure, if I set up a Linux DNS64/NAT64, that it wouldn't be available to the public.
Is it the DNS server or the NAT, which you are worried about?

I'd say ensuring the NAT64 is not available to the public is the most important part.  If you run your NAT64 using the well-known prefix 64:ff9b::/96, you are guaranteed it won't be publicly available: packets to that prefix aren't supposed to be routed over the public internet, so you needn't worry about them randomly finding their way to your LAN.

If you use the well-known prefix and leave your DNS64 open to the public, outside clients that use your DNS server will still use their own NAT64.  If they don't have their own NAT64 using the well-known prefix, their packets will follow the default route until they reach the backbone and get a no-route error back.  So little harm is done by letting the public use your DNS64.
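For reference, DNS64 on the well-known prefix would be something like this in BIND (a minimal sketch, assuming BIND 9.8 or later):

options {
    dns64 64:ff9b::/96 {
        clients { any; };
    };
};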

It is also possible to run DNS64+NAT64 using a globally routed /96 prefix, for example a /96 out of the addresses you got allocated from HE.  Of course you should only do that if you actually intend to run a public NAT64 service.

Quote from: kr1zmo on May 30, 2013, 03:31:45 AM
On IPv4 you would ensure your router isn't forwarding requests on port 53 to the server; however, IPv6 is a whole other beast.  How do you ensure this is the case?
This is about the DNS server.  It is less important to protect that than your NAT64.  But if you don't exactly need your DNS server to be public, then limiting the set of clients it will respond to is a good idea.  How to configure it depends on the DNS server you are using.  I noticed a section about how to configure this the last time I was looking something up in the BIND administrator's manual.  But I don't know which DNS server you are using.
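In BIND, for instance, it would be something along these lines (a sketch with hypothetical client ranges):

acl trusted {
    192.168.0.0/24;
    2001:db8:1234::/48;
};

options {
    allow-query { trusted; };
};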

wswartzendruber

You can't map the well-known NAT64 prefix to a private IPv4 address range.  This is against the RFC and Tayga will not let you do it.

I defined a ULA for internal LAN access.  Let's say it's fd00:1111:2222::/48.  My hosts are on fd00:1111:2222::/64 and NAT64 is mapped to fd00:1111:2222:ffff::/96.  You can also take the well-known prefix and turn on the ULA bits, giving you fd64:ff9b::/96.
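In tayga.conf that setup looks roughly like this (a sketch using the ULA above; the IPv4 side stays a private pool):

tun-device nat64
ipv4-addr 192.168.255.1
prefix fd00:1111:2222:ffff::/96
dynamic-pool 192.168.255.0/24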

kasperd

Quote from: wswartzendruber on June 03, 2013, 12:30:00 PM
You can't map the well-known NAT64 prefix to a private IPv4 address range.  This is against the RFC and Tayga will not let you do it.
What problems could that possibly cause? Also I think a typical NAT64 deployment doesn't need to work with RFC 1918 addresses in the first place. You could probably trick most NAT64 implementations by using RFC 6598 address space instead of RFC 1918 address space. But I am not sure that is a good idea.

Quote from: wswartzendruber on June 03, 2013, 12:30:00 PM
You can also take the well-known prefix and turn on the ULA bits, giving you fd64:ff9b::/96.
That's not compliant with RFC 4193, since the global ID is required to be generated randomly.

wswartzendruber

Quote from: kasperd on June 03, 2013, 04:07:18 PM
Quote from: wswartzendruber on June 03, 2013, 12:30:00 PM
You can't map the well-known NAT64 prefix to a private IPv4 address range.  This is against the RFC and Tayga will not let you do it.

What problems could that possibly cause? Also I think a typical NAT64 deployment doesn't need to work with RFC 1918 addresses in the first place. You could probably trick most NAT64 implementations by using RFC 6598 address space instead of RFC 1918 address space. But I am not sure that is a good idea.

I assume that's the new 100.64.0.0/10 subnet that ARIN gave up?  If so, using that is something of a hack to get around safety checks.  The main problem with mapping the well-known NAT64 prefix to a private address range is that the well-known prefix is globally shared.  So for any two entities that use the well-known prefix, 64:ff9b::/96 should route to the appropriate NAT64 gateway.  Now since this is IPv6 and that prefix is well-known and public, each address used should map to a unique host.  But let's say I map it to 192.168.0.0/16.  64:ff9b::192.168.0.1 in one network and the same address in a different network will each go somewhere else.  We have, essentially, a single public address that maps to more than one private host.

kasperd

Quote from: wswartzendruber on June 05, 2013, 09:23:17 PM
We have, essentially, a single public address that maps to more than one private host.
And how is that supposed to be a problem?  The same would be true if it were an anycast address, and anycast addresses are taken from the same pool as unicast.  It would break TCP connections if you moved to a different NAT64 device in the middle of a connection, but that would happen anyway, due to NAT64 being inherently stateful.

wswartzendruber

Different anycast hosts usually perform the same function.  The most prominent example I can think of is that we have thousands of root DNS servers but only thirteen IPv4 addresses for all of them.  If you try this with NAT64 and the well-known prefix, there's no telling what host you'll get or what it does.

kasperd

Quote from: wswartzendruber on June 06, 2013, 08:03:01 PM
If you try this with NAT64 and the well-known prefix, there's no telling what host you'll get or what it does.
And why would that be a problem? The same holds for other classes of IP addresses as well.

torchddv

Forgive me for hijacking this thread a bit, but it looks like the discussion has petered out lately anyhow, and I'm trying to find someone with experience using Tayga.

I installed Tayga on my router (Asus RT-N66U) to provide IPv6 access to some legacy IPv4-only webcams sitting on my home network.  That is to say, in my case the IPv6 hosts reside somewhere on the internet and the servers are inside my home network, which seems to be the opposite of what everyone else is using NAT64 for.

Is what I am trying to do possible?  I managed to get it installed (along with tun.o), and the logs show it started OK and is running, but darned if I can get it to work!  I don't have a DNS64 set up, since I have a static map in tayga.conf and will configure an AAAA record pointing to the static IPv6 address.  I used iptables to accept all packets to and from the nat64 device, but if I ping the camera, the logs don't show that a packet was either dropped or accepted.  I've even tried turning off the firewall completely.  Am I configuring something wrong, or is Tayga just not capable of doing what I'm trying to use it for?
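For reference, the static mapping part of my tayga.conf looks roughly like this (addresses changed; Tayga's map directive pairs one inside IPv4 host with one IPv6 address):

tun-device nat64
ipv4-addr 192.168.1.200
prefix 2001:db8:5678:ffff::/96
# one-to-one: the webcam's inside IPv4 <-> the IPv6 address the world sees
map 192.168.1.100 2001:db8:5678:1::100

The AAAA record would then point at 2001:db8:5678:1::100, and I believe that address also needs a route into the nat64 device (as in the routes earlier in this thread) so incoming packets actually reach Tayga.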