I got NAT64/DNS64 working on my Gentoo Linux based router box. The combination of NAT64 and DNS64 allows IPv6-only hosts to communicate with IPv4-only hosts on the internet.
I looked around at various solutions for Linux and found that the best-known one, from Ecdysis, wasn't compatible with the kernel I'm running on my router box (2.6.38-11), so I decided to give Tayga a try.
Tayga is a simple "stateless" 1:1 NAT64 translator daemon which runs in user space and uses a TUN interface to talk to the network stack.
My goal was to allow IPv6-only hosts on my LAN to communicate with the IPv4 internet using a single public IPv4 address. Since Tayga does simple one-to-one IPv6 -> IPv4 translation, with every IPv6 address requiring a corresponding IPv4 address, it essentially requires a double NAT (unless you have a large chunk of public IPv4 addresses to use for the NAT64 pool). So, to achieve this goal, the source translation goes something like this:
LAN IPv6 source address -> private IPv4 pool address (NAT64 by Tayga) -> public IPv4 address (NAT44 by iptables)
The return packets go through the reverse process. This approach works very well, since it lets iptables/netfilter do what it does well, including keeping connection state, instead of having some NAT64 kernel module do everything, while the NAT64 step itself stays a simple stateless operation.
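To make that concrete, a single flow from one of my LAN hosts might look like this (the pool address here is just an example, since Tayga hands them out dynamically):
2001:db8:1234::8 -> 192.168.255.37 (Tayga, stateless NAT64) -> 198.51.100.1 (iptables SNAT, stateful NAT44)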
Here is the configuration for the NAT64 half (addresses obfuscated):
/etc/tayga.conf:
tun-device nat64
ipv4-addr 192.168.255.1
prefix 2001:db8:1234:ffff::/96
dynamic-pool 192.168.255.0/24
This uses "nat64" for the name of the tun interface hooked up to Tayga, 192.168.255.1 is the IPv6 address Tayga itself uses, the NAT64 IPv6 prefix is part of my /48, and is used to represent IPv4 addresses as an IPv6 address, and the dynamic-pool is a pool of IPv4 addresses which Tayga translate IPv6 addresses to using the RFC defined algorithm.
To initialize the tun interface, one first issues the command tayga --mktun, which simply creates the tun interface; then the interface must be assigned IPv4 and IPv6 addresses and brought up, like so:
ip link set nat64 up
ip addr add 192.168.0.1 dev nat64
ip addr add 2001:db8:1234::1 dev nat64
ip route add 192.168.255.0/24 dev nat64
ip route add 2001:db8:1234:ffff::/96 dev nat64
The IPv4 and IPv6 addresses put on the tun interface are the same as the inside addresses on the ethernet interface, set as a /32 and a /128 respectively, but this works just fine under Linux. The routes direct traffic for the NAT64 IPv6 range and the IPv4 pool to the tun interface and thus into the Tayga daemon. The interface and routes look like this afterward:
ip addr:
8: nat64: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 500
link/none
inet 192.168.0.1/32 scope global nat64
inet6 2001:db8:1234::1/128 scope global
valid_lft forever preferred_lft forever
ip route:
192.168.255.0/24 dev nat64 scope link
2001:db8:1234:ffff::/96 dev nat64 metric 1024
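With the interface and routes in place, don't forget to actually start the Tayga daemon itself. A minimal by-hand invocation looks something like this (config path assumed; per the man page, -d keeps it in the foreground with debug output):
tayga -d -c /etc/tayga.conf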
At this point, Tayga will translate my LAN hosts' source IPv6 addresses to IPv4 addresses in the pool 192.168.255.0/24, and back. But this won't get us to the internet yet. The second part is an iptables/netfilter NAT44 rule to translate from the private NAT64 IPv4 pool to our public IPv4 address, plus a filter table rule to allow the traffic. This is a simple two-liner (public IPs obfuscated):
iptables -t nat -A POSTROUTING -s 192.168.255.0/24 -j SNAT --to-source 198.51.100.1
iptables -A FORWARD -s 192.168.255.0/24 -i nat64 -j ACCEPT
Now I can actually send packets to the IPv4 internet from an IPv6 address on my LAN by using the NAT64 IPv6 prefix like so (a Google IPv4 address in this case):
{jimb@r51jimb/pts/1}~> ping6 -n 2001:db8:1234:ffff::74.125.224.116
PING 2001:db8:1234:ffff::74.125.224.116(2001:db8:1234:ffff::4a7d:e074) 56 data bytes
64 bytes from 2001:db8:1234:ffff::4a7d:e074: icmp_seq=1 ttl=52 time=64.2 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e074: icmp_seq=2 ttl=52 time=14.1 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e074: icmp_seq=3 ttl=52 time=13.7 ms
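If a ping like that goes nowhere, the first thing to check is that forwarding is enabled for both protocols on the router (easy to miss the IPv6 one):
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1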
Now we need a DNS64 server to return our NAT64-prefixed IPv6 addresses in place of IPv4 addresses. For this, I just turned on the functionality built into BIND 9.8 with the following config section:
options {
    . . .
    dns64 2001:db8:1234:ffff::/96 {
        clients { 192.168.0.8/32; 2001:db8:1234:0:211:25ff:fe32:76; 2001:db8:1234::8; };
    };
    . . .
};
This directive tells BIND to return synthesized AAAA records when a AAAA record is requested for an internet host that only has an IPv4 address, by embedding that IPv4 address in the lower 32 bits of our NAT64 prefix. The "clients" section restricts this behavior to my test clients. The result is DNS answers such as this:
{jimb@r51jimb/pts/1}~> dig www.google.com aaaa
; <<>> DiG 9.7.3 <<>> www.google.com aaaa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38149
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 4, ADDITIONAL: 0
;; QUESTION SECTION:
;www.google.com. IN AAAA
;; ANSWER SECTION:
www.google.com. 599918 IN CNAME www.l.google.com.
www.l.google.com. 296 IN AAAA 2001:db8:1234:ffff::4a7d:e071
www.l.google.com. 296 IN AAAA 2001:db8:1234:ffff::4a7d:e072
www.l.google.com. 296 IN AAAA 2001:db8:1234:ffff::4a7d:e073
www.l.google.com. 296 IN AAAA 2001:db8:1234:ffff::4a7d:e074
www.l.google.com. 296 IN AAAA 2001:db8:1234:ffff::4a7d:e070
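The last 32 bits of those synthesized addresses are just the IPv4 address in hex. Decoding the one the earlier ping used: 4a.7d.e0.74 -> 0x4a=74, 0x7d=125, 0xe0=224, 0x74=116 -> 74.125.224.116.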
Putting it all together, I can now do things like this:
{jimb@r51jimb/pts/1}~> ping6 www.google.com
PING www.google.com(2001:db8:1234:ffff::4a7d:e072) 56 data bytes
64 bytes from 2001:db8:1234:ffff::4a7d:e072: icmp_seq=1 ttl=52 time=15.6 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e072: icmp_seq=2 ttl=52 time=14.9 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e072: icmp_seq=3 ttl=52 time=32.4 ms
64 bytes from 2001:db8:1234:ffff::4a7d:e072: icmp_seq=4 ttl=52 time=15.6 ms
{jimb@r51jimb/pts/1}~> wget -6 www.google.com -O /dev/null
--2011-09-18 00:19:46-- http://www.google.com/
Resolving www.google.com... 2001:db8:1234:ffff::4a7d:e074, 2001:db8:1234:ffff::4a7d:e070, 2001:db8:1234:ffff::4a7d:e071, ...
Connecting to www.google.com|2001:db8:1234:ffff::4a7d:e074|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `/dev/null'
[ <=> ] 10,286 --.-K/s in 0s
2011-09-18 00:19:46 (167 MB/s) - `/dev/null' saved [10286]
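If you want to watch the translation itself happen, sniffing the nat64 tun interface shows both halves of each flow, IPv6 on the LAN side and pool IPv4 on the other (just a sanity check, not required for anything):
tcpdump -n -i nat64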
{jimb@r51jimb/pts/1}~> lftp ftp.mozilla.org
lftp ftp.mozilla.org:~> debug
lftp ftp.mozilla.org:~> dir
---- Connecting to ftp.mozilla.org (2001:db8:1234:ffff::3ff5:d189) port 21
<--- 230 Login successful.
---> PWD
<--- 257 "/"
---> EPSV
<--- 229 Extended Passive Mode Entered (|||51300|)
---- Connecting data socket to (2001:db8:1234:ffff::3ff5:d189) port 51300
---- Data connection established
---> LIST
<--- 150 Here comes the directory listing.
---- Got EOF on data connection
---- Closing data socket
<--- 226 Directory send OK.
-rw-r--r-- 1 ftp ftp 528 Nov 01 2007 README
-rw-r--r-- 1 ftp ftp 560 Sep 28 2007 index.html
drwxr-xr-x 35 ftp ftp 4096 Nov 10 2010 pub
lftp ftp.mozilla.org:~>
Firefox watching a youtube video over NAT64 (you'll need IPv6 to see this):

(yes I obfuscated the IPv6)
So far everything works as expected, except for active (port mode) FTP. I suppose that's because of the double NAT, but it could just be some iptables settings I need. Oddly, I get DROP log entries from the FTP NAT helper module on outgoing packets originating from my real public IP going to the FTP server. Weird, eh?
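Since the helper is obviously loaded (it's the one logging the drops), the next thing I'd check, purely a guess at this point, is whether both halves of it are present, the conntrack part and the NAT part:
lsmod | grep ftp    # expect both nf_conntrack_ftp and nf_nat_ftp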
Anyway, this was a lot easier than I thought it'd be. Maybe it should be a new HE test.
