Welcome to Hurricane Electric's IPv6 Tunnel Broker Forums.

Getting around Apple's "Hampering Eyeballs"

Started by CaroleSeaton, June 21, 2013, 09:57:02 AM


CaroleSeaton

Users of Apple products may have noticed that Apple operating systems are absurdly aggressive in seeking out IPv4 connections when IPv6 is available, what Emile Aben called "hampering eyeballs". NAT64/DNS64 can be used to force a preference for IPv6, since DNS64 ignores A records whenever AAAA records are available, but there really aren't any NAT64 implementations suitable for home use right now.

For the sake of people who find this irksome, would it be possible for Hurricane to dedicate a conventional DNS server that replicates this behavior, returning A records only when no AAAA records are available?

kasperd

I think you might have misunderstood how DNS64 works. It does not remove or ignore any DNS records. It creates synthetic AAAA records for domains that would otherwise be IPv4 only, but it does not remove any A records.

The only reason this causes clients to prefer IPv6 is that such clients typically have no IPv4 connectivity. It is the lack of IPv4 connectivity that causes them to prefer IPv6. For dual-stack domains, DNS64 does nothing: it only creates synthetic AAAA records if there are none.

Suppressing A records on dual-stack domains may be a useful feature in some cases. However, I have not found support for this in any of the DNS servers I know of.
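The synthesis kasperd describes typically embeds the IPv4 address in the well-known NAT64 prefix 64:ff9b::/96 (RFC 6052). A minimal sketch of that mapping in Python:

```python
import ipaddress

# Well-known NAT64 prefix from RFC 6052.
NAT64_PREFIX = ipaddress.IPv6Address("64:ff9b::")

def synthesize_aaaa(ipv4: str) -> str:
    """Embed an IPv4 address in the NAT64 /96 prefix, the way DNS64
    synthesizes an AAAA record for a name that only has A records."""
    v4 = ipaddress.IPv4Address(ipv4)
    return str(ipaddress.IPv6Address(int(NAT64_PREFIX) | int(v4)))

print(synthesize_aaaa("192.0.2.1"))  # 64:ff9b::c000:201
```

A DNS64 resolver returns such a synthetic address only when the real zone has no AAAA records of its own.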
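The policy the original post asks for is easy to state, even though no server is known to implement it: answer with A records only when the name has no AAAA records. A hypothetical sketch of that filtering step (the function name and record layout are illustrative, not from any real DNS server):

```python
def filter_answers(a_records, aaaa_records):
    """Hypothetical 'AAAA-preferring' resolver policy: suppress all
    A records whenever at least one AAAA record exists for the name."""
    if aaaa_records:
        return {"A": [], "AAAA": list(aaaa_records)}
    return {"A": list(a_records), "AAAA": []}
```

For a dual-stack name this returns only the AAAA answers, forcing IPv6; for an IPv4-only name the A records pass through untouched.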

aandaluz

I also dislike how Apple has implemented Happy Eyeballs in OS X, as Emile described in that nice article.
If you use Chrome as your web browser, you can enable the built-in asynchronous DNS client to bypass OS X's address resolution mechanism and use Chrome's Happy Eyeballs implementation, which uses a 300 ms timeout. This way, most hosts will be reachable over IPv6 by default:

[screenshot: Chrome's built-in asynchronous DNS flag enabled on OS X 10.8]
 
This is far from a universal fix, though. For instance, I cannot reach 6lab.cisco.com over IPv6 unless I disable IPv4 completely. That is to be expected, since latency is abnormally high on the IPv6 path:

ping 6lab.cisco.com
PING 6lab.cisco.com (173.38.154.157): 56 data bytes
64 bytes from 173.38.154.157: icmp_seq=0 ttl=44 time=64.973 ms
64 bytes from 173.38.154.157: icmp_seq=1 ttl=44 time=70.106 ms
64 bytes from 173.38.154.157: icmp_seq=2 ttl=44 time=67.604 ms

ping6 6lab.cisco.com
PING6(56=40+8+8 bytes) 2001:470:7b31::d8ce:f1b9:731f:23e6 --> 2001:420:81:101::c:15c0:4664
16 bytes from 2001:420:81:101::c:15c0:4664, icmp_seq=0 hlim=51 time=389.828 ms
16 bytes from 2001:420:81:101::c:15c0:4664, icmp_seq=1 hlim=51 time=465.396 ms
16 bytes from 2001:420:81:101::c:15c0:4664, icmp_seq=2 hlim=51 time=489.043 ms

--- 6lab.cisco.com ping6 statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 389.828/433.681/489.043/44.334 ms

traceroute 6lab.cisco.com

traceroute to 6lab.cisco.com (173.38.154.157), 64 hops max, 52 byte packets
 1  router.asus.com (192.168.1.1)  1.101 ms  0.848 ms  0.763 ms
2  89.131.67.126 (89.131.67.126)  20.398 ms  20.567 ms  20.480 ms
3  10.254.28.9 (10.254.28.9)  20.896 ms  20.977 ms  20.879 ms
4  10.254.28.1 (10.254.28.1)  35.082 ms  23.396 ms  34.402 ms
5  10.255.200.70 (10.255.200.70)  20.309 ms  20.711 ms  20.271 ms
6  62.36.198.65 (62.36.198.65)  23.078 ms  22.370 ms  23.773 ms
7  62.36.202.34 (62.36.202.34)  21.974 ms  22.560 ms  21.453 ms
8  81.52.186.189 (81.52.186.189)  23.583 ms  23.299 ms  24.781 ms
 9  xe-3-1-1.barcr3.barcelona.opentransit.net (193.251.242.31)  21.385 ms  21.520 ms  21.685 ms
10  tengige1-3-0-11.pastr1.paris.opentransit.net (193.251.243.176)  43.921 ms  39.135 ms  39.795 ms
11  gigabitethernet13-2-0.pascr4.paris.opentransit.net (193.251.240.1)  38.919 ms  38.332 ms  38.670 ms
12  ge6-0-0.br2.par2.alter.net (146.188.112.77)  37.903 ms  38.584 ms  38.290 ms
13  so-0-2-0.xt1.ams2.alter.net (146.188.14.249)  49.269 ms  49.571 ms  50.014 ms
14  gigabitethernet6-0-0.gw5.ams6.alter.net (212.136.176.38)  50.184 ms  49.472 ms  49.515 ms
15  193.79.226.58 (193.79.226.58)  50.437 ms  49.170 ms  49.678 ms
16  ams3-dmzbb-gw2-gig5-2.cisco.com (64.103.36.1)  49.818 ms  49.961 ms  50.248 ms
17  ams3-dmznet-gw1-gig2-2.cisco.com (64.103.36.90)  50.296 ms  50.748 ms  50.267 ms
18  ams3-dmzvlab-gw1-gig0-0-0.cisco.com (64.103.36.54)  50.092 ms  50.019 ms  49.814 ms
19  * * *
20  * * *
21  * * *

traceroute6 6lab.cisco.com
traceroute6 to 6lab.cisco.com (2001:420:81:101::c:15c0:4664) from 2001:470:7b31::d8ce:f1b9:731f:23e6, 64 hops max, 12 byte packets
1  2001:470:7b31::1  5.535 ms  0.955 ms  0.835 ms
2  aandaluz-1.tunnel.tserv11.ams1.ipv6.he.net  67.188 ms  68.240 ms  67.754 ms
3  v213.core1.ams1.he.net  62.892 ms  63.229 ms  73.253 ms
4  10gigabitethernet1-4.core1.lon1.he.net  81.217 ms  78.039 ms  75.962 ms
5  10gigabitethernet10-4.core1.nyc4.he.net  139.029 ms  138.334 ms  148.079 ms
6  100gigabitethernet7-2.core1.chi1.he.net  157.721 ms  167.701 ms  158.859 ms
7  10gigabitethernet3-2.core1.den1.he.net  179.830 ms  186.978 ms  181.190 ms
8  10gigabitethernet13-5.core1.sjc2.he.net  245.792 ms  294.949 ms  321.921 ms
9  10gigabitethernet5-2.core1.pao1.he.net  221.109 ms  216.493 ms  206.568 ms
10  ciscosystems.v403.core1.pao1.he.net  211.545 ms  306.828 ms  294.148 ms
11  2001:420:80:8::2  212.289 ms  210.035 ms  231.959 ms
12  2001:420:81:100::2  483.688 ms  409.067 ms  407.829 ms
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *

aandaluz

Sorry for the thread necromancy, but it seems that OS X El Capitan and iOS 9 will finally get rid of hampering eyeballs  :D
https://www.ietf.org/mail-archive/web/v6ops/current/msg22455.html
Quote
Hi everyone,

Today Apple released the first public seeds of iOS 9 and OS X El Capitan.
These seeds (and the third developer seeds released yesterday) include an improved version of Happy Eyeballs.

Based on our testing, this makes our Happy Eyeballs implementation go from roughly 50/50 IPv4/IPv6 in iOS 8 and Yosemite
to ~99% IPv6 in iOS 9 and El Capitan betas.

While our previous implementation from four years ago was designed to select the connection with lowest latency
no matter what, we agree that the Internet has changed since then and reports indicate that biasing towards IPv6 is now
beneficial for our customers: IPv6 is now mainstream instead of being an exception, there are less broken IPv6 tunnels,
IPv4 carrier-grade NATs are increasing in numbers, and throughput may even be better on average over IPv6.

The updated implementation performs the following:
- Query the DNS resolver for A and AAAA.
   If the DNS records are not in the cache, the requests are sent back to back on the wire, AAAA first.
- If the first reply we get is AAAA, we send out the v6 SYN immediately
- If the first reply we get is A and we're expecting a AAAA, we start a 25ms timer
   - If the timer fires, we send out the v4 SYN
   - If we get the AAAA during that 25ms window, we move on to address selection
- When we have a list of IP addresses (either from the DNS cache or by receiving them close together with v4 before v6),
   we perform our own address selection algorithm to sort them. This algorithm uses historical RTT data to prefer addresses
   that have lower latency - but has a 25ms leeway: if the historical RTT of two compared address are within 25ms of each
   other, we use RFC3484 to pick the best one.
- Once the list is sorted, we send out the SYN for the first address and start timers based on average and variance of the
   historical TCP RTT. Roughly speaking, we start the second address around the same time we send out a SYN retransmission
   for the first address.
- The first address to reply with a SYN-ACK wins the race, we then cancel the other TCP connection attempts.

If this behavior proves successful during the beta period, you should expect more IPv6 traffic from Apple products in the future.
Note however that this only describes the current beta and all these details are subject to change.

Please test this out if you have the means to, we'd love to see test results and receive feedback!

I would like to personally thank Jason Fesler and Paul Saab for their help investigating these issues and testing this.

Thanks,
David Schinazi
CoreOS Networking Engineer
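The address-selection step described in the quoted message (sort by historical RTT, but with a 25 ms leeway inside which the RFC 3484 policy decides) can be sketched roughly as follows. The RTT table and the simple "prefer IPv6" tiebreak are my own assumptions for illustration, not Apple's actual code:

```python
import functools

LEEWAY_MS = 25  # leeway between compared addresses, per the quoted message

def sort_addresses(addrs, rtt_ms):
    """Sort candidate addresses: lower historical RTT first, but when two
    addresses are within 25 ms of each other, prefer IPv6 (a crude stand-in
    for full RFC 3484 destination address selection)."""
    def compare(a, b):
        ra = rtt_ms.get(a, float("inf"))
        rb = rtt_ms.get(b, float("inf"))
        if abs(ra - rb) > LEEWAY_MS:
            return -1 if ra < rb else 1
        # Within the leeway: fall back to address-family preference.
        a6, b6 = ":" in a, ":" in b
        if a6 != b6:
            return -1 if a6 else 1
        return 0
    return sorted(addrs, key=functools.cmp_to_key(compare))
```

With this policy, an IPv6 address wins whenever its historical RTT is within 25 ms of the IPv4 alternative, which is why the betas shift so heavily toward IPv6; only a badly degraded IPv6 path (like the 400 ms tunnel path shown earlier in this thread) still loses the race.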