
Tunnel traffic stalling, probable MTU problem?

Started by fenton, June 23, 2010, 11:11:22 AM


fenton

Hi,

I have been running an IPv6-in-IPv4 tunnel from SixXS at home for a few months now, and everything has been good.  That tunnel terminates in Phoenix, Arizona, though, and I found out that Hurricane Electric provides a free tunnel service terminating in Fremont with about half the latency of the SixXS tunnel, so I requested a tunnel from HE and switched over to it.  I terminate the tunnels on a Cisco 881 running IOS 12.4(20)T3.
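For reference, the tunnel on the 881 is a plain IPv6-in-IPv4 (protocol 41) tunnel interface.  A rough sketch of that kind of config looks like the following; the addresses and WAN interface name below are placeholders, not my real values, which come from the broker's example config:

! Placeholder values; substitute the broker-assigned addresses and your WAN interface
interface Tunnel0
 description IPv6-in-IPv4 tunnel to the broker
 no ip address
 ipv6 enable
 ipv6 address 2001:DB8:1F00::2/64
 tunnel source FastEthernet4
 tunnel destination 192.0.2.1
 tunnel mode ipv6ip
!
! Send all IPv6 traffic out the tunnel
ipv6 route ::/0 Tunnel0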

I'm having problems with the HE tunnel that appear to be MTU-related.  Pings, traceroutes, and simple telnet sessions work OK, but HTTP retrievals hang.

When I do a large ping from outside toward my Linux machine, I see different things.  Here's the behavior with SixXS:

fenton@vps:~$ ping6 -s 1400 2001:1938:23c:1:21a:70ff:fe11:c889
PING 2001:1938:23c:1:21a:70ff:fe11:c889(2001:1938:23c:1:21a:70ff:fe11:c889) 1400 data bytes
From 2001:4de0:1000:a4::2 icmp_seq=1 Packet too big: mtu=1280
1408 bytes from 2001:1938:23c:1:21a:70ff:fe11:c889: icmp_seq=3 ttl=54 time=72.0 ms
1408 bytes from 2001:1938:23c:1:21a:70ff:fe11:c889: icmp_seq=4 ttl=54 time=72.0 ms

Looks to me like the SixXS tunnel endpoint is causing the sender to adjust MTU downward, and everything succeeds after that.

But here's what happens when I reconfigure my 881 to use the HE tunnel:

fenton@vps:~$ ping6 -s 1450 ipv6.bluepopcorn.net
PING ipv6.bluepopcorn.net(kernel.bluepopcorn.net) 1450 data bytes
From gige-gbge0.tserv3.fmt2.ipv6.he.net icmp_seq=1 Packet too big: mtu=1480
^C
--- ipv6.bluepopcorn.net ping statistics ---
9 packets transmitted, 0 received, +1 errors, 100% packet loss, time 8004ms

I'm still getting an ICMP Packet Too Big message, but the MTU it advertises (1480) looks like it's still too big, and the subsequent pings fail.

I have switched back to the SixXS tunnel for now (so you'll see the SixXS behavior if you try it).  My sense is that the HE end of the tunnel is advertising the wrong path MTU.  Is that correct?
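If anyone wants to check from the sender's side, the cached path MTU on the Linux box can be inspected after a failed run with something along these lines (exact output varies by iproute2/kernel version):

fenton@vps:~$ tracepath6 ipv6.bluepopcorn.net
fenton@vps:~$ ip -6 route get 2001:1938:23c:1:21a:70ff:fe11:c889

tracepath6 reports the path MTU it discovers, and 'ip -6 route get' should show an mtu attribute if a Packet Too Big reply has been cached for that destination.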

brad

Quote from: fenton on June 23, 2010, 11:11:22 AM
I have switched back to the SixXS tunnel for now (so you'll see the SixXS behavior if you try it).  My sense is that the HE end of the tunnel is advertising the wrong path MTU.  Is that correct?

Check the MTU on the tunnel interface on your router. If I remember correctly, Cisco uses the wrong MTU by default. Set it to 1280. If anything is wrong, it's on your end.
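Something along these lines on the 881, assuming the tunnel interface is Tunnel0:

conf t
 interface Tunnel0
  ipv6 mtu 1280
 end
write memory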

broquea

Looks to me like both SixXS and we send a Packet Too Big reply; they default to 1280 on their side (see their Packet Too Big message), while our side defaults to 1480. Also, the same payload size wasn't used in the two tests (1400 vs 1450 is not the same test). Try dropping the MTU on the Cisco to 1280 and see if that replicates what you see with SixXS: still a Packet Too Big error, but with ping replies afterwards.
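For an apples-to-apples comparison, run the same payload size against both tunnels. 1232 is the largest ICMPv6 payload that fits in a 1280-byte MTU (1280 minus 40 for the IPv6 header and 8 for the ICMPv6 header), so for example:

ping6 -s 1400 ipv6.bluepopcorn.net
ping6 -s 1232 ipv6.bluepopcorn.net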

fenton

Added 'ipv6 mtu 1280' to the tunnel interface config, and no improvement.  You'll notice that the Packet too big ICMP message comes from the head end (the HE end) of the tunnel; I'm pinging from the outside world toward my host.  Isn't that where the MTU needs to be configured?
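To double-check that the setting took, 'show ipv6 interface' on the tunnel interface lists the IPv6 MTU (assuming the tunnel is Tunnel0 here):

router# show ipv6 interface Tunnel0

The "MTU is ... bytes" line should read 1280 after the change.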

brad

Quote from: broquea on June 23, 2010, 04:04:21 PM
Looks to me that both sixxs and us complain about too big packet, and they default to 1280 on their side (see their too big reply). Our side defaults to 1480. Also the same payload size wasn't used in one test as another (1400 vs 1450 != same test). Try the mtu change on the Cisco down to 1280 and see if it replicates what you see with sixxs, which would also be a packet too big error, but ping replies.

Hrmm. From reverse DNS on the tunnels I got the impression the tunnelbroker POPs were *BSD boxes, which definitely do not default to 1480. Your default should be mentioned to users, and the router examples should take it into account, because 1480 is not a universal default and the spec's minimum (and safe default) is 1280.

fenton

Still puzzled about this one.  I have tried the MTU suggestion to no avail.  Any further ideas?

apecentral

I had a similar issue with large outgoing packets being dropped. I finally figured out it was due to Windows using an MTU of 1500 whereas my DD-WRT tunnel endpoint was using 1472. Setting Windows to 1472 with netsh fixed it.
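For anyone who hits the same thing, the netsh command is along these lines (the interface name is whatever yours is called):

netsh interface ipv6 set subinterface "Local Area Connection" mtu=1472 store=persistent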

fenton

Quote from: apecentral on July 07, 2010, 06:18:21 PM
I had a similar issue with large outgoing packets being dropped. I finally figured out it was due to windows using an MTU of 1500 whereas my ddwrt tunnel endpoint was using 1472. Setting windows to 1472 with netsh fixed it.
But the problem I'm having is with incoming packets being dropped.  I have already set the outgoing MTU to 1280.