MTU Issues.

Started by Napsterbater, May 18, 2012, 08:16:22 PM

Napsterbater

Hey guys.

I've had a tunnel with Tunnelbroker for a while. It's set up on my Cisco 887 ADSL modem/router (an 877 before that, with the same MTU). I normally run an MTU of 1478 on my PPPoA dialer interface for efficiency's sake (not much gain, but that's not the point).

That makes my tunnel MTU 1458. This used to work absolutely perfectly: it always passed http://test-ipv6.com/ with 10/10 and 9/10, and websites loaded fine. However, something has changed that broke this setup, even though my config has not changed. It is still PPPoA at a 1478 MTU with a tunnel MTU of 1458 on my side, but http://test-ipv6.com/ now fails, complaining about packets that are too big, and websites won't load over IPv6.

Switching my PPPoA MTU back to 1500 fixes this, but I'm wondering what changed to break my config. I did notice a new feature where you can set the MTU for your tunnel, and I'm wondering if that had anything to do with it.
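
For reference, the 1478 → 1458 relationship is just the 20 byte IPv4 header that 6in4 encapsulation puts in front of every IPv6 packet. A quick Python sketch of the arithmetic (the two MTU values are the ones from this setup):

    # 6in4 (IP protocol 41) wraps each IPv6 packet in a plain 20 byte IPv4 header.
    IPV4_HEADER = 20

    def tunnel_mtu(ipv4_path_mtu: int) -> int:
        """Largest IPv6 packet that fits inside the tunnel for a given IPv4 path MTU."""
        return ipv4_path_mtu - IPV4_HEADER

    print(tunnel_mtu(1478))  # 1458, the tunnel MTU described above
    print(tunnel_mtu(1500))  # 1480, the usual value for a clean 1500 byte path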

Anyone have any ideas?

kasperd

When did it stop working? Which tunnel server are you using? What are your IP addresses? If I knew your IPv4 and IPv6 address, I could send a couple of packets from here, to see what is happening.

I don't know all that much about PPP. I am wondering whether, when you change the MTU of your PPP connection, you are really changing both ends of the PPP connection, or whether you are somehow configuring it in a way where the two ends of your PPP connection end up using different MTUs.

Napsterbater

It probably started about two months ago, I'd guess (just a wild guess as to when I started noticing an issue). I'm not sure exactly how MTU negotiation works on PPP either. I do know that between my 877 and 887 I have been running the tunnel over the same DSL connection with the same MTU for over a year, and never had a problem until recently.

I'll send you the IPs in a PM. Currently I have switched back to an MTU of 1478 during testing.

EDIT:
The captcha isn't working for the PM, so I sent you an e-mail at the address attached to your forum profile.

broquea

What does this thread have to do with the IPv6 certification topic?

Napsterbater

Oops, wrong section. Can a mod move this to a more appropriate section?

kasperd

Man, was this thread hard to find again. I don't even read this section.

I did a traceroute towards both your IPv4 and IPv6 addresses using 1500 and 1480 byte packets, which are the largest I can send. In both cases they made it to the destination without problems. This doesn't say much about the return path though. I did do traceroute with ping packets, where packets that reach the destination will trigger replies of the same size. But any earlier hop responding with an ICMP/ICMPv6 packet will send a reply of a size small enough to make it through the network regardless of MTU. So I also tried to ping the last few hops of each path, and still saw no problems.

I assume you currently have the PPP connection set for a 1500 byte MTU, which is why I couldn't see any problems. I'd need to repeat the traces while the PPP connection is set to a lower MTU to find out more. If you don't want to leave the MTU settings at the lower value because it would cause problems for you, you could set it up with the MTU setting causing problems and then set the MSS value to 1220 bytes, which should eliminate the MTU issues for all TCP connections. That way it would be usable but still allow ping and traceroute to test with problematic packet sizes.
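
For what it's worth, the 1220 figure comes from the minimum IPv6 MTU: a TCP segment has to fit into a 1280 byte IPv6 packet after the IPv6 and TCP headers are added. A rough sketch of that arithmetic, assuming no TCP options:

    IPV6_MIN_MTU = 1280   # every IPv6 link must carry packets this size
    IPV6_HEADER = 40
    TCP_HEADER = 20       # base header, no options

    mss = IPV6_MIN_MTU - IPV6_HEADER - TCP_HEADER
    print(mss)  # 1220 -> segments this size fit any IPv6 path, so TCP sidesteps PMTU problems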

Napsterbater

I did. I can't even get to this page at all with the MTU set back to 1478, even using the ipv4.tunnelbroker.net domain, because something on this page tries to load www.tunnelbroker.net.

I'll set the PPP MTU back to 1478 for now, until I get a notification from the forums about a reply.

kasperd

With an IPv4 based traceroute I now see an MTU of 1478 in the direction from my computer to you. However the MTU in the direction from you to me is still 1500 bytes.

Traceroute sends packets that allow fragmentation in flight, so when sending 1500 byte packets using traceroute, I don't actually see the lower MTU. It just means they get fragmented before they reach you. The 1500 byte replies sent by your device make it all the way back to me without getting fragmented, so the MTU from your device to mine has not been reduced.

If I instead use ping, it will send packets that do not allow fragmentation in flight. That means the first packet sent by ping gets an error back. Then ping proceeds with sending fragmented packets. Those are reassembled by your device, and the 1500 byte reply is sent all the way back to my computer without being fragmented.
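
You can see the same difference yourself by forcing the Don't Fragment behaviour explicitly. A rough sketch that shells out to the Linux iputils ping (the host name is just a placeholder; on Windows the equivalent options are -f -l):

    import subprocess

    def df_ping(host: str, payload: int) -> bool:
        """Send one echo request with DF set (Linux iputils ping); True if a reply came back."""
        result = subprocess.run(
            ["ping", "-c", "1", "-M", "do", "-s", str(payload), host],
            capture_output=True,
        )
        return result.returncode == 0

    # payload + 20 byte IPv4 header + 8 byte ICMP header = packet size on the wire
    for payload in (1472, 1450, 1422):
        status = "ok" if df_ping("example.net", payload) else "too big or lost"
        print(payload + 28, "byte packets:", status)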

The origin of the fragmentation needed ICMP packet is slightly puzzling. I assume it is the last router before reaching your device. When I do a traceroute from here to your IPv4 address, your device shows up as hop number 21. I assume the router at hop number 20 is the one sending the ICMP packet. However the IPv4 address from which I receive the fragmentation needed message is a different one from the one sending TTL expired at hop number 20. I assume this is because that router has multiple IPv4 addresses (which is quite normal) and uses the IPv4 address of the incoming interface for TTL expired and the IPv4 address of the outgoing interface for fragmentation needed errors. A traceroute to the IPv4 source of the fragmentation needed messages matches on the first 19 hops, and then shows a different IP at hop 20, which is why I think it is simply the router at hop 20 using different IPv4 addresses.

None of this explains why you would have a problem with IPv6 in IPv4 tunnelling, so I'll now try an IPv6 traceroute to see what that shows.

kasperd

Quote from: kasperd on May 24, 2012, 02:25:48 PM
I'll now try an IPv6 traceroute to see what that shows.
I can no longer do a traceroute to your IPv6 address. It traces all the way to the tunnel server, and then gets no responses beyond that. And that was with packets so small that it cannot be due to an MTU issue.

My guess is that when you changed the MTU setting your ISP gave you a new IPv4 address, and you updated your DNS name to point at the new IPv4 address, but the tunnel server is still configured to use the old IPv4 address.

I'll go to bed now. I'll take another look at your connection tomorrow, if I can find the time.

Napsterbater

I can confirm the outbound MTU of 1478; the max size ping with Do Not Fragment set is 1450.
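
That 1450 figure is consistent with a 1478 byte path MTU once the IPv4 and ICMP headers are added back on:

    IPV4_HEADER = 20
    ICMP_HEADER = 8

    max_df_payload = 1450                              # largest DF ping payload that got through
    print(max_df_payload + IPV4_HEADER + ICMP_HEADER)  # 1478, matching the PPPoA MTU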

I'm not sure what happened to my tunnel. The IP was updated (by a script on the 887), but I have no connectivity over it now for some reason, and the 887 is showing "last input never". I'm going to shut the Tunnel0 interface for a bit and see if something needs to time out on the tunnel server.

Napsterbater

Got the tunnel back up. I had to change the MTU from 1480 to 1280 and back, and it started working, though I still have the same MTU issue.


Also, to add: when the tunnel is up with the IPv4 MTU set to 1478, the biggest ping payload I can send (ping -6 -l 1410 google.com) is 1410 bytes.

C:\Users\Napsterbater>ping -6 -l 1410 google.com

Pinging google.com [2607:f8b0:4004:802::1006] with 1410 bytes of data:
Reply from 2607:f8b0:4004:802::1006: time=63ms
Reply from 2607:f8b0:4004:802::1006: time=62ms
Reply from 2607:f8b0:4004:802::1006: time=63ms
Reply from 2607:f8b0:4004:802::1006: time=61ms

Ping statistics for 2607:f8b0:4004:802::1006:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 61ms, Maximum = 63ms, Average = 62ms

C:\Users\Napsterbater>ping -6 -l 1411 google.com

Pinging google.com [2607:f8b0:4004:802::1006] with 1411 bytes of data:
General failure.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 2607:f8b0:4004:802::1006:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
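
The 1410 byte limit lines up with the 1458 byte tunnel MTU once the IPv6 and ICMPv6 headers are counted:

    IPV6_HEADER = 40
    ICMPV6_HEADER = 8

    max_payload = 1410                               # largest ping -6 -l value that got replies
    print(max_payload + IPV6_HEADER + ICMPV6_HEADER) # 1458 = 1478 - 20, the 6in4 tunnel MTU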

kasperd

Quote from: Napsterbater on May 24, 2012, 02:56:13 PM
I can confirm the Outbound MTU of 1478
Your outgoing MTU on IPv4 is at least 1500 bytes. When I ping you with a 1500 byte ICMP echo request, I receive a reply from you, which is not fragmented. Hence I know your outgoing MTU is at least 1500 bytes.

Your inbound MTU is still 1478 bytes. I receive fragmentation needed replies, when I try to ping you with packets larger than that.

I can ping you with IPv6 packets as well, and I can confirm the existence of an MTU problem. A 1478 byte IPv4 MTU translates to a 1458 byte IPv6 MTU on the tunnel. If I ping you with 1459 byte packets, they are silently dropped. If I ping you with 1458 byte packets, I get replies. The replies are not fragmented.

I have seen HE tunnel servers automatically adjust the IPv6 MTU of the tunnel in response to IPv4 fragmentation needed messages, and I would have expected that in this case as well. But it does not happen. In a previous thread, some person said he had set up much of the tunnelbroker.net infrastructure, and denied the existence of that feature. Obviously what some person says in the forum is less convincing than observations that I make myself.

I have observed other differences between the tunnel servers, so it is also plausible that the automatic MTU adjustment only exists on some of the tunnel servers and not others. It could also be that the HE engineers are unaware that there are differences between the routing platforms they built the tunnel servers on. (Identical hardware running different firmware versions would be enough to explain the differences I have seen; identical hardware and firmware but different configuration could also explain it.)

I am going to make a couple more experiments to narrow down what happens.

kasperd

Reducing my own IPv6 MTU to 1458 bytes causes my computer to fragment the ICMPv6 echo requests I send above that size. That way I have confirmed that you are able to receive the fragmented packets, reassemble them, and reply. The replies are also fragmented if they go above 1458 bytes, so your outbound IPv6 MTU is currently 1458 bytes as well.

I also did a traceroute to a 6to4 address, which I constructed from your IPv4 address. I did not expect any reply to this one, but it could tell me how the path towards your gateway deals with 6in4 packets that are too large for the IPv4 MTU. They should be handled the same as any other packet that is too large, but I have seen one router that treated 6in4 packets differently.
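
For anyone following along, a 6to4 address is just the 2002::/16 prefix with the 32-bit IPv4 address embedded in the next 32 bits, so it can be built mechanically. A small sketch using a documentation address in place of the real one (which was only shared privately):

    import ipaddress

    def to_6to4(ipv4: str) -> ipaddress.IPv6Address:
        """Build the 2002::/16 6to4 address corresponding to an IPv4 address."""
        v4 = int(ipaddress.IPv4Address(ipv4))
        return ipaddress.IPv6Address((0x2002 << 112) | (v4 << 80))

    print(to_6to4("192.0.2.1"))  # 2002:c000:201:: (192.0.2.1 is a documentation address)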

What I found was that with 1458 byte packets, I received no responses whatsoever. That was expected. With 1459 byte packets I received ICMP messages indicating that the 6in4 packet was larger than the IPv4 MTU. That is also as expected. The ICMP messages originated from the same IPv4 address as the other such messages, so nothing suspicious is going on there.

The size of the ICMP replies was not one of those I have most frequently seen. They are obviously truncated: since the original packet was already too big, the whole thing cannot be put into an error message without that getting too big as well. The messages I have seen are usually truncated to either 28 bytes of the triggering IPv4 packet, resulting in a 56 byte ICMP response, or 548 bytes of the triggering IPv4 packet, resulting in a 576 byte ICMP response, which is the largest size any IPv4 host is guaranteed to be able to handle.

In your case I received 52 bytes of the triggering IPv4 packet, which means an 80 byte ICMP response. I am not sure what the reasoning behind that choice is. It is larger than the 68 bytes that are guaranteed to be transferred in a single fragment. It is larger than what is required to include a full TCP header, but it is not enough to include a full IPv6 header. The actual contents of the ICMP response were:
20 bytes IPv4 header
8 bytes ICMP header
20 bytes IPv4 header
32 bytes IPv6 header
So the last 8 bytes of the IPv6 header and the entire IPv6 payload were removed by the truncation. However, in previous experiments I have found that 8 bytes of the IPv6 header were sufficient for some HE tunnel servers to recognize the affected IPv4 address and act accordingly.
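
Laid out explicitly, the byte accounting for that 80 byte error message looks like this (the header sizes are the ones listed above; the 52 byte quoted portion is simply what this particular router chose to keep):

    OUTER_IPV4 = 20     # IPv4 header of the ICMP error itself
    ICMP = 8            # "fragmentation needed" header
    QUOTED_IPV4 = 20    # IPv4 header of the offending 6in4 packet
    QUOTED_IPV6 = 32    # only 32 of the 40 IPv6 header bytes survive the truncation

    quoted = QUOTED_IPV4 + QUOTED_IPV6
    print(quoted)                       # 52 bytes of the triggering packet
    print(OUTER_IPV4 + ICMP + quoted)   # 80 byte ICMP response received
    print(40 - QUOTED_IPV6)             # 8 bytes of IPv6 header (and all payload) lost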

My conclusion is that the tunnel server you are using is unable to automatically adjust the MTU of the tunnel. There are no guarantees about this automatic adjustment, so you'll need to use a manual adjustment. There is a feature in the web interface for configuring your tunnels through which you can set the MTU that the tunnel server will use towards you. Any value in the range from 1280 to 1458 should work for you. Going with the largest value you can pick without exceeding 1458 bytes should give the best performance.
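
If it helps, the rule for picking the manual value boils down to: IPv4 path MTU minus 20, kept between the 1280 byte IPv6 minimum and the 1480 bytes a clean 1500 byte path allows. A small sketch (the clamping limits are assumptions based on this thread, not a statement of exactly what the web interface accepts):

    def pick_tunnel_mtu(ipv4_path_mtu: int) -> int:
        """Largest safe 6in4 tunnel MTU for a given IPv4 path MTU."""
        mtu = ipv4_path_mtu - 20           # 6in4 adds a 20 byte IPv4 header
        return max(1280, min(mtu, 1480))   # never below the IPv6 minimum, never above 1480

    print(pick_tunnel_mtu(1478))  # 1458, the value to enter for this PPPoA setup
    print(pick_tunnel_mtu(1500))  # 1480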

As for the question of what happened when you started noticing problems, my best guess is that something was changed on this particular tunnel server such that it is no longer able to automatically adjust tunnel MTUs. I don't know if anybody knows why some of the tunnel servers have this feature and others don't.