
Netalyzr says I have an IPv6 fragmentation problem.

Started by bicknell, January 26, 2012, 06:18:14 PM


kasperd

Quote from: kcochran on January 29, 2012, 09:27:15 AM
Options would likely be 1480, 1472, and 1280, unless anyone can think of any other useful common values.
Is it a lot easier to implement a fixed set of values than it is to implement an input field where any value from 1280 to 1480 could be entered?

bicknell

Quote from: kasperd on January 29, 2012, 01:38:47 PM
Is there still any reason to think this is not a flaw in netalyzer?

I duplicated the problem using UDP iPerf between two hosts I control.  No netalyzer involved.

kcochran

Quote from: kasperd on January 29, 2012, 01:40:56 PM
Quote from: kcochran on January 29, 2012, 09:27:15 AM
Options would likely be 1480, 1472, and 1280, unless anyone can think of any other useful common values.
Is it a lot easier to implement a fixed set of values than it is to implement an input field where any value from 1280 to 1480 could be entered?

It's about the same either way, but there are a few sweet spots that are the most useful.  So it's really a choice between: leaving it completely open-ended, where people really have to know why they need to change it in order to pick a useful value; offering just the most useful values; or doing both, and hoping it doesn't confuse and/or break people.  I also know we'd find several MTUs of 1337 if it were free-form.  ;-)

frim

I'm having the same issues, also with an Airport Express as tunnel endpoint. Netalyzr gives:

Your system can not send or receive fragmented traffic over IPv6. The path between our system and your network has an MTU of 1381 bytes. The bottleneck is at IP address 2001:470:0:7d::2. The path between our system and your network does not appear to handle fragmented IPv6 traffic properly.

The interesting part is that these issues only started about a week ago. Before that, everything worked fine for months.

kasperd

Quote from: kcochran on January 29, 2012, 07:57:44 PM
I also know we'd find several MTUs of 1337 if it were free-form.
You'd probably need to round it down to a multiple of 8. But "mtu -= mtu % 8" isn't hard to implement :-)
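
Something along these lines (a rough sketch, not the broker's actual validation code; the 1280 and 1480 bounds are just the ones discussed in this thread) would be all the form needs. The multiple-of-8 part matters because IPv6 fragment offsets are expressed in 8-octet units:

def clamp_tunnel_mtu(requested):
    """Rough sketch of validating a free-form tunnel MTU field."""
    mtu = max(1280, min(1480, requested))  # keep within the useful 6in4 range
    mtu -= mtu % 8                         # IPv6 fragment offsets count 8-octet units
    return mtu

print(clamp_tunnel_mtu(1337))  # -> 1336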

bicknell

Quote from: kcochran on January 29, 2012, 07:57:44 PM
It's about the same either way, but there are a few sweet spots that are the most useful.  So it's really a choice between: leaving it completely open-ended, where people really have to know why they need to change it in order to pick a useful value; offering just the most useful values; or doing both, and hoping it doesn't confuse and/or break people.  I also know we'd find several MTUs of 1337 if it were free-form.  ;-)

I would pick a few common values, and document why they should be used:

9000* - 6in4 over a jumbo-frame capable network.
4450* - 6in4 over a 4470 byte MTU network (Packet Over SONET)
1480  - 6in4 over a 1500 byte MTU network (FIOS, Cable Modem)
1472  - 6in4 over PPPoE over a 1500 byte network (DSL)
1280  - IPv6 Minimum MTU (Should work everywhere)

* Note, these values require your ISP to have a jumbo frame clean path to Hurricane Electric, which generally means private peering with Hurricane and configuring the peering for jumbo frames.

Then, if you want to score bonus points, create a small tool/applet that tests the IPv4 path between the tunnel endpoint and the tunnel broker server to determine the largest IPv4 packet that can pass without fragmentation, and then make a recommendation to the user.
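
A rough Linux-only sketch of what such a probe could look like (the IP_* numbers are the usual values from <linux/in.h>, since Python's socket module doesn't export them; the address and port are placeholders, not the real tunnel server):

import socket
import time

# Values from <linux/in.h>; Python's socket module doesn't export these names.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2    # always set DF, never fragment locally
IP_MTU = 14           # getsockopt: path MTU the kernel currently knows for this peer

def probe_ipv4_path_mtu(server_v4, port=33434, start=1500):
    """Estimate the IPv4 path MTU toward the tunnel server by sending
    DF-flagged UDP probes and reading back the kernel's cached value."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect((server_v4, port))
    size = start - 28                  # leave room for IPv4 (20) + UDP (8) headers
    for _ in range(10):
        try:
            s.send(b"\x00" * size)
        except OSError:                # EMSGSIZE: kernel already knows a smaller PMTU
            pass
        time.sleep(0.5)                # give any "fragmentation needed" ICMP time to arrive
        size = s.getsockopt(socket.IPPROTO_IP, IP_MTU) - 28
    path_mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
    s.close()
    return path_mtu

# The recommended 6in4 tunnel MTU would then be the probed IPv4 path MTU minus 20.
print(probe_ipv4_path_mtu("192.0.2.1") - 20)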

kasperd

Quote from: bicknell on January 30, 2012, 06:12:17 AM
if you want to score bonus points, create a small tool/applet that tests the IPv4 path between the tunnel endpoint and the tunnel broker server to determine the largest IPv4 packet that can pass without fragmentation, and then make a recommendation to the user.
As far as I could tell, that already happens. It just happens behind the scenes without you even noticing. But I'd need somebody to double-check to be sure I'm interpreting my observations correctly.

broquea

It recommends the closest broker based on routing. He is talking about recommending an MTU. The MTU selection thing came up a bunch and got shot down an equal amount. Maybe this time it'll stick. Personally, three options are enough: 1480 (which the tservs default to now), 1472 (for PPPoE) and 1280 minimum. If you are on a network that is letting you get jumbo frame capabilities on WAN, then you are on a network that should already be providing you native IPv6 IMO.

kasperd

Quote from: broquea on January 30, 2012, 09:24:05 AM
He is talking about recommending an MTU.
I just ran the same test again, and what I saw was that if the IPv6-in-IPv4 packet from the tunnel server to the user results in an ICMP message indicating that the IPv4 packet needs fragmentation, then the tunnel server will use that information to lower the IPv6 MTU of the tunnel temporarily. I don't know how long the tunnel server keeps the lower MTU on the tunnel.

kasperd

Quote from: kasperd on January 30, 2012, 10:36:34 AM
I don't know how long the tunnel server keeps the lower MTU on the tunnel.
I just tried to time it. After the tunnel server received the ICMP message, it lowered the MTU of the tunnel for 150 seconds. Once those had passed, it raised the MTU back to the default value.
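
For anyone who wants to reproduce this, here is a rough scapy sketch of the experiment (run as root; the addresses and the 1400-byte value are placeholders, and whether the tunnel server accepts a hand-crafted ICMP like this is exactly what the test checks):

from scapy.all import IP, ICMP, IPv6, Raw, send
import time

SERVER_V4 = "198.51.100.1"   # tunnel server IPv4 endpoint (placeholder)
CLIENT_V4 = "203.0.113.2"    # your own IPv4 endpoint (placeholder)
CLAIMED_MTU = 1400           # the MTU the forged ICMP claims the path has

# An ICMP "fragmentation needed" has to quote the start of the packet that
# supposedly didn't fit: an IPv4 header with protocol 41 (6in4) from the
# server to us, plus the first bytes of the inner IPv6 header.
quoted = IP(src=SERVER_V4, dst=CLIENT_V4, proto=41) / Raw(bytes(IPv6()))
frag_needed = IP(dst=SERVER_V4) / ICMP(type=3, code=4, nexthopmtu=CLAIMED_MTU) / quoted

send(frag_needed)
print("fragmentation-needed sent at", time.strftime("%H:%M:%S"))
# Keep sending large pings through the tunnel from here on, and note when
# full-size packets start arriving unfragmented again (about 150 seconds
# in the observation above).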

snarked

Quote from: broquea on January 30, 2012, 09:24:05 AM
If you are on a network that is letting you get jumbo frame capabilities on WAN, then you are on a network that should already be providing you native IPv6 IMO.

Just because jumbo frames are associated with 1 Gbps or faster Ethernet (e.g. 1000BASE-T or fiber) in no way implies that IPv6 is supported in such hardware.  I consider these characteristics orthogonal and thus unrelated.

I have seen plenty of things (e.g. VoIP phones) being produced today which are IPv4-only.

igorybema

Hi,
I see the same message from Netalyzr; however, I'm not experiencing any other problems with IPv6. My guess is that the Netalyzr test is broken/incorrect.
The iperf test with UDP 1433 is not valid, as PMTU detection will not work for iperf when you choose 1433 as the message size. That will not fit into the 1432 bytes of UDP payload that a 1480-byte tunnel MTU allows. UDP does not do segmentation, and because you told iperf to use 1433 bytes it will stay at 1433 bytes.

All the tests I did showed that PMTU discovery is working correctly with HE's tunnelbroker IPv6 connections. It must be that the Netalyzr test is incorrect. Hopefully they will also respond to this thread.

regards, Igor
using openwrt as home router

bicknell

Quote from: igorybema on February 04, 2012, 12:45:42 PM
The iperf test with UDP 1433 is not valid, as PMTU detection will not work for iperf when you choose 1433 as the message size. That will not fit into the 1432 bytes of UDP payload that a 1480-byte tunnel MTU allows. UDP does not do segmentation, and because you told iperf to use 1433 bytes it will stay at 1433 bytes.

I don't think you're right on the packet size issue.  I can send 1600-byte packets between two hosts with 1500-byte MTUs on clean IPv6 connections.  If your theory were correct, that wouldn't work either.

Quote from: igorybema on February 04, 2012, 12:45:42 PM
All the tests I did showed that PMTU discovery is working correctly with HE's tunnelbroker IPv6 connections. It must be that the Netalyzr test is incorrect. Hopefully they will also respond to this thread.

I think the Apple Time Capsule is dropping all IPv6 fragments inbound on the tunnel as a security policy.  I have opened a bug with Apple to that effect, and will report back on where that goes if they get back to me.

kasperd

Quote from: bicknell on February 08, 2012, 01:25:37 PM
I don't think you're right on the packet size issue.  I can send 1600-byte packets between two hosts with 1500-byte MTUs on clean IPv6 connections.  If your theory were correct, that wouldn't work either.
Fragmentation is permitted on the sending host regardless of which upper layer protocol is used. As long as there is no later hop with a smaller MTU than the first hop, it should work just fine.

The problem arises only when a later hop has a smaller MTU. For TCP, the best approach is to just hand the fragmentation-needed info from the IP layer to the TCP layer and let TCP segment differently. Though I have seen cases where the TCP segment that triggered the fragmentation-needed message would get retransmitted using fragmentation, while later packets would be segmented by the TCP layer.

I don't know exactly what is supposed to happen for UDP. Having the stack on the sending host buffer the UDP packet and retransmit in case of a fragmentation needed message doesn't sound like what you would expect from UDP. And pushing the requirement of dealing with fragmentation to the application layer isn't good either. Failing to implement either of those approaches will just lead to the application layer having to deal with a lost packet, which it is supposed to be capable of anyway. But always having a timeout for the very first packet isn't a great solution.
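
For what it's worth, on Linux an application that wants to handle it itself can at least ask for a send-time error instead of a silently lost packet. A rough sketch (the IPV6_* numbers are the usual values from <linux/in6.h>, since Python's socket module doesn't export them; the peer address and port are placeholders):

import socket

# Values from <linux/in6.h>; Python's socket module doesn't export these names.
IPV6_MTU_DISCOVER = 23
IPV6_PMTUDISC_DO = 2      # never fragment locally; let oversized sends fail instead
IPV6_MTU = 24             # getsockopt: path MTU the kernel currently knows

def send_with_pmtu(sock, payload):
    """Sketch of application-level handling: if the datagram is bigger than
    the known path MTU, split it ourselves instead of relying on the stack
    to fragment. sock is a connected AF_INET6 SOCK_DGRAM socket."""
    try:
        sock.send(payload)
    except OSError:                               # EMSGSIZE
        mtu = sock.getsockopt(socket.IPPROTO_IPV6, IPV6_MTU)
        chunk = mtu - 48                          # IPv6 (40) + UDP (8) headers
        for off in range(0, len(payload), chunk): # naive application-level split
            sock.send(payload[off:off + chunk])

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IPV6, IPV6_MTU_DISCOVER, IPV6_PMTUDISC_DO)
s.connect(("2001:db8::1", 9999))                  # placeholder peer
send_with_pmtu(s, b"\x00" * 2000)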

Quote from: bicknell on February 08, 2012, 01:25:37 PM
I think the Apple Time Capsule is dropping all IPv6 fragments inbound on the tunnel as a security policy.  I have opened a bug with Apple to that effect, and will report back on where that goes if they get back to me.
I saw the problem without any Apple equipment. I'm pretty sure that whatever the problem is, it does not lie on my end of the tunnel.

bicknell

Quote from: kasperd on February 08, 2012, 01:54:08 PM
I don't know exactly what is supposed to happen for UDP. Having the stack on the sending host buffer the UDP packet and retransmit in case of a fragmentation needed message doesn't sound like what you would expect from UDP. And pushing the requirement of dealing with fragmentation to the application layer isn't good either. Failing to implement either of those approaches will just lead to the application layer having to deal with a lost packet, which it is supposed to be capable of anyway. But always having a timeout for the very first packet isn't a great solution.

My understanding (which I admit may be wrong) is that the current state of the art in Linux and FreeBSD is that the first packet is lost.  That is, the first UDP packet goes out, gets dropped, and generates a Packet Too Big.  The sending host then caches the new MTU and uses that for sending additional UDP packets.  The first packet, and any others in flight, are dropped and the application must resend.

I believe this is why there is a lot of discussion about how UDP and PMTU discovery don't work for transactional services in IPv6, for instance DNS's typical send-one, get-one operation.  Indeed, I believe best current operational practice for DNS over IPv6 is to send only 1280-byte UDP packets. :(

Still, with iperf this should show up as, at worst, a few lost packets at the start and then a successful run for the rest of the test.

I need to find someone who's actually written code in an IPv6 stack and ask them for more details.
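
In the meantime, the "first packet is lost" behavior described above is exactly why a simple application-level retry usually papers over the problem. A rough sketch, with host, port and payload as placeholders:

import socket

def udp6_request(host, port, payload, tries=3, timeout=2.0):
    """Sketch of the retry pattern: if the first datagram exceeds the path
    MTU it is dropped, but the ICMPv6 Packet Too Big updates the kernel's
    PMTU cache, so a retransmission is then fragmented at the source and
    normally gets through."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.connect((host, port))
    for _ in range(tries):
        s.send(payload)
        try:
            return s.recv(65535)
        except socket.timeout:
            continue        # retry; by now the PMTU cache may hold the real value
    raise TimeoutError("no reply after %d attempts" % tries)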