ICMP / Traceroute / MTR

Started by nohn, October 19, 2012, 06:02:07 AM


nohn

Is it by design that you can't see the HE hops when issuing a traceroute/mtr through the HE backbone, but if you go the other way around, it works?

HOST: a                            Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 2001:my:he:rou:te:host      0.0%    10    1.4   1.6   1.3   2.8   0.4
  2.|-- ???                       100.0%    10    0.0   0.0   0.0   0.0   0.0
  3.|-- ???                       100.0%    10    0.0   0.0   0.0   0.0   0.0
  4.|-- ???                       100.0%    10    0.0   0.0   0.0   0.0   0.0
  5.|-- ???                       100.0%    10    0.0   0.0   0.0   0.0   0.0
  6.|-- ???                       100.0%    10    0.0   0.0   0.0   0.0   0.0
  7.|-- 2a01:238:4211:8200:dead:b   0.0%    10   34.4  34.6  32.5  43.6   3.3


HOST: b                            Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 2a01:238:4000::2           80.0%    10    0.8   1.1   0.8   1.4   0.4
  2.|-- 2a01:238:b:113::1           0.0%    10    4.4  11.6   2.2  76.2  22.7
  3.|-- 2a01:238:1:b1a3::2          0.0%    10   13.2  13.2  12.8  13.7   0.3
  4.|-- 2001:7f8::1b1b:0:1          0.0%    10   14.0  13.7  13.2  14.0   0.2
  5.|-- 2001:470:0:225::1           0.0%    10   19.8  19.8  19.3  20.7   0.5
  6.|-- 2001:470:0:7d::2            0.0%    10   32.6  24.3  22.1  32.6   3.3
  7.|-- 2001:my:tunnel:prefix::2    0.0%    10   32.4  32.5  32.0  33.8   0.6
  8.|-- 2001:my:he:rou:te:host      0.0%    10   33.2  33.1  32.4  35.1   0.8

kasperd

In the past I have helped a user on this forum who saw the same symptom. In his case, it turned out the reason was the way his own tunnel endpoint was handling the TTL fields.

There are some variations in how a tunnel endpoint can handle TTL fields. That particular user's router handled them in almost the worst way possible, and my first guess is that whatever hardware or software you are using for your tunnel endpoint is doing the same.

When encapsulating an IPv6 packet in an IPv4 packet, there are two different approaches to setting the IPv4 TTL: either copy the hop limit from the IPv6 packet, or use a constant value on all packets. A constant TTL is the conventional choice, and it works fine, except that all the IPv4 hops between the two ends of the tunnel are invisible to the IPv6 layer: an IPv6 traceroute simply shows a single hop from one end of the tunnel to the other.
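
In code terms the choice looks roughly like this. It is only a Python sketch of the logic; the class and field names are mine, not any particular stack's API:

from dataclasses import dataclass

@dataclass
class IPv6Packet:
    hop_limit: int
    payload: bytes

@dataclass
class IPv4Packet:      # the protocol 41 carrier
    ttl: int
    inner: IPv6Packet

CONSTANT_TTL = 64      # any fixed value in 1..255 works

def encapsulate(pkt, copy_hop_limit):
    if copy_hop_limit:
        # "inherit" mode: IPv4 hops can now expire the packet, which makes
        # them visible to traceroute -- but only if the resulting ICMP
        # errors are translated back into ICMPv6 errors
        return IPv4Packet(ttl=pkt.hop_limit, inner=pkt)
    # conventional mode: the whole IPv4 path looks like a single IPv6 hop
    return IPv4Packet(ttl=CONSTANT_TTL, inner=pkt)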

Switching from a constant TTL value to copying from the IPv6 header can make the IPv4 hops visible, but three prerequisites must be met (a sketch of the last one follows the list):

  • The IPv4 hops must include enough payload in their ICMP errors to contain the original IPv6 header.
  • The sending end of the tunnel must be capable of converting these ICMP errors into ICMPv6 errors.
  • The receiving end of the tunnel must replace the hop limit in the IPv6 header with the minimum of the IPv4 TTL and the IPv6 hop limit.
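
Continuing the sketch above, the last prerequisite would look like this at the receiving end. Again, this only illustrates the rule; it is not HE's actual code:

def decapsulate(outer):
    inner = outer.inner
    # Without this, the IPv4 hops the packet just crossed are forgotten:
    # the inner hop limit is still as large as it was at encapsulation
    # time, so the next few IPv6 hops get skipped in a traceroute.
    inner.hop_limit = min(inner.hop_limit, outer.ttl)
    return inner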

What that user's gateway did was copy the hop limit to the TTL without being able to handle the ICMP errors; moreover, the HE tunnel servers don't satisfy the last prerequisite.

I guess the same is happening to you. Hops 2 through 6 in your traceroute would have been IPv4 addresses, but because your gateway cannot translate those ICMP errors, they show up as missing. If you run an IPv4 traceroute to the tunnel server, you can probably find out what those five IPv4 addresses are.

By the time you send a probe whose hop limit is large enough for the copied IPv4 TTL to reach the tunnel server, the IPv6 header inside still carries that large hop limit, because the IPv4 hops decremented only the outer TTL. This hides all the hops which would have shown up as hops 2 through 6 in your output if your gateway had been using a constant TTL. And in your case it actually hides everything: had the HE tunnel server replaced the hop limit with the minimum of TTL and hop limit, you would still not have seen the IPv4 hops (because of the problem with your gateway), but the IPv6 hops would not have been hidden, only shifted down the list, and you would have seen the tunnel server as hop 7.
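
To make the arithmetic concrete, here is a toy Python model of your forward trace. The count of five hidden IPv4 hops is read off your output; the assumption that the destination sits four IPv6 routers beyond the tunnel server is mine, chosen so the destination lands at hop 7 as in your trace:

# Toy model of where each traceroute probe expires. It assumes the
# decapsulator ignores the outer TTL, as the HE tunnel servers do.
IPV4_HOPS = 5        # routers between the gateway and the tunnel server
IPV6_HOPS_AFTER = 4  # assumed IPv6 routers beyond the tunnel server

def probe(hop_limit, copy_mode):
    if hop_limit == 1:
        return "gateway"                  # hop 1 in the trace
    hl = hop_limit - 1                    # gateway decrements, then encapsulates
    ttl = hl if copy_mode else 64         # outer IPv4 TTL
    if ttl <= IPV4_HOPS:
        # expires inside the IPv4 path; without ICMP-to-ICMPv6
        # translation this is a "???" line in the IPv6 traceroute
        return "???"
    # decapsulation: in copy mode hl was never touched by the IPv4 hops
    for i in range(IPV6_HOPS_AFTER + 1):  # tunnel server, then IPv6 routers
        if hl == 1:
            return "tunnel server" if i == 0 else "IPv6 router %d" % i
        hl -= 1
    return "destination"

for h in range(1, 8):
    print(h, probe(h, copy_mode=True))    # gateway, ??? x5, destination
# with copy_mode=False: gateway, tunnel server, 4 routers, destination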

Keep in mind that what I describe is a known problem that matches your symptoms, but I don't have definite proof that this is what is happening in your case. If you can do a packet capture on the physical IPv4 interface, we can see how the 6in4 packets leave your network, which will confirm or refute my guess.
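
For example, something like this scapy sketch would do; the interface name is an assumption on my part, and plain tcpdump with the filter "ip proto 41" would show the same thing:

# Print the outer IPv4 TTL next to the inner IPv6 hop limit for every
# 6in4 packet. Needs scapy and root; "eth0" is an assumed interface name.
from scapy.all import sniff, IP, IPv6

def show(pkt):
    if IP in pkt and IPv6 in pkt:  # protocol 41: IPv6 inside IPv4
        print("outer TTL=%d  inner hop limit=%d"
              % (pkt[IP].ttl, pkt[IPv6].hlim))

sniff(iface="eth0", filter="ip proto 41", prn=show)

If the outer TTL tracks the hop limit of the probes you send, your gateway is copying it; if it stays constant, the problem is elsewhere.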

nohn

It worked after I changed


iface he-ipv6 inet6 v4tunnel
    ....


to


iface he-ipv6 inet6 v4tunnel
    ....
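    # ttl 0 is the default on Linux and means "inherit": the IPv6 hop
    # limit is copied into the outer IPv4 TTL; a fixed value disables that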
    ttl             255

kasperd

Quote from: nohn on December 23, 2012, 05:02:51 PM
It worked after I changed
That pretty much confirms my guess. Is it a Linux system that has such broken TTL handling in its default configuration?

snarked

Linux TTL handling isn't broken. What happens is that the encapsulating IPv4 packet inherits its TTL from the encapsulated packet (which is proper, as it is an IPv4 packet). IPv4 hops are not the same as IPv6 hops. As he found out, setting a specific TTL overrides this pseudo-problem. As no RFC specifically says to (re-)set the TTL value when encapsulating 6in4, I don't see this as a bug.

kasperd

Quote from: snarked on December 24, 2012, 11:36:24 AM
As no RFC specifically says to (re-)set the TTL value when encapsulating 6in4, I don't see this as a bug.
If you want to view a tunnel as simply using IPv4 as a link layer, then using a constant TTL is the correct thing to do. You don't want a link layer where delivery is data-dependent, and from the viewpoint of the link layer, the higher-level hop limit is just data.

The symptom nohn saw was exactly that of a link layer that drops packets depending on higher-level data: packets whose hop limit was below a certain threshold were silently dropped.

No matter how you view it, I consider it broken to produce IPv4 packets with a very low TTL without at the same time being prepared to handle the ICMP errors they may produce.

You can choose another viewpoint, in which a tunnel is not just a link layer but something more. In that case you may want to take additional measures, involving some cross-layer interactions, to provide more transparency and protection against routing loops. From this viewpoint, copying the hop limit from the IPv6 packet into the IPv4 TTL is the correct thing to do, but a couple of prerequisites need to be in place to ensure that things still behave in a usable way.

I already mentioned those prerequisites above: the ability to convert ICMP errors for the encapsulating packets into proper ICMPv6 errors for the inner packet, and updating the hop limit of the inner packet at decapsulation time. The first was clearly not in place. The second might be, but I am not aware of such a feature being present.

I will also claim that a configuration is broken if it doesn't work well when both endpoints are configured the same way. If you copy the IPv6 hop limit to the IPv4 TTL, then the receiving end must replace the IPv6 hop limit with the minimum of the IPv4 TTL and the IPv6 hop limit to get a good result. If you copy the hop limit to the TTL on packets you send, but discard the IPv4 TTL on packets you receive, you don't get a good result when the other endpoint behaves the same way. I consider that broken.

You can choose a third approach, which is to work well with endpoints regardless of which of the above two approaches they use. In that case you need to send packets with a constant IPv4 TTL, still be prepared to handle ICMP errors for the tunnelled packets, and, on received tunnelled packets, replace the IPv6 hop limit with the minimum of the IPv6 hop limit and the IPv4 TTL. That's what I have chosen to do by default in my stack, and it works very well.
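
A self-contained Python sketch of that third policy (the names are mine; the ICMP translation is the involved part and is only indicated as a comment):

def outgoing_ttl():
    # constant TTL on send: never rely on the peer translating ICMP errors
    return 64

def hop_limit_after_decap(inner_hop_limit, outer_ttl):
    # tolerate peers that copy their hop limit into the TTL
    return min(inner_hop_limit, outer_ttl)

# not shown: translating ICMP errors that quote our tunnelled packets
# into ICMPv6 errors directed at the original inner packet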