• Welcome to Hurricane Electric's IPv6 Tunnel Broker Forums.

Strange TCP behavior from Sites served by Fastly near Denver tunnel endpoint

Started by Kyle Butt, May 17, 2023, 11:14:55 AM


Kyle Butt

I have been seeing strange TCP behavior from sites served by Fastly over IPv6 from my tunnel endpoint.

The basic outline is that an HTTP connection starts, and then partway through the connection I receive a packet that is WAY out of order: in one example, sequence 38557:38842, on a connection where the last ACK was for the connection setup. Other times it is a similarly out-of-sequence packet right after the HTTP headers are returned. When the remote server comes back with the in-order response, the IPv6 flow label is different. Several sites served by Fastly are showing this behavior.

I tested the same site from 4 different VPS's that I have around the US, and didn't see the same behavior at any of them. They are in San Jose, CA; Los Angeles, CA; Kansas City, MO; and Jacksonville, FL.

To make sure nothing was fishy on my router, I had it capture the encapsulated packets sent to or from that host.
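For anyone wanting to reproduce this kind of capture: 6in4 tunnel traffic is IPv4 protocol 41, so it can be grabbed on the WAN side with a protocol filter. A minimal sketch (the interface name and tunnel server address are placeholders for your own setup):

```shell
# Capture the 6in4-encapsulated packets (IPv4 protocol 41) exchanged with
# the HE tunnel server. "eth0" and 203.0.113.1 are placeholders -- substitute
# your WAN interface and your tunnel endpoint's IPv4 address.
tcpdump -i eth0 -w tunnel.pcap 'ip proto 41 and host 203.0.113.1'
```

Opening the resulting pcap in Wireshark will decode the inner IPv6/TCP headers, which is enough to see the sequence numbers and flow labels described above.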

I've attached the capture. Let me know if there's anything else that might help diagnose this.

I'm curious if other people at the Denver tunnel endpoint are seeing similar behavior.



Same here in Ashburn... I disabled the tunnel for the time being as this has persisted for a few weeks now. I'm afraid bad actors are ruining this service for all of us :(



From the troubleshooting I've done, this has been a two-stage issue. I'm going through the Lon1 tunnel endpoint, but I've tried several and they all show the same problem, so initially it was an HE routing issue, and now it appears there's an MTU issue somewhere.

1. When the problem initially occurred, all traffic to Fastly was being routed through Brazil; this was resolved on 18 May.

2. Whilst traffic is now routing correctly, there must be an MTU mismatch somewhere along the path: ICMP traffic is fine, but TCP traffic is getting dropped, which really doesn't make TLS handshakes happy. I've enabled TCP MSS clamping on all traffic to Fastly (fortunately they only use two /32s) to clamp down to 1300, which has 'fixed' the issue for me.
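For reference, here is roughly what that clamping rule looks like on a Linux router with ip6tables. This is a sketch under assumptions: the prefix 2a04:4e42::/32 is one range commonly attributed to Fastly (verify against Fastly's published IP list before relying on it), and he-ipv6 is a placeholder for your tunnel interface name:

```shell
# Clamp the MSS on outbound SYNs toward Fastly to 1300 bytes.
# 2a04:4e42::/32 is an assumed Fastly prefix -- check Fastly's published
# ranges; "he-ipv6" is a placeholder for your 6in4 tunnel interface.
ip6tables -t mangle -A FORWARD -o he-ipv6 -d 2a04:4e42::/32 \
    -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1300
```

Matching only SYN packets is enough, since the MSS is negotiated once during the handshake.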

I'm going to do some more testing to see how big I can go before packets start disappearing.
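For anyone doing the same probing, the relationship between link MTU and the largest workable MSS over a 6in4 tunnel is simple arithmetic. A small sketch, assuming the fixed minimum header sizes (no IPv6 extension headers, no TCP options):

```python
# Fixed protocol header sizes in bytes (minimums, no options/extensions).
IPV4_HDR = 20   # outer 6in4 encapsulation header
IPV6_HDR = 40   # inner IPv6 header
TCP_HDR = 20    # TCP header without options

def max_mss_over_6in4(physical_mtu: int) -> int:
    """Largest TCP MSS that fits through a 6in4 tunnel on a given link."""
    tunnel_mtu = physical_mtu - IPV4_HDR          # e.g. 1500 -> 1480
    return tunnel_mtu - IPV6_HDR - TCP_HDR        # e.g. 1480 -> 1420

print(max_mss_over_6in4(1500))  # 1420
```

So on a standard 1500-byte link, anything up to an MSS of 1420 should in principle fit; needing to clamp well below that (e.g. to 1300) suggests a smaller-MTU hop somewhere on the path.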


Kyle Butt

I was able to fix it with a tweak to the MTU of the tunnel.

I set the tunnel MTU to 1480, but more importantly, I set the IPv6 router advertisements to include the MTU option. I suspect that the initial MSS was derived from the MTU of my local link and was causing problems with Fastly. Now that hosts see the smaller MTU on the Fastly routes, things go smoothly again.
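For others wanting to do the same with radvd, the MTU option is `AdvLinkMTU`. A minimal sketch (the interface name and prefix are placeholders for your own LAN setup):

```
# /etc/radvd.conf -- advertise the tunnel MTU to LAN hosts.
# "eth0" and 2001:db8::/64 are placeholders for your LAN interface
# and your routed prefix from the tunnel broker.
interface eth0
{
    AdvSendAdvert on;
    AdvLinkMTU 1480;        # MTU option in router advertisements
    prefix 2001:db8::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

With this in place, hosts on the LAN derive their MSS from the 1480-byte tunnel MTU instead of the local link's 1500 bytes.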