Hurricane Electric's IPv6 Tunnel Broker Forums

News: Welcome to Hurricane Electric's Tunnelbroker.net forums!

Pages: 1 2 3 [4] 5 6 ... 10
 31 
 on: October 10, 2020, 04:36:27 AM 
Started by ajyip6 - Last post by ajyip6
Possible solution to "First problem: outgoing pings often don't work until I've had an incoming ping"

On https://wiki.dd-wrt.com/wiki/index.php/IPv6 I found:
Quote
I occasionally have issues with the tunnel dying randomly. Pinging the router's IPv6 address fixes it for some reason, I have no idea why. :( -- update 2009.12.14 by calraith: Try adding metric 1 as an argument to the ip route add directives. ip route add ::/0 dev he-ipv6 metric 1

The metric was 1024 before I changed it. I've put the Smarthub firewall back into "Default" mode (rather than the worrying "Disabled") and I'll keep it under observation to see if the first problem has gone away.
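
For reference, the change boils down to something like this on the endpoint (a sketch; he-ipv6 is the tunnel interface name from the wiki quote, so substitute your own). First check the current default route and its metric, then re-add it with metric 1:

ip -6 route show default
ip -6 route del ::/0 dev he-ipv6
ip -6 route add ::/0 dev he-ipv6 metric 1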

Andy

 32 
 on: October 10, 2020, 03:21:06 AM 
Started by mclovin - Last post by mclovin
"I can PING, but TCP connections hangs" sounds very much like the problem I describe in the "Tunnel Problems" thread in the "Questions & Answers" forum in the "Tunnelbroker.net Specific Topics" section. There is no solution there either, but it would be interesting to know if your diagnostics are comparable with my diagnostics

Andy
My wget looks the same as yours. If you run wireshark (or maybe tcpdump) it should be quite easy to see if it's the same problem.
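
If it helps with the comparison, a capture along those lines might look like this (a sketch; he-ipv6 and eth0 are example names for the tunnel and WAN interfaces):

tcpdump -ni he-ipv6 -w inner.pcap tcp
tcpdump -ni eth0 -w outer.pcap 'ip proto 41'

The first captures the decapsulated IPv6 TCP traffic on the tunnel interface, the second the encapsulated 6in4 packets (IPv4 protocol 41) on the WAN side, which makes it easy to see whether the missing segments ever make it onto the wire.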

 33 
 on: October 09, 2020, 05:14:02 AM 
Started by cshilton - Last post by nemesis101fc
We ran into this a few weeks ago as well, after it had worked perfectly for a year or more. Our setup is unbound as the site resolver, forwarding Netflix domain requests to a BIND instance that strips AAAA responses. After some digging with tcpdump we found, as did the original poster, that some of the Netflix responses are now CNAMEs to AWS (and other) domains. These are then resolved in the usual way and AAAA responses are returned, breaking Netflix.

We fixed it by getting unbound to forward the specific CNAME destinations to the stripping BIND instance as well, and this has been working for us for a couple of weeks now. We are located in northern Britain and the CNAMEs returned will obviously be region specific, but I thought I'd list the domains/hosts we are forwarding for AAAA stripping in case it helps anyone else (a config sketch follows the list):
netflix.com.
netflix.net.
nflxext.com.
nflximg.net.
nflximg.com.
nflxvideo.net.
nflxso.net.
e13252.dscg.akamaiedge.net.
dualstack.ichnaea-vpc0-1803858966.eu-west-1.elb.amazonaws.com.
dualstack.beaconserver-ce-vpc0-1537565064.eu-west-1.elb.amazonaws.com.
dualstack.wwwservice2--frontend-san-vpc0-138074574.eu-west-1.elb.amazonaws.com.
dualstack.wwwservice--frontend-san-vpc0-445693027.eu-west-1.elb.amazonaws.com.
dualstack.ichnaea-web-323206729.eu-west-1.elb.amazonaws.com.
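
For anyone wanting to replicate this, the unbound side reduces to one forward-zone stanza per zone listed above, pointing at the AAAA-stripping BIND instance (a sketch; 192.0.2.53 is a placeholder address, and the stripping itself is typically done with BIND's filter-aaaa feature):

forward-zone:
    name: "netflix.com."
    forward-addr: 192.0.2.53
forward-zone:
    name: "nflxvideo.net."
    forward-addr: 192.0.2.53
forward-zone:
    name: "dualstack.ichnaea-web-323206729.eu-west-1.elb.amazonaws.com."
    forward-addr: 192.0.2.53
# ...and so on for the remaining domains in the list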

Really sucks that we have to jump through hoops like this to watch content we've paid for, especially as Netflix lists our /48 as being from the UK!


 34 
 on: October 08, 2020, 02:27:10 PM 
Started by mclovin - Last post by ajyip6
"I can PING, but TCP connections hangs" sounds very much like the problem I describe in the "Tunnel Problems" thread in the "Questions & Answers" forum in the "Tunnelbroker.net Specific Topics" section. There is no solution there either, but it would be interesting to know if your diagnostics are comparable with my diagnostics

Andy

 35 
 on: October 08, 2020, 01:45:30 PM 
Started by ajyip6 - Last post by ajyip6
Some more diagnostics (taken while wget ip6only.me is waiting)...

# ss -i | tail -2
tcp     ESTAB    0         143            [2001:470:1f08:445::2]:35302              [2604:90:1:1::69]:http
    cubic wscale:6,6 rto:1600 backoff:2 rtt:130.447/65.223 mss:1208 pmtu:1280 rcvmss:536 advmss:1208 cwnd:1 ssthresh:7 bytes_sent:572 bytes_retrans:429 bytes_acked:1 segs_out:6 segs_in:1 data_segs_out:4 send 74.1Kbps lastsnd:1470 lastrcv:3140 lastack:3140 pacing_rate 1.5Mbps delivered:1 busy:3140ms unacked:1 retrans:1/3 lost:1 rcv_space:12080 rcv_ssthresh:64328 minrtt:130.447

and a bit later

# ss -i | tail -1
    cubic wscale:6,6 rto:18560 backoff:6 rtt:82.045/41.022 mss:1208 pmtu:1280 rcvmss:536 advmss:1208 cwnd:1 ssthresh:7 bytes_sent:1144 bytes_retrans:1001 bytes_acked:1 bytes_received:1 segs_out:11 segs_in:3 data_segs_out:8 send 117.8Kbps lastsnd:1980 lastrcv:21100 lastack:930 pacing_rate 471.2Kbps delivered:1 busy:21100ms unacked:2 retrans:1/7 lost:1 rcv_space:12080 rcv_ssthresh:64328 minrtt:82.045
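
(As an aside, the same connection can be picked out without the tail by using ss's own filter syntax, e.g. the following, where the port filter is just an example matching the HTTP connection above:)

ss -6ti state established '( dport = :http )'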

Andy

 36 
 on: October 08, 2020, 01:25:41 PM 
Started by ajyip6 - Last post by ajyip6
Thanks for your reply.

I've set the tunnel MTU to 1280 at both ends, but it has not made a difference.

One thing to notice (from the iptables log in the first post) is that the HTTP "GET" packet is getting through (as the ACK is eventually returned). But by the time the ACK gets back, the sender has already resent the GET three times and sent a TCP FIN. This makes me think either that something somewhere is very slow, or that I am resending and timing out too quickly. Can this be changed?
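
For reference, the retransmission behaviour on a Linux endpoint is governed by a couple of knobs (a sketch, not a recommendation; the 500ms value is just an example):

sysctl net.ipv4.tcp_retries2
ip -6 route change ::/0 dev he-ipv6 rto_min 500ms

tcp_retries2 sets how many times an unacknowledged data segment is retransmitted before the connection is dropped (despite the name it applies to IPv6 sockets too), and rto_min raises the minimum retransmission timeout for traffic using that route.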

Ping indicates that the RTT to ip6only.me is fine (82ms) but the timing in the iptables log might be interesting.

Andy

 37 
 on: October 07, 2020, 04:11:58 AM 
Started by ajyip6 - Last post by tomkep
I'll just comment on one observation you made:

Quote
Second problem: The TCP handshake works, but not subsequent packets

The fact that the "empty" TCP packets get through while the ones carrying payload do not may indicate a problem with either an incorrect MTU setting somewhere in your setup or with path MTU discovery along the way.

Please make sure the MTU on your WAN interface is correct (for whatever media and encapsulation you use) and that the tunnel interface MTU is correctly derived from it (usually this means it should be 20 bytes smaller, to account for the encapsulating IPv4 header). Please also make sure that the MTU setting on the tunnel server (you have it in the advanced settings) matches or is smaller than your MTU.

Path MTU discovery problems usually indicate an overzealous firewall dropping ICMPv6 Packet Too Big messages somewhere. That may be you, which you should be able to fix by inspecting your firewall rules, or it may be someone else, in which case forcing the PMTU to a lower value (you may have to experiment) for specific networks via the Linux mangle table is probably the only cure you have at hand (short of complaining to the relevant network administrator, if you can identify the offending node).
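
A sketch of the sort of rules meant here, assuming a Linux endpoint with ip6tables (he-ipv6 is an example tunnel interface name):

ip6tables -I INPUT -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
ip6tables -I FORWARD -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
ip6tables -t mangle -A FORWARD -o he-ipv6 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

The first two ensure Packet Too Big messages are never dropped locally; the third clamps the TCP MSS on connections leaving via the tunnel, which is the usual mangle-table way of forcing a smaller effective packet size.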

Of course you can also simply lower the MTU on your IPv6 interfaces to 1280 bytes (the minimum mandated by the RFC); that is the shortest path to a solution for the above issues, at some performance cost.

 38 
 on: October 06, 2020, 04:27:23 PM 
Started by ajyip6 - Last post by ajyip6
An observation in the attached image (which HE won't let me attach, but I hope you get the drift)...

  • I ran nc -6lp 8989 on a server that is LAN-side of the endpoint
  • I ran wget http://test6a.pcpki.com:8989/test on a Vultr VM on the Internet
  • The wget end output its normal audit trail until "HTTP request sent, awaiting response..."
  • Nothing appeared on the screen at the nc end until I pressed control-C at the wget end. The HTTP request appears on the nc terminal as soon as the client wget terminates.
  • If I do the same with IPv4, the HTTP request appears immediately
  • Similarly, normal nc-to-nc pipes work normally over IPv4 (output appears when I press return), but over IPv6 the output doesn't appear at the server end until the client terminates (the commands are sketched below)
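
A minimal reproduction of the nc-to-nc case, using the same hostname and port as above:

nc -6lp 8989                   (on the LAN-side server)
nc -6 test6a.pcpki.com 8989    (on the remote IPv6-capable host)

Type a line at the client and press return: over IPv4 it shows up at the server immediately, over the tunnel it only appears once the client side is closed.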

So I do have two-way traffic. But something is wrong and it is driving me potty. Does this observation mean anything to anyone?

Andy

 39 
 on: October 06, 2020, 02:52:17 PM 
Started by ajyip6 - Last post by ajyip6
Good question. My local ip[6]tables rules log everything before they do any filtering, so I am confident it is not them.
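
For context, the logging is of the usual LOG-target form, inserted ahead of any filtering rules (the prefix strings here are just examples):

ip6tables -I INPUT 1 -j LOG --log-prefix "ip6-in: "
ip6tables -I FORWARD 1 -j LOG --log-prefix "ip6-fwd: "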

The Smarthub security was set to "allow outgoing, block unsolicited incoming", with the endpoint in the DMZ where (I thought) everything else incoming was sent so that I could filter it myself. I have changed it to "disabled", and the first problem above seems to have gone away. So something that wasn't getting into the DMZ with the first setting is getting through with the second; I'm not sure what. As long as I pay extra attention to the security of my endpoint, fingers crossed that is OK.

To eliminate variables, I went back to having a virtual Slax in the DMZ as the endpoint, configured exactly as it was before (I had screenshotted it), when it worked. The second problem still exists - the TCP connection is made but nothing at the application layer can then be sent along it. The Smarthub now appears to be the only difference.

My suspicion is that it is a timing issue rather than a routing issue, but I can't think how to diagnose it.

Andy

 40 
 on: October 06, 2020, 05:22:21 AM 
Started by ajyip6 - Last post by cholzhauer
I have no specific suggestions for you, other than to ask whether there's a firewall in play here. If so, could you turn it off for testing and see what you get?
