Bandwidth testing oddity

Started by rlhdomain, April 26, 2008, 12:17:55 PM


rlhdomain

Locally over my network I've been testing bandwidth between my 2k3 and my XP machine.
I did this after seeing a file download at only 5 MByte/s when going to the IPv6 address, while the same file downloads at 11 MByte/s over IPv4.

Has anyone else seen this?

Also, even when using IPv4 in the same VLAN, the speed was the same as in the different-VLAN test (testing for router overhead).
I'm going to upgrade one of my server's NICs from a basic Intel NIC to a better Intel NIC.
The NIC used for the site hosting is a server adapter, but the other adapters are basic ones with fewer offloading features.

Also, in testing IPv6 bandwidth vs IPv4 bandwidth (I didn't think there'd be this much of a difference),
I found I can "ping ipv6.google.com -f -l 65500", which is odd.

testmonster

#1
Interesting regarding your local network...  this would probably have to do with the IPv6 protocol stack implementation on those hosts, or a difference in how IPv6 is treated in the application layer (shouldn't be... just brainstorming), or your hosts are configured with two different /64s and go through a router on your LAN to reach each other via IPv6 where IPv4 does not (wild speculation).
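
If you want to rule that last possibility in or out, one quick check is to compare hop counts between the two hosts over each protocol. A minimal sketch, assuming Windows tracert and Python on one of the machines (the target addresses are placeholders for the other host, not your real ones):

import re
import subprocess

def hop_count(target, family_flag):
    # Run Windows tracert without DNS lookups and count the hop lines
    # (lines that begin with a hop number).
    out = subprocess.run(
        ["tracert", family_flag, "-d", "-h", "10", target],
        capture_output=True, text=True
    ).stdout
    return len(re.findall(r"^\s*\d+\s", out, flags=re.MULTILINE))

# Placeholder addresses -- substitute the other host's own.
print("IPv4 hops:", hop_count("192.168.3.10", "-4"))
print("IPv6 hops:", hop_count("2001:db8:3::10", "-6"))

An extra hop showing up via IPv6 but not via IPv4 would point at the routed-path explanation.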

(Note: the next part doesn't really address your post)

Regarding testing sites on the general Internet...

If you see a significant difference in performance, the path your packets take to and from the site used for testing, and the MTU along that path, are likely different between IPv6 and IPv4.

Questions regarding the site used for testing:

What was the latency via IPv4?

What was the latency via IPv6?

You can measure latency via ping by picking the best time out of 10 pings (not the average, because that includes jitter, which is a different thing).

BTW, ping returns RTT (round trip time).
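
A minimal sketch of that measurement, assuming Windows ping with English-locale output ("test-site.example" stands in for whatever site you're testing against):

import re
import subprocess

def best_rtt_ms(target, family_flag, count=10):
    # Send `count` pings and pull the minimum RTT from the summary line,
    # e.g. "Minimum = 23ms, Maximum = 31ms, Average = 25ms".
    out = subprocess.run(
        ["ping", family_flag, "-n", str(count), target],
        capture_output=True, text=True
    ).stdout
    match = re.search(r"Minimum = (\d+)ms", out)
    return int(match.group(1)) if match else None  # None if no replies (or sub-millisecond "<1ms")

print("Best IPv4 RTT:", best_rtt_ms("test-site.example", "-4"), "ms")
print("Best IPv6 RTT:", best_rtt_ms("test-site.example", "-6"), "ms")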

Persuading the destination network to run at least as good an IPv6 network as their IPv4 one will get this latency to converge.  IPv6 should not have higher latency unless: 1) the server responding to the IPv6 address is in a different location than the server responding to the IPv4 address, or 2) the destination or source network does not natively run IPv6 on as many routers as it does IPv4, limiting the paths for IPv6 in their network.

What is the MTU of your IPv4 connection?

What is the MTU of your IPv6 connection?

Your IPv4 connection probably has a 1500 byte MTU.  Your IPv6 connection, if it is via a tunnel here, probably has a 1280 byte MTU.
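
If you want to measure the IPv4 path MTU rather than assume it, a binary search over ping payload sizes with the don't-fragment flag works. A rough sketch, assuming Windows ping with English-locale output (the target is a placeholder). As far as I know, -f only sets DF on IPv4, which is probably why the 65500-byte IPv6 ping in your first post went through: the IPv6 stack just fragments it at the source.

import subprocess

def replies_without_fragmenting(target, payload):
    # One IPv4 ping with DF set and the given payload size (-l).
    out = subprocess.run(
        ["ping", "-4", "-f", "-n", "1", "-l", str(payload), target],
        capture_output=True, text=True
    ).stdout
    return "Reply from" in out and "needs to be fragmented" not in out

def largest_unfragmented_payload(target, lo=0, hi=1472):
    # Binary search; 1472 assumes the local link MTU is at most 1500
    # (1500 - 20 byte IPv4 header - 8 byte ICMP header).
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if replies_without_fragmenting(target, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

payload = largest_unfragmented_payload("test-site.example")  # placeholder target
print("Largest unfragmented payload:", payload, "=> path MTU about", payload + 28)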

Tunnels are useful for testing and experimentation.  Long term, you want native IPv6 connectivity.

MTUs on the IPv6 Internet at large vary, depending on whether networks run native IPv6 in their core (like Hurricane) or an overlay network via tunnels.  Here is some data regarding this:

http://www.ripe.net/ttm/Plots/pmtu/tunneldiscovery.cgi


rlhdomain

#2
My IPv6 tunnel MTU through here is 1514 (there is no default).

The bandwidth testing was:
from VLAN2 to VLAN3, IPv6: 5 MByte/s (different /64)
from VLAN2 to VLAN3, IPv4: 11 MByte/s (different /24 IPv4 subnet)
from VLAN2 to VLAN2, IPv6: 5 MByte/s (same /64)
from VLAN2 to VLAN2, IPv4: 11 MByte/s (same /24 IPv4 subnet)

Testing was with both a plain file download and Speedtest Mini.
Server: Windows 2k3 SP2
Client: Windows XP SP2
Switch: Cisco 2924M-XL
Router: Cisco 2651 128D/32F
Router-on-a-stick setup (router with subinterfaces per VLAN and WAN connections, with IPv4 NAT and a 6in4 tunnel via HE).

Regarding the ping: I can't do "ping (ipv4 local) -f -l 65500" (the highest non-fragmented IPv4 size is 1475), but IPv6 can max out the ping size without fragmenting, which is odd.

And yes, I agree tunnels are a temporary solution until ISPs make native IPv6 widely available.

The only reason I found the bandwidth odd is that it's local testing (which removes outside interference from the equation);
the client and server IPv6 protocol overhead might be the limiting factor.
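
One way to take the disk and HTTP out of the picture and test just the stack is a memory-to-memory TCP blast over each address family. A rough sketch, assuming Python is available on both boxes (the port and addresses are placeholders):

import socket
import time

PORT = 5001             # placeholder port
CHUNK = b"\0" * 65536   # data comes straight from memory, no disk involved

def serve(family, bind_addr):
    # Run this on the server: accept one connection and discard whatever arrives.
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.bind((bind_addr, PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            while conn.recv(65536):
                pass

def blast(family, server_addr, seconds=10):
    # Run this on the client: send zeros for a fixed time and report MByte/s.
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.connect((server_addr, PORT))
        sent, start = 0, time.time()
        while time.time() - start < seconds:
            s.sendall(CHUNK)
            sent += len(CHUNK)
        print(sent / (time.time() - start) / 1e6, "MByte/s")

# Server side:  serve(socket.AF_INET, "0.0.0.0")      or  serve(socket.AF_INET6, "::")
# Client side:  blast(socket.AF_INET, "192.168.2.10")      # placeholder IPv4 of the server
#               blast(socket.AF_INET6, "2001:db8:2::10")   # placeholder IPv6 of the server

If IPv4 and IPv6 land in the same ballpark here, the gap is probably in the application or NIC offload path rather than the stack itself.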

The network is a 100 Mbit net.
The server has the following NICs:
VLAN1: HP built-in NIC
VLAN2: Intel Pro/100
VLAN3: Intel Pro/100 Server Adapter
VLAN4: Intel Pro/100
The client has the following NIC:
Intel Pro/1000 CT

I plan on swapping one of the Pro/100 desktop adapters for a Pro/1000 MT (which has more offloading features).
(I already have a stockpile of Pro/1000 MTs for testing.)

I found some time ago that Intel NICs handle protocol overhead better than most NICs (up to 80% of theoretical versus 50~60% of theoretical, but that was on IPv4).

I didn't expect a 48% drop in protocol efficiency;
maybe Intel will release a driver that handles it better.
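
For reference, the arithmetic behind that figure, assuming those speeds are MByte/s on the 100 Mbit link:

LINE_RATE = 100 / 8          # 100 Mbit/s link = 12.5 MByte/s raw
ipv4, ipv6 = 11.0, 5.0       # measured transfer rates in MByte/s

ipv4_eff = ipv4 / LINE_RATE  # ~88% of theoretical
ipv6_eff = ipv6 / LINE_RATE  # ~40% of theoretical
print(f"IPv4 {ipv4_eff:.0%}, IPv6 {ipv6_eff:.0%}, "
      f"a drop of {ipv4_eff - ipv6_eff:.0%} of line rate")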

rlhdomain

After messing with something else (for other testing) and setting the Rwin_Max to 8M, my IPv6 speeds went up a lot (the old Rwin_Max was 1M).
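
That lines up with the usual window math: sustained TCP throughput is capped at roughly the receive window divided by the round-trip time, so the window matters most when the RTT isn't tiny (e.g. across the tunnel rather than on the LAN). A quick back-of-the-envelope sketch (the RTT values are just examples):

# Sustained TCP throughput is bounded by roughly: receive window / round-trip time.
def throughput_cap_mbyte_per_s(window_bytes, rtt_ms):
    return window_bytes / (rtt_ms / 1000.0) / 1e6

for window in (1 * 2**20, 8 * 2**20):      # the old 1M and new 8M receive windows
    for rtt_ms in (1, 30, 100):            # example round-trip times in milliseconds
        cap = throughput_cap_mbyte_per_s(window, rtt_ms)
        print(f"window {window // 2**20}M, RTT {rtt_ms:3d} ms -> cap ~{cap:7.1f} MByte/s")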