mr zippy 12/4/2012 | 11:40:30 PM
re: And Furthermore, Bill... "While it's nice to theorize, there isn't much of a real engineering decision to be made here. Folks that feel very strongly are more than welcome (and are even encouraged) to increase the MTU in their local corner of the Internet, but expecting a network-wide change is unreasonable."

Agree.

I was only pointing out that if there is any direction to go with the common MTU size, it is likely to be upwards.

Shrinking it (e.g., by shrinking the MSS of a web site like Google) penalises users who have a correctly configured firewall, and doesn't cause the users who have badly configured firewalls to get their firewalls fixed.
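
A minimal sketch, in Python, of the MSS arithmetic behind this point, assuming plain IPv4 and TCP headers with no options; the 1400-byte figure is illustrative only, not anything a real site actually advertises:

# Relationship between a link's MTU and the TCP MSS a host would advertise.
IPV4_HEADER = 20   # bytes, IPv4 header with no options
TCP_HEADER = 20    # bytes, TCP header with no options

def mss_for_mtu(mtu: int) -> int:
    """TCP payload per segment that fits in one IP packet of size `mtu`."""
    return mtu - IPV4_HEADER - TCP_HEADER

print(mss_for_mtu(1500))  # 1460 -- the usual Ethernet case
print(mss_for_mtu(1400))  # 1360 -- a server deliberately advertising a smaller MSS

Advertising the smaller MSS caps every client at the lower payload, whether or not that particular client's path actually needed the workaround.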
skeptic 12/4/2012 | 11:40:24 PM
re: And Furthermore, Bill...
I was only pointing out that if there is any direction to go with the common MTU size, it is likely to be upwards.
================
The path to a "common" MTU of around 9k is fairly clear. The motivation (and the need) for going beyond 9k in the next several years isn't really clear. It's not as big a performance win as the people pushing it think.

Even at 9k, the cost of moving the packet is going to turn all the other processing costs into "noise". I don't see the advantage of pushing beyond that, because there are negative effects in switches from very large MTUs and nobody has made a good case for what problem really large MTUs are solving.
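
A rough back-of-the-envelope sketch, in Python, of why the win tapers off past 9k; the figures assume a 10 Gbit/s link and 40 bytes of IPv4+TCP headers, and ignore link-layer framing:

# Packets per second needed to fill the link, and header overhead, per MTU.
LINK_BPS = 10_000_000_000  # 10 Gbit/s, illustrative
HEADERS = 40               # IPv4 + TCP headers, no options

for mtu in (1500, 4000, 9000, 64000):
    pps = LINK_BPS / 8 / mtu
    overhead = HEADERS / mtu * 100
    print(f"MTU {mtu:>6}: {pps:>9.0f} pkts/s, {overhead:4.2f}% header overhead")

Going from 1500 to 9000 cuts the per-packet rate roughly sixfold; going from 9000 to something much larger recovers only a fraction of a percent of additional header overhead.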

skeptic 12/4/2012 | 11:40:22 PM
re: And Furthermore, Bill... However, I'm not even certain that the path to a 9K MTU is clear. There are numerous systems out there that have internal parts that are doing store-and-forward and only have 4K buffer sizes. Just about every POS link is like this. I'm sure there's a bunch of 100Mb Ethernet out there.

Getting past 1500 alone would be a major challenge and would require a coordinated initiative with clear benefits.
--------------------------
I meant that the path was clear from an IP technology point of view. There is nothing in the protocols or the basic technology that prevents us from eventually getting to 9k.

Operationally, getting to 9k (or really even getting to 4k) is going to (as you say) be expensive. But a certain proportion of equipment (SONET and GE) is designed to get around the 4k buffer limitations of the past. And yes, 100BT is out there, but there was lots of ugly non-Ethernet stuff out there when we went to 1500 from 576 (I think it was 576). It was painful, but it could be done.

And I agree that the value can be questioned. I would still rather see the big-MTU crowd working on transition plans to 4k or 9k than working on proposals for really massive MTUs, because the transition plans are at least somewhat possible.
Tony Li 12/4/2012 | 11:40:22 PM
re: And Furthermore, Bill...
As a rule, I don't disagree with skeptic.
;-)

However, I'm not even certain that the path to a 9K MTU is clear. There are numerous systems out there that have internal parts that are doing store-and-forward and only have 4K buffer sizes. Just about every POS link is like this. I'm sure there's a bunch of 100Mb Ethernet out there.

Getting past 1500 alone would be a major challenge and would require a coordinated initiative with clear benefits.

Tony
myresearch 12/4/2012 | 11:39:40 PM
re: And Furthermore, Bill... http://www.joelonsoftware.com/...
gbennett 12/4/2012 | 11:39:35 PM
re: And Furthermore, Bill... Hi David,

In your reply the other day you mentioned the following...


We also allow the encoder to select smaller packet sizes, and the Server now streams in RTSP, and can repacketize encoded media to reduce packet size dynamically.

...but are you saying that the encoder and server support RFC 1191 Path MTU Discovery? Are you at the mercy of third-party encoders in this case?

I have a follow-up to the column that'll be up in the next day or so that broadens the topic beyond Media Player and Microsoft. But the host software (whoever writes it) is still a key component. This is a clear example of the chain being only as strong as its weakest link.

Cheers,
Geoff
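
As a hedged illustration of what it can mean for host software to "support" RFC 1191-style discovery: a small Python sketch that asks the kernel for its current path-MTU estimate so a streamer could repacketize below it. The socket options are Linux-specific, the receiver address and port are placeholders, and the numeric fallbacks come from <linux/in.h> in case this Python build doesn't export the constants.

import socket

# Fall back to the Linux <linux/in.h> values if the socket module lacks them.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)  # set DF, let ICMP update the estimate
s.connect(("192.0.2.10", 5004))  # placeholder media receiver

path_mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)  # kernel's current estimate for this route
payload_budget = path_mtu - 20 - 8  # minus IPv4 and UDP headers
print(f"path MTU estimate {path_mtu}, keep media packets under {payload_budget} bytes")

Whether a given encoder or server actually does anything along these lines is exactly the question above.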
iljitsch 12/4/2012 | 11:36:51 PM
re: And Furthermore, Bill... "We should eventually use 9k packets."

How silly is that? Exchanging one limitation for another? What we need is to break through the 1500 byte monoculture, and start really taking advantage of path MTU discovery. Then everyone gets to use the maximum MTU that makes sense for their network, whether this is bigger or smaller than 1500 bytes.

Unfortunately, we have some work to do. First of all, we need to educate people that filtering out ICMP packet too big messages is a very bad idea. Second, we need better PMTUD implementations that don't give up and die when they don't get packet too big messages. Depending on a box far away that you don't control to do the right thing is just plain stupid.

The good thing is that using a larger than 1500 byte MTU shouldn't lead to nearly as many problems as using a smaller one. ISPs aren't exactly lining up to support this, though.

What we really need is a way for hosts with different MTUs to live on the same LAN, as 10 and 100 Mbps Ethernet cards typically only do 1500 bytes while nearly all GE cards support more. So in practice you only get to do jumbo frames in gigabit-only environments today. IPv6 supports routers announcing a per-subnet MTU, but unfortunately no per-host MTU yet.

By the way: QuickTime allows the user to set the maximum packet size when creating a live stream or when converting audio/video. The default is 1450 bytes.

(The fact that streaming is responsible for so many fragments is no coincidence: if the application creates UDP packets that are larger than 1500 bytes, the IPv4 stack can't really do anything but fragment. With IPv6, PMTUD will automatically be used here, though.)
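
To make that last point concrete, a small Python sketch of the IPv4 fragmentation arithmetic, assuming a 20-byte IPv4 header and the RFC 791 rule that non-final fragment payloads are multiples of 8 bytes (the 2048-byte write is just an example size):

def ipv4_fragments(udp_payload: int, mtu: int = 1500) -> list[int]:
    """IP payload bytes carried by each fragment of one UDP datagram."""
    total = 8 + udp_payload            # the UDP header rides in the first fragment
    per_frag = (mtu - 20) // 8 * 8     # payload per fragment, 8-byte aligned
    frags = []
    while total > 0:
        frags.append(min(per_frag, total))
        total -= per_frag
    return frags

print(ipv4_fragments(2048))   # [1480, 576] -- one application write, two wire packets

Keeping the application's packets under roughly 1450 bytes, as the QuickTime default does, leaves room for the UDP/IP (or RTP) headers and avoids the split entirely on a 1500-byte path.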