See, I've had some very interesting debates on QoS. Having recently put up a +10G core, I found it hard to understand why QoS was such a big deal. We did do it, though - we have a very kickass QoS policy. It's even implemented and documented, and a lot of the guys using our network mark their packets appropriately.
While 'random surfing' today, I found out there's a weird phenomenon known as instantaneous buffer utilization/congestion.
This instantaneous buffer utilization can lead to a difference in delay times between packets in the same voice stream. This difference - jitter - is the variation between when a packet is expected to arrive and when it actually arrives. To compensate for these delay variations between voice packets in a conversation, VoIP endpoints use jitter buffers, which turn the varying delays into a constant value so that voice can be played out smoothly.
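To make that jitter-buffer idea concrete, here's a minimal sketch of the logic. Everything here is illustrative - the function name, the 60 ms playout delay, and the packet timings are my own assumptions, not anything from a real endpoint:

```python
# Hypothetical sketch: a fixed-depth jitter buffer plays every packet
# at send_time + a constant delay, so varying network delays become a
# constant end-to-end delay. Packets arriving after their playout
# deadline are useless and get discarded (heard as loss/glitches).

PLAYOUT_DELAY_MS = 60  # assumed fixed buffer depth the endpoint adds


def playout_times(send_times_ms, arrival_times_ms,
                  playout_delay_ms=PLAYOUT_DELAY_MS):
    """Return (playout times, late-drop count) for a voice stream."""
    played, lost = [], 0
    for sent, arrived in zip(send_times_ms, arrival_times_ms):
        deadline = sent + playout_delay_ms
        if arrived <= deadline:
            played.append(deadline)  # constant delay: smooth playout
        else:
            lost += 1                # arrived too late to be played
    return played, lost


# 20 ms packetization with jittery one-way delays between 40 and 72 ms
sends = [i * 20 for i in range(5)]
delays = [40, 65, 50, 72, 45]
arrivals = [s + d for s, d in zip(sends, delays)]
played, lost = playout_times(sends, arrivals)
```

The trade-off is the usual one: a deeper buffer absorbs more jitter but adds latency to the conversation, which is why endpoints keep it as shallow as they can get away with.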
Hence the primary role of QoS in a network like ours is not to control latency or jitter but to manage packet loss. In 10GE campus networks, it takes only a few milliseconds of congestion to cause instantaneous buffer overruns, resulting in packet drops. That single drop is what we take care of with QoS on a 10G core. It's why we'll do it on the 40G core... it's why we'll keep doing QoS... I still suck a bit at QoS... it's just too nuanced for my attention span.
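The counterintuitive part - drops on a link that averages well under line rate - is easy to show with a toy model. The buffer depth, drain rate, and burst sizes below are made-up illustrative numbers, not real switch specs:

```python
# Toy sketch of a microburst overrunning a shallow egress queue.
# Average offered load stays below line rate, yet a single burst
# that exceeds the queue depth still tail-drops packets.

BUFFER_PKTS = 8      # assumed per-port egress queue depth (packets)
DRAIN_PER_TICK = 2   # packets the port serializes per tick (line rate)


def simulate(arrivals_per_tick, buffer_pkts=BUFFER_PKTS,
             drain=DRAIN_PER_TICK):
    """Return total tail drops for a sequence of per-tick arrivals."""
    queue = drops = 0
    for arriving in arrivals_per_tick:
        queue += arriving
        if queue > buffer_pkts:
            drops += queue - buffer_pkts  # buffer full: tail drop
            queue = buffer_pkts
        queue = max(0, queue - drain)     # port drains at line rate
    return drops


# 13 packets over 7 ticks (below the 14-packet capacity), but one
# 12-packet microburst overruns the 8-packet buffer and drops 4.
bursty = [1, 0, 0, 12, 0, 0, 0]
```

That's the congestion QoS is buying us out of on a fast core: the drops happen in a window far shorter than any utilization graph will ever show, so the link looks idle while voice packets die in the queue.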