I received a Problem Management Report some time back describing a performance problem on a certain large bureaucracy's - er - company's - networking hardware. Unfortunately, it was not considered politic to explain this problem with the real answer, and the customer had to be mollified with a patch. But the truth deserves to be known!

Problem Report:

DESC: Customer describes a problem where they have a server/client application passing packets across the [FDDI] network. If they initialize the contents of the packets to zeroes, they see "5 to 10 times" less performance than if they initialize the packet contents to a non-zero value.

Censored Answer:

Statistical analysis has determined that there are more 1 bits than 0 bits in data. A bit of logic shows that this result must hold, because 0 represents an absence of data, and 1 encodes its presence. Thus, data has more 1's than 0's, QED. Therefore, to optimize throughput, the hardware negates all the bits to produce fewer 1's and more 0's. Since there is less data to transmit, the throughput increases. The receiver simply re-negates the incoming bits and thus recovers the actual data sent. The slowdown you experience is simply because all those 0's are transmitted by the hardware as 1's (data), and we have already shown that transmitting data requires more time than transmitting nothing.
(From the "Rest" of RHF)
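For the terminally literal, here is a minimal Python sketch of the scheme the censored answer describes. The transmit/receive helpers are hypothetical stand-ins invented for illustration; no actual FDDI hardware works this way (one hopes).

```python
# Tongue-in-cheek sketch of the "negate the bits to send less data" scheme
# from the censored answer. transmit/receive are hypothetical helpers, not a real API.

def negate(packet: bytes) -> bytes:
    """Flip every bit, so 0's (no data) become 1's (data) and vice versa."""
    return bytes(b ^ 0xFF for b in packet)

def transmit(packet: bytes) -> bytes:
    """'Optimized' sender: negate first so fewer 1 bits - i.e. less data - go on the wire."""
    return negate(packet)

def receive(wire_bits: bytes) -> bytes:
    """Receiver re-negates the incoming bits, recovering the data actually sent."""
    return negate(wire_bits)

def ones(packet: bytes) -> int:
    """Count the 1 bits, which by the report's logic is the amount of data."""
    return sum(bin(b).count("1") for b in packet)

zero_filled = bytes(1500)            # the customer's slow, all-zero packets
nonzero     = bytes([0xAA]) * 1500   # the fast, non-zero packets

print(ones(transmit(zero_filled)))   # 12000 one-bits on the wire: maximum "data", hence the slowdown
print(ones(transmit(nonzero)))       # 6000 one-bits: half the "data", and thus the reported speedup
print(receive(transmit(zero_filled)) == zero_filled)  # True: the data still arrives intact
```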