Stream: helpdesk (published)

Topic: Is there any advantage of using Big-Endian nowadays?


view this post on Zulip Júlio Hoffimann (Nov 25 2025 at 13:11):

The docstring of ntoh mentions the term "Network". Was Big-Endian introduced to optimize data transfer over networks?

What is the current status of this convention? The fact that big-endian data can't be mmapped or consumed in chunks without a bswap seems very limiting.
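(For reference, a minimal sketch of what `ntoh`/`hton`/`bswap` do in Julia; the values are made up for illustration and a little-endian host is assumed for the byteswapping behaviour:)

```julia
# ntoh/hton convert between network (big-endian) and host byte order:
# on a little-endian machine they byteswap, on a big-endian machine they are no-ops.
x = 0x12345678            # a UInt32 in host order
wire = hton(x)            # the same value expressed in network byte order
@assert ntoh(wire) == x   # converting back recovers the host-order value

# bswap unconditionally reverses the bytes, regardless of host endianness
@assert bswap(0x1122) == 0x2211
```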

view this post on Zulip Jakob Nybo Andersen (Nov 25 2025 at 13:29):

Endianness is arbitrary, so early hardware and specifications differed. Processors have largely converged on little-endianness, and network protocols on big-endianness.
You can basically guarantee that any computer today will store and load data little-endian, so in that sense you don't need to worry about it. As for network protocols and file formats, they may be big-endian, but their content needs to be actively parsed anyway, so byteswapping can be considered just a minor part of parsing a format.

view this post on Zulip Jakob Nybo Andersen (Nov 25 2025 at 13:34):

Or, to put it another way: you can consume a stream/array of bytes without considering endianness. When you have to interpret those bytes as some data structure, e.g. an Int32 or whatever, you need to parse them. And when you parse, you do a _lot_ of operations to go from an array of bytes to whatever data structure it represents; considering endianness is a small part of that task.
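(To make "byteswapping is a small part of parsing" concrete, here is a minimal sketch of reading one big-endian Int32 field from a byte buffer; the buffer contents are assumed for illustration:)

```julia
buf = UInt8[0x00, 0x00, 0x01, 0x02]    # big-endian encoding of 258
io  = IOBuffer(buf)
raw = read(io, Int32)                  # reinterprets the 4 bytes in host order
val = ntoh(raw)                        # swaps on little-endian hosts, no-op on big-endian
@assert val == 258
```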

view this post on Zulip Júlio Hoffimann (Nov 25 2025 at 13:45):

Got it. So basically big-endian is still the main convention for network protocols. Sad the industry didn't converge on a single representation. It seems quite arbitrary. Is there a strong reason to prefer big-endian for networks?

view this post on Zulip Jakob Nybo Andersen (Nov 25 2025 at 14:04):

No, not that I'm aware of

view this post on Zulip Júlio Hoffimann (Nov 25 2025 at 14:05):

Thank you for the answers!

view this post on Zulip Sukera (Nov 25 2025 at 15:26):

One sort-of arbitrary justification I heard in university (but of course it wasn't backed up by anything) was that in big-endian the more significant parts of a number are transmitted first over the network (confusingly). The reasoning for why this was considered better is that even if the number is truncated, you at least know the order of magnitude of whatever was transmitted. In practice this doesn't really matter anyway, since transmitted data is usually checksummed and discarded if it arrives incomplete, and retransmission is relatively cheap.
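(A small sketch of the "more significant parts first" point, with an assumed example value: serializing in network order puts the most significant byte at the front of the stream:)

```julia
io = IOBuffer()
write(io, hton(UInt32(0x0A0B0C0D)))                  # write in network (big-endian) order
@assert take!(io) == UInt8[0x0A, 0x0B, 0x0C, 0x0D]   # MSB 0x0A would be transmitted first
```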

view this post on Zulip Sukera (Nov 25 2025 at 15:29):

Nowadays, it's mostly done for backwards compatibility with existing protocols. For new protocols it's often just an arbitrary choice at this point.

