I had an interesting meal in Paris a few weeks ago at a small bistro. I like Louisiana Cajun-style food, especially spicy andouille sausage, so when I saw “andouillette” on the menu, my stomach grumbled in anticipation. Certainly, the word ended in “ette”, but even my limited knowledge of French told me that this is just a diminutive ending. So maybe these sausages were small. No big deal, right?
When my lunch arrived, something was not quite right. First, this did not smell like any andouille sausage I had ever had. It was a familiar scent, but I couldn’t quite place it. But as soon as I cut into the sausage, and the filling burst out of the casing, it was clear what I had ordered. Tripe. Chitterlings. Pig intestines. With french fries.
I then knew where I had smelt this before. My grandfather, a Scotsman, was fond of his kidney pies and other dishes made of “variety meats”. This is food from an earlier time. The high fat content, and (in earlier days at least) cheaper prices of these cuts of meat provided essential meals for the poor. Although my grandfather ate these dishes out of preference, I’m pretty sure that his grandfather ate them out of necessity. How times change.
This was brought to mind recently as I was reading the “final draft” of the Ecma Office Open XML (OOXML) specification and came across something that was probably once done out of necessity in the memory-poor world of 1985, but now looks like an anachronism in the modern world of XML markup.
I’m talking about bitmasks. If you are a C programmer then you know already what I am talking about.
In C, imagine you want to store values for a number of yes/no (Boolean) type questions. C does not define a Boolean type, so the convention is to use an integer type and set it to 1 for true and 0 for false. (Or in some conventions, 0 for true and anything else for false. Long story.) The smallest variable you can declare in C is a “char” (character) type, on most systems 8 bits (1 byte) long, or even padded to a full 16 bits. But the astute reader will notice that a yes/no Boolean question really expresses only 1 bit of information, so storing it in an 8-bit character is a waste of space.
Thus the bitmask, a technique used by C programmers to encode multiple values into a single char (or int or long) variable by ascribing meaning to individual bits of the variables. For example, an 8-bit char can actually store the answer to 8 different yes/no questions, if we think of it in binary. So 10110001 is Yes/No/Yes/Yes/No/No/No/Yes. Expressed as an integer, it can be stored in a single variable, with the value of 177 (the decimal equivalent of 10110001).
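If you want to see the idea in code, here is a minimal C sketch; the variable name and the printing loop are mine, purely for illustration:

#include <stdio.h>

int main(void)
{
    /* Eight yes/no answers packed into one byte: binary 10110001, decimal 177 */
    unsigned char answers = 0xB1;

    /* Walk the bits from left (bit 7) to right (bit 0) and print each answer */
    for (int bit = 7; bit >= 0; bit--)
        printf("%s", ((answers >> bit) & 1) ? "Yes/" : "No/");
    printf("\n");   /* prints Yes/No/Yes/Yes/No/No/No/Yes/ */

    return 0;
}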
The C language does not provide a direct way to set or query the value of an individual bit, but it does provide some “bitwise” operators that can be used to indirectly set and query bits in a bitmask. So if you want to test whether the fifth bit (counting from the right) is true, you do a bitwise AND with the number 16 and see if the result is anything other than zero. Why 16? Because 16 in binary is 00010000, so doing a bitwise AND will extract just that single bit. Similarly, you can set a single bit by doing a bitwise OR with the right value. This is one of the reasons why facility with binary and hexadecimal representations is important for C programmers.
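In code, the test and the set look something like this (again, the names and values here are my own, just to illustrate):

#include <stdio.h>

#define FIFTH_BIT 0x10   /* 16 decimal, binary 00010000 */

int main(void)
{
    unsigned char flags = 0xA1;              /* binary 10100001: fifth bit is clear */

    /* Test a bit: AND with 16 and see if anything survives */
    if (flags & FIFTH_BIT)
        printf("fifth bit is set\n");
    else
        printf("fifth bit is clear\n");      /* this branch runs for 10100001 */

    /* Set a bit: OR with 16 */
    flags |= FIFTH_BIT;                      /* 10100001 | 00010000 = 10110001 */
    printf("flags is now %d\n", flags);      /* prints 177 */

    return 0;
}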
So what does this all have to do with OOXML?
Consider this C-language declaration:
typedef struct tagLOCALESIGNATURE {
    DWORD lsUsb[4];          /* Unicode subset bitfields */
    DWORD lsCsbDefault[2];   /* default code page bitfields (CPBs) */
    DWORD lsCsbSupported[2]; /* supported code page bitfields (CPBs) */
} LOCALESIGNATURE, *PLOCALESIGNATURE;
This, from MSDN, is described as a memory structure for storing:
…extended font signature information, including two code page bitfields (CPBs) that define the default and supported character sets and code pages. This structure is typically used to represent the relationships between font coverage and locales.
Compare this data structure to the XML defined in section 2.8.2.16 (page 759) of Volume 4 of the OOXML final draft:
The astute reader will notice that this is pretty much a bit-for-bit dump of the Windows SDK memory structure. In this case the file format specification provides no abstraction or generalization. It is merely a memory dump of a Windows data structure.
This is one example of many. Other uses of bitmasks in OOXML include things such as:
- paragraph conditional formatting
- table cell conditional formatting
- table row conditional formatting
- table style conditional formatting settings exception
- pane format filter
If this all sounds low-level and arcane, then you perceive correctly. I like the obscure as much as the next guy. I can recite Hammurabi in Old Babylonian, Homer in Greek, Catullus in Latin and Anonymous in Old English. But when it comes to an XML data format, I seek to be obvious, not obscure. Manipulating bits, my friends, is obscure in the realm of XML.
Why should you care? Bitmasks are used by C programmers, so why not in XML? One reason is that addressing bits within an integer runs into platform-specific byte-ordering differences. Different machine processors (physical and virtual) make different assumptions. Two popular conventions go by the names of Big-endian and Little-endian. It would divert me too far from my present argument to explain the significance of that, so if you want more detail I suggest you seek out a programmer with grey hairs and ask him about byte-ordering conventions.
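For a small taste of what I mean, without the full lecture, here is a minimal C sketch of my own: the same 32-bit value, examined one byte at a time, comes out in a different order depending on the machine.

#include <stdio.h>

int main(void)
{
    unsigned int value = 0x01020304;
    unsigned char *bytes = (unsigned char *)&value;

    /* On a little-endian machine this prints 04 03 02 01;
       on a big-endian machine it prints 01 02 03 04. */
    for (int i = 0; i < (int)sizeof value; i++)
        printf("%02x ", (unsigned)bytes[i]);
    printf("\n");

    return 0;
}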
A second reason to avoid bitmasks in XML is that they sit outside the XML data model. You’ve created a private data model inside an integer, and it cannot be described or validated by XML Schema, RELAX NG, Schematron, etc. Even XSLT, the most-used method of XML transformation today, lacks functions for bit-level manipulations. TC45’s charter included the explicit goal of:
…enabling the implementation of the Office Open XML Formats by a wide set of tools and platforms in order to foster interoperability across office productivity applications and with line-of-business systems
I submit that the use of bitmasks is not the thing to do if you want support in a “wide set of tools and platforms”. They can’t be validated and they can’t be transformed.
Thirdly, the reasons for using bitmasks in the first place are not relevant in XML document processing. Don’t get me wrong. I’m not saying bit-level data structures are always wrong on all occasions. They are certainly the bread and butter of systems programmers, even today, and they were truly needed in the days when data was transferred via XModem on 12kbps lines. But in XML, where the data is already in an expansive text representation to facilitate cross-platform use, trying to save a byte of storage here or there, at the expense of the additional code and complexity required to deal with bitmasks, is the wrong trade-off. Remember, in the end the XML gets zipped up anyways, and typically ends up being 10-20% of the size of the same document in DOC format. So these bitmasks aren’t really saving you much, if any, storage.
Fourthly, bitmasks are not self-describing. If I told you the “table style conditional formatting exception” had the value of 32, would that mean anything to you? Or would it send you hunting through a 6,000+ page specification in search of a meaning? But if I told you that the value was “APPLY_FIRST_ROW”, then what would you say? A primary virtue of XML is that it is human-readable. Why throw that advantage away?
Finally, there are well-supported alternatives to bitmasks in standard XML, such as enumeration types in XML Schema. Why avoid a data representation that allows both validation and manipulation by common XML tools?
It seems to me that the only reason bitmasks were used here is that the Excel application already used them. It was much easier for Microsoft to make the specification match the source code than to make a standard that is good, platform- and application-neutral XML.
So, for the second time in a month the thought enters my mind: “You expect me to eat this tripe?!”