
An Antic Disposition


Why is OOXML Slow?

2006/10/19 By Rob 5 Comments

Of course, one could simply dismiss this question, saying that a specification for an XML vocabulary does not have performance as such, since a specification cannot be executed. However, the choices one makes in designing an XML language will impact the performance of the applications that work with the format. For example, both ODF and OOXML store their XML in compressed Zip files. This will cause reading and writing of a document to be faster in cases where memory is plentiful and computation is much faster than storage and retrieval, which is to say on most modern desktops. But this same scheme may be slower in other environments, say PDA’s. In the end, the performance characteristics of a format cannot be divorced from operational profile and environmental assumptions.

When comparing formats, it is important to isolate the effects of the format versus the application. This is important from the analysis standpoint, but also for legal reasons. Remember that the only implementation of (draft) OOXML is (beta) Office 2007, and the End User Licence Agreement (EULA) has this language:

7. SCOPE OF LICENSE. …You may not disclose the results of any benchmark tests of the software to any third party without Microsoft’s prior written approval

So let’s see what I can do while playing within those bounds. I started with a sample of 176 documents, randomly selected from the Ecma TC45’s document library. I’m hoping therefore that Microsoft will be less likely to argue that these are not typical. These documents are all in the legacy binary DOC format and include agendas, meeting minutes, drafts of various portions of the specification, etc.

Some basic statistics on this set of documents:

Min length = 1 page
Mode = 2 pages
Median length = 7 pages
Mean length = 34 pages
Max length = 409 pages

Min file size = 27,140 bytes
Median file size = 159,000 bytes
Mean file size = 749,000 bytes
Max file size = 15,870,000 bytes

So rather than pick a single document and claim that it reflects the whole, I looked at a wide range of document sizes in use within a specific document-centric organization.

I converted each document into ODF format as well as OOXML, using OpenOffice 2.0.3 and Office 2007 beta 2 respectively. As has been noted before, both ODF and OOXML formats are XML inside of a Zip archive. The compression from the zipping not only counters the expansion factor of the XML, but in fact results in files which are smaller than the original DOC files. The average OOXML document was 50% the size of the original DOC file, and the average ODF document was 38% the size of the DOC. The net result is that the ODF documents came out smaller, averaging 72% of the size of their OOXML equivalents.

A quick sanity check of this result is easy to perform. Create an empty file in Word in OOXML format, and an empty file in OpenOffice in ODF format. Save both. The OOXML file ends up being 10,001 bytes, while the ODF file is only 6,888 bytes, or 69% of the OOXML file.

Here is a histogram of the ODF/OOXML size ratios for the sampled files. As you can see, there is a wide range of behaviors here, with some files even ending up larger in ODF format. But on average the ODF files were smaller.


What about the contents of the Zip archives? The OOXML documents tended to contain more XML files (on average, 6 more) than the parallel ODF documents, but these XML files were individually smaller, averaging 32,080 bytes versus 66,490 bytes for ODF. However, the net effect is that the average total size of the XML in an OOXML document is greater than in its ODF equivalent (684,856 bytes versus 401,406 bytes).

Here’s part 2 of the experiment. The premise is that many (perhaps most) tools that deal with these formats will need to read and parse all of the XML files within the archive. So a core component of performance that these applications will share is how long it takes to unzip and parse those XML files. Of course, this is only part of the performance story. What the application does with the parsed data is also critical, but that is application-dependent and hard to generalize. The basic overhead of parsing, however, is universal.

To test this out, I wrote a Python script to time how long it takes to unzip and parse (with Python 2.4’s minidom) all of the XML files in these 176 documents. I repeated each measurement 10 times and averaged the results, and I did this for both the OOXML and the ODF variants.
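For reference, the measurement looked roughly like this (a modernized sketch of my script; the original used Python 2.4’s minidom, and the file-selection details are simplified):

```python
import io
import time
import zipfile
from xml.dom import minidom

def avg_parse_time(path, repeats=10):
    """Average wall-clock time to unzip and DOM-parse every XML part
    in a document package (both ODF and OOXML are Zip archives of XML)."""
    total = 0.0
    for _ in range(repeats):
        start = time.time()
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith('.xml'):
                    minidom.parse(io.BytesIO(zf.read(name)))
        total += time.time() - start
    return total / repeats
```

Run over both variants of each document, the ratio of the two averages gives the per-document speedup.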

The results indicate that the ODF documents were parsed, on average, 3.6x faster than the equivalent OOXML files. Here is a plot showing the ratio of OOXML parse time to ODF parse time as a function of page count.

As you can see, there is a wide variation in this ratio, especially with shorter documents. In some cases the OOXML document took 8x or more as long to parse as the equivalent ODF document. But with longer documents the variation settles down to the 3.6x factor mentioned above.

Now how do we explain this? A rough model of XML parsing performance is that it has a fixed overhead to start up, initialize data structures, parse tables, etc., and then some incremental cost dependent on the size and complexity of the XML document. Most systems in the world work like this, fixed overhead plus incremental cost per unit of work. This is true whether we’re talking about XML parsing, HTTP transfers, cutting the lawn or giving blood at a blood bank. The general insight into these systems is that where the fixed overhead is significant, you want to batch up your work. Doing many small transactions will kill performance.

So one theory is that OOXML is slower because of the cost of initializing more XML parses. But it could also be because the aggregate size of the XML files is larger. More testing would be required to gauge the relative contribution of these two factors. However, one thing is clear: although this test was done with minidom on Python, the results are of wide applicability. I can think of no platform and no XML parser for which a larger document comprised of more XML files would be faster than a smaller document made up of fewer XML files. Parsing ODF word processing documents should be faster than parsing their OOXML versions everywhere.

I’m not the first one to notice some of these differences. Rick Jelliffe did some analysis of the differences between OOXML and ODF back in August. He approached it from a code-complexity view, but noted in passing that the same word processor document loaded faster in ODF format in OpenOffice than in OOXML format in Office 2007 beta. On the complexity side he noted that the ODF markup was more complex than the parallel OOXML document. So if ODF is more complex but also smaller, this may amount to higher information density, a compactness of expression, and that could certainly be a factor in performance.

So what’s your theory? Why do you think ODF word processing documents are faster than their OOXML equivalents?

Filed Under: ODF, OOXML, Performance, XML

The Celerity of Verbosity

2006/10/17 By Rob 16 Comments

I’ve been hearing some rumblings from the north-west that Ecma Office Open XML (OOXML) format has better performance characteristics than OpenDocument Format (ODF), specifically because OOXML uses shorter tag names. Putting aside for the moment the question of whether OOXML is in fact faster than ODF (something I happen not to believe), let’s take a look at this reasonable question: What effect does using longer, humanly readable tags have on performance compared to using more cryptic terse names?

Obviously there are a number of variables at play here:

  • What XML API are you using? DOM or SAX? The overhead of holding the entire document in memory at once would presumably cause DOM to suffer more from tag length than SAX.
  • What XML parser implementation are you using? The use of internal symbol tables might make tag length less important or even irrelevant in some parsers.
  • What language are you programming in? Some languages, like Java, have string interning features which can collapse all identical strings into a single instance.
  • What size document are you working with? Document parsing has fixed overhead as well as overhead proportionate to document size. A very short document will be dominated by fixed costs.

So there may not be a single answer for all users with all tools in all situations.

First, let’s talk a little about the tag length issue. It is important to note that the designer of an XML language has control over some, but not all names. For example take a namespace declaration:

xmlns:ve="http://schemas.openxmlformats.org/markup-compatibility/2006"

The values of namespace URIs are typically predetermined and are often long in order to reduce the chance of accidental collisions. But the namespace prefix is usually chosen to be quite short, and is under the control of the application writing the XML, though a specific prefix is typically not mandated by the language designer.

Element and attribute names can certainly be set by the language designer.

Attribute values may or may not be determined by the language designer. For example:

val="Heading1"

Here the name of the style may be determined by the template, or even directly by the user if he is entering a new named style. So the language designer and the application may have no control over the length of attribute values. Other attribute values may be fixed as part of the schema, and the length of those are controlled by the language designer.

Similarly, the length of character content is also typically determined by the user, since this is typically how free-form user content is entered, i.e., the text of the document.

Finally, note that the core XML markup for beginning and ending elements, delimiting attribute values, character entities etc., are all non-negotiable. You can’t eliminate them to save space.

Now for a little experiment. For the sake of this investigation, I decided to explore the performance of a DOM parse in Python 2.4 of a medium-sized document. The document I picked was a random, 60 page document selected from Ecma TC45’s XML document library which I converted from Microsoft’s binary DOC format into OOXML.

As many of you know, an OOXML document is actually multiple XML documents stored inside a Zip archive file. The main content is in a file called “document.xml” so I restricted my examination to that file.

So, how much overhead is there in our typical OOXML document? I wrote a little Python script to count up the size of all of the element names and attribute names that appeared in the document. I counted only the characters which are controllable by the language designer. So w:pPr counts as three characters, counting only “pPr”, since the namespace prefix and XML delimiters cannot be removed. “pPr” is what the XML specification calls an NCName, a non-colonized name, one not qualified by a namespace prefix. There were 51,800 NCNames in this document, accounting for 16% of the overall document size. The average NCName was 3.2 characters long.
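The counting script amounted to something like this (a sketch; whether to count xmlns declarations is a judgment call, and I exclude them here since neither the prefix nor the attribute name is the language designer’s choice):

```python
from xml.dom import minidom

def ncname_stats(path):
    """Tally NCNames (the local part of element and attribute names,
    e.g. "pPr" out of "w:pPr") and report count, total characters,
    and average length."""
    names = []

    def walk(node):
        names.append(node.tagName.split(':')[-1])
        if node.attributes:
            for i in range(node.attributes.length):
                attr = node.attributes.item(i)
                if not attr.name.startswith('xmlns'):  # skip ns declarations
                    names.append(attr.name.split(':')[-1])
        for child in node.childNodes:
            if child.nodeType == child.ELEMENT_NODE:
                walk(child)

    walk(minidom.parse(path).documentElement)
    total = sum(len(n) for n in names)
    return len(names), total, total / len(names)
```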

For comparison, a comparably sized ODF document had an average NCName length of 7.7, and NCNames represented 24% of the document size.

So, ODF certainly uses longer names than OOXML. Personally I think this is a good thing from the perspective of readability, a concern of particular interest to the application developer. Machines will get faster, memory will get cheaper, bandwidth will increase and latency will decrease, but programmers will never get any smarter and schedules will never allow enough time to complete the project. Human evolution progresses at too slow a speed. So if a small trade-off must be made between readability and performance, I usually favor readability. I can always tune the code to make it faster. But developers are at a permanent disadvantage if the language uses cryptic names. I can’t tune the developers.

But let’s see if there is really a trade-off to be made here at all. Let’s measure, not assume. Do longer names really hurt performance as Microsoft claims?

Here’s what I did. I took the original document.xml and expanded the NCNames for the most commonly-used tags. Simple search and replace. First I doubled them in length. Then quadrupled. Then 8x longer. Then 16x and even 32x longer. I then timed 1,000 parses of these XML files, choosing the files at random to avoid any bias over time caused by memory fragmentation or whatever. The results are as follows:

Expansion Factor | NCName Count | Total NCName Size (bytes) | File Size (bytes) | NCName Overhead | Average NCName Length (bytes) | Average Parse Time (seconds)
1 (original)     | 51,800       | 166,898                   | 1,036,393         | 16%             | 3.2                           | 3.3
2                | 51,800       | 187,244                   | 1,056,739         | 18%             | 3.6                           | 3.2
4                | 51,800       | 227,936                   | 1,097,443         | 21%             | 4.4                           | 3.2
8                | 51,800       | 309,320                   | 1,178,827         | 26%             | 6.0                           | 3.2
16               | 51,800       | 472,088                   | 1,341,595         | 35%             | 9.1                           | 3.3
32               | 51,800       | 797,624                   | 1,667,131         | 48%             | 15.4                          | 3.3
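The expansion and timing harness was along these lines (a sketch; the whole-word replacement is crude, and the tag list and file names are placeholders):

```python
import random
import re
import time
from xml.dom import minidom

def expand_names(xml_text, names, factor):
    """Lengthen each given NCName by simple search-and-replace,
    e.g. 'pPr' -> 'pPrpPr' at factor 2. Crude: it will also touch any
    attribute value that happens to contain a listed name as a word."""
    for name in names:
        xml_text = re.sub(r'\b%s\b' % re.escape(name), name * factor, xml_text)
    return xml_text

def time_random_parses(paths, n=1000):
    """Time n DOM parses, drawing files at random to avoid any
    ordering bias (memory fragmentation, cache warm-up, etc.)."""
    elapsed = {p: [0.0, 0] for p in paths}
    for _ in range(n):
        p = random.choice(paths)
        start = time.time()
        minidom.parse(p)
        elapsed[p][0] += time.time() - start
        elapsed[p][1] += 1
    return {p: t / c for p, (t, c) in elapsed.items() if c}
```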

If you like box-and-whisker plots (I sure do!) then here you go:

What does this all mean? Even though we expanded some NCNames to 32 times their original length, yielding a nearly 5x increase in the average NCName length, it made no significant difference in parse time. There is no discernible slowdown as the element and attribute names increase in length.

Keep in mind again that typical ODF documents show an average NCName length of 7.7. The above tests dealt with lengths twice that amount, and still showed no slowdown.

“Myth busted.” I hand this topic back to the spreaders of such FUD; it is up to them to substantiate their contrary claims.

Filed Under: ODF, OOXML, Performance, XML

A bit about the bit with the bits

2006/10/15 By Rob 16 Comments

I had an interesting meal in Paris a few weeks ago at a small bistro. I like Louisiana Cajun-style food, especially spicy andouille sausage, so when I saw “andouillette” on the menu, my stomach grumbled in anticipation. Certainly, the word ended in “ette”, but even my limited knowledge of French told me that this is just a diminutive ending. So maybe these sausages were small. No big deal, right?

When my lunch arrived, something was not quite right. First, this did not smell like any andouille sausage I had ever had. It was a familiar scent, but I couldn’t quite place it. But as soon as I cut into the sausage, and the filling burst out of the casing, it was clear what I had ordered. Tripe. Chitterlings. Pig intestines. With french fries.

I then knew where I had smelt this before. My grandfather, a Scotsman, was fond of his kidney pies and other dishes made of “variety meats”. This is food from an earlier time. The high fat content, and (in earlier days at least) cheaper prices of these cuts of meat provided essential meals for the poor. Although my grandfather ate these dishes out of preference, I’m pretty sure that his grandfather ate them out of necessity. How times change.

This was brought to mind recently as I was reading the “final draft” of the Ecma Office Open XML (OOXML) specification, where I found something that was probably once done out of necessity in the memory-poor world of 1985, but now looks like an anachronism in the modern world of XML markup.

I’m talking about bitmasks. If you are a C programmer then you know already what I am talking about.

In C, imagine you want to store values for a number of yes/no (Boolean) questions. C does not define a Boolean type, so the convention is to use an integer type and set it to 1 for true and 0 for false. (Or in some conventions, 0 for true and anything else for false. Long story.) The smallest variable you can declare in C is a “char” (character) type, on most systems 8 bits (1 byte) long, or even padded to a full 16 bits. But the astute reader will notice that a yes/no Boolean question really expresses only 1 bit of information, so storing it in an 8-bit character is a waste of space.

Thus the bitmask, a technique used by C programmers to encode multiple values into a single char (or int or long) variable by ascribing meaning to individual bits of the variables. For example, an 8-bit char can actually store the answer to 8 different yes/no questions, if we think of it in binary. So 10110001 is Yes/No/Yes/Yes/No/No/No/Yes. Expressed as an integer, it can be stored in a single variable, with the value of 177 (the decimal equivalent of 10110001).

The C language does not provide a direct way to set or query the value of an individual bit, but it does provide some “bitwise” operators that can be used to do so indirectly in a bitmask. So if you want to test whether the fifth bit (counting from the right) is true, you do a bitwise AND with the number 16 and see if the result is anything other than zero. Why 16? Because 16 in binary is 00010000, so doing a bitwise AND will extract just that single bit. Similarly, you can set a single bit by doing a bitwise OR with the right value. This is one of the reasons why facility with binary and hexadecimal representations is important for C programmers.
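The same technique, transcribed into Python for illustration (the flag names are invented; OOXML’s bitmasks carry no names at all, which is rather the point):

```python
# Hypothetical conditional-formatting flags, one bit each.
APPLY_FIRST_ROW = 0x01   # 00000001
APPLY_LAST_ROW  = 0x02   # 00000010
APPLY_FIRST_COL = 0x04   # 00000100
APPLY_LAST_COL  = 0x08   # 00001000
APPLY_BANDING   = 0x10   # 00010000 -- the fifth bit, decimal 16

flags = 0b10110001       # eight yes/no answers packed into one byte
assert flags == 177      # the decimal equivalent

# Query the fifth bit (counting from the right): AND with 16.
assert (flags & APPLY_BANDING) != 0      # bit five of 10110001 is set

# Set a single bit: OR with its value.
flags |= APPLY_LAST_ROW                  # turn on the second bit
assert flags == 0b10110011
```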

So what does this all have to do with OOXML?

Consider this C-language declaration:

typedef struct tagLOCALESIGNATURE {
DWORD lsUsb[4];
DWORD lsCsbDefault[2];
DWORD lsCsbSupported[2];
} LOCALESIGNATURE, *PLOCALESIGNATURE;

This, from MSDN, is described as a memory structure for storing:

…extended font signature information, including two code page bitfields (CPBs) that define the default and supported character sets and code pages. This structure is typically used to represent the relationships between font coverage and locales.

Compare this data structure to the XML defined in section 2.8.2.16 (page 759) of Volume 4 of the OOXML final draft:

The astute reader will notice that this is pretty much a bit-for-bit dump of the Windows SDK memory structure. In this case the file format specification provides no abstraction or generalization. It is merely a memory dump of a Windows data structure.

This is one example of many. Other uses of bitmasks in OOXML include things such as:

  • paragraph conditional formatting
  • table cell conditional formatting
  • table row conditional formatting
  • table style conditional formatting settings exception
  • pane format filter

If this all sounds low-level and arcane, then you perceive correctly. I like the obscure as much as the next guy. I can recite Hammurabi in Old Babylonian, Homer in Greek, Catullus in Latin and Anonymous in Old English. But when it comes to an XML data format, I seek to be obvious, not obscure. Manipulating bits, my friends, is obscure in the realm of XML.

Why should you care? Bitmasks are used by C programmers, so why not in XML? One reason is that addressing bits within an integer runs into platform-specific byte-ordering differences. Different machine processors (physical and virtual) make different assumptions. The two popular conventions go by the names of big-endian and little-endian. It would divert me too far from my present argument to explain the significance of that, so if you want more detail I suggest you seek out a programmer with grey hairs and ask him about byte-ordering conventions.

A second reason to avoid bitmasks in XML is that they opt out of the XML data model. You’ve created a private data model inside an integer, and it cannot be described or validated by XML Schema, RELAX NG, Schematron, etc. Even XSLT, the most-used method of XML transformation today, lacks functions for bit-level manipulation. TC45’s charter included the explicit goal of:

…enabling the implementation of the Office Open XML Formats by a wide set of tools and platforms in order to foster interoperability across office productivity applications and with line-of-business systems

I submit that the use of bitmasks is not the thing to do if you want support in a “wide set of tools and platforms”. It can’t be validated and it can’t be transformed.

Thirdly, the reasons for using bitmasks in the first place are not relevant in XML document processing. Don’t get me wrong: I’m not saying bit-level data structures are always wrong on all occasions. They are certainly the bread and butter of systems programmers, even today, and they were truly needed in the days when data was transferred via XModem on 12kbps lines. But in XML, where the representation of the data is already an expansive text representation chosen to facilitate cross-platform use, trying to save a byte of storage here or there, at the expense of the additional code and complexity required to deal with bitmasks, is the wrong trade-off. Remember, in the end the XML gets zipped up anyway, and will typically end up at 10-20% of the size of the same document in DOC format. So these bitmasks aren’t really saving you much, if any, storage.

Fourthly, bitmasks are not self-describing. If I told you the “table style conditional formatting exception” had the value 32, would that mean anything to you? Or would it send you hunting through a 6,000+ page specification in search of a meaning? But what if I told you that the value was “APPLY_FIRST_ROW”; then what would you say? A primary virtue of XML is that it is humanly readable. Why throw that advantage away?

Finally, there are well-supported alternatives to bitmasks in standard XML, such as enumeration types in XML Schema. Why avoid a data representation that allows both validation and manipulation by common XML tools?

It seems to me that the only reason bitmasks were used here is that the Excel application already used them. It is much easier for Microsoft to make the specification match the source code than to make a standard that is good, platform- and application-neutral XML.

So, for the second time in a month the thought enters my mind: “You expect me to eat this tripe?!”

Filed Under: OOXML

When language goes on holiday

2006/10/15 By Rob 4 Comments

This apt phrase is from Wittgenstein, Philosophical Investigations, section 38, “Philosophical problems arise when language goes on holiday”. One cannot be sloppy in language without at the same time being sloppy in thought.

Of course, this thought is not new. In Analects 13:3, Confucius is given a hypothetical question by a disciple: “If the ruler of Wei put the administration of his state in your hands, what would you do first?”. Confucius replied, “There must be a Rectification of Names,” explaining:

If language is not correct, then what is said is not what is meant; if what is said is not what is meant, then what must be done remains undone; if this remains undone, morals and art will deteriorate; if justice goes astray, the people will stand about in helpless confusion. Hence there must be no arbitrariness in what is said. This matters above everything.

In that spirit, let us talk of “choice”, a word loaded with meaning. Choice is good, right? Who would voluntarily give up their god-given right to choose for himself? Reducing choice is immoral. A central role of government is to ensure that we can choose freely. For a market to thrive it must be free of every regulation that reduces our ability to choose. These are all self-evident truths.

Or are they?

Let me set you a problem. I place before you a glass of water. Whether it is half full or half empty I leave to your imagination. What use is this glass of water to you? Certainly you can drink it. Or you could sell it to someone else. Or you could create a derivative option to buy the water, and sell this option to someone else. Or you could pledge the water as collateral for some other purchase. You have several options, several choices. But suppose you are thirsty. Then what do you do with this nice, cold glass of water? If you drink it, then you can no longer sell it, sell options on it, or pledge it. Drinking the water eliminates choice. So better not to drink it. Just let it sit there, on the table. But still you get thirstier and thirstier.

What a cruel dilemma I’ve given you! You cannot drink without reducing your future options, without eliminating choice. Of course, the water slowly gets warmer and evaporates. Even not choosing is itself a choice.

The Moving Finger writes; and, having writ,
Moves on: nor all your Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all your Tears wash out a Word of it.
— Omar Khayyam

How are we to make sense of this paradox? The fact is that every decision, every choice you make, commits you and eliminates some other choices. We choose because without choosing we cannot claim the value of a single path among the alternatives. If you want to quench your thirst then you must drink the water. It is that simple.

So I’ve found it amusing to see how Microsoft and their supporters constantly attack open source and open standards on the grounds that they reduce choice. For example, Microsoft’s lobbying arm, with the Orwellian doublespeak name “The Freedom to Innovate Network” lists this among its policy talking points:

[G]overnments should not freeze innovation by mandating use of specific technology standards

This talking point is picked up and repeated. Open Malaysia picks up on a local news article which quoted a Microsoft director speaking on Malaysia’s move toward favoring Free and Open Source Software (FOSS) in government procurements:

My opinion is that it [the policy] limits choice as the country has a software procurement preference policy

The Initiative For Software Choice is the latest face on the hundred-headed hydra spreading FUD around the world. However they have recently had the embarrassment of seeing an example of their handiwork leaked to the press which is worth a read in full.

This in itself is neither new nor news, but it has just recently occurred to me that this is all just an abuse of language, with no substance behind it. When one adopts a technology standard, one does it with some desired outcome in mind. One chooses this path in order to receive that benefit. Adopting a standard is like drinking a glass of water. You do it because you are thirsty.

A recent Danish report (the “Rambøll Report”) looked at the significant cost savings of moving the Danish government to OpenOffice/ODF compared to using MS Office with OOXML. Is it wrong to choose a less expensive alternative? Or is it better not to choose at all, and forgo the cost savings?

I think we need to all ask ourselves what we thirst for. Are you suffering from vendor lock-in? Are your documents tied to a single platform and vendor? Are you overpaying for software of which you use only a fraction of the functionality? Are you unable to move to a more robust desktop platform because your application vendor has tied its applications to a single platform? If you are thirsty, I have one word of advice: “Drink”.

Filed Under: ODF, OOXML, Standards

A Leap Back

2006/10/12 By Rob

1/23/2007 — A translation of this post, in Spanish has been provided by a reader. You can find it in the Los Trylobytes blog.

I’ve also taken this opportunity to update page and section references to refer to the final approved version of the Ecma Office Open XML specification, as well as providing a link to the final specification.


Early civilizations tried to rationalize the motions of the heavenly bodies. The sun rises and sets and they called that length of time a “day”. The moon changes phases and they called a complete cycle a “month”. And the sun moves through the signs of the zodiac and they called that a “year”. Unfortunately, these various lengths of time are not nice integral multiples of each other. A lunar month is not exactly 30 days. A solar year is not exactly 12 lunar months.

To work around these problems, civil calendars were introduced — some of the world’s first international standards — to provide a common understanding of date reckoning, without which commerce, justice and science would remain stunted.

In 45 B.C., Julius Caesar directed that an extra day be added to February every four years. (Interestingly, this extra day was not a February 29th as we have in leap years today, but was achieved by making February 24th last for two days.) This Julian system was in use for a long time, though even it has slight errors. By having a leap year every four years, we had 100 leap years every 400 years. However, to keep the seasons aligned properly with church feasts, etc. (who wants to celebrate Easter in winter?), it was necessary to have only 97 leap years every 400 years.

So, in 1582 Pope Gregory XIII promulgated a new way of calculating leap years, saying that years divisible by 100 would be leap years only if they were also evenly divisible by 400. So the years 1600 and 2000 were leap years, but 1700, 1800 and 1900 were not. This Gregorian calendar was initially adopted by Catholic nations like Spain, Italy and France. Protestant nations had pretty much adopted it by 1752, and Orthodox countries later still: Russia after their 1918 revolution, and Greece in 1923.
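The full Gregorian rule fits in one line of code; a sketch in Python:

```python
def is_gregorian_leap_year(year):
    """Leap if divisible by 4, except century years,
    which must also be divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1600 and 2000: leap years. 1700, 1800, 1900: not.
```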

So, for most of the world, the Gregorian calendar has been the law for 250-425 years. That’s a well-established standard by anyone’s definition. Who would possibly ignore it or get it wrong at this point?

If you guessed “Microsoft”, you may advance to the head of the class.

Datetimes in Excel are represented as date serial numbers, where dates are counted from an origin, sometimes called an epoch, of January 1st, 1900. The problem is that from the earliest implementations Excel got it wrong. It treats 1900 as a leap year, when clearly it isn’t under Gregorian rules, since 1900 is divisible by 100 but not by 400. This error causes functions like the WEEKDAY() spreadsheet function to return incorrect values in some cases. See the Microsoft support article on this issue.

Now I have no problem with that bug remaining in Excel for backwards-compatibility reasons. That’s an issue between Microsoft and their customers, and not my concern. However, I am quite distressed to see this bug promoted into a requirement in the Ecma Office Open XML (OOXML) specification. From Section 3.17.41 of the SpreadsheetML Reference Material, page 3305 of the OOXML specification (warning: 49MB PDF download!), “Date Representation”:

For legacy reasons, an implementation using the 1900 date base system shall treat 1900 as though it was a leap year. [Note: That is, serial value 59 corresponds to February 28, and serial value 61 corresponds to March 1, the next day, allowing the (nonexistent) date February 29 to have the serial value 60. end note] A consequence of this is that for dates between January 1 and February 28, WEEKDAY shall return a value for the day immediately prior to the correct day, so that the (nonexistent) date February 29 has a day-of-the-week that immediately follows that of February 28, and immediately precedes that of March 1.

So the new OOXML standard now contradicts 400 years of civil calendar practice, encodes nonexistent dates and returns the incorrect value for WEEKDAY()? And this is the mandated normative behavior? Is this some sort of joke?

The “legacy reasons” argument is entirely bogus. Microsoft could easily have defined the XML format to require correct dates and managed the compatibility issues when loading and saving files in Excel. A file format is not required to be identical to an application’s internal representation.

Here is how I would have done it. Define the OOXML specification to encode dates using serial numbers that respect the Gregorian leap year calculations used by 100% of the nations on the planet. Then, if Microsoft desires to maintain this bug in their product, have Excel add 1 to every date serial number of 60 or greater when loading an OOXML file, and subtract 1 from every such date when saving one. This is not rocket science. In any case, don’t mandate the bug for every other processor of OOXML. And certainly don’t require every person who wants the correct day of the week in 1900 to perform an extra calculation.
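A sketch of that compatibility shim (my proposal, not anything in the specification): the file keeps correct Gregorian serials, and Excel translates at the load/save boundary. In the 1900 system, serials 1 through 59 agree in both numberings; Excel’s internal serial 60 is the phantom February 29, 1900.

```python
def on_load(file_serial):
    """Correct Gregorian serial (from the file) -> Excel's internal
    numbering: add 1 to every serial of 60 (March 1, 1900) or greater
    to step over the phantom day."""
    return file_serial + 1 if file_serial >= 60 else file_serial

def on_save(excel_serial):
    """Excel's internal serial -> correct serial for the file: subtract
    1 from every serial of 61 or greater. (Internal serial 60 is the
    nonexistent February 29 and has no correct representation.)"""
    return excel_serial - 1 if excel_serial >= 61 else excel_serial
```

Round-tripping any real date through this pair is lossless, and nothing outside Excel ever sees the phantom day.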

Sure, this requires extra code to be added to Excel. Excel has a bug. Of course it will require code to fix a bug. Deal with it. I think the alternative of forcing the rest of the world to adopt a new calendar system is the ultimate in chutzpah. The burden of a bug should fall on the product that has the bug, not on everyone else in the world.

Further, I’d note that section 3.2.28 (page 2693) defines a workbookPr (Workbook Properties) element with several attributes including the following flag:

date1904 (Date 1904)

Specifies a boolean value that indicates whether the date systems used in the workbook starts in 1904.

A value of on, 1, or true indicates the date system starts in 1904.
A value of off, 0, or false indicates the workbook uses the 1900 date system, where 1/1/1900 is the first day in the system.

The default value for this attribute is false.

What is so special about 1904, you might ask? This is another legacy problem with Excel: implementations of Excel on the Mac, for reasons unknown to me, had an internal date origin of January 1st, 1904 rather than January 1st, 1900. This is unfortunate for Microsoft’s Mac Business Unit, and has likely been a source of frustration for them, needing to maintain these two date origins in their code.

But why is this my problem? Why should a standard XML format care about what Excel does on the Mac? Why should it care about any vendor’s quirks? If RobOffice (a fictional example) wants to use an internal date origin of March 15th, 1903, then that is my business. In my implementation I can do whatever I want. But when it comes to writing a file format standard, the caprices of my implementation should not become a requirement for all other users of the format. And if I cannot make up my mind and choose a single date origin, my indecision should not force extra code on every other implementation.

So there you have it: two ways in which Microsoft has created a needlessly complicated file format, and made your life more difficult if you are trying to work with it, all to the exclusive advantage of their own implementation. I wish I could assure you that this is an isolated example of this approach in OOXML. But sadly, it is the rule, not the exception.

Filed Under: OOXML, Popular Posts, Standards


Copyright © 2006-2026 Rob Weir · Site Policies