An Antic Disposition

Standards

The Case for a Single Document Format: Part III

2007/04/10 By Rob 14 Comments

This is Part III of a four-part post.

In Part I we surveyed a number of different problem domains, some that resulted in a single standard, some that resulted in multiple standards.

In Part II, we described the forces that tend to unify or divide standards and showed in particular how network effects can drive the adoption of a single standard.

In this Part III we’ll look at the document formats in particular, how we got to the present point, and how and why historically there has been but a single universally accepted document format.

In Part IV, we’ll tie it all together and show why there should be, and will be, only a single open digital document format.

The Meeting

It is 9:55 on an average Tuesday morning. I’m late (as usual) preparing for a meeting. With 5 minutes to go, I send out an updated meeting invite, with an updated agenda and a URL for the web conference. I also send out another email with an updated presentation attachment. It is the standard last-minute, pre-meeting shuffle that we all do. I expect that an examination of traffic statistics on IBM’s email servers would show a spike 5 minutes before every hour, as we all send out last-minute meeting updates. I log in to my web conference and dial into the call. I’ll be meeting with my teammates, some in Westford, some in Raleigh, some in Portsmouth, some in Lexington, some in Dublin and some in Shanghai, a far-flung group. I’ve worked with some of these guys for years but still have never met most of them face-to-face. This is the nature of collaboration in a modern, global company. The call starts and I take a deep breath, push off my slippers and stretch my toes. Yes, I’m leading this meeting from home today.

“Don’t be impatient, Comrade Engineer; we’ve come very far, very fast”, in the words of Yevgraf Zhivago, Alec Guinness’s character in Doctor Zhivago. Let’s flash back 10 years and remind ourselves how we worked then…

It is 9:55 on an average Tuesday morning. I’m late (as usual) preparing for a meeting. With 5 minutes to go, I print out the agenda and handouts to the laser printer down the hall. It has printed by the time I arrive, and I sort through the three or four other print jobs to find the one that is mine. I need twelve copies for the meeting, so I join the queue at the photocopier, with everyone else who also waited until the last minute to print out the materials for their meetings. It is the standard last-minute, pre-meeting shuffle that we all do. I expect that an examination of statistics on IBM’s photocopiers would show a spike 5 minutes before every hour. I head over to the conference room and start the meeting. At the end of the meeting, 80% of the printed materials will be discarded, hopefully into the recycling bin. This was the nature of collaboration in a modern, global company, circa 1995.

What has changed? Why did it change? What does this mean for document formats?

My family in documents

Let me take you on a detour, back in time, to tell a 200-year family story, illustrated with official documents of the period.

I’ll start with the following excerpt from the 1930 Federal Census returns for Abington, Massachusetts, showing my grandmother, Florence Mae Cushing, then age 18, and her parents William and Mary, and household. The columns indicate the following:

  1. Name
  2. Relationship to the head of household
  3. Whether they own or rent their dwelling
  4. Value of their dwelling
  5. Whether they own a radio
  6. Whether they own a farm
  7. Sex
  8. Race
  9. Age
  10. Marital condition
  11. Age at first marriage
  12. Whether they are in school

The thing that caught my eye about this record is that it lists a “Damon, Mary K” as William’s mother-in-law, widowed, age 73, living with them. Let’s see what we can find out about this woman. The first step is to find her maiden name. A search for her marriage record in Abington failed, so we tried instead for the birth record of Mary E. Damon (her daughter, William’s wife), which we did find in Abington’s birth register for 1887, revealing her mother’s maiden name as “Chessman”:

This then allows us to find Mary K. Chessman’s birth record, also in Abington, from 1856 listing her parents as Edward and Emily:

And then from here we can go back and find the family in the 1860 Federal Census:

We see the family as owning $500 in real estate and $100 in personal property, having 5 children, the oldest 8 years old. Mary K. is only 3.

But when I skip ahead to the 1870 Census, something is clearly wrong:

As you can see above, Emily is listed as head of household, and there is no Edward. And where is our Mary K? At age 13, she has moved out and is working as a “domestic servant” with a family of factory workers. Her sister Harriet, age 15, is also living there and working in an “eyelet factory”:

So what happened? Resolving this mystery required a bit more sleuthing, but I eventually found the answer in a response to a records request to the National Archives and Records Administration (NARA):

From this I learned that Edward Blanchard Chessman, Mary K’s father, had served in the Civil War with the Massachusetts 32nd Volunteers and had died of disease in 1863 at a military hospital in Alexandria, Virginia. This, along with a dozen pages of additional documents from NARA, detailed the pension application of his widow, the depositions of witnesses who vouched for their marriage and his service, the periodic requests for pension increases, all the way to 1903 when Emily died and her pension file was closed, marked “DEAD” with a big, bold stamp.

Since I was now tipped off to the value of pension records, I next searched for Edward’s grandfather, Ziba Chessman, who I knew had served in the Revolutionary War. I was able to locate his widow’s pension application as well:

The hand of this writer is not so easy to read, but I’d transcribe the start of it as:

Commonwealth of Massachusetts. Norfolk County. On this twenty second day of August 1838 personally appeared before Herman **** The *** of Probate in **** County, Mehitable Chessman a resident in the Town of Braintree in the County of Norfolk and state of Massachusetts aged seventy three years, who being first duly sworn according to law doth on her oath make the following declaration in order to obtain the benefit of the provision made by the Act of Congress passed July 7th 1838 entitled “An Act Granting Half Pay and Pensions to Certain Widows”, that she is the widow of Ziba Chessman late of Braintree in the County of Norfolk and state aforementioned deceased, who was a Soldier in the War of the Revolution; that her said husband Ziba Chessman enlisted into Captain Isaac Thayers or Captain Nathaniel Belchers Company in the year 1775 and served a short period of time as a private with the Massachusetts Militia, around the shores of Boston, according to the best of her knowledge….

I am in awe that these records have been maintained and preserved for so long, and made available to people like me who are researching their family tree. There is a continuity of records in New England that goes back almost 400 years. Birth, education records, draft registration, military service, marriage, court appearances and eventually death and burial. Whenever your personal life crossed paths with the government, it generated a record and this record may last forever, and more importantly, once the physical preservation aspects are taken care of, these records can be read forever.

A brief history of document technology

It is somewhat odd that we’ve been debating document formats for so long and have not really said what they are. I’ll propose the following definition for our discussion:

A document format consists of the conventions that allow a document to be fixed in a persistent state and then exchanged with other parties who are able to use these same conventions to read and further edit that document. If you and I understand the same document format, then you and I can exchange documents in that format and we can collaborate using that format.

Since around 1450, with Gutenberg’s first notable success of combining document production and automation, and even before (and since) with manual document production, there has been a single globally relevant interoperable document format — ink on paper. Everyone could create it, everyone could read it, everyone could exchange it. It worked then and it works now.

Some notable advances in documents since 1450 include the invention of pre-printed forms, around 1850. These seem obvious now, but for many years we had what were called “formulary documents”, whose boilerplate text the clerk wrote out in full for each document, in addition to the customized language for each specific instance. You can get a sense of this from Ziba Chessman’s pension application quoted earlier. From an engineering perspective you can think of this as reuse of design, but not implementation.

Having a pre-printed form was a step forward in productivity, allowing a greater degree of reuse. The Surgeon General’s form shown above is an early example. Such forms were quickly associated with bureaucracy. In fact, the first written use of the term “form-filling” in the English language (according to the Oxford English Dictionary) was in this critical view of a 19th century government office:

The waiting-rooms of that Department soon began to be familiar with his presence, and he was generally ushered into them by its janitors much as a pickpocket might be shown into a police-office; the principal difference being that the object of the latter class of public business is to keep the pickpocket, while the Circumlocution object was to get rid of Clennam. However, he was resolved to stick to the Great Department; and so the work of form-filling, corresponding, minuting, memorandum-making, signing, counter-signing, counter-counter-signing, referring backwards and forwards, and referring sideways, crosswise, and zig-zag, recommenced — Dickens, Little Dorrit (1855)

The telegraph (1837) and teletype (1910) gave new, faster ways of moving documents around. Was Morse Code a new document format? Although the telegraph operators may have worked in Morse Code, the author of the document, and the person who ultimately received and read the document still worked with ink on paper.

The typewriter (1872) increased the speed and uniformity of personal document production. This also led to a new use for carbon paper, an invention of 1806 originally created as an aid for the blind.

In the late 1880’s, Edison’s “Autographic Printing” was commercialized as the Mimeograph, giving a cheaper method of small batch document production.

Melvin Dewey (of Dewey Decimal fame) invented the hanging file folder (1893), leading to increased efficiency in document storage and retrieval.

The Harris Automatic Press Company was incorporated in 1895, ushering in the commercial use of offset printing and a 10-fold increase in document output rates.

The invention of the Soundex algorithm by Robert Russell of Pittsburgh in 1918 allowed more efficient searching of files and cards indexed by surnames, by grouping together names which were phonetically similar.
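
As an aside, the algorithm is simple enough to sketch in a few lines. The following Python rendering of standard American Soundex is my own illustration, not anything from the original post; fittingly, Soundex is also the scheme famously used to index U.S. census records like those in the family story above:

```python
def soundex(name: str) -> str:
    """Standard American Soundex: the first letter plus three digits."""
    codes = {}
    for letters, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")):
        for ch in letters:
            codes[ch] = digit
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    digits = []
    prev = codes.get(name[0], "")   # the first letter is kept as a letter, not a digit
    for ch in name[1:]:
        if ch in "hw":              # 'h' and 'w' are transparent: they do not
            continue                # separate two identical codes
        code = codes.get(ch, "")    # vowels map to "" and do act as separators
        if code and code != prev:
            digits.append(code)
        prev = code
    return (name[0].upper() + "".join(digits) + "000")[:4]

# "Chessman" and "Chesman" index together, which is exactly the point:
print(soundex("Chessman"), soundex("Chesman"))   # C255 C255
```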

In 1924, radio facsimile allowed pictures, as well as text, to be transmitted long distances.

In 1948 Xerography gave us document duplication without the use of wet, messy chemicals.

In 1969, IBM’s Charles Goldfarb, Ed Mosher and Ray Lorie invented GML, the Generalized Markup Language, the ancestor of SGML, HTML and XML.

The 1970’s saw the rise of the first computer-based word processors, including Wang’s Office Information System.

In 1974, Xerox PARC engineers created Bravo, the first WYSIWYG word processor.

In 1975, with the rise of office automation systems and early word processors, Business Week boldly proclaimed the “Paperless Office”.

At this point we reach an important fork in the road of history. What would the computer and office automation mean for the future of documents? Does the paperless office become a reality? Or do we remain with paper-based documents? While Xerox PARC engineers were developing the world’s first WYSIWYG word processor, they were also developing a system for transporting documents electronically, from one computer to another. But this innovation was dropped because it went against Xerox’s core business, the creation and duplication of paper documents. So the choice was made. Paper still ruled. Paper consumption went up, not down. The word processor made it easier to produce more paper, faster. The paperless office did not happen, at least not yet. More first-hand details on this fascinating topic can be read in Sellen & Harper’s The Myth of the Paperless Office. In their words, “…paper became a surrogate for the network, enabling users with different machines to share documents…”.

And so we continued for another 20 years with WYSIWYG word processors: WordStar, MacWrite, Writing Assistant, Manuscript, WordPerfect, Word, WordPro, etc. We all created documents and hid the files away on our hard drives in incompatible formats. When we needed to work with others we usually just printed out the document and exchanged the printout, using the 500-year-old format of ink on paper.

Let’s pause here and make some observations.

First, note the areas of sustained and recurring innovation. These have been consistent throughout the past 500 years and reflect the ongoing nature and practical concerns of business communications:

  1. Document authoring
  2. Document duplication
  3. Document distribution
  4. Filling out of forms
  5. Submission of forms
  6. Processing of forms
  7. Storage and Retrieval of documents
  8. Authentication of documents (not mentioned in the history above, but the use of notaries public and corporate seals has facilitated this with ink and paper documents, in some forms going back to ancient Rome)

Note also that the engineering progress and increases in efficiencies in these areas occurred without challenging the primacy of a single document format. The universality of ink and paper did not stifle innovation over these 500 years. On the contrary a single standard document format encouraged and focused innovation. We went from documents authored by pen, then set in moveable type, manually pressed, bound and distributed at the speed of a horse, to where we were circa 1995, when I authored documents on a computer, printed to a laser printer and then queued up at the photocopier to make copies of my agenda before the meeting started. Ink on paper — it was the standard document format for 500 years.

But of course, we don’t work this way anymore. Something changed, very recently. I don’t print out agendas any more. I send them via email. I don’t print out reports and review them with a red pen in hand. I mark them up electronically. In fact, unless I need to sign it or staple a receipt to it, I don’t print out anything. I think I can live out the remainder of my professional career on only 2 reams of paper.

What happened then to change this? Why is there less of an emphasis on printed output today? What does this mean for WYSIWYG? And what does this mean for document formats?

These questions and others will be taken up when I finish this series in Part IV.


20 April 2007 — Another editing pass, tightening up the language, but still too long. Added link to “The Myth of the Paperless Office”.

Filed Under: Standards

The Case for a Single Document Format: Part II

2007/03/22 By Rob 14 Comments

This is Part II of a four-part post.

In Part I we surveyed a number of different problem domains, some that resulted in a single standard, some that resulted in multiple standards.

In this post, Part II, we’ll try to explain the forces that tend to unify or divide standards and hopefully make sense of what we saw in Part I.

In Part III we’ll look at the document formats in particular, how we got to the present point, and how and why historically there has always been but a single document format.

In Part IV, if needed, we’ll tie it all together and show why there should be, and will be, only a single open digital document format.

To make sense of the diversity of standardization behavior reviewed in Part I it is necessary to consider the range of benefits that standards bring. Although few standards bring all of these benefits, most will bring one or more.

Variety Reduction

Standards for screw sizes, wire gauges, paper sizes and shoe sizes are examples of “variety-reducing standards”. In order to encourage economies of scale and the resulting lower costs to producers and consumers, goods that may naturally have had a continuum of allowed properties are discretized into a smaller number of varieties that will be good-enough for most purposes.

For example, my feet may naturally fit best in size 9.3572 shoes. But I do not see that size on the shelves. I see only shoes in half-size increments. Certainly I could order custom-made shoes to fit my feet exactly, but this would be rather expensive. So, accepting that the manufacturing, distribution and retail aspects of the footwear industry cannot stock 1,000’s of different shoe sizes and still sell at a price that I can afford, I buy the most comfortable standard size, usually men’s size 9.5.

And yes, Virginia, there is also an ISO Standard for shoe sizes, called ISO 9407:1991 “Mondopoint”.

Decreased Information Asymmetry

A key premise of an efficient & free market is the existence of voluntary sellers and voluntary buyers motivated by self-interest in the presence of perfect information. But the real marketplace often does not work that way. In many cases there is an asymmetry of information which hurts the consumer, as well as the seller.

For example, when you buy a box of breakfast cereal at the supermarket, what do you know about it? You cannot open the box and sample it. You cannot remove a portion of the cereal, bring it to a lab and test it for the presence of nuts or measure the amount of fiber contained in it. The box is sealed and the contents invisible. All you can do is hold and shake the box.

The disadvantage to the consumer from this information asymmetry is obvious. But the manufacturer suffers as well. This stems from the difficulty of charging a premium for special-grade products if this higher grade cannot be verified by the consumer prior to purchase. How can you sell low-fat or high-fiber or all-natural or low-carb foods and charge more for those benefits, if anyone can slap that label on their box?

The government-mandated food ingredient and nutritional labels solve the problem. The supermarket is full of standards like this, from standardized grades of eggs, meat, produce, olive oil, wine, etc. There are voluntary standards as well, like organic food labeling standards, that fulfill a similar purpose.

Compatibility

Compatibility standards, also called interface standards, provide a common technical specification which can be shared by multiple producers to achieve interoperability. In some cases, these standards are mandated by the government. For example, if you want to ship a letter using First Class postage, you must adhere to certain size and shape restrictions on the letter. If you want to send many letters at once, using the reduced bulk rate, then you must follow additional constraints on how the letters are addressed and sorted. If you want to deal with the Post Office, then these are the standards you must follow.

Similarly, if you are a software developer and you want to write an application that does electronic tax submissions, then you must follow the data definitions and protocols defined by the IRS.

Required interface standards are quite common when dealing with the government. Regulations requiring the use of specific standards also promote public safety, health and environmental protection.

And not just government. A sufficiently dominant company in an industry, a WalMart, an Amazon or an eBay, can often define and mandate the use of specific standards by their suppliers. If you want to do business with WalMart, then you must play by their rules.

Network Goods

Where it gets interesting is when compatibility standards combine with the network effect. I’m sure many of you are familiar with the network effect, but bear with me as I review.

The first person to have a telephone received little immediate value from it. All Mr. Bell could do was call Mr. Watson and tell him to come over. But the value of the telephone grew as each new subscriber was connected to the network, since there were now more people who could be contacted. Each new user brought value to all users, present and future. When the value of a technology increases when more people use it, then you have a network effect.

In a classic, maximally-connected network, like the telephone system, when you double the number of subscribers, you double the value to each user. This also causes the value of the entire network — the total value to all subscribers — to grow as the square of the number of subscribers. So double the number of participants in the network, and the value of the network goes up four-fold.
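
In symbols (my own back-of-the-envelope restatement of this rule, the usual form of Metcalfe’s law, with U(N) the per-user value and V(N) the total value):

```latex
U(N) \propto N \;\Rightarrow\; U(2N) = 2\,U(N),
\qquad
V(N) = N \cdot U(N) \propto N^{2} \;\Rightarrow\; V(2N) = 4\,V(N).
```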

Of course, this only works up to a point. There are diminishing returns. When the last rural villager in Albania gets a telephone connection, I personally will not notice any incremental benefit. But when we’re talking about the initial growth period of the technology, then the above rule is roughly the behavior we see.

Other familiar network effect technologies include the Internet’s technical infrastructure (TCP/IP, DNS, etc.), eBay, Second Life, social networking sites such as Flickr, del.icio.us or Digg, etc.

If we delve deeper we can talk about two types of network effects: direct and indirect. The direct effect, as described above, is the increased value you receive in using the system as greater numbers of other people also use the system. The indirect effects are the supply-side effects, caused by things like increased choice in vendors, increased choice in after-market options and repairs, increased cost efficiencies and economies of scale by a market that can optimize production around a single standard.

So take the example of eBay. The direct network effect is clear. The more people that use it, the more buyers and sellers are present, and the more value there is to all of the buyers and sellers. The indirect network effect is the number of 3rd party tools for listing auctions, processing sales, watching for wanted items, sniping, etc., which are available because of the concentrated attention on this one online auction site.

It might be helpful to look at this graphically. The following chart attempts to show two things:

  • How the average per-user cost of using the technology C(N) decreases as more people join the network.
  • How the average per-user utility (value) U(N) increases as more people join the network.

A few things to note:

First, utility does not increase without limit and cost does not decrease without limit. There will be diminishing returns to both. Remember that last villager in Albania.

Also, note that initially the average cost is more than the average utility. But this is only the average. Not everyone’s utility function is the same. If they were, the network would never get started. Fortunately, there is a diversity of utility functions. Some users will see more initial value than others, and they will be the early adopters. Some will see far less value than others, and they will be the late adopters.

Finally, note the point marked as the “tipping point”. This is where the largest growth occurs, when the average user’s utility is greater than the average user’s cost.
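
To make the shape of these curves concrete, here is a toy model in Python. The curve forms and parameters are my own illustration, not taken from the post’s chart:

```python
def cost(n: int) -> float:
    """Average per-user cost C(N): falls toward a floor of 1.0."""
    return 1.0 + 20.0 / n

def utility(n: int) -> float:
    """Average per-user utility U(N): rises toward a ceiling of 4.0."""
    return 4.0 * n / (n + 30.0)

# The tipping point: the first network size where U(N) exceeds C(N).
tipping = next(n for n in range(1, 10_000) if utility(n) > cost(n))
print("tipping point at N =", tipping)   # 25, with these toy parameters
```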

Network Effect Compatibility Standards

So what does this all have to do with standards? My observation is that a single standard in a domain naturally results when there are strong direct and indirect network effects. And where these network effects do not exist, or are weak, then multiple standards flourish.

This can be seen as societal value maximization. A network of N-participants has a total value proportionate to N-squared. Split this into two equally-sized incompatible networks and the value is 2*(N/2)^2 or (N^2)/2. The maximal value comes only with a single network governed by a single standard.
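
Stated generally (the k = 2 case is the one computed above), splitting one network into k equal, incompatible fragments divides the total value by k:

```latex
V(N) \propto N^{2}
\qquad\Longrightarrow\qquad
k \left( \frac{N}{k} \right)^{2} = \frac{N^{2}}{k} < N^{2} \quad \text{for } k > 1.
```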

Allowing two different networks to interoperate may be technically possible via bridging, adapting or converting, but this at best preserves only the direct network effects. The indirect effects, the economies of scale, the choice of multiple vendors, the 3rd party after-market options, etc., reach their maximum value only with a single network. The indirect network benefits essentially follow from the industry concentrating its attention and effort around a single standard. When split into multiple networks, the industry instead concentrates its attention on adapters, bridges and converters, which requires effort and expense, with the cost eventually passed on to the consumer, although it brings the consumer no net benefit over having a single network.

The Cases from Part I

Let’s finish by reviewing the cases presented in Part I, in light of the above analysis, to see if those examples make more sense now.

  • Railroad gauge — This is clearly a network compatibility standard, with strong direct and indirect effects. When everyone uses the same gauge, travelers and goods can travel to more places, faster and at less cost. The indirect effect is that it allows the train manufacturer to concentrate on producing a train that fits a single gauge. As this happens the train companies have a greater choice of whom they can buy from. Everyone wins.
  • Standard Time — This is more subtle, but it is also a network effect standard. The more people who used Standard Time, the easier it became to communicate times unambiguously and without error to others who were also using Standard Time. There is also an aspect of variety-reduction to this: having fewer local times to worry about simplified the train timetables, which made things easier for the passengers and shippers who interacted with the trains.
  • The single language for civil aeronautics. This is variety-reduction, a mandated safety standard, as well as a networked compatibility standard, where the network consists of pilots and control towers.
  • Beverage can diameters — This is a variety-reducing standard. There is no network effect. Ask yourself, when you buy a can of Coke, does it bring more value to others who have also bought a can of Coke? No, it doesn’t.
  • TV signals — Clearly this is a network compatibility standard, with strong direct and indirect effects. The network is not just the viewers of TV. It also includes the networks, the local affiliates, and the companies that manufacture the hardware and software, from antennas and transmitters, to cameras, editing software, televisions and VCR’s. The complexity of this network is one reason why the government has stepped in to mandate the switch to digital television. (The other reason is the money it will get from auctioning off the radio spectrum the conversion will free up.) The free market is good at many things, but the complex conversion of an entire network of diverse and competing producers and consumers at many levels is not something it has the agility to accomplish.
  • Fire hose couplings — This started as a compatibility standard, but only at a local level. Baltimore had its own standard for its own fire company. However, as the railroad made it practical to transport fire companies from more distant cities, a larger network developed. By using the national standard hose coupling, you can now not only receive mutual assistance from other fire companies (direct value), but you also have a greater choice of whom you can buy fire hoses from (indirect value); fire hose manufacturers now have a larger market to sell into (indirect value); and the concentration on a single coupling design (variety-reduction) leads to manufacturing efficiencies and economies of scale (indirect value), as well as concentrated innovation around that standard (indirect value).
  • Safety razors — There is no network effect with razors and razor blades. The value I get from using Gillette does not vary depending on how many other people use Gillette. I would get the same shave if I were the only one using it, as if the entire world used it.
  • Video game consoles — These generally have been free of direct network effects, though there are clearly some indirect ones, in terms of varieties of titles, after-market accessories, etc. The interesting thing to watch will be to see whether the latest generation of game systems, the ones that allow play over the Internet, will lead to direct network benefits. Will this lead to standards in this area?
  • SLR lens mounts, DVD disc standards, coffee filters, vacuum cleaner bags, etc. — These are all similar, compatibility standards with no direct network effects.

Well, this is too long already, so I’ll stop here.

In Part III I’ll look at the history of document formats, and see what factors have influenced their standardization. Some questions to think about until then:

  1. Some technologies, like rail gauges, local time or fire hose couplings went many years without standardization. Then, in a brief surge of activity, they were standardized. Look at the trends or events that precipitated the need for standardization. Is there any unifying logic to why these changes occurred? Hint: there is something here more general than just the trains.
  2. In the cellular phone industry, Europe and Asia made an early decision to standardize on the GSM network, while the U.S. market fragmented between CDMA, GSM and, earlier, D-AMPS. What effects does this have on the American versus the European consumer, direct and indirect?
  3. Microsoft has repeatedly stated that they are dead-against government mandates of specific standards. But they are a member of the HighTech Digital TV Coalition, an organization which is heavily lobbying the government to mandate Digital TV standards. How do we reconcile these two positions? Are they only against mandatory standards in areas where they have a monopoly?
  4. How does any of this relate to office document formats?

In Part III, we’ll look at that last question in particular, including an illustrated review of the history of document formats.


3/23/07 — Corrections: Bell not Edison invented the telephone (Doh!). Also corrected calculation in value of two networks.

Filed Under: Standards

The Case for a Single Document Format: Part I

2007/03/18 By Rob 15 Comments

This will be a multi-part post, mixing in a little economics, a little history and a little technology — an intellectual smörgåsbord — attempting to make the argument that a single document format is the inevitable and desired outcome.

In Part I we’ll take a survey of a number of different problem domains, some that resulted in a single standard, some that resulted in multiple standards.

In Part II we’ll try to explain the forces that tend to unify or divide standards and hopefully make sense of what we saw in Part I.

In Part III we’ll look at the document formats in particular, how we got to the present point, and how and why historically there has always been but a single document format.

In Part IV, if needed, we’ll tie it all together and show why there should be, and will be, only a single open digital document format.

Let’s get started!

Standards — in some domains there is a single standard, while in other domains there are multiple standards. What is the logic of this? What domains encourage, or even demand a single standard? And where do multiple standards coexist without problems?

Let’s take a look at some familiar examples and see if we can figure out how this works. We’ll start with some examples where a single standard dominates.

Single Standards

The story of the standard rail gauge is probably familiar to you. At first each rail company laid down its own tracks to its own specifications. In the United States there were different gauges used in the North (4′ 9″) and the South (5′). This was not a major issue so long as rail travel remained local or regional. However, as the reach of commerce increased, the pain of dealing with the “break of gauge” between adjacent gauge systems increased. Passengers and goods needed to be offloaded and transferred to a different train, causing time delays and inefficient utilization of equipment. The decision was made to adopt a Standard Gauge of 4′ 9″, and an ambitious migration project took place on May 31st, 1886, when thousands of workers in the South adjusted the west rail of each track, moving it 3″ to the east to line up with the Northern gauge. Eleven thousand miles of track were converted in thirty-six hours.

It should be noted that this unification was not universally celebrated. In particular, riots occurred at some of the junction points, like Erie, Pennsylvania, where local workers stood to lose the high-paying jobs they had unloading and loading cargo onto new trains. Efficiency is often opposed by those who profited from inefficiency.

Another standard prompted by the railroad was the adoption of standard time. In earlier days each town and city had its own local time, roughly based on mean solar time. When it was noon in Chicago, it was 12:09 in Cincinnati, and 11:50 in St. Louis. The instant of local noon would be communicated to residents by a cannon shot or by dropping a ball from a tower, allowing all to synchronize their clocks. The ball drop could be observed by ships in the harbor by telescope and so was much more accurate than the cannon, since the signal was not delayed by the non-negligible travel time of sound. Some memory of this tradition continues to this day with the New Year’s Eve ball drop in Times Square.

When it took days by coach to travel from Chicago to Cincinnati, it did not matter that your watch was 9-minutes slow. Your watch probably wasn’t accurate enough to tell the difference in any case. When noon came in Cincinnati you would synchronize your watch, knowing that some of the correction was caused by the change in longitude, and some was caused by the imperfections in the watch. But the average person did not care because they did not travel all that much.

However, with the coming of the railroad and then the telegraph, everything changed. People, goods and information could be transferred at far greater speeds. The difference of 9 minutes was now significant.

Initially, each rail company defined its own time, based on the local time of its main office. Timetables would be printed up based on this time. So a large train station, which might serve six different lines, would display six different clocks, all set to different times, some 12 minutes ahead, some 15 minutes behind, etc. At one point, trains in Wisconsin were operating on 38 different times! This was not only an inconvenience to travelers, it was also increasingly a safety concern, since the use of different time systems at the same station increased the chance of collisions.

This was addressed by the adoption of Standard Time in the United States on November 18th, 1883, the so-called “Day of Two Noons”. This was the day that the Eastern, Central, Mountain, and Pacific time zones took effect, and on this day every town adjusted its local time to the Standard Time of its new time zone. If you were in the eastern half of your time zone, then when local noon came you would set your clocks back a specified number of minutes, and would thus observe noon twice. If you were in the western half of your time zone, you would advance your clocks at local noon a specified number of minutes. The contemporary coverage of this event in The New York Times is worth a read.

Over the years, the ever increasing rate of commerce and information flow has led to greater and greater precision in time-keeping, so that today with atomic clocks and UTC we can now account for the slowing of the Earth’s rotation and the insertion of occasional leap seconds.

The International Civil Aviation Organization (ICAO) is a UN agency that maintains various aeronautical standards, such as airport codes, aircraft codes, etc. They are also responsible for making English the required language for air-to-ground communications. So when an Italian plane, with an Italian crew on an Italian domestic flight contacts the approach tower at an Italian airport, manned by Italian personnel, they will contact the tower in English. Why do you think this is so?

The diameter of beverage cans has but little variation. A can of Coca-Cola and a can of Pepsi will both fit in my car’s cup holder. They also fit fine in the cup holders in my beach chair or riding lawnmower. This works with beer cans as well, with innovative holders such as the novelty beer hat. Vending machines seem to take advantage of this standard as well, since it simplifies their design. The whole beverage can ecosystem works because of standards around beverage can sizes. How is this standard maintained? Was it planned this way?

It is interesting to note that, from the beverage company’s perspective this is non-optimal. A can has minimum surface area for a given volume when it has equal height and diameter. But we never see beverage cans of that shape. Why not?

In the United States, our television signals are encoded in the NTSC system. PAL is used in most of Western Europe and Asia, and SECAM is used in France and Eastern Europe. The United States is moving to a new standard, high-definition digital television (HDTV), by February 17th, 2009. It is the law, enacted by Congress, that we must move to a new television standard, causing expense to broadcasters and consumers, as well as generating a lot of revenue for electronics manufacturers. Why did this require a law? If it was good for consumers and for manufacturers, wouldn’t the free market make this move on its own?

The Great Baltimore Fire of 1904 quickly grew beyond the control of local fire companies. As the fire spread to encompass the entire central business district, the unprecedented call went out by telegraph for assistance from fire companies from Washington, DC and Annapolis and as far away as Philadelphia, Atlantic City and New York. But when these companies arrived, with their own equipment, they found that their hose couplings were incompatible. This was a large contributing factor to the fire’s duration and destructive power. Over 1,500 buildings were destroyed in 30 hours. Within a year there was a national standard for fire hose couplings.

To these can be added the hundreds of standardized items that we work with every day, such as standardized electrical connectors, light bulbs, food nutritional labels, gasoline nozzles, network addresses, batteries, staples, toilet paper holders, telephone networks, remote control infrared signals, envelopes, paper sizes and weights, currency, plumbing fixtures, light switch face plates, radio frequencies and modulations, screws, nails and other fasteners, etc.

Multiple Standards

Now let’s switch to some examples of domains where multiple standards have flourished.

The textbook example is the safety razor. When Gillette invented the safety razor, the blades were interchangeable and disposable, made of carbon steel. As such they rusted and needed to be frequently replaced. Wilkinson Sword, later owner of the Schick brand, started making compatible stainless steel blades, which Gillette then copied. So there was a good amount of competition going on.

In the early 1970’s Gillette moved to embed the blades into disposable cartridges which, due to their patent protection, could not be copied by other manufacturers. This led to our present situation of having multiple, incompatible razor systems. Competition remains fierce, with a battle to see who can put the most blades in a cartridge, from the Gillette Trac II with two blades and the Mach 3 with three, to Schick’s Quattro with four, to Gillette’s Fusion with five. Any guesses on what is next?

Video game consoles are in a similar position. In fact, they are often called a “razor and razor blade” business, since they sell the consoles at less than cost and later make their profit selling the game cartridges in proprietary formats. There is little interest, and seemingly little demand for a universal game cartridge standard.

Another example is the realm of SLR camera lens mounts. Each camera manufacturer has their own system of incompatible lens mounts. Is one clearly better than another? Have the multiple standards encouraged innovation in the area of lens mounts over the past 40 years? Good question. All I know is I have a bag full of Minolta lenses that I can’t use anymore since I moved to a Pentax camera.

We’ve all seen the many optical storage formats in recent years. Just in the realm of writable DVD disk standards, we’ve seen DVD-R, DVD-RW, DVD+RW and DVD-RAM, many of them in single and double-sided variations.

In the past 5 years we’ve seen perhaps a dozen or more varieties and variations of memory card formats, all of them proprietary and incompatible with each other. It makes the state of optical disk formats seem regular and peaceful in comparison.

To these can be added the hundreds of daily items that have managed to avoid a single standard, such as vacuum cleaner bags, coffee filters, laptop power supplies, cell phone chargers, high definition video disc formats, surround sound audio disc formats, etc.

That is all for Part I. Some questions to ask yourself:

  1. In the examples given of domains where there is a single standard, most of them did not start off that way. Most started with many competing approaches. What forces led them to a single standard?
  2. Who won and who lost in moving to a single standard? Who decided to make the move?
  3. In the cases where there are multiple, incompatible standards, is there a market demand for unified standards? Why or why not?
  4. If a government decree came down today and mandated a single standard in those areas, what would be gained? What would be lost?

I hope you will continue on with reading Part II.

Filed Under: Standards

Essential and Accidental in Standards

2007/02/25 By Rob 16 Comments

The earliest standards were created to support the administration of the government, which in antiquity primarily consisted of religion, justice, taxation and warfare. Crucial standards included the calendar, units of length, area, volume and weight, and uniform coinage.

Uniform coinage in particular was a significant advance. Previously, financial transactions occurred only by barter or by exchanging lumps of metals of irregular purity, size and shape, called “aes rude”. With the mass production of coins of uniform purity and weight imprinted with the emperor’s portrait, money could now be exchanged by simply counting, a vast improvement over having to figure out the purity and weight of an arbitrary lump of metal. Standards reduced the friction of commercial transactions.

Cosmas Indicopleustes, a widely-traveled merchant, later a monk, writing in the 6th Century, said:

The second sign of the sovereignty which God has granted to the Romans is that all nations trade in their currency, and in every place from one end of the world to the other it is acceptable and envied by every other man and every kingdom

“You can see a lot just by observing,” as Yogi Berra once said. A coin can be read much like a book. So, what can you see by reading a coin, and what does this tell us about standards?

To the left are examples from my collection of a single type of coin. The first picture shows the obverse of one instance, followed by the reverse of eight copies of the same type.

The legend on the obverse is “FLIVLCONSTANTIVSNOBC”. The text is highly abbreviated and there are no breaks between the words, as is typical of classical inscriptions, whether on coins or monuments. Brass or marble was expensive, so space was not wasted. We can expand this inscription to “Flavius Julius Constantius Nobilissimus Caesar”, which translates to “Flavius Julius Constantius, Most Noble Caesar”.

So this is a coin of Constantius II (317-361 A.D.), the middle son of Constantine the Great. The fact that he is styled “Caesar” rather than “Augustus” indicates that this coin dates from his days as heir-designate (324-337), prior to his father’s death. We know from other evidence that this type of coin was current around 330-337 A.D.

There is not much else interesting on the obverse. Imperial portraits had become stylized so much by this period that you cannot really tell one from the other purely by the portrait.

The reverse is a bit more interesting. When you consider that such coins were produced by the millions and circulated to the far corners of the empire, it is clear the coins could carry propaganda as well as monetary value. In this case, the message is clear. The legend reads “Gloria Exercitus” or “The Glory of the Army”. Since the army’s favor was usually the deciding factor in determining succession, a young Caesar could never praise the army too much. Not coincidentally, Constantius’s brothers, also named as Caesars, also produced coins praising the army before their father’s death.

At the bottom of the reverse, in what is called the “exergue”, is where we find the mint marks, telling where the coin was minted, and even which group or “officina” within the mint produced the coin. From the mint marks, we see that these particular coins were minted in Siscia (now Sisak, Croatia), Antioch (Antakya, Turkey), Cyzicus (Kapu-Dagh, Turkey), Thessalonica (Thessaloníki, Greece) and Constantinople (Istanbul, Turkey).

The image on the reverse shows two soldiers, each holding a spear and shield, with two standards, or “signa militaria”, between them. The standard was of vital importance on the battlefield, providing a common point by which the troops could orient themselves in their maneuvers. These standards appear to be of the type used by the Centuries (units of 100 men) rather than the legionary standard, which would have the imperial eagle on top. You can see a modern recreation of a standard here.

If you look closely, you’ll notice that the soldiers on these coins are not holding the standards (they already have a spear in one hand and a shield in the other), and they lack the animal skin headpiece traditional to a standard bearer or “signifer”. So this tells us that these are merely rank and file soldiers encamped, with the standards stuck into the ground.

If you compare the coins carefully you will note some differences, for example:

  • Where the breaks in the legend occur. Some break the inscription into “GLOR-IAEXERC-ITVS”, while others have “GLORI-AEXER-CITVS”. Note that neither matches the word boundaries.
  • The uniforms of the soldiers differ, in particular the helmets. Also the 2nd coin has the soldiers wearing a sash that the other coins lack. This may reflect legionary or regional differences.
  • The standards themselves have differences, in the number of disks and in the shape of the topmost ornament.
  • The stance of the soldiers varies. Compare the orientation of their spear arm, the forward foot and their vertical alignment.

There are also differences in quality. Consider the mechanics of coin manufacture. These were struck, not cast. The dies were hand engraved in reverse (intaglio) and hand struck with hammers into bronze planchets. The engravers, called “celators”, varied in their skills. Some were clearly better at portraits. Others were better at inscriptions. (Note the serifs in the ‘A’ and ‘X’ of the 4th coin.) Some made sharp, deep designs that would last many strikes. Others had details that were too fine and wore away quickly. Since these coins are a little under 20mm in diameter, and the dies were engraved by hand, without optical aids, there is considerable skill demonstrated here, even though this time period is pretty much the artistic nadir of classical numismatics.

Despite the above differences, to an ancient Roman, all of these coins were equivalent. They all would have been interchangeable, substitutable, all of the same value. Although they differed slightly in design, they matched the requirements of the type, which we can surmise to have been:

  • obverse portrait of the emperor with the prescribed legend
  • reverse two soldiers with standards, spears and shields with the prescribed legend
  • mint mark to indicate which mint and officina made the coin
  • specified purity and weight

I’d like to borrow two terms from metaphysics: “essential” and “accidental”. The essential properties are those which an object must necessarily have in order to belong to a particular category. Other properties, those which are not necessary to belong to that category, are termed “accidental” properties. These coins are all interchangeable because they all share the same essential properties, even though they differ in many accidental properties.

As another example, take corrective eyeglasses. A definition in terms of the essential properties might be, “Corrective eyeglasses consist of two transparent lenses, held in a frame, to be worn on the face to correct the wearer’s vision.” Take away any essential property and they are no longer eyeglasses. Accidental properties might include the material used to make the lenses, the shape and color of the frame, whether single-vision or bifocal, the exact curvature of the lenses, etc.

The distinction between the essential and accidental is common wherever precision in words is required, in legislation, in regulations, in contracts, in patent applications, in specifications and in standards. There are risks in not specifying an essential property, as well as in specifying an accidental property. Underspecification leads to lower interoperability, but over-specification leads to increased implementation cost, with no additional benefit.

Technical standards have dealt with this issue in several ways. One is through the use of tolerances. I talked about light bulbs in a previous post and the Medium Edison Screw. The one I showed, type A21/E26, has an allowed length range of 125.4-134.9 mm. An eccentricity of up to 3 degrees is allowed along the base axis. The actual wattage may be as much as 4% plus 0.5 watts greater than specified. Is this just an example of a sloppy standard? Why would anyone use it if it allows 4% errors? Why not have a standard that tells exactly how large the bulb should be?

The point is that bulb sockets are already designed to accept this level of tolerance. Making the standard more precise would do nothing but increase manufacturing costs, while providing zero increase in interoperability. The reason why we have cheap and abundant lamps and light bulbs is that their interconnections are standardized to the degree necessary, but no more so.
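
As a sketch, those tolerances translate directly into acceptance checks. The numeric limits below are the ones quoted above; the function itself is just my illustration:

```python
def a21_e26_within_spec(length_mm: float, eccentricity_deg: float,
                        actual_watts: float, rated_watts: float) -> bool:
    """Check a bulb against the A21/E26 tolerances quoted above."""
    length_ok = 125.4 <= length_mm <= 134.9              # allowed overall length
    axis_ok = eccentricity_deg <= 3.0                    # base-axis eccentricity
    watts_ok = actual_watts <= rated_watts * 1.04 + 0.5  # up to 4% + 0.5 W over
    return length_ok and axis_ok and watts_ok
```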

There is often a sweet spot in standardization that gives optimal interoperability at minimal cost. Specify less than this and interoperability suffers. Specify more and implementation costs increase, with diminishing interoperability returns.

An allowance for implementation-dependent behaviors is another technique a standard can use to find that sweet spot. A standard can define some constraints, but explicitly state that others are implementation-dependent, or even implementation-defined. (Implementation-defined goes beyond implementation-dependent in that an implementation may not only choose its own behavior in this area, but must also document the behavior it has implemented.) For example, in the C or C++ programming languages the size of an integer is not specified. It is declared to be implementation-defined. Because of this, C/C++ programs are not as interoperable as, say, Java or Python programs, but they are better able to adapt to a particular machine architecture, and in that way achieve better performance. And even Java specifies that some threading behavior is implementation-dependent, knowing that runtime performance would be significantly enhanced if implementations could directly use native OS threads. Even with these implementation-dependent behaviors, C, C++ and Java have been extremely successful.
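
This implementation-defined variation is easy to observe. A minimal Python sketch, using ctypes as a window onto the platform’s C ABI (c_long, for example, is 4 bytes on 64-bit Windows but 8 bytes on 64-bit Linux):

```python
import ctypes

# Sizes of the C integer types as this platform's compiler defined them.
for ctype in (ctypes.c_short, ctypes.c_int, ctypes.c_long, ctypes.c_longlong):
    print(ctype.__name__, "is", ctypes.sizeof(ctype), "bytes")
```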

Let’s apply this line of reasoning to document file formats. Whether you are talking about ODF or OOXML there are pieces left undefined, important things. For example, neither format specifies the exact pixel-perfect positioning of text. There is a common acceptance that issues of text kerning and font rasterization do not belong in the file format, but instead are decisions best deferred to the application and operating environment, so they can make a decision based on factors such as availability of fonts and the desired output device. Similarly a color can be specified, in RGB or another color space, but these are not exact spectral values. Red may appear one way on a CRT under fluorescent light, another way on an LCD monitor in darkness, and another way on color laser output read under tungsten lighting. An office document does not specify this level of detail.

In the end, a standard is defined as much by what it does not specify as by what it does specify. A standard that specifies everything can easily end up being merely the DNA sequence of a single application.

A standard provides interoperability within certain tolerances and with allowances for implementation-dependent behaviors. A standard can be evaluated based on how well it handles such concerns. Microsoft’s Brian Jones has criticized ODF for having a flexible framework for storing application-specific settings. He lists a range of settings that the OpenOffice application stores in their documents, and compares that to OOXML, where such settings are part of the standardized schema. But this makes me wonder, where then does one store application-dependent settings in OOXML? For example, when Novell completes support of OOXML in OpenOffice, where would OpenOffice store its application-dependent settings? The Microsoft-sponsored ODF Add-in for Word project has made a nice list of ODF features that cannot be expressed in OOXML. These will all need to be stored someplace or else information will be lost when down-converting from ODF to OOXML. So how should OpenOffice store these when saving to OOXML format?

There are other places where OOXML seems to have regarded the needs of Microsoft Office, but not other implementors. For example, section 2.15.2.32 of the WordprocessingML Reference defines an “optimizeForBrowser” element which allows the notation of optimization for Internet Explorer, but no provision is made for Firefox, Opera or Safari.

Section 2.15.1.28 of the same reference specifies a “documentProtection” element:

This element specifies the set of document protection restrictions which have been applied to the contents of a WordprocessingML document. These restrictions shall be enforced by applications editing this document when the enforcement attribute is turned on, and should be ignored (but persisted) otherwise.

This “protection” relies on storing a hashed password in the XML and comparing that to the hash of the password the user enters, a familiar technique. But rather than using a secure hash algorithm, SHA-256 for example, or any other FIPS-compliant algorithm, OOXML specifies a legacy algorithm of unknown strength. Now, I appreciate the need for Microsoft to have legacy compatibility. They fully acknowledge that the protection scheme they provide here is not secure and is only there for compatibility purposes. But why isn’t the standard flexible enough to allow an implementation to utilize a different algorithm, one that is secure? Where is the allowance for innovation and flexibility?
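
For contrast, here is a sketch of the kind of flexibility being argued for: tag the stored hash with the name of the algorithm that produced it, so an implementation can choose a modern algorithm while still reading legacy documents. This is my own illustration, not anything from either specification, and a real design would also store a salt and an iteration count:

```python
import hashlib
import hmac

def verify_password(password: str, stored_digest: bytes,
                    algorithm: str = "sha256") -> bool:
    """Hash the entered password with the algorithm named in the document,
    then compare against the stored digest in constant time."""
    digest = hashlib.new(algorithm, password.encode("utf-8")).digest()
    return hmac.compare_digest(digest, stored_digest)
```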

What makes this worse is that Microsoft’s DRM-based approach to document protection, from Office 2003 and Office 2007, is entirely undocumented in the OOXML specification. So we are left with a standard with a broken protection feature that we cannot replace, while the protection that really works is in Microsoft’s proprietary extensions to OOXML that we are not standardizing. How is this a good thing for anyone other than Microsoft?

Section 2.15.3.54 defines an element called “uiCompat97To2003” which is specified simply as, “Disable UI functionality that is not compatible with Word97-2003”. But what use is this if I am using OOXML in OpenOffice or WordPerfect Office? What if I want to disable UI functionality that is not compatible with OpenOffice 1.5? Or WordPerfect 8? Or any other application? Where is the ability for other implementations to specify their preferences?

It seems to me that OOXML does in fact have application-dependent behaviors, but only for Microsoft Office, and that Microsoft has hard-coded these behaviors into the XML schema without tolerance or allowance for any other implementation’s settings.

Something does not cease to be application-dependent just because you write it down. It ceases to be application-dependent only when you generalize it and accommodate the needs of more than one application.

Certainly, any application that stores document content, styles, or layout as application-dependent settings rather than in the defined XML standard should be faulted for doing so. But I don’t think anyone has really demonstrated that OpenOffice does this. It would be easy enough to demonstrate if it were true: delete the settings.xml from an ODF document (and the reference to it from the manifest) and show that the document renders differently without it, as sketched below. If it does, then submit a bug report against OpenOffice or (since this is open source) submit a patch to fix it. A misuse of application settings is that easy to fix.
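Here is one way to run that experiment, as a minimal Python sketch (the file names are hypothetical). Since an ODF document is just a ZIP package, copy it without the settings part and without the manifest line that references it:

    import re
    import zipfile

    # Copy an ODF package, dropping settings.xml and its manifest entry,
    # so the two versions can be compared side by side in a viewer.
    def strip_settings(src, dst):
        with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
            for item in zin.infolist():
                if item.filename == "settings.xml":
                    continue  # omit the application-settings part
                data = zin.read(item.filename)
                if item.filename == "META-INF/manifest.xml":
                    # drop the <manifest:file-entry> that references settings.xml
                    data = re.sub(
                        rb'<manifest:file-entry[^>]*"settings\.xml"[^>]*/>',
                        b"", data)
                zout.writestr(item, data)

    strip_settings("report.odt", "report-stripped.odt")

If the stripped copy renders identically, then the settings part held only view state and preferences, as intended.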

But a standard that mistakes the accidental, application-dependent properties of a single vendor’s application for essential properties that everyone should implement, and does so without tolerance or allowance for other implementations, is certainly a step back for the choice and freedom of applications to innovate in the marketplace. Better to use ODF, the file format that has multiple implementations, acknowledges the propriety of sensible application-dependent behaviors, and provides a framework for recording such settings.


3/16/2007 — added Cosmas Indicopleustes quote

Filed Under: Numismatics, OOXML, Standards

Standards and Enablement

2007/02/24 By Rob 8 Comments

I’d like to synthesize some thoughts I’ve been having in recent weeks. But before I do that, let’s have a joke:

A Harvard Divinity School student reviews a proposed dissertation topic with his advisor. The professor looks over the abstract for a minute and gives his initial appraisal.

“You are proposing an interesting theory here, but it isn’t new. It was first expressed by a 4th Century Syrian monk. But he made the argument better than you. And he was wrong.”

So it is with some trepidation that I make an observation which may not be novel, well-stated, or even correct, but here it goes:

There is (or should be) an important relationship between patents and standards, or more precisely, between patent quality and standards quality.

As we all know, a patent is an exclusive property right, granted by the state for a limited period of time to an inventor in return for publicly disclosing the workings of his invention. In fact, the meaning of “to patent” was originally “to make open”. We have a lingering sense of this in phrases like “that is patently absurd”. So some public good ensues from the patent disclosure, and the inventor gets a short-term monopoly on the use of that invention in return. It is a win-win situation.

To ensure that the public gets their half of the bargain, a patent may be held invalid if there is not sufficient disclosure, if a “person having ordinary skill in the art” cannot “make and use” the invention without “undue experimentation”. The legal term for this is “enablement”. If a patent application has insufficient enablement then it can be rejected.

For example, take patent application US 20060168937, “Magnetic Monopole Spacecraft”, where it is claimed that a spacecraft of a specified shape can be powered by AC current and thereby induce a field of wormholes and magnetic monopoles. Once you’ve done that, the spacecraft practically flies itself.

The author reports that in one experiment he was personally teleported more than 100 meters through hyperspace, and in another he blew smoke into a wormhole, where it disappeared and emerged from another wormhole. However, although the inventor takes us carefully through the details of how the hull of his spacecraft was machined, the most critical aspect, the propulsion mechanism, is alluded to but never really detailed.

(Granted, I may not be counted as a person skilled in this particular art. I studied astrophysics at Harvard, not M.I.T. Our program did not cover the practical applications of hyperspace wormhole travel.)

But one thing is certain — the existence of the magnetic monopole is still hypothetical. No one has shown conclusively that they exist. The first person who detects one will no doubt win the Nobel Prize in Physics. This is clearly a case of requiring “undue experimentation” to make and use this invention, and I would not be surprised if it is rejected for lack of enablement.

I’d suggest that a similar criterion be used for evaluating a standard. When a company proposes that one of its proprietary technologies be standardized, it is making a similar deal with the public. In return for specifying the details of its technology and enabling interoperability, it gets a significant head start in implementing that standard, and will initially have the best and fullest implementation. The benefits to the company are clear. But to ensure that the public gets its half of the bargain, we should ask: is there sufficient disclosure to enable a “person having ordinary skill in the art” to “make and use” an interoperable implementation of the standard without “undue experimentation”? If a standard does not enable others to do this, it should be rejected. The public, and the standards organizations that represent them, should demand this.

Simple enough? Let’s look at the new Ecma Office Open XML (OOXML) standard from this perspective. Microsoft claims that this standard is 100% compatible with billions of legacy Office documents. But is anyone actually able to use this specification to achieve this claimed benefit without undue experimentation? I don’t think so. For example, macros and scripts are not specified at all in OOXML. The standard is silent on these features. So how can anyone practice the claimed 100% backwards compatibility?

Similarly, there are a number of backwards-compatibility “features” which are specified in the following style:

2.15.3.26 footnoteLayoutLikeWW8 (Emulate Word 6.x/95/97 Footnote Placement)

This element specifies that applications shall emulate the behavior of a previously existing word processing application (Microsoft Word 6.x/95/97) when determining the placement of the contents of footnotes relative to the page on which the footnote reference occurs. This emulation typically involves some and/or all of the footnote being inappropriately placed on the page following the footnote reference.

[Guidance: To faithfully replicate this behavior, applications must imitate the behavior of that application, which involves many possible behaviors and cannot be faithfully placed into narrative for this Office Open XML Standard. If applications wish to match this behavior, they must utilize and duplicate the output of those applications. It is recommended that applications not intentionally replicate this behavior as it was deprecated due to issues with its output, and is maintained only for compatibility with existing documents from that application. end guidance]

This sounds oddly like Fermat’s “I have a truly marvelous proof of this proposition which this margin is too narrow to contain”, but we don’t give Fermat credit for proving his Last Theorem, and we shouldn’t give Microsoft credit for enabling backwards compatibility. How is this description any different from the patent application’s claim that magnetic monopoles can drive hyperspace travel? The OOXML standard simply does not enable the functionality that Microsoft claims it contains.
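Note the asymmetry between what ships in the file and what is demanded of the implementor. As I read the compatibility-settings section, the entire on-disk representation of this behavior is a single empty flag among the document’s compatibility settings:

    <w:compat>
      <w:footnoteLayoutLikeWW8/>
    </w:compat>

Everything else, the “many possible behaviors”, must be recovered by observing Word itself.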

Similarly, Digital Rights Management (DRM) has been an increasingly prominent part of Microsoft’s strategy since Office 2003. As one analyst put it:

The new rights management tools splinter to some extent the long-standing interoperability of Office formats. Until now, PC users have been able to count on opening and manipulating any document saved in Microsoft Word’s “.doc” format or Excel’s “.xls” in any compatible program, including older versions of Office and competing packages such as Sun Microsystems’ StarOffice and the open-source OpenOffice. But rights-protected documents created in Office 2003 can be manipulated only in Office 2003.

This has the potential to make any other file format disclosure by Microsoft irrelevant. If they hold the keys to the DRM, then they own your data. The OOXML specification is silent on DRM. So how can Microsoft say that OOXML is 100% compatible with Office 2007, let alone legacy DRM’ed documents from Office 2003? The OOXML standard simply does not enable anyone else to practice interoperable DRM.

It should also be noted that the legacy Office binary formats are not publicly available. They have been licensed by Microsoft under various restrictive schemes over the years, for example, only for use on Windows, only for use if you are not competing against Office, etc., but they have never been simply made available for download. And they’ve certainly never been released under the Open Specification Promise. So lacking a non-discriminatory, royalty-free license for the binary file format specification, how can anyone actually practice the claimed 100% compatibility? Isn’t it rather unorthodox to have a “standard” whose main benefit is claimed to be 100% compatibility with another specification that is treated as a trade secret? Doesn’t compatibility require that you disclose both formats?

Now, what is probably true is that Microsoft Office 2007, the application, is compatible with legacy documents. But that is something else entirely. That fact would be true even if OOXML were not approved as an ISO standard, or even if it were not an Ecma standard. In fact, Microsoft could have stuck with proprietary binary formats in Office 2007 and it would still be true. But by the criterion of whether a person having ordinary skill in the art can practice the claimed compatibility with legacy documents, this claim falls flat on its face. By accepting this standard without sufficient enablement in the specification, the public risks giving away its standards imprimatur to Microsoft without getting fair disclosure or the expectation of interoperability in return.

Filed Under: OOXML, Standards

