An Antic Disposition


Archives for 2009

ODF Plugfest

2009/06/26 By Rob 14 Comments

Although the term may be alien to some, “plugfests” have been around for some 20 years. A plugfest is when implementors of the same interface get together and test the interoperability of their products. In the beginning this was done with wired standards such as USB (thus “plug”). Over the years the term was applied to networking at all higher levels of the protocol stack. The concept is also applicable to document exchange formats like ODF.

We had an ODF Plugfest last week in the Hague. Although we’ve had interoperability workshops and camps before that attracted a handful of vendors, this was the first one that had nearly universal participation from ODF vendors. I’m not going to recap the details of the plugfest. Others have done that already. But I will share with you some of my conclusions, based on long discussions with other participants, from whose insights I have greatly benefited.

In an ideal world, specifications would be perfect and software applications would be bug-free and users would read the manuals and we would achieve perfect interoperability instantly by anointment of the salubrious unction of standardization. But to the extent this planet’s population obdurately persists in imperfection, we are resigned to make additional efforts in pursuit of interoperability. We are not alone in this regard. The only standards that don’t need to work on interoperability are those standards that no one implements.

We should use every licit technique at our disposal to give the user the best experience with ODF we can. In a competitive market you cannot get away with telling your customer, “Sorry, your spreadsheet doesn’t work because page 652, clause 23 says ‘should’ rather than ‘shall’.” If you did that you would not have that customer for long. (Unless, of course, you have a monopoly, in which case many seemingly irrational, anti-consumer actions can occur, seemingly without consequences.)

Further, I assert:

  1. Users want real-world interoperability, and not excuses
  2. Real-world interoperability is what users see and achieve in practice
  3. Where vendors have the will to interoperate, achieving interoperability is a known technical problem, with known engineering solutions, but where the will to interoperate is lacking, there are no technical means of compelling interoperability
  4. Interoperability lies at the intersection of technology, engineering standards, competition law, intellectual property and economics. There are no silver bullets, although there is an arsenal of proven techniques that can help to improve interoperability
  5. Achieving interoperability is facilitated by a variety of cooperative activities, including standardization, test case creation, implementation testing, online validators, plugfests, defect collection and reporting
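To make the “online validators” item concrete, here is a minimal sketch, in Python, of the kind of package-level checks such a validator performs. The function name is mine, and a real validator (such as the ODF Toolkit Union’s) checks far more, including validation against the schemas; this only inspects the ZIP package structure:

```python
import zipfile

def check_odf_package(path):
    """Minimal package-level checks an ODF validator might perform."""
    problems = []
    with zipfile.ZipFile(path) as z:
        names = z.namelist()
        if "mimetype" not in names:
            problems.append("missing 'mimetype' entry")
        else:
            mt = z.read("mimetype").decode("ascii", "replace")
            if not mt.startswith("application/vnd.oasis.opendocument."):
                problems.append("unexpected media type: %r" % mt)
        if "content.xml" not in names:
            problems.append("missing 'content.xml'")
    return problems
```

A well-formed ODF package returns an empty list of problems; anything else gives the user a concrete defect to report.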

Going forward there is a promising constellation of efforts converging around ODF interoperability:

  • The OASIS ODF Interoperability and Conformance TC, charged with creating an ODF test suite
  • The OASIS ODF TC, finishing up work on ODF 1.2
  • OfficeShots.org, providing a way to test the interoperability of a document in many ODF editors
  • The ODF Toolkit Union, especially their open source ODF Validator
  • The Plugfest participants, who continue to add information and test scenarios to the plugfest’s wiki.
  • Groups such as OpenDoc Society and OpenForum Europe which lend their organizational skills and enthusiasm to the effort, and often much more.

So, we’re moving in the right direction. The key thing will be to sustain the momentum from the Plugfest and transition it into an ongoing effort, a Perpetual and Virtual Plugfest where the effort and the progress are continuous.

[6/29/09: I’ve received some emails on the photo, so here are the details:

The picture was taken at 3:30PM on the 2nd day of the workshop.

The lens was a Pentax DA 10-17mm “fisheye” zoom at 10mm. So that explains the projection distortion. The graininess and B&W was from post-processing using Nik Software’s Silver Efex Pro and Sharpener Pro.]

Filed Under: ODF

ODF TC timeline

2009/06/23 By Rob 3 Comments

I used a variation of this chart at the recent ODF Plugfest in the Netherlands. But the aspect ratio of a presentation slide doesn’t suit this type of chart well, so here is a fuller version of what I showed there.

Those who are not familiar with standards development are sometimes amazed at how long it takes to develop a good standard. Perhaps the single-vendor, 6,000 page, 12-month escapade of OOXML in Ecma has skewed expectations. Fortunately, OOXML is the exception, not the rule. Achieving a multi-vendor consensus around a substantial technical standard will always be time-consuming, but it is time that is well spent.

OASIS standards go through several stages of development:

  1. Working Draft (WD)
  2. Committee Draft (CD)
  3. Public Review Draft
  4. Committee Specification
  5. OASIS Standard

Progressing from one step to another is by ballot. The first 4 stages are advanced by vote of the Technical Committee (TC), while the last stage (OASIS Standard) is by a ballot of all OASIS members. As a draft advances through stages 1-4, an increasing degree of consensus is required. So, a CD requires only simple majority, whereas a Committee Specification requires 2/3 approval, with no more than 1/4 disapproval. Some of these stages allow iteration. So we can, and typically do, have several WD’s and several CD’s.
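The ballot thresholds described above can be expressed in a few lines. This is only a sketch of the fractions; the actual OASIS process rules (quorum, voting eligibility, and so on) have more detail, and I am assuming here that the thresholds are computed over votes cast:

```python
def committee_draft_passes(yes, no):
    """A Committee Draft needs only a simple majority of votes cast."""
    return yes > no

def committee_spec_passes(yes, no):
    """A Committee Specification needs at least 2/3 approval and no more
    than 1/4 disapproval (computed here over votes cast, an assumption)."""
    cast = yes + no
    if cast == 0:
        return False
    return yes * 3 >= cast * 2 and no * 4 <= cast
```

So a ballot of 8 yes and 2 no advances a Committee Specification, but 6 yes and 3 no does not, since a third of the voters disapproved.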

If you want the nitty-gritty details, here is a flow chart of the OASIS standards approval process.

I occasionally get a question along the lines of: “What has the ODF TC been doing for the past couple of years?” The following timeline should give you an idea. I’ve indicated the time spent developing ODF 1.0 and ODF 1.1, along with some other milestone activities, such as the PAS transposition of ISO/IEC 26300, the publication of ODF 1.0 Approved Errata 01 and the creation of the various ODF subcommittees. I’ve also indicated the dates of each of the ODF 1.2 WD’s and CD’s.

As you can see, we’ve been quite busy. After iterating on WD’s during 2007 and 2008, we’ve now moved on to CD’s. This is not a drawn out process, but simply the ODF TC working with full transparency, making all of the intermediate drafts available for public inspection.

All of the planned feature work for ODF 1.2 is now completed. The remaining work is to address the various editorial and technical comments that have been submitted to our comment list, as well as comments from TC members and JTC1/SC34. The goal is to have no known defects in ODF 1.2 before we send it out for a Public Review. Of course, previously-unknown defects will likely be identified during the Public Review, and we have a process for handling these. I’ll comment more on that process, and Public Reviews in general, when we get closer to that stage.

Filed Under: ODF

ODF Lies and Whispers

2009/06/09 By Rob 21 Comments

There is an interesting disinformation campaign being waged against ODF. You won’t see this FUD splattered across the front pages of blogs or press releases. It is the kind of stuff that is spread by email and whispers, and you or I rarely will see it in the light of day. But occasionally some of it does cross my desk, and I’d like to share with you some recent examples.

First up is this instance, from a small Baltic republic, where a rather large US-based software company was recently arguing to the national standards committee for the adoption of OOXML instead of ODF. Here are some of the points made by this large company in a letter:

There is no software that currently implements ODF as approved by the ISO

(They then link to Alex Brown’s comment from Wikipedia). I think this demonstrates the triangle-trade relationship among Microsoft, Alex Brown (and other bloggers) and Wikipedia, by which Microsoft FUD is laundered via intermediaries to Wikipedia for later reference as newly minted “facts”. No wonder one of Microsoft’s first actions during their OOXML push was to seize control of the Wikipedia articles on ODF and OOXML via paid consultants. In any case, Alex’s claims were rebutted long ago.

ODF has a number (more than a hundred) of technical flaws which haven’t been addressed for 3 years despite change requests addressed to OASIS by countries such as Japan and United Kingdom. There are discussions between OASIS and ISO/IEC JTC 1 SC 34 regarding true ownership of ISO ODF, which is a reason why the flaws in ISO ODF aren’t being addressed. In a recent SC 34 meeting in Prague a new ISO ODF maintenance committee has been formed because ISO / IEC 26300: 2006 is not being presently maintained.

This is not true. First, the ODF TC has received zero defect reports from any ISO/IEC national body other than Japan. Second, we responded to the Japanese defect report last November. Amazingly, Alex Brown is implicated in this FUD as well. It was false then and it is false now. At the exact time Alex was quoted in the press as saying that the ODF TC was not acting on defect reports (October 8th, 2008), we had in fact already sent our response to the defect report out to public review (August 7th, 2008) and then completed that review (August 22nd), after quite a bit of active technical discussion with the submitter of the original defect report (Murata Makoto). How Alex translated that into “Their defect reports are being shelved” and “Oasis has not been acting on reports of defects” is beyond me. It must be particularly embarrassing that Murata-san wrote to the OASIS list, within days of Alex’s FUD, “I am happy with the way that the errata has been prepared.” How could Alex be ignorant of these facts? Why was he lying to the press? How is this conformant with his leadership role in JTC1/SC34 and his participation in BSI? Also observe the triangle-trade route of FUD in this case from Alex to Doug Mahugh to Wikipedia, this time for negative edits in the OASIS article.

IBM currently recommends not using OASIS ODF 1.1 and to instead use OASIS ODF 1.2 which is currently not complete and will not be complete and ISO certified before 2010/2011. OASIS on the other hand have started work on ODF 2.0 which will not be backwards compatible.

This is an odd one, demonstrably false. IBM Lotus Symphony supports ODF 1.1. We have no ODF 1.2 support at present. I wonder where they came up with this one? It is totally bizarre. Although we have started to gather requirements for “ODF-Next”, the contents of that version, and to what degree it will be backwards compatible, has not even been discussed by the TC, let alone determined. So this is pure FUD, trying to make ODF sound risky to adopt, and then lying about IBM’s support for it, and our position on ODF 1.2.

The list goes on, including claims that no one supports ODF 1.0 or ODF 1.1, etc., but you get the gist of it. The particulars are interesting, of course, but more so the reckless disregard for the truth, and the triangle-trade relationship between notable bloggers, Wikipedia, and Microsoft’s whisper campaign.

Another current example is part of Microsoft’s attempt to duck and cover from criticism over their interoperability-busting ODF support in Office 2007 SP2. I’ve heard variations on the following from three different people in three different countries, including from government officials. So it is getting around. It goes something like this:

We (Microsoft) wanted to be more interoperable with ODF. In fact we submitted 15 proposals to the ODF TC to improve interoperability, but IBM and Sun voted them down.

Nice story, but not true. Certainly Microsoft submitted 15 proposals. But they were never voted on by the TC, because Microsoft chose not to advance them for a vote. They opted not to have these proposals considered for ODF 1.2. It was their choice alone and their decision alone not to put these items up for a vote. I would have been fine with whatever decision Microsoft wanted to make in this situation. I’m not criticizing their decision. I’m just saying we need to be clear that the outcome was entirely due to their decision, and not to blame IBM or Sun for Microsoft’s choice in this matter.

I think I can trace this FUD back to a May 13th blog post from Doug Mahugh where he wrote:

We then continued submitting proposed solutions to specific interoperability issues, and by the time proposals for ODF 1.2 were cut off in December, we had submitted 15 proposals for consideration. The TC voted on what to include in version 1.2, and none of the proposals we had submitted made it into ODF 1.2.

This certainly is an interesting statement. There is nothing I can point to that is false here. Everything here is 100% accurate. However, it seems to be reckless in how it neglects the most relevant facts, namely that the proposals did not make it into ODF 1.2 at Microsoft’s sole election. It is as if Lee Harvey Oswald had written a note: “Went to Dallas and saw a parade today. Tried to see a movie, but had to leave early. Heard later on the radio that the President was shot”. This would have been 100% accurate as well, but not the “whole truth”. In any case, the rundown of the facts in this matter is on the TC’s mailing list.

So what is one to do? You obviously can’t trust Wikipedia whatsoever in this area. This is unfortunate, since I am a big fan of Wikipedia. I want it to succeed. But since the day when Microsoft decided they needed to pay people to “improve” the ODF and OOXML articles, these articles have been a cesspool of FUD, spin and outright lies, seemingly manufactured for Microsoft’s re-use in their whisper campaign. My advice would be to seek out official information on the standards, from the relevant organizations, like OASIS, the chairs of the relevant committees, etc. Ask the questions in public places and seek a public, on-the-record response. More people are willing to lie than face the consequences of being caught lying. That is the ultimate weakness of lies. They cannot stand the light of public exposure. Sunlight is the best antiseptic.

Filed Under: ODF

The Battle for ODF Interoperability

2009/05/17 By Rob 33 Comments

Last year, when I was socializing the idea of creating the OASIS ODF Interoperability and Conformance TC, I gave a presentation I called “ODF Interoperability: The Price of Success”. The observation was that standards that fail never need to deal with interoperability. The creation of test suites and the convening of multi-vendor interoperability workshops and plugfests are signs of a successful standard: one which is implemented by many vendors, one which is adopted by many users, one which has vendor-neutral venues for testing implementations and iteratively refining the standard itself.

Failed standards don’t need to work on interoperability because failed standards are not implemented. Look around you. Where are the OOXML test suites? Where are the OOXML plugfests? Indeed, where are the OOXML implementations and adoptions? Microsoft Office has not implemented ISO/IEC 29500 “Office Open XML”, and neither has anyone else. In one of the great ironies, Microsoft’s escapades in ISO have left them clutching a handful of dust, while they scramble now to implement ODF correctly. This is reminiscent of their expensive and failed gamble on HD DVD on the Xbox, followed eventually by a quick adoption of Blu-ray once it was clear which direction the market was going. That’s the way standards wars typically end in markets with strong network effects. They tend to end very quickly, with a single standard winning. Of course, the user wins in that situation as well. This isn’t Highlander. This is economic reality. This is how the world works.

Although this may appear messy to an outside observer, our current conversation on ODF interoperability is a good thing, and further proof, to use the words of Microsoft’s National Technology Director, Stuart McKee, that “ODF has clearly won”.

Fixing interoperability defects is the price of success, and we’re paying that price now. The rewards will be well worth the cost.

We’ve come very far in only a few years. First we had to fight for the very idea and acceptance of open standards, in a world dominated by a RAND view of exclusionary standards created in smoke-filled rooms, where vendors bargained about how many patents they could load up a standard with. We won that battle. Then we had to fight for ODF, a particular open standard, against a monopolist clinging to its vendor lock-in and control over the world’s documents. We won that battle. But our work doesn’t end here. We need to continue the fight, to ensure that users of document editors, you and I, get the full interoperability benefits of ODF. Other standards, like HTML, CSS, EcmaScript, etc., all went through this phase. Now it is our turn.

With an open standard, like ODF, I own my document. I choose what application I use to author that document. But when I send that document to you, or post it on my web site, I do so knowing that you have the same right to choose as I had, and you may choose to use a different application and a different platform than I used. That is the power of ODF.

Of course, the standard itself, the ink on the pages, does not accomplish this by itself. A standard is not a holy relic. I cannot take the ODF standard and touch it to your forehead say “Be thou now interoperable!” and have it happen. If a vendor wants to achieve interoperability, they need to read (and interpret) the standard with an eye to interoperability. They need to engage in testing with other implementations. And they need to talk to their users about their interoperability expectations. This is not just engineering. Interoperability is a way of doing business. If you are trying to achieve interoperability by locking yourself in a room with a standard, then you’ll have as much luck as trying to procreate while locked in a room with a book on human reproduction. Interoperability, like sex, is a social activity. If you’re doing it alone then you’re doing it wrong.

Standards are written documents — text — and as such they require interpretation. There are many schools of textual interpretation: legal, literary, historic, linguistic, etc. The most relevant one, from the perspective of a standard, is what is called “purposive” or “commercial” interpretation, commonly applied by judges to contracts. When interpreting a document using a purposive view, you look at the purpose, or intent, of a document in its full context, and interpret the text harmoniously with that intent. Since the purpose of a standard is to foster interoperability, any interpretation of the text of a standard which is used to argue in favor of, or in defense of, a non-interoperable implementation, has missed the mark. Not all interpretations are equal. Interpretations which are incongruous with the intent of standardization can easily be rejected.

Standards cannot force a vendor to be interoperable. If a vendor wishes deliberately to withhold interoperability from the market, then they will always be able to do so, and, in most cases, devise an excuse using the text of the standard as a scapegoat.

Let’s work through a quick example, to show how this can happen.

OpenFormula is the part of ODF 1.2 that defines spreadsheet formulas. The current draft defines the addition operator as:

6.3.1 Infix Operator “+”

Summary: Add two numbers.
Syntax: Number Left + Number Right
Returns: Number
Constraints: None
Semantics: Adds numbers together.

I think most vendors would manage to make an interoperable implementation of this. But if you wanted to be incompatible, there are certainly ways to do so. For example, given the expression “1+1” I could return “42” and still claim to be conformant. Why? Because the text says “adds numbers together”, but doesn’t explicitly say which numbers to add together. If you decided to add 1 and 41 together, you could claim to be conformant. OK, so let’s correct the text so it now reads:

6.3.1 Infix Operator “+”

Summary: Add two numbers.
Syntax: Number Left + Number Right
Returns: Number
Constraints: None
Semantics: Adds Left to Right.

So, this is bullet-proof now, right? Not really. If I want to, I can say that 1+1 = 10, claiming that my implementation works in base 2. We can fix that in the standard, giving us:

6.3.1 Infix Operator “+”

Summary: Add two numbers.
Syntax: Number Left + Number Right, both in base 10 representations
Returns: Number, in base 10
Constraints: None
Semantics: Adds Left to Right.

Better, perhaps. But if I want I can still break compatibility. For example, I could say 1+1=0, and claim that my implementation rounds off to the nearest multiple of 5. Or I could say that 1+1 = 1, claiming that the ‘+’ sign was taken as representing the logical disjunction operator rather than arithmetic addition. Or I could do addition modulo 7, and say that the text did not explicitly forbid that. Or I could return the correct answer sometimes, but not other times, claiming that the standard did not say “always”. Or I could just insert a sleep(5000) statement in my code, and pause 5 seconds every time an addition operation is performed, making a useless, but conformant, implementation. And so on, and so on.
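This is exactly the gap that test suites close. A prose definition can always be lawyered around; a concrete test case cannot. Here is a sketch (the case list and harness names are mine, not OpenFormula’s) of what test cases for the “+” operator might look like:

```python
# Hypothetical test cases for the "+" operator: (formula, expected value).
PLUS_CASES = [
    ("1+1", 2),        # rules out 42, base-2 "10", logical OR, modulo 7...
    ("2+3", 5),
    ("0+0", 0),
    ("1.5+2.5", 4.0),
]

def run_plus_cases(evaluate):
    """Run every case against an implementation's evaluate(formula) function,
    returning the (formula, expected, actual) triples that disagree."""
    return [(f, want, evaluate(f)) for f, want in PLUS_CASES
            if evaluate(f) != want]
```

A conforming implementation returns an empty failure list; an implementation that answers 42, or works in base 2, fails the very first case, whatever creative reading of the text it offers.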

The old adage holds, “It is impossible to make anything fool-proof because fools are so ingenious.” A standard cannot compel interoperability from those who want to resist it. A standard is merely one tool, which when combined with others, like test suites and plugfests, helps groups of cooperating parties achieve interoperability.

Now is the time to achieve interoperability among ODF implementations. We’re beyond kind words and empty promises. When Microsoft first announced, last May, that it would add ODF support to Office 2007 SP2, they did so with many fine words:

  • “Microsoft Corp. is offering customers greater choice and more flexibility among document formats”
  • Microsoft is “committed to work with others toward robust, consistent and interoperable implementations”
  • Chris Capossela, senior vice president for the Microsoft Business Division: “We are committed to providing Office users with greater choice among document formats and enhanced interoperability between those formats and the applications that implement them”
  • “Microsoft recognizes that customers care most about real-world interoperability in the marketplace, so the company is committed to continuing to engage the IT community to achieve that goal when it comes to document format standards”
  • Microsoft will “work with the Interoperability Executive Customer Council and other customers to identify the areas where document format interoperability matters most, and then collaborate with other vendors to achieve interoperability between their implementations of the formats that customers are using today. This work will continue to be carried out in the Interop Vendor Alliance, the Document Interoperability Initiative, and a range of other interoperability labs and collaborative venues.”
  • “This work on document formats is only one aspect of how Microsoft is delivering choice, interoperability and innovative solutions to the marketplace.”

So the words are there, certainly. But what was delivered fell far, far short of what they promised. Excel 2007 SP2 strips out spreadsheet formulas when it reads ODF spreadsheets from every other vendor, and even from spreadsheets created by Microsoft’s own ODF Add-in for Excel. No other vendor does this. Spreadsheet formulas are the very essence of a spreadsheet. To fail to achieve this level of interoperability calls into question the value and relevance of what was touted as an impressive array of interoperability initiatives. What value is an Interoperability Executive Customer Council, an Interop Vendor Alliance, a Document Interoperability Initiative, etc., if they were not able to motivate the most simple act: taking spreadsheet formula translation code that Microsoft already has (from the ODF Add-in for Office) and using it in SP2?

The pretty words have been shown to be hollow words. Microsoft has not enabled choice. Their implementation is not robust. They have, in effect, taken your ODF document, written by you, by your choice, in an interoperable format, with demonstrated interoperability among several implementations, and corrupted it, without your knowledge or consent.

There is no shortage of excuses from Redmond. If customers wanted excuses more than interoperability they would be quite pleased by Microsoft’s prolix effusions on this topic. The volume of text used to excuse their interoperability failure exceeds, by an order of magnitude, the amount of code that would be required to fix the problem. The latest excuse is the paternalistic concern expressed by Doug Mahugh, saying that they are corrupting spreadsheets in order to protect the user. Using a contrived example, of a customer who tries to add cells containing text to those containing numbers, Doug observes that OpenOffice and Excel give different answers to the formula = 1+ “2”. Because all implementations do not give the same answer, Microsoft strips out formulas. Better to be the broken clock that reads the correct time twice a day, than to be unpredictable, or as Doug puts it:

If I move my spreadsheet from one application to another, and then discover I can’t recalculate it any longer, that is certainly disappointing. But the behavior is predictable: nothing recalculates, and no erroneous results are created.

But what if I move my spreadsheet and everything looks fine at first, and I can recalculate my totals, but only much later do I discover that the results are completely different than the results I got in the first application?

That will most definitely not be a predictable experience. And in actual fact, the unpredictable consequences of that sort of variation in spreadsheet behavior can be very consequential for some users. Our customers expect and require accurate, predictable results, and so do we. That’s why we put so much time, money and effort into working through these difficult issues.

This bears a close resemblance to what is sometimes called “Ben Tre Logic”, after the Vietnamese town whose demise was excused by a U.S. General with the argument, “It became necessary to destroy the village in order to save it.”

Doug’s argument may sound plausible at first glance. There is that scary “unpredictable consequences”. We can’t have any of that, can we? Civilization would fall, right? But what if I told you that the same error with the same spreadsheet formula occurs when you exchange spreadsheets in OOXML format between Excel and OpenOffice? Ditto for exchanging them in the binary XLS format. In reality, this difference in behavior has nothing to do with the format, ODF or OOXML or XLS. It is a property of the application. So, why is Microsoft not stripping out formulas when reading OOXML spreadsheet files? After all, they have exactly the same bug that Doug uses as the centerpiece of his argument for why formulas are stripped from ODF documents. Why is Microsoft not concerned with “unpredictable consequences” when using OOXML? Why do users seem not to require “accurate, predictable results” when using OOXML? Or to be blunt, why is Microsoft discriminating against their own paying customers who have chosen to use ODF rather than OOXML? How is this reconciled with Microsoft’s claim that they are delivering “choice, interoperability and innovative solutions to the marketplace”?
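For those keeping score at home, the divergence Doug describes comes down to how an application coerces a text operand in arithmetic. Here is an illustrative sketch of the two policies (my own illustration, not either product’s actual code); note that the file format never enters into it:

```python
def add_coercing(left, right):
    """Policy A: coerce numeric-looking text to a number before adding."""
    to_num = lambda v: float(v) if isinstance(v, str) else v
    return to_num(left) + to_num(right)

def add_strict(left, right):
    """Policy B: arithmetic on a text operand yields an error value."""
    if isinstance(left, str) or isinstance(right, str):
        return "#VALUE!"  # spreadsheet-style error marker (illustrative)
    return left + right
```

For = 1+ “2”, Policy A returns 3 and Policy B returns an error, and the two applications will disagree regardless of whether the spreadsheet traveled as ODF, OOXML or XLS.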

Filed Under: Interoperability, ODF Tagged With: ODF, Office, OOXML, OpenFormula

A follow-up on Excel 2007 SP2’s ODF support

2009/05/07 By Rob 36 Comments

Wow. My previous post seems to have attracted some attention. When I woke up on Monday morning, made my coffee and logged in to my email, I found out that my geeky little analysis of Office 2007 SP2’s ODF support had sparked some interest. I did not intend it to be more than an update for the handful of the “usual suspects” who regularly follow ODF issues via various blogs, many of which you see listed to your right. If I had any foreknowledge or expectation that this post would end up being on SlashDot, GrokLaw, ZDnet, IDG, Reuters, CNet, etc., I would have done a better job spell checking, and maybe toned down the rhetoric a little (just a little).

But this widespread interest in the topic tells me one thing: ODF is important. People care about it. People want it to succeed, and when this success is threatened, whether for deliberate or accidental reasons, they are upset. Although Office 2007 SP2 also added PDF and XPS support, you don’t see many stories on that at all.

I’ve been trying to respond to the many comments by anonymous FUDsters and Fanboys on various web sites where my post is being discussed. However, it is getting rather laborious swatting all the gnats. They obviously breed in stagnant waters, and there is an awful lot of that on the web. Since all links lead back here anyway, it will be much simpler to do a recap here and address some of the more widespread errors.

The talking points from Redmond seem to be consistent, along the lines of:

We did a 100% perfect and conforming implementation of ODF 1.1 to the letter of the standard. If it is not interoperable, then it is the fault of the standard or the other applications or some guy we saw sneaking around back on the night of the fire. In any case, it is not our fault. We just design, write, test and sell software to users, businesses, governments and educational institutions. We have no influence over whether our products are interoperable or not. What effect SP2 has on users or the market — that’s not our concern. Come back in 50 years when you have a 100% perfect standard and maybe we’ll talk.

In other words, all of those Interoperability Directors and Interoperability Architects at Microsoft seem to have (hopefully temporarily) switched into Minimal Conformance Directors and Minimal Conformance Architects, and are gazing at their navels. I hope they did not suffer a reduction in salary commensurate with the reduction in their claimed responsibilities.

In any case, their argument might be challenged on several grounds. First up is the question of whether the ODF documents written by Excel 2007 SP2 indeed conform to the ODF 1.1 standard. This is not a hard question to answer, but please excuse this short technical diversion.

Let’s see what the ODF 1.1 standard says in section 8.1.3 (Table Cell):

Addresses of cells that contain numbers. The addresses can be relative or absolute, see section 8.3.1. Addresses in formulas start with a “[“ and end with a “]”. See sections 8.3.1 and 8.3.1 for information about how to address a cell or cell range.

And the referenced section 8.3.1 further says:

To reference table cells so called cell addresses are used. The structure of a cell address is as follows:

  1. The name of the table.
  2. A dot (.)
  3. An alphabetic value representing the column. The letter A represents column 1, B represents column 2, and so on. AA represents column 27, AB represents column 28, and so on.
  4. A numeric value representing the row. The number 1 represents the first row, the number 2 represents the second row, and so on.
  5. This means that A1 represents the cell in column 1 and row 1. B1 represents the cell in column 2 and row 1. A2 represents the cell in column 1 and row 2.

    For example, in a table with the name SampleTable the cell in column 34 and row 16 is referenced by the cell address SampleTable.AH16. In some cases it is not necessary to provide the name of the table. However, the dot must be present. When the table name is not required, the address in the previous example is .AH16
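The addressing rules quoted above translate directly into code. Here is a minimal sketch (the function names are mine) of building a cell address from a column number and row number:

```python
def column_letters(n):
    """1-based column number to letters: 1 -> A, 26 -> Z, 27 -> AA, 28 -> AB."""
    letters = ""
    while n > 0:
        n, rem = divmod(n - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return letters

def cell_address(column, row, table=""):
    """Build an ODF cell address; the dot is required even with no table name."""
    return "%s.%s%d" % (table, column_letters(column), row)
```

So cell_address(34, 16, "SampleTable") yields "SampleTable.AH16", and with no table name, cell_address(34, 16) yields ".AH16", just as the standard’s example describes.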

So, going back to my test spreadsheets from all of the various ODF applications, how do these applications encode formulas with cell addresses:

  • Symphony 1.3: =[.E12]+[.C13]-[.D13]
  • Microsoft/CleverAge 3.0: =[.E12]+[.C13]-[.D13]
  • KSpread 1.6.3: =[.E12]+[.C13]-[.D13]
  • Google Spreadsheets: =[.E12]+[.C13]-[.D13]
  • OpenOffice 3.01: =[.E12]+[.C13]-[.D13]
  • Sun Plugin 3.0: [.E12]+[.C13]-[.D13]
  • Excel 2007 SP2: =E12+C13-D13

I’ll leave it as an exercise to the reader to determine which one of these seven is wrong and does not conform to the ODF 1.1 standard.
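For readers who would rather automate the exercise, here is a rough Python heuristic (my own sketch, not a full ODF formula parser) that checks whether every cell reference in a formula uses the bracketed, dotted form the standard describes:

```python
import re

# A conforming reference is bracketed and contains the mandatory dot,
# e.g. [.E12] or [SampleTable.AH16].  This is a heuristic, not a parser.
ODF_REF = re.compile(r"\[[A-Za-z0-9_]*\.[A-Z]+[0-9]+\]")
BARE_REF = re.compile(r"(?<![\[.A-Za-z0-9_])[A-Z]+[0-9]+")  # A1-style ref outside brackets

def uses_odf_addresses(formula: str) -> bool:
    """True if every cell reference in the formula is in ODF bracketed form."""
    stripped = ODF_REF.sub("", formula)   # remove the conforming references
    return not BARE_REF.search(stripped)  # anything left is a bare A1-style ref

print(uses_odf_addresses("=[.E12]+[.C13]-[.D13]"))  # True
print(uses_odf_addresses("=E12+C13-D13"))           # False
```

Run against the seven formulas above, it flags exactly one of them.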

Next is the question of the relationship between interoperability and conformance. So we are not building skyscrapers in the air, let’s start with a working definition of interoperability, say that given by ISO/IEC 2382-01, “Information Technology Vocabulary, Fundamental Terms”:

The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units

I think we probably have a better sense of what conformance is. Something conforms when it meets the requirements defined by a standard.

So let’s explore the relationship between conformance to a standard and interoperability.

First, does interoperability require a standard? No. There have been interoperable systems without formal standards. For example, there is a degree of interoperability among spreadsheet vendors on the basis of the legacy Excel binary file format (XLS), even though that binary format was never standardized and never defined spreadsheet formulas. Another example is the SAX XML parsing API: widely implemented, but never standardized. We may call these informal or de facto standards.

Additionally, many standards start out as informal technical agreements and specifications that achieve interoperability among a small group of users, who then move it forward to standardization so that a broader audience can benefit. But the interoperability came first and the formal standard came second. See the history of the Atom syndication format for a good example.

Second, is interoperability possible in the presence of non-conformance? Yes. For example, it is well known that the vast majority of web pages (93% by one estimate) do not conform to the HTML standard, yet the web exhibits a not insubstantial degree of interoperability in spite of this. Generally, interoperability does not require perfection. It requires good faith and hard work. If perfection were required, nothing in this world would work, would it?

Third, if a standard does not define something (like spreadsheet formulas) then I am allowed to do whatever I want, right? This is true. But further, even if ODF 1.1 did define spreadsheet formulas you would still be allowed to do whatever you want. Remember, these are voluntary standards. We can’t force you to do anything, whether we define it or not.

So what then is the precise relationship between conformance and interoperability? I’d state it as:

  • In general, conformance is neither necessary nor sufficient to achieve interoperability.
  • But interoperability is most efficiently achieved by conformance to an open standard where the standard clearly states those requirements which must be met to achieve interoperability.

In other words, the relationship comes down to the efficiency of this configuration for those who wish to interoperate. Conformance is neither necessary nor sufficient to achieve interoperability in general, but interoperability is most efficiently achieved when conformance guarantees interoperability. When I talk about “standards-based interoperability” I’m talking about the situation where you are in the neighborhood of that optimal point.

The inefficiency of other arrangements is seen with HTML and web browsers. Because of the historically low level of HTML conformance by authoring tools and users who hand-edit HTML, browsers today are much more complex than they would otherwise need to be. They need to handle all sorts of malformed HTML documents. This complexity extends to any tool that needs to process HTML. Sure, we have a pretty good grip on this now, with tools like HTML Tidy and other robust parsers, but this has come at a cost. Complexity eats up resources, both for coders and testers and at runtime, in memory and processing cycles. More complex code is harder to maintain and secure and tends to have more bugs. Greater conformance would have led to a more efficient relationship between conformance and interoperability.

Similarly, the many years of non-conformance in browsers, most notably Internet Explorer, to the CSS2 standard has resulted in an inefficiency there. From the perspective of web designers, tool authors and competing browser vendors, the lack of conformance to the standards has increased the cost needed to achieve interoperability, a cost transferred from a dominant vendor who chose not to conform to the standards, to the other vendors who did conform.

The particular efficiency of conformance to open standards lies in the clarity and freedom they provide around access to the standard and to the contingent IP rights needed to implement it.

So back to ODF 1.1. What is the relationship between conformance and interoperability there? Clearly, it is not yet at that optimal point (which few standards ever achieve) where interoperability is most-efficiently achieved. We’re working on it. ODF 1.2 will be better in that regard than ODF 1.1, and the next version will improve on that, and so on.

Does this mean that you cannot create interoperable solutions with ODF? No, it just means that, as with most standards in IT today, you need to do some interoperability testing with other vendors’ products to make sure your product interoperates, and make conformant adjustments to your product in order to achieve real-world interoperability. Most vendors who don’t have a monopoly would do this naturally, and in fact have done this, as my chart indicated. Complaining about this is like complaining about gravity or friction or entropy. Sure, it sucks. Deal with it. Although it may not pay as much as being a professional mourner, work as a programmer is more regular. And giving value to customers will always bring more satisfaction than standing there weeping about how code is hard.

In any case, this comes down to why you implement a standard. What are your goals? If your goal is to be interoperable, then you perform interoperability testing and make those adjustments to your product necessary to make it both conformant and interoperable. But if your goal is simply to fulfill a checkbox requirement without actually providing any tangible customer benefit, then you will do as little as needed. However, if your goal is to destroy a standard, then you will create a non-conformant, non-interoperable implementation, automatically download it to millions of users and sow confusion in the marketplace by flooding it with millions of incompatible documents. It all depends on your goals. Voluntary standards do not force, or prevent, one approach or another.

To wrap this up, I stand by the table of interoperability results in the previous post. SP2 has reduced the level of interoperability among ODF spreadsheets, by failing to produce conforming ODF documents, and by failing to take note of the spreadsheet formula conventions that had been adopted by all of the other vendors and that are working their way through OASIS as a standard.

If we recall the arguments used by Microsoft in the recent past, they argued that OOXML must be exactly what it is — flaws and all — in order to be compatible with legacy binary Office documents. Then they argued that OOXML could not be changed in ISO, because that would create incompatibility with the “new legacy” documents in Office 2007 XML format. But when it comes to ODF, they have disregarded all legacy ODF documents created by all other ODF vendors and taken an aloof stance that looks with disdain on interoperability with other vendors’ documents, or even documents produced by their own ODF Add-in. The sacrosanctness of legacy compatibility appears to be reserved, for strategic reasons, for some formats but not others. We’ll redefine the Gregorian calendar in ISO to be interoperable with one format if we need to, but we won’t deign, won’t stoop, won’t dirty ourselves to use the code we already have from the ODF Add-in for Microsoft Office, to make SP2 formulas interoperable with the other vendors’ products, to benefit our own users who are asking for ODF support in Office. As I said before, this ain’t right.

Filed Under: ODF


Copyright © 2006-2026 Rob Weir · Site Policies