
An Antic Disposition


OOXML

Top 10 Blog Posts of 2009

2010/01/01 By Rob 6 Comments

The 2009 wall calendar is now tossed in the recycling bin, and I look to 2010 with renewed energy and dedication. But I did want to take one last parting look at 2009, from the perspective of this blog’s server logs.

Top Blog Posts

  1. Update on ODF Spreadsheet Interoperability (May 2009)
  2. ODF Lies and Whispers (June 2009)
  3. A Game of Zendo (July 2006)
  4. A follow-up on Excel 2007 SP2’s ODF support (May 2009)
  5. The Final OOXML Update: Part I (October 2009)
  6. The Formats of Excel 2007 (January 2007)
  7. The Final OOXML Update: Part III (October 2009)
  8. Taking Control of Your Documents (March 2009)
  9. The Battle for ODF Interoperability (May 2009)
  10. The Chernobyl Design Pattern (October 2006)

Top Browsers

  • Firefox: 57.3%
  • Internet Explorer: 22.9%
  • Safari: 5.2%
  • Mozilla: 4.7%
  • Chrome: 3.8%
  • Opera: 3.4%
  • Mobile (various browsers): 1.4%
  • Konqueror: 1.3%

Top Operating Systems

  • Windows: 62.1%
  • Linux: 26.8%
  • Mac: 9.7%
  • Mobile: 1.4%

Filed Under: Blogging/Social Tagged With: ODF, OOXML

How Not to Read a Patent

2009/08/13 By Rob 11 Comments

There is perhaps no occasion where one can observe such profound ignorance, coupled with reckless profligacy, as when a software patent is discussed on the web. Note the recurring pattern, which repeats every two weeks or so. A patent issues, or a patent application is published, or a patent infringement suit is brought, and within minutes the web is full of instant pundits, telling us what the patent covers, how it should not have been granted, how it is entirely obvious, how it applies to everything in the world, and how it presages a self-induced mutually assured destruction that now leads us on to the plains of Armageddon. If I had a nickel for every time this happens…

By way of disclaimer, I am not a lawyer, but I am blessed that my self-avowed ignorance in this area is coupled with a certain knowledge of the limits of my understanding, a handicap seemingly not shared by many other commentators. I know what I do not know, and know when to seek an expert.

In the past few days we have had a bumper crop of pontification on the significance of two XML-related patents, one newly issued to Microsoft (7,571,169), and another older one (5,787,449) owned by i4i, whose infringement has resulted in a large judgment and injunction against Microsoft. I’ve found the web coverage of both patents to be an unmitigated muddle.

I’m not going to comment on the merits of either one of these patents, but I’d like to make a few basic observations that may be of some assistance to those who comment on future patent issues.

  1. A patent has a description known as the “specification”. And it has a list of numbered “claims”. Although the specification can define terms that are then referred to in the claims, it is the granted claims that define the scope of the patent, not the specification.
  2. If all you do is read the abstract and the first few paragraphs of a patent, then you may know the general topic of the patent, but you do not really know its scope. If you then go off and cry, “Oy vey, this patent covers XHTML, SVG, RDF, Pringles and the Obama healthcare plan,” then you do your readers a disservice. You must parse the very specific and often obtuse language of the claims in order to understand exactly what a patent covers. There is no shortcut. This is not like a book, where you can understand the plot by reading the back cover. But over and over again, I see people who have just read the abstract, maybe glanced at a diagram, and then feel equipped to hold forth at length on the substance of the patent.
  3. When you try to understand patent claims, you will encounter a dense form of legal English. Claims are not written for the layperson, and you should not presume that you will understand them easily. The drafting of patent claims is a black art, like writing device drivers, and if you are not versed in its intricacies, then your statements on any given patent are apt to be wide of the mark. Claims are full of magic words. Know what you do not know. If you do not recognize at sight, and know the interpretation requirements of, a means-plus-function claim (which is key in the ‘449 patent), or you are not crystal clear on the distinction between the verbs “consist” and “comprise”, then you probably should not be the first (or loudest) person to speak on what a patent claims.
  4. If you are reading an application, know that during the “prosecution” of that patent, when it is reviewed by the USPTO, some of the claims may be thrown out, for any of several reasons, including prior art identified by the examiners. However, the specification of the patent is unlikely to change much. So an issued patent often has a very broadly written specification, one that covers the entirety of the originally claimed invention, though the issued patent might have only a subset of the original claims allowed. So if you have an issued patent and you look at only the specification, you can easily be fooled into thinking it covers far more than it does. For example, the ‘169 patent from Microsoft had half the original claims thrown out in prosecution. If you don’t know that and are reading only the specification, not the granted claims, then you will incorrectly think the patent is far broader than it actually is.
  5. Know what a priority date is, and how that is affected by a continuation. I’ve read all sorts of nonsense based on not appreciating that. Take a look at the ‘169 patent, for example. It says it was filed in 2004. But if you look closely you see it was a continuation of a 2002 application. You can moan and groan all you want about prior art, but if you don’t get your dates right you’re off to a bad start in your analysis.
  6. In an infringement suit, like the one over the ‘449 patent, be sure to look at the actual court record. Typically there is a Markman (claim construction) hearing, where the court determines the meaning of terms used in the patent claims. If you have not read the court’s claim construction opinion in the i4i versus Microsoft case, then your commentary is uninformed by probably the most important document in this case (well, next to the patent itself).

Well, that’s enough for tonight. Repent. Sleep on it. And realize that making sense of a complex patent takes time, if you’re going to do it right. Ergo, the first impressions you read from the instant pundits on the web will tend to be shallow, imperfectly informed and often wrong. Heck, even everything I said in this post may be wrong.


Filed Under: Intellectual Property Tagged With: i4i, Microsoft Word, OOXML, Patents

The Battle for ODF Interoperability

2009/05/17 By Rob 33 Comments

Last year, when I was socializing the idea of creating the OASIS ODF Interoperability and Conformance TC, I gave a presentation I called “ODF Interoperability: The Price of Success”. The observation was that standards that fail never need to deal with interoperability. The creation of test suites and the convening of multi-vendor interoperability workshops and plugfests are signs of a successful standard, one which is implemented by many vendors, one which is adopted by many users, one which has vendor-neutral venues for testing implementations and iteratively refining the standard itself.

Failed standards don’t need to work on interoperability because failed standards are not implemented. Look around you. Where are the OOXML test suites? Where are the OOXML plugfests? Indeed, where are the OOXML implementations and adoptions? Microsoft Office has not implemented ISO/IEC 29500 “Office Open XML”, and neither has anyone else. In one of the great ironies, Microsoft’s escapades in ISO have left them clutching a handful of dust, while they scramble now to implement ODF correctly. This is reminiscent of their expensive and failed gamble on HD DVD for the Xbox, followed eventually by a quick adoption of Blu-ray once it was clear which direction the market was going. That’s the way standards wars typically end in markets with strong network effects. They tend to end very quickly, with a single standard winning. Of course, the user wins in that situation as well. This isn’t Highlander. This is economic reality. This is how the world works.

Although this may appear messy to an outside observer, our current conversation on ODF interoperability is a good thing, and further proof, to use the words of Microsoft’s National Technology Director, Stuart McKee, that “ODF has clearly won“.

Fixing interoperability defects is the price of success, and we’re paying that price now. The rewards will be well worth the cost.

We’ve come very far in only a few years. First we had to fight for even the idea and acceptance of open standards, in a world dominated by a RAND view of exclusionary standards created in smoke-filled rooms, where vendors bargained over how many patents they could load a standard up with. We won that battle. Then we had to fight for ODF, a particular open standard, against a monopolist clinging to its vendor lock-in and control over the world’s documents. We won that battle. But our work doesn’t end here. We need to continue the fight, to ensure that users of document editors, you and I, get the full interoperability benefits of ODF. Other standards, like HTML, CSS, EcmaScript, etc., all went through this phase. Now it is our turn.

With an open standard, like ODF, I own my document. I choose what application I use to author that document. But when I send that document to you, or post it on my web site, I do so knowing that you have the same right to choose as I had, and you may choose to use a different application and a different platform than I used. That is the power of ODF.

Of course, the standard itself, the ink on the pages, does not accomplish this by itself. A standard is not a holy relic. I cannot take the ODF standard, touch it to your forehead and say “Be thou now interoperable!” and have it happen. If a vendor wants to achieve interoperability, they need to read (and interpret) the standard with an eye to interoperability. They need to engage in testing with other implementations. And they need to talk to their users about their interoperability expectations. This is not just engineering. Interoperability is a way of doing business. If you are trying to achieve interoperability by locking yourself in a room with a standard, then you’ll have as much luck as trying to procreate while locked in a room with a book on human reproduction. Interoperability, like sex, is a social activity. If you’re doing it alone then you’re doing it wrong.

Standards are written documents — text — and as such they require interpretation. There are many schools of textual interpretation: legal, literary, historic, linguistic, etc. The most relevant one, from the perspective of a standard, is what is called “purposive” or “commercial” interpretation, commonly applied by judges to contracts. When interpreting a document using a purposive view, you look at the purpose, or intent, of a document in its full context, and interpret the text harmoniously with that intent. Since the purpose of a standard is to foster interoperability, any interpretation of the text of a standard which is used to argue in favor of, or in defense of, a non-interoperable implementation has missed the mark. Not all interpretations are equal. Interpretations which are incongruous with the intent of standardization can easily be rejected.

Standards cannot force a vendor to be interoperable. If a vendor wishes deliberately to withhold interoperability from the market, then they will always be able to do so, and, in most cases, devise an excuse using the text of the standard as a scapegoat.

Let’s work through a quick example, to show how this can happen.

OpenFormula is the part of ODF 1.2 that defines spreadsheet formulas. The current draft defines the addition operator as:

6.3.1 Infix Operator “+”

Summary: Add two numbers.
Syntax: Number Left + Number Right
Returns: Number
Constraints: None
Semantics: Adds numbers together.

I think most vendors would manage to make an interoperable implementation of this. But if you wanted to be incompatible, there are certainly ways to do so. For example, given the expression “1+1” I could return “42” and still claim to be conformant. Why? Because the text says “adds numbers together”, but doesn’t explicitly say which numbers to add together. If you decided to add 1 and 41 together, you could claim to be conformant. OK, so let’s correct the text so it now reads:

6.3.1 Infix Operator “+”

Summary: Add two numbers.
Syntax: Number Left + Number Right
Returns: Number
Constraints: None
Semantics: Adds Left to Right.

So, this is bullet-proof now, right? Not really. If I want to, I can say that 1+1=10, claiming that my implementation works in base 2. We can fix that in the standard, giving us:

6.3.1 Infix Operator “+”

Summary: Add two numbers.
Syntax: Number Left + Number Right, both in base 10 representations
Returns: Number, in base 10
Constraints: None
Semantics: Adds Left to Right.

Better, perhaps. But if I want I can still break compatibility. For example, I could say 1+1=0, and claim that my implementation rounds off to the nearest multiple of 5. Or I could say that 1+1=1, claiming that the ‘+’ sign was taken as representing the logical disjunction operator rather than arithmetic addition. Or I could do addition modulo 7, and say that the text did not explicitly forbid that. Or I could return the correct answer some times, but not other times, claiming that the standard did not say “always”. Or I could just insert a sleep(5000) statement in my code, and pause 5 seconds every time an addition operation is performed, making a useless, but conformant, implementation. And so on, and so on.
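
To make the loophole concrete, here is a minimal sketch, in Python, of two evaluators that could each claim conformance with the text quoted above. It is my own illustration, not anything from the OpenFormula draft:

def add_sensible(left, right):
    # What every reader expects: ordinary base-10 addition.
    return left + right

def add_perverse(left, right):
    # Still arguably "adds Left to Right", but first rounds to the
    # nearest multiple of 5 and then renders the result in base 2.
    # Nothing in the quoted text explicitly forbids either liberty.
    total = left + right
    rounded = 5 * round(total / 5)
    return bin(rounded)

print(add_sensible(1, 1))   # 2
print(add_perverse(1, 1))   # 0b0, i.e. the "1+1=0" reading

Both functions “add numbers together”; only one of them is interoperable.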

The old adage holds, “It is impossible to make anything fool-proof because fools are so ingenious.” A standard cannot compel interoperability from those who want to resist it. A standard is merely one tool, which, when combined with others, like test suites and plugfests, helps groups of cooperating parties achieve interoperability.

Now is the time to achieve interoperability among ODF implementations. We’re beyond kind words and empty promises. When Microsoft first announced, last May, that it would add ODF support to Office 2007 SP2, they did so with many fine words:

  • “Microsoft Corp. is offering customers greater choice and more flexibility among document formats”
  • Microsoft is “committed to work with others toward robust, consistent and interoperable implementations”
  • Chris Capossela, senior vice president for the Microsoft Business Division: “We are committed to providing Office users with greater choice among document formats and enhanced interoperability between those formats and the applications that implement them”
  • “Microsoft recognizes that customers care most about real-world interoperability in the marketplace, so the company is committed to continuing to engage the IT community to achieve that goal when it comes to document format standards”
  • Microsoft will “work with the Interoperability Executive Customer Council and other customers to identify the areas where document format interoperability matters most, and then collaborate with other vendors to achieve interoperability between their implementations of the formats that customers are using today. This work will continue to be carried out in the Interop Vendor Alliance, the Document Interoperability Initiative, and a range of other interoperability labs and collaborative venues.”
  • “This work on document formats is only one aspect of how Microsoft is delivering choice, interoperability and innovative solutions to the marketplace.”

So the words are there, certainly. But what was delivered fell far, far short of what they promised. Excel 2007 SP2 strips out spreadsheet formulas when it reads ODF spreadsheets from every other vendor, and even from spreadsheets created by Microsoft’s own ODF Add-in for Excel. No other vendor does this. Spreadsheet formulas are the very essence of a spreadsheet. To fail to achieve this level of interoperability calls into question the value and relevance of what was touted as an impressive array of interoperability initiatives. What value is an Interoperability Executive Customer Council, an Interop Vendor Alliance, a Document Interoperability Initiative, etc., if they were not able to motivate the most simple act: taking spreadsheet formula translation code that Microsoft already has (from the ODF Add-in for Office) and using it in SP2?

The pretty words have been shown to be hollow words. Microsoft has not enabled choice. Their implementation is not robust. They have, in effect, taken your ODF document, written by you, by your own choice, in an interoperable format, with demonstrated interoperability among several implementations, and corrupted it, without your knowledge or consent.

There is no shortage of excuses from Redmond. If customers wanted excuses more than interoperability they would be quite pleased by Microsoft’s prolix effusions on this topic. The volume of text used to excuse their interoperability failure exceeds, by an order of magnitude, the amount of code that would be required to fix the problem. The latest excuse is the paternalistic concern expressed by Doug Mahugh, saying that they are corrupting spreadsheets in order to protect the user. Using a contrived example, of a customer who tries to add cells containing text to those containing numbers, Doug observes that OpenOffice and Excel give different answers to the formula =1+”2″. Because not all implementations give the same answer, Microsoft strips out formulas. Better to be the broken clock that reads the correct time twice a day than to be unpredictable, or as Doug puts it:

If I move my spreadsheet from one application to another, and then discover I can’t recalculate it any longer, that is certainly disappointing. But the behavior is predictable: nothing recalculates, and no erroneous results are created.

But what if I move my spreadsheet and everything looks fine at first, and I can recalculate my totals, but only much later do I discover that the results are completely different than the results I got in the first application?

That will most definitely not be a predictable experience. And in actual fact, the unpredictable consequences of that sort of variation in spreadsheet behavior can be very consequential for some users. Our customers expect and require accurate, predictable results, and so do we. That’s why we put so much time, money and effort into working through these difficult issues.

This bears a close resemblance to what is sometimes called “Ben Tre Logic”, after the Vietnamese town whose demise was excused by a U.S. General with the argument, “It became necessary to destroy the village in order to save it.”

Doug’s argument may sound plausible at first glance. There is that scary phrase, “unpredictable consequences”. We can’t have any of that, can we? Civilization would fall, right? But what if I told you that the same error with the same spreadsheet formula occurs when you exchange spreadsheets in OOXML format between Excel and OpenOffice? Ditto for exchanging them in the binary XLS format. In reality, this difference in behavior has nothing to do with the format, ODF or OOXML or XLS. It is a property of the application. So, why is Microsoft not stripping out formulas when reading OOXML spreadsheet files? After all, they have exactly the same bug that Doug uses as the centerpiece of his argument for why formulas are stripped from ODF documents. Why is Microsoft not concerned with “unpredictable consequences” when using OOXML? Why do users seem not to require “accurate, predictable results” when using OOXML? Or to be blunt, why is Microsoft discriminating against their own paying customers who have chosen to use ODF rather than OOXML? How is this reconciled with Microsoft’s claim that they are delivering “choice, interoperability and innovative solutions to the marketplace”?
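
To see why the divergence lives in the application rather than the file format, here is a small Python sketch of two text-coercion policies a formula evaluator might adopt. The policy names are my own; neither function is taken from any real product’s code:

def eval_coercing(left, right):
    # Policy A: silently coerce numeric-looking text to a number.
    return float(left) + float(right)

def eval_strict(left, right):
    # Policy B: refuse to do arithmetic on text operands.
    if isinstance(left, str) or isinstance(right, str):
        raise TypeError("text operand in arithmetic")
    return left + right

print(eval_coercing(1, "2"))   # 3.0
# eval_strict(1, "2") raises an error, whatever format delivered the cells

Whichever policy an application picks, it applies it to cells read from ODF, OOXML and XLS alike; stripping formulas from only one of those formats does nothing to resolve the underlying difference.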


Filed Under: Interoperability, ODF Tagged With: ODF, Office, OOXML, OpenFormula

How to Write a Standard (If You Must)

2006/12/06 By Rob 6 Comments

Standards are generally a bad idea. They reduce freedom of action and limit choice. But sometimes you must have one in order to pacify an anti-business regulator or other socialist-leaning bureaucrat. So what should you do if you find yourself in the awkward position of coming up short in the standards department? By all means, create a standard and do it quickly! I offer here some observations on best practices, and tried-and-true advice on how to make a standard quickly, with little pain or risk.

First some background. Standards writing, as generally practiced, is a multilateral, deliberative process where multiple points of view are considered and discussed, where consensus is reached and then documented. This must be avoided at all costs. The delays introduced by such a consensus process are considerable and the outcome of such a process does not justify that time investment. If you already have a monopoly, why waste time on a consensus? Consider the notable failures of XHTML, XForms, SVG, XSLT, etc. Look at the multiple implementations of these standards, including viral copyleft and IP-deficient products. Do you really want to see this trend continued?

Start with a complete product implementation. This makes the entire process much faster since there is no time wasted discussing such abstract, heady things as interoperability, reuse, generality, elegance, etc. Only Perpetual Adoration of the One True Implementation (the one you already have) will quickly lead to a specification. Avoid consideration of alternatives. Consideration of alternatives is your prime risk factor for introducing delay.

If possible, choose an implementation that has layers of complexity from years of undisciplined agglomeration of features. Of course this will lead to a specification of Byzantine complexity and epic length. But since no one will actually read the specification, there is no harm. In fact, the length and complexity can bring several benefits:

  1. Any criticism of the specification can automatically be dismissed as nitpicking. For example, if you are presented with a list of 500 faults in a 6,000-page specification, you can respond, “That is less than 10%. You are just nitpicking. We can fix that in release 1.1”. Or you can even just rely on the familiar justification, “Shipping is a feature”. Any finite list of defects can be made minuscule by a sufficiently large specification.
  2. Further, since review periods at ISO and most other standards bodies are of fixed length, regardless of the length of the specification, a sufficiently large specification will ensure that it receives no, or only a cursory, review. Divide the length of the specification, in pages, by the length of the review period, in days. A Freshman English major might be assigned reading of 50 pages per day. Aim to double or triple this reading load. 100+ pages/day is a good rule of thumb to ensure that a volunteer on a standards committee will not be able to conduct a thorough review (a quick calculation is sketched after this list).
  3. The pure size of the specification will imply to some that it is a well-considered, comprehensive and cohesive document. Just like the IRS tax regulations and the federal budget.
  4. In case of emergency, the specification can be burned as fuel.
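
The pages-per-day arithmetic in point 2 is easy to run for yourself. A minimal Python sketch, with hypothetical numbers chosen only to illustrate the rule of thumb:

spec_pages = 6000    # a sufficiently large specification
review_days = 60     # a fixed-length review period
print(spec_pages / review_days)   # 100.0 pages/day, double a Freshman's reading load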

Shop around for the best standards development organization (SDO), one that knows how to get the job done quickly. Evaluation criteria include:

  1. A proven ability to approve standards quickly. You are not interested in advancing the state of the art here. You want fast-in-fast-out-stamp-my-ticket processing so you can get on with your business.
  2. A membership model that effectively excludes individual experts and open source projects from the process.
  3. A demonstrated competency in maintaining needed secrecy when developing sensitive standards.
  4. The right to make Fast-Track submissions to ISO.

Ecma International approved the DVD-RAM, DVD-R, DVD+R, DVD-RW and DVD+RW standards. Although some falsely claim that these overlapping standards have confused consumers, it is clear that having these multiple formats has given manufacturers ample opportunity for upselling multi-format DVD players and burners. With a single format, where is the upsell? Ecma clearly understands the purpose of standards and can be relied upon.

Once you are in an SDO and are ready to create your Technical Committee, be sure to carefully consider the topics of membership and charter. Of course, you’ll want to assemble a team of willing partners. Loyalty can be obtained in many ways. Your consigliere may have some ideas.

Your charter is your first line of defense. Since your Technical Committee may contain some technical people, you will want to strictly limit what they discuss. Technical people are dangerous if they are given too much freedom. Who knows what they may do if left on their own? They might even innovate (shudder), and innovation is so disruptive. A well-written charter can prevent innovation, that unwelcome guest, from ever knocking on your door.

Since its terms restrict the scope of your work, you should ensure that your charter contains any restrictions that you do not want to discuss or defend in the future. Best to get these front-loaded into the charter so you can just say, “We have no choice; that is our Charter”. Your goal is to describe the One True Implementation, so a good charter restriction to add is one which will focus the technical committee on that one single task. A roundabout way of doing this is to require that the produced specification must remain 100% compatible with a different specification, one which is your secret. That way you, and only you, can decide whether or not a proposed change is within the scope of the charter. This provides a lot of flexibility and avoids unnecessary discussions. “We checked the secret specification and it says that October has 40 days in it. Sorry guys, there is really nothing we can do. The Charter won’t let us.”

A few additional recommendations for the day-to-day work of describing the One True Implementation:

  • Observe other successful standards and the process that led to them. Look at the size of their specifications and how long they took to develop. Assume that you can safely progress at a rate 10-20x faster. This pace is justified by the superiority of your One True Implementation and your lack of deliberations, discussions or considerations of alternatives.
  • At all costs avoid reusing any existing standards. Reuse of standards will only lead to generality, interoperability and increased reuse, and risks getting you involved in discussions with other standards bodies. This delay must be avoided. The One True Implementation has everything you or anyone else needs to know. It is the true fons et origo of all wisdom and knowledge.
  • This also implies that you do not engage other standards groups. Assume that your hand-picked technical committee is an expert on everything. In any case, expertise is irrelevant, since you are merely describing the One True Implementation. All the decision making essentially already occurred years ago. Your task is just writing it down.
  • Secrecy is paramount. If the unwashed masses find out what you are discussing and what issues you face, they might do things like offer you suggestions or alternatives. That is so annoying. So all meetings, minutes, mailing lists, etc., should all be strictly private.

That’s about it. Everything else is just common sense. Think Speed. Think Heft. Focus on the One True Implementation. I believe the liberal application of the above principles should enable anyone to quickly and painlessly create an International Standard.


Filed Under: Standards Tagged With: Ecma, OOXML, Satire

VML and OOXML: Cum mortuis in lingua mortua

2006/07/24 By Rob 7 Comments

In this post, I will look at the history of the Vector Markup Language (VML), how it lost out to the W3C’s SVG back in the 1990s, but has come back from the dead, showing up in the draft Ecma Office Open XML (OOXML) specification. I offer some opinions on why this is a bad thing.

First, a bit of history. The field is vector graphics: the type of graphics composed of lines, shapes and background fills, the type that scales nicely to different sizes, resolutions and devices, as opposed to raster graphics, which are a bunch of pixels, such as a GIF or JPEG file. This is a gross oversimplification, but it will suffice.

Vector Markup Language (VML) was an XML vocabulary for vector graphics submitted to the W3C by Microsoft and others back in mid-1998. I will not comment on its quality or merits, but merely note that it was rejected by the W3C in favor of the Scalable Vector Graphics (SVG) specification, which became a W3C Recommendation (that’s what the W3C calls their standards) in 2001. Since then, SVG 1.0 was upgraded to SVG 1.1 in 2003 and several mobile profiles (SVG Tiny and SVG Basic) were created. SVG has native support in Firefox and Opera, with plugins available for most other browsers. There is support on mobile phones and PDAs. A search of Amazon.com shows 19 books dedicated to SVG. The SVGOpen Conference has been going strong for 5 years. This all adds up to SVG being an established, open standard, widely implemented, with a thriving implementor/user community and signs of continued innovation. It is a standard with a past, a present and a future.

But whatever happened to VML? VML has been a dead end, from a standards perspective, for 8 years now, an eternity in web time. I was not able to find any VML books on Amazon. I could not find any VML conferences (unless one counts the Virginia Municipal League’s get-together at Virginia Beach in October). However, there is some lingering VML support in Internet Explorer and Office. Developers still use VML to target those applications, but I wonder, is it done out of preference or out of necessity? Although it is the users who are portrayed as dinosaurs for not upgrading to Office 2003, doesn’t it seem like Microsoft Office and Internet Explorer are the ones in need of an upgrade? They should join the rest of the world and start using SVG rather than dragging along a dead spec.

But it is worse than this. I wouldn’t have bothered writing this just to point out something you already know, that Internet Explorer slowly or never adopts relevant web standards. The thing I wish to bring to your attention is that VML, the same VML rejected in 1998, is now being proposed as part of the draft Ecma Office Open XML. Take a look for yourself (warning, 25 MB specification download!).

Section 8.6.2 of the spec (Ecma Office Open XML, Working Draft 1.3) says:

VML specifies the appearance and content of certain shapes in a document. This is used for shapes such as text boxes, as well as shapes which must be stored to maintain compatibility with earlier versions of consumer/producer applications.

How should one parse “earlier versions of consumer/producer applications”? Is this a circuitous way of saying “MS Office and Internet Explorer”?

Now take a look at Chapter 23, VML, pages 3571-3795 (PDF pages 3669-3893). We see here 224 pages of “VML Reference Material”, which appears to be a rehash of the 1999 VML Reference from MSDN, and in this form it hides itself in a 4,081-page OOXML specification, racing through Ecma and then straight into ISO. Is this right? Should a rejected standard from 1998 be fast-tracked to ISO over a successful, widely implemented alternative like SVG?

Why should you care? It is all about reuse.

  1. If a standard reuses an already successful standard, it reuses the collective community wisdom that went into making that standard. It also reuses the considerable editorial effort in writing, editing and reviewing a technical specification. Reusing this effort lets the TC focus their time on truly innovative aspects of their specification, and leads to a higher quality standard.
  2. When you reuse a standard, you allow implementors to reuse the experience and knowledge they have already developed around that standard. Remember the 19 books dedicated to SVG? There is a lot of SVG knowledge out there. Why waste it?
  3. Reusing an existing standard, especially a popular one like SVG, allows implementors to reuse the various code bases, both commercial and open source that support it. Why reinvent the wheel? Do you really want to rewrite a vector graphics engine? SVG has several good open source libraries including Apache Batik and librsvg.

Use SVG and you get reuse on three fronts. Stick with VML and the only thing that is reused is Microsoft’s legacy code. Using SVG is clearly the better choice.
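
The code-reuse point is easy to demonstrate. As a small illustration of my own (not from either specification): because SVG is a plain, widely adopted XML vocabulary, even a general-purpose XML library, here Python’s standard xml.etree.ElementTree, can produce a drawing that any SVG-capable viewer will render, with no vector graphics engine written at all:

import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

# A 100x100 canvas with one filled rectangle, using standard SVG 1.1
# element and attribute names.
svg = ET.Element("{%s}svg" % SVG_NS, width="100", height="100")
ET.SubElement(svg, "{%s}rect" % SVG_NS,
              x="10", y="10", width="80", height="80", fill="steelblue")

ET.ElementTree(svg).write("box.svg", encoding="utf-8", xml_declaration=True)

Open box.svg in Firefox or Opera and it just works. Try the equivalent exercise with VML and you are back to targeting a single vendor’s legacy renderer.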

I suggest that Ecma TC45 investigate this issue and consider moving off of VML onto SVG, or at least demonstrate why it is impossible to do so. Why does the world need yet another XML vector graphics standard? If there is something missing in SVG (which I doubt), then why not work with the W3C to propose an enhancement to SVG rather than re-proposing the VML specification that was rejected back in 1998?

Again, I make no technical argument why SVG is or isn’t superior to VML. I merely note that SVG has been an adopted W3C standard for 5 years now and should have a presumption of suitability for the task, especially over a specification which was rejected 8 years ago.


Filed Under: OOXML Tagged With: OOXML, Open Standards, SVG, VML



Copyright © 2006-2023 Rob Weir · Site Policies