
An Antic Disposition


Archives for May 2008

The Challenge

2008/05/05 By Rob 17 Comments

<?xml version="1.0" encoding="UTF-8"?>
<office:document-content
xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
xmlns:text="urn:oasis:names:tc:opendocument:xmlns:text:1.0"
office:version="1.0">
<office:body>
<office:text>
<text:p>Dear Alex Brown. Please prove that I am invalid ODF 1.0 (ISO 26300:2006).
I do not think that I am. In fact I think that your statement that there are
no valid ISO ODF documents in the world, and that there cannot be, is a brash,
irresponsible and indefensible piece of bombast that you should retract.</text:p>
<text:p>(Please note that this document contains no ID, IDREF or IDREFS attributes.
Nor does it contain custom content.)</text:p>
</office:text>
</office:body>
</office:document-content>

Filed Under: ODF Tagged With: Alex Brown, ISO/IEC 26300, ODF, OpenDocument Format, XML

Release the OOXML final DIS text now !

2008/05/04 By Rob 24 Comments

The JTC1 Directives [pdf] are quite clear on this point. After a Ballot Resolution Meeting (BRM), if the text is approved, the edited, final version of the text is to be distributed to NB’s within 1 month. This requirement is in the Fast Track part of JTC1 Directives, specifically in 13.12:

13.12 The time period for post ballot activities by the respective responsible parties shall be as follows:
[…]

  • In not more than one month after the ballot resolution group meeting the SC Secretariat shall distribute the final report of the meeting and final DIS text in case of acceptance.

The OOXML BRM ended on February 29th. One month after February 29th, if my course work in scientific computing does not fail me, is… let’s see, carry the 3, multiply, convert to sidereal time, account for proper nutation of the solar mean, subtract the perihelion distance at first point of Aries, OK. Got it. Simple. One month later is approximately March 29th +/- 3 days.

So the SC34 Secretariat should have distributed the “final DIS text” by March 29th, or at the very least, when the final ballot results on OOXML were known a few days later.

But that didn’t happen. Nothing. Silence. What is the hang up? I note that when NB’s said that the Fast Track schedule did not give sufficient time to review OOXML, the response from ISO/IEC was “There is nothing we can do. The Directives only permit 5 months”. And when NB’s protested at the arbitrary 5 day length of the OOXML BRM, the response was similarly dismissive. But when Microsoft needs more time to edit OOXML, well that appears to be something entirely different. “Directives, Schmerectives. You don’t worry yourself about no stinkin’ Directives. Take whatever time you need, Sir.”

It makes you wonder who the ISO/IEC bureaucracy is working for. The rights and prerogatives of NB’s? Or large corporations? Almost every decision they made in the OOXML processing was to the detriment of NB prerogatives.

This delay has practical implications as well. Consider the following:

  1. We are currently approaching a two month period where NB’s can lodge an appeal against OOXML. Ordinarily, one of the grounds for appeal would be if the Project Editor did not faithfully carry out the editing instructions approved at the BRM. For example, if he failed to make approved changes, made changes that were not authorized, or introduced new errors when applying the approved changes. But with no final DIS text, the NB’s are unable to make any appeals on those grounds. By delaying the release of the final DIS text, JTC1 is preventing NB’s from exercising their rights.
  2. Law suits, such as the recent one in the UK, are alleging process irregularities, including (if I read it correctly) that BSI approved OOXML without seeing the final text. I imagine that having the final DIS text in hand and being able to point to particular flaws in that text that should have justified disapproval would bolster their case. But if JTC1 withholds the text, then they cannot make that point as effectively.
  3. There are obvious anti-competitive effects at play here. Microsoft has the final DIS version of the ISO/IEC 29500:2008 standard, and by JTC1 delaying release to NB’s, Microsoft is able to have 2+ extra months, free of competition, to produce a fix pack to bring their products in line with the final standard, while other competitors like Sun or Corel are left behind. So much for transparency. So much for open standards. How can this be considered open if some competitors are given a significant time and access advantage?

Note that I’m not talking about the publication of the IS here. I’m talking about the requirements of 13.12 and the release of the final DIS text. Obviously ITTF will have a lot of work to do prepping OOXML for publication. For ODF it took 6 months. For OOXML I would expect it to take at least that long. But that does not prevent adherence to the Directives, in particular the requirement to distribute the final DIS text.

JTC1/SC34, noticing the delay in the release of this text, adopted the following Resolution at their Plenary in early April:

Resolution 8: Distribution of Final text of DIS 29500

SC 34 requests the ITTF and the SC34 secretariat to distribute the already received final text of DIS 29500 to the SC 34 members in accordance with JTC 1 directives section 13.12 as soon as possible, but not later than May 1st 2008. Access to this document is important for the success of various ISO/IEC 29500 maintenance activities.

This indicates that the final DIS text had already been received by SC34 (but not distributed) as of that date (April 9th).

Well, here we are, May 4th, over two months since the final DIS text was due, and past the date requested by the SC34 Plenary (who, by the way, have no authority to extend the deadline required by the JTC1 Directives, but that is another story). We have nothing.

So, I’ll make my own personal appeal. JTC1 has the text. The Directives are clear. The delay is unnecessary and harmful in the ways I outlined above. Release the final DIS text now. Not next month. Not next week. Release it now.


Filed Under: OOXML

ODF Validation for Dummies

2008/05/02 By Rob 32 Comments

[Updated 4 May 2008, with additional rebuttal at the end]

Alex Brown has a problem. He can’t figure out how to validate ODF documents. Unfortunately, when he couldn’t figure it out, he didn’t ask the OASIS ODF TC for help, which would have been the normal thing to do. Indeed, the ODF TC passed a resolution back in February 2007 that said, in part:

That the ODF TC welcomes any questions from ISO/IEC JTC1/SC34 and
member NB’s regarding OpenDocument Format, the functionality it
describes, the planned evolution of this standard, and its relationship
to other work on the technical agenda of JTC1/SC34. Questions and
comments can be directed to the TC chair and secretary whose email
addresses are given at

http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office

or through the comments facility at

http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office

So it is rather uncollegial of Alex to refuse such an open, transparent way of getting his questions answered. But Alex didn’t avail himself of that avenue. He just assumed if he couldn’t figure out how to validate ODF then it simply couldn’t be done, and that ODF was to blame. This is presumptuous. Does he think that in the three years since ODF 1.0 became a standard, that no one has tried to validate a document?

Alex is so sure of himself that he publicly exults on the claimed significance of his findings:

  • For ISO/IEC 26300:2006 (ODF) in general, we can say that the standard itself has a defect which prevents any document claiming validity from being actually valid. Consequently, there are no XML documents in existence which are valid to ISO ODF.
  • Even if the schema is fixed, we can see that OpenOffice.org 2.4.0 does not produce valid XML documents. This is to be expected and is a mirror-case of what was found for MS Office 2007: while MS Office has not caught up with the ISO standard, OpenOffice has rather bypassed it (it aims at its consortium standard, just as MS Office does).

I think you’ll agree that these are bold pronouncements, especially coming from someone so prominent in SC34, the Convenor of the ill-fated OOXML BRM, someone who is currently arguing that SC34 should own the maintenance of OOXML and ODF, indeed someone who would be well served if he could show that all consortia standards are junk, and that only SC34 (and he himself) could make them good.

Of course, I’ve been known to pontificate as well. There is nothing necessarily wrong with that. The difference here is that Alex Brown is totally wrong.

But let’s see if we can help show Alex, or anyone else similarly confused, the correct way to validate an ODF document.

First start with an ODF document. When Alex tested OOXML, he used the Ecma-376 OOXML specification. Let’s do the analogous test and validate the ODF 1.0 text. You can download it from the OASIS ODF web site. You’ll want this version of the text, ODF 1.0 (second edition), which is the source document for the ISO version of ODF.

You’ll also want the Relax NG schema files for OASIS ODF 1.0, which you can download in two pieces: the main schema, and the manifest schema.

Next you’ll need to get a Relax NG validator. Alex recommends James Clark’s jing, so we’ll use that. I downloaded jing-20030619.zip, the main distribution for use with the Java Runtime Environment. Unzip that to a directory and we’re almost there.

Since jing operates on XML files and knows nothing about the Zip package structure of an ODF file, you’ll need to extract the XML contents of the ODF file. There are many ways to do this. My preference, on Windows, is to associate WinZip with the ODF file extensions (ODT, ODS and ODP) so I can right-click on these files and unzip them; a small script that does the same extraction follows the list below. When you unzip you will have the following XML files, along with directories for image files and other non-XML resources you can ignore:

  • content.xml
  • styles.xml
  • meta.xml
  • settings.xml
  • META-INF/manifest.xml
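
If you would rather script this step than click around, here is a minimal Python sketch that pulls the same XML parts out of the package (the output directory name “extracted” is my own choice, not anything mandated by ODF):

# extract_odf.py -- pull the standard XML parts out of an ODF package
import sys
import zipfile

XML_PARTS = ["content.xml", "styles.xml", "meta.xml",
             "settings.xml", "META-INF/manifest.xml"]

def extract_xml_parts(odf_path, out_dir="extracted"):
    with zipfile.ZipFile(odf_path) as odf:
        for part in XML_PARTS:
            if part in odf.namelist():   # some parts may be absent
                odf.extract(part, out_dir)
                print("extracted", part)

if __name__ == "__main__":
    extract_xml_parts(sys.argv[1])

Run it as “python extract_odf.py mydocument.odt” and you will have the same files listed above, ready for validation.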

So now we’re ready to validate! Let’s start with content.xml. The command line for me was:

java -jar c:/jing/bin/jing.jar OpenDocument-schema-v1.0-os.rng content.xml

(Your command may vary, depending on where you put jing, the ODF schema files and the unzipped ODF files)

The result is a whole slew of error messages:

C:\temp\odf\OpenDocument-schema-v1.0-os.rng:17658:18: error: conflicting ID-types for attribute "targetElement" from namespace "urn:oasis:names:tc:opendocument:xmlns:smil-compatible:1.0" of element "command" from namespace "urn:oasis:names:tc:opendocument:xmlns:animation:1.0"
C:\temp\odf\OpenDocument-schema-v1.0-os.rng:10294:22: error: conflicting ID-types for attribute "targetElement" from namespace "urn:oasis:names:tc:opendocument:xmlns:smil-compatible:1.0" of element "command" from namespace "urn:oasis:names:tc:opendocument:xmlns:animation:1.0"

Oh no! Emergency, emergency, everyone to get from street!

I wonder if this is one of the things that tripped Alex up? Take a deep breath. These in fact are not Relax NG (ISO/IEC 19757-2) errors at all, but errors generated by jing’s default validation of a different set of constraints, defined in the Relax NG DTD Compatibility specification, which has the status of a Committee Specification in OASIS. It is not part of ISO/IEC 19757-2.

Relax NG DTD Compatibility provides three extensions to Relax NG: default attribute values, ID/IDREF constraints and a documentation element. The Relax NG DTD Compatibility specification is quite clear in section 2 that “Conformance is defined separately for each feature. A conformant implementation can support any combination of features.” And in fact, ODF 1.0, in section 1.2, does just that: “The schema language used within this specification is Relax-NG (see [RNG]). The attribute default value feature specified in [RNG-Compat] is used to provide attribute default values”.

It is best to simply disable the checking of Relax NG DTD Compatibility constraints by using the documented “-i” flag in jing. If you want to validate ID/IDREF cross-references, then you’ll need to do that in application code, and not using jing in Relax NG DTD Compatibility mode. Note that jing was not complaining about any actual ID/IDREF problem in the ODF document.

So, false alarm. You can walk safely on the streets now.

(That said, if we can make some simple changes to the ODF schemas that will allow it to work better with the default settings of jing, or other popular tools, then I’m certainly in favor of that. Alex’s proposed changes to the schema are reasonable and should be considered.)

So, let’s repeat the validation with the -i flag:

java -jar c:/jing/bin/jing.jar -i OpenDocument-schema-v1.0-os.rng content.xml

Zero errors, zero warnings.

java -jar c:/jing/bin/jing.jar -i OpenDocument-schema-v1.0-os.rng styles.xml

Zero errors, zero warnings.

java -jar c:/jing/bin/jing.jar -i OpenDocument-schema-v1.0-os.rng meta.xml

Zero errors, zero warnings.

java -jar c:/jing/bin/jing.jar -i OpenDocument-schema-v1.0-os.rng settings.xml

Zero errors, zero warnings.

java -jar c:/jing/bin/jing.jar -i OpenDocument-manifest-schema-v1.0-os.rng META-INF/manifest.xml

Zero errors, zero warnings.

So, there you have it, an example that shows that there is at least one document in the universe that is valid to the ODF 1.0 schema, disproving Alex’s statement that “there are no XML documents in existence which are valid to ISO ODF.”

The directions are complete and should allow anyone to validate the ODF 1.0 specification, or any other ODF 1.0 document. Now that we have the basics down, let’s work on some more advanced topics.

First, the reader should note that there are two versions of the ODF schema, the original 1.0 from 2005, and the updated 1.1 from 2007. (There is also a third version underway, ODF 1.2, but that needn’t concern us here.)

An application, when it creates an ODF document, indicates which version of the ODF standard it is targeting. You can find this indication if you look at the office:version attribute on the root element of any ODF XML file. The only values I would expect to see in use today would be “1.0” and “1.1”. Eventually we’ll also see “1.2”.
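
If you want to check this programmatically before picking a schema, a quick Python sketch does the job (the namespace URI is the standard ODF office namespace; error handling is omitted for brevity):

# which_version.py -- report the declared ODF version of a document
import sys
import zipfile
import xml.etree.ElementTree as ET

OFFICE_NS = "urn:oasis:names:tc:opendocument:xmlns:office:1.0"

def declared_version(odf_path):
    with zipfile.ZipFile(odf_path) as odf:
        root = ET.fromstring(odf.read("content.xml"))
    # office:version sits on the root element of each ODF XML part
    return root.get("{%s}version" % OFFICE_NS)

if __name__ == "__main__":
    print(declared_version(sys.argv[1]))   # e.g. "1.0" or "1.1"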

It is important to use the appropriate version of the ODF schema to validate a particular document. Our goal, as we evolve ODF, is that an application that knows only about ODF 1.0 should be able to adapt and “degrade gracefully” when given an ODF 1.1 document, by ignoring the features it does not understand. But an application written to understand ODF 1.1 should be able to fully understand ODF 1.0 documents without any additional accommodation.

Put differently, from the document perspective, a document that conforms to ODF 1.0 should also conform to ODF 1.1. But the reverse direction is not true.

To accomplish this, as we evolve ODF, within the 1.x family of revisions, we try to limit ourselves to changes that widen the schema constraints, by adding new optional elements, or new attribute values, or expanding the range of values permitted. Constraint changes that are logically narrowing, like removing elements, making optional elements mandatory, or reducing the range of allowed values, would break this kind of document compatibility.

Now of course, at some point we may want to make bolder changes to the schema, but this would be in a major release, like a 2.0 version. But within the ODF 1.x family we want this kind of compatibility.

The net of this is, an ODF 1.1 document should only be expected to be valid to the ODF 1.1 schema, but an ODF 1.0 document should be valid to the ODF 1.0 and the ODF 1.1 schemas.
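
If you want to check that expectation for a particular file, you can drive jing from a short script. Here is a sketch; the jar location and the ODF 1.1 schema file name are placeholders for whatever paths and names you downloaded them under:

# check_compat.py -- validate one extracted ODF XML part against both schemas
import subprocess
import sys

JING = "c:/jing/bin/jing.jar"   # placeholder path
SCHEMAS = {
    "ODF 1.0": "OpenDocument-schema-v1.0-os.rng",
    "ODF 1.1": "OpenDocument-schema-v1.1.rng",   # placeholder file name
}

def validate(xml_file):
    for label, schema in SCHEMAS.items():
        # -i disables the Relax NG DTD Compatibility checks, as discussed above
        result = subprocess.run(
            ["java", "-jar", JING, "-i", schema, xml_file],
            capture_output=True, text=True)
        status = "valid" if result.returncode == 0 else "INVALID"
        print(label + ": " + status)
        print(result.stdout + result.stderr, end="")

if __name__ == "__main__":
    validate(sys.argv[1])

An ODF 1.0 document should come back valid in both rows; an ODF 1.1 document is only expected to be valid against the 1.1 schema.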

That’s enough theory! Let’s take a look now at the test that Alex actually ran. It is a rather curious, strangely biased kind of test, but the flawed thinking behind it is interesting enough to be worth examining in some detail.

When he earlier tested OOXML, Alex used the OOXML standard itself, a text on which Microsoft engineers had lavished many person-years of attention for the past 18 months, and he validated it with the current version of the OOXML schema. That is pretty much the best case, testing a document that has never been out of Microsoft’s sight for 18 months and testing it with the current version of the schema. I would expect that this document would have been a regular test case for Microsoft internally, and that its validity has been repeatedly and exhaustively tested over the past 18 months. I know that I personally tested it when Ecma-376 was first released, since it was the only significant OOXML document around. So, essentially Alex gave OOXML the softest of all soft pitches.

I think Microsoft’s response, that the validity errors detected by Alex are due to changes made to the schema at the BRM, is a reasonable and accurate explanation. The real story on OOXML standardization is not how many changes were made that were incompatible with Office 2007, but how few. It appears that very few changes, perhaps only one, will be required to make Office 2007’s output be valid OOXML.

So when testing ODF, what did Alex do? Did he use the ODF 1.0 specification as a test case, a document that the OASIS TC might have had the opportunity to give a similar level of attention to? No, he did not, although that would have validated perfectly, as I’ve demonstrated above. Instead, Alex used the OOXML specification, a document which by his own testing is not valid OOXML, converted it into the proprietary .DOC binary format, translated that binary format into ODF, and then tried to validate the results with the ODF 1.0 schema (i.e., the wrong version of the ODF schema, since OpenOffice 2.4.0’s output is clearly declared as ODF 1.1), while applying a non-applicable, non-standard DTD Compatibility constraint test during the Relax NG validation.

Does anyone see something else wrong with this testing methodology?

Aside from the obvious bias of using an input document that Microsoft has spent 18 months perfecting, and using the wrong schemas and validator settings, there is another, more subtle problem.

Alex’s test of OOXML and ODF are testing entirely different things. With OOXML, he took a version N (Ecma-376) OOXML document and tried to validate it with a version N+1 (ISO/IEC 29500) version of the OOXML schema.

But what he did with ODF was take a version N+1 (ODF 1.1) document and try to validate it with a version N (ODF 1.0) schema.

These are entirely different operations. One test is testing the backwards compatibility of the schema, the other is testing the backwards compatibility of document instances. It takes no genius to figure out that if ODF 1.1 adds new elements, then an ODF 1.1 document instance will not validate with the ODF 1.0 schema. We don’t ordinarily expect backwards-compatible validity of document instances. Again, Alex’s tests are biased in OOXML’s favor, giving ODF a much more difficult, even impossible task, compared to the test he ran for OOXML.

If we want to compare apples to apples, it is quite easy to perform the equivalent test with ODF. I gave it a try, taking a version N document (the ODF 1.0 standard itself, per above) and validated it with the version N+1 schema (ODF 1.1 in this case). It worked perfectly. No warnings, no errors.

In any case, in his backwards test Alex reports 7,525 errors, “mostly of the same type (use of an undeclared soft-page-break element)” when validating the OOXML text with ODF 1.0 schema. Indeed, all but 39 of these errors are reports of soft-page-break.

Soft page breaks are a new feature introduced in ODF 1.1, added primarily for accessibility. They allow easier collaboration between people using different technologies to read a document. Not all documents are deeply structured, with formal divisions like section 3.2.1, etc. Most business documents are loosely structured, and collaboration occurs by referring to “2nd paragraph on page 23” or “the bottom of page 18”. But when using different assistive technologies, from larger fonts, to braille, to audio renderings, the page breaks (if the assistive technology even has the concept of a page break) are usually located differently from the page breaks in the original authoring tool. This makes collaboration difficult. So, ODF 1.1 added the ability for applications to write out “soft” page breaks, indicating where the page breaks occurred when the original source document was saved.

Although this feature was added for accessibility reasons, like curb cuts, its likely future applications are more general. We will all benefit. For example, a convertor for translating from ODF to HTML would ordinarily only be able to calculate the original page breaks by undertaking complex layout calculations. But with soft page breaks recorded, even a simple XSLT script can use this information to insert indications of page breaks, or to generate accurate page numbering, etc. Although the addition of this feature hinders Alex’s idiosyncratic attempt to validate ODF 1.1 documents with the ODF 1.0 schema, I think the fact that this feature helps blind and visually impaired users, and generally improves collaboration, makes it a fair trade-off.
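
To illustrate that point, here is a rough Python sketch of how a downstream tool could use recorded soft page breaks to report the original page number of each paragraph (the element and namespace names are the standard ODF text ones; everything else is illustrative, not a real converter):

# page_numbers.py -- estimate original page numbers from soft page breaks
import sys
import zipfile
import xml.etree.ElementTree as ET

TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

def paragraphs_with_pages(odf_path):
    with zipfile.ZipFile(odf_path) as odf:
        root = ET.fromstring(odf.read("content.xml"))
    page = 1
    for para in root.iter("{%s}p" % TEXT_NS):
        # each recorded soft page break bumps the estimated page count
        page += len(para.findall(".//{%s}soft-page-break" % TEXT_NS))
        yield page, "".join(para.itertext())[:60]

if __name__ == "__main__":
    for page, text in paragraphs_with_pages(sys.argv[1]):
        print("p.%4d  %s" % (page, text))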

Wouldn’t you agree?

That leaves 39 validation errors in Alex’s test. 12 of them are reports of invalid values in an xlink:href attribute value. This appears to be an error in the original DOCX file. Garbage In, Garbage Out. For example, in one case the original document has a HYPERLINK field that contains a link to content in Microsoft’s proprietary CHM format (Compiled HTML). The link provided in the original document does not match the syntax rules required for an XML Schema anyURI (the URL ends with “##” rather than “#”). Maybe it is appropriate for markup like this, with non-standard, non-interoperable URI’s, to give validation errors. This is not the first time that OOXML has been found polluting XML with proprietary extensions. But realize that OpenOffice 2.4.0 did not create this error. OpenOffice is just passing the error along, as Office 2007 saved it. It is interesting to note that this error was not caught in MS Office, and indeed is undetectable with OOXML’s lax schema. But the error was caught with the ODF schema. This is a good thing, yes? It might be a good idea for OpenOffice to add an optional validation step after importing Microsoft Office documents, to filter out such data pollution.
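
Such a filter could start out very simple. Here is a sketch that just flags xlink:href values with a malformed fragment part, the “##” case above (the XLink attribute name is standard; the rest is illustrative):

# href_check.py -- flag suspicious xlink:href values after an import
import sys
import zipfile
import xml.etree.ElementTree as ET

XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

def suspicious_hrefs(odf_path):
    with zipfile.ZipFile(odf_path) as odf:
        root = ET.fromstring(odf.read("content.xml"))
    for elem in root.iter():
        href = elem.get(XLINK_HREF)
        # an anyURI may contain at most one "#", which introduces the fragment
        if href is not None and href.count("#") > 1:
            yield elem.tag, href

if __name__ == "__main__":
    for tag, href in suspicious_hrefs(sys.argv[1]):
        print("suspect href on", tag, ":", href)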

The remaining 27 validation errors are all instances of style:with-tab. Honestly, I have no explanation for this. This attribute does not exist in ODF 1.0 or ODF 1.1. That it is written out appears to be a bug in OpenOffice. Maybe someone there can tell us what the story is with this? But I don’t see this problem in all documents, or even most documents.

For fun I tried processing this OOXML document another way. Instead of the multi-hop OOXML-to-DOC-to-ODF conversion Alex did, why not go directly from OOXML to ODF in one step, using the convertor that Microsoft/CleverAge created? This should be much cleaner, since it doesn’t carry the messiness of the binary formats or of legacy application code. It is just a mapping from one markup to another markup, written from scratch. Getting the output to be valid should be trivial.

So I downloaded the “OpenXML/ODF Translator Command Line Tools” from SourceForge. According to their web page, this tool targets ODF 1.0, so we’ll be validating against the ODF 1.0 schemas.

This tool is very easy to use once you have the .NET prerequisites installed. The command line was:

odfconvertor /I "Office Open XML Part 4 - Markup Language Reference.docx"

The convertor then chugs along for a long, long, long time. I mean a long time. The conversion from OOXML to ODF eventually finished, after 11 hours, 10 minutes and 41 seconds! And this was on a Thinkpad T60p with dual-core Intel 2.16Ghz processor and 2.0 GB of RAM.

I then ran jing, using the validation command lines from above. It reported 376 validation errors, which fell into several categories:

  • text:s element not allowed in this context
  • bad value for text:style-name
  • bad value for text:outline-level
  • bad value for svg:x
  • bad value for svg:y
  • element text:tracked-changes not allowed in this context
  • “text not allowed here”

In any case, not a lot of distinct errors, just a handful of error types repeated. But it is surprising to see that this single-purpose tool, written from scratch, had more validation errors in it than OpenOffice 2.4.0 does.

In the end we should put this in perspective. Can OpenOffice produce valid ODF documents? Yes, it can, and I have given an example. Can OpenOffice produce invalid documents? Yes, of course. For example when it writes out a .DOC binary file, it is not even well-formed XML. And we’ve seen one example, where via a conversion from OOXML, it wrote out an ODF 1.1 document that failed validation. But conformance for an application does not require that it is incapable of writing out an invalid document. Conformance requires that it is capable of writing out a valid document. And of course, success for an ODF implementation requires that its conformance to the standard is sufficient to deliver on the promises of the standard, for interoperability.

It is interesting to recall the study that Dagfinn Parnas did a few years ago. He analyzed 2.5 million web pages. He found that only 0.7% of them were valid markup. Depending on how you write the headlines, this is either an alarming statement on the low formal quality of web content, or a reassuring thought on the robustness of well-designed applications and systems. Certainly the web seems to have thrived in spite of the fact that almost every web page is in error according to the appropriate web standards. In fact I promise you that the page you are reading now is not valid, and neither is Alex Brown’s, nor SC34’s, nor JTC1’s, nor Ecma’s, nor ISO’s, nor the IEC’s.

So I suggest that ODF has a far better validation record than HTML and the web have, and that is an encouraging statement. In any case, Alex Brown’s dire pronouncements on ODF validity have been weighed in the balance and found wanting.


4 May 2008

Alex has responded on his blog with “ODF validation for cognoscenti“. He deals purely with the ID/IDREF/IDREFS questions in XML. He does not justify his biased and faulty testing methodology, nor does he reiterate his bold claims that there are no valid ODF 1.0 documents in existence.

Since Alex’s blog does not seem to be allowing me to comment, I’ll put here what I would have put there. I’ll be brief because I have other fish to fry today.

Alex, no one doubts that ID/IDREF/IDREFS constraints must be respected by valid ODF document instances. I never suggested otherwise. But what I do state is that this is not a concern of a Relax NG validator. You can read James Clark saying the same thing in his 2001 “Guidelines for using W3C XML Schema Datatypes with RELAX NG“, which says in part:

The semantics defined by [W3C XML Schema Datatypes] for the ID, IDREF and IDREFS datatypes are purely lexical and do not include the cross-reference semantics of the corresponding [XML 1.0] datatypes. The cross-reference semantics of these datatypes in XML Schema comes from XML Schema Part 1. Furthermore, the [XML 1.0] cross-reference semantics of these datatypes do not fit into the RELAX NG model of what a datatype is. Therefore, RELAX NG validation will only validate the lexical aspects of these datatypes as defined in [W3C XML Schema Datatypes].

Validation of ID/IDREF/IDREFS cross-reference semantics is not the job of Relax NG, and you are incorrect to suggest otherwise. Your logic is also deficient when you take my statement of that fact and derive the false statement that I believe that ID/IDREF semantics do not apply to ODF. One does not follow from the other.

You know, as much as anyone, that conformance is a complex topic. One does not ordinarily expect, except in trivial XML formats, that the complete set of conformance constraints will be expressed in the schema. Typically a multi-layered approach is used, with some syntax and structural constraints expressed in XML Schema or Relax NG, some business constraints in Schematron, and maybe even some deeper semantic constraints that are expressed only in the text of the standard and can only be tested by application logic.

For example, a document that defines a cryptographic algorithm might need to store a prime number. The schema might define this as an integer. The fact that the schema does not state or guarantee that it is a prime number is not the fault of the schema. And the inability of a Relax NG validator to test primality is not a defect in Relax NG. The primality test would simply need to be carried out at another level, with application logic. But the requirement for primality in document instances can still be a conformance requirement and it is still testable, albeit with some computational effort, in application logic.
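
In code, those two layers might look something like this sketch (the scenario and names are hypothetical, purely for illustration):

# layered_check.py -- layer 1 is what a schema integer type can check;
# layer 2 is the semantic constraint (primality) that only application
# logic can test. The whole scenario is hypothetical, for illustration.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def check_modulus(text):
    try:
        value = int(text)      # layer 1: lexically an integer
    except ValueError:
        return False
    return is_prime(value)     # layer 2: semantically a prime

print(check_modulus("7919"))   # True: passes both layers
print(check_modulus("7920"))   # False: schema-valid, semantically invalid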

I believe that is the source of your confusion. The initial errors you saw when running jing with the Relax NG DTD Compatibility flag enabled were not errors in the ODF document instances. What you saw was jing reporting that it could not apply the Relax NG DTD Compatibility ID/IDREF/IDREFS constraint checks using the ODF 1.0 schema. That in no way means that the constraints defined in XML 1.0 are not required on ODF document instances. It simply indicates that you would need to verify these constraints using means other than Relax NG DTD Compatibility.
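
And the ID/IDREF/IDREFS cross-reference check that jing is not performing for you can be done afterwards with a few lines of application code, run against an extracted part such as content.xml. A sketch; you would need to supply the actual lists of ID-typed and IDREF-typed attributes the ODF schema defines, so the two attribute names below are examples only:

# idref_check.py -- verify that IDREF-typed attributes point at declared IDs
import sys
import xml.etree.ElementTree as ET

TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"
ID_ATTRS = {"{%s}id" % TEXT_NS}            # example only
IDREF_ATTRS = {"{%s}change-id" % TEXT_NS}  # example only

def dangling_refs(xml_file):
    root = ET.parse(xml_file).getroot()
    declared, referenced = set(), []
    for elem in root.iter():
        for name, value in elem.attrib.items():
            if name in ID_ATTRS:
                declared.add(value)
            elif name in IDREF_ATTRS:
                referenced.append(value)
    return [ref for ref in referenced if ref not in declared]

if __name__ == "__main__":
    bad = dangling_refs(sys.argv[1])
    print("dangling references:", bad or "none")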

So I wonder, have you actually found ODF document instances, say written from OpenOffice 2.4.0, which have ID/IDREF/IDREFS usage which violates the constraints expressed in ODF 1.0?

Finally, in your professional judgment, do you maintain that this is an accurate statement: “For ISO/IEC 26300:2006 (ODF) in general, we can say that the standard itself has a defect which prevents any document claiming validity from being actually valid. Consequently, there are no XML documents in existence which are valid to ISO ODF.”


Filed Under: ODF

