
An Antic Disposition


ODF

Introducing Planet ODF

2009/03/23 By Rob 8 Comments

I have an early Document Freedom Day present for you. Planet ODF is a feed aggregator based on Sam Ruby’s Planet Venus, which itself is a refactoring of Planet 2.0.

Planet ODF aggregates several blogs, news sources, discussion forums and other online services related to ODF. I’ve tried to be semi-intelligent so you don’t get random stories about the Oregon Department of Forestry or non-ODF blog posts by me. I’ll tune the feeds over time, but the hope is to make it 100% ODF relevant content.

If you have a blog, discussion forum or any other ODF-related content with an Atom or RSS feed and want it included, then please let me know. It doesn’t need to be 100% ODF. You can discuss your cats 90% of the time and ODF 10% of the time and I can set up a filter to bring in the relevant content.

Also, I’ve set up an OpenDocument Format group on the social bookmarking site Diigo. (I abandoned del.icio.us when the Microsoft/Yahoo takeover rumors started.) Even if you don’t have a blog or a web site with a feed, you can use Diigo to bookmark any articles you think are relevant to ODF. If you send those links to the OpenDocument Format group, then they will automatically be included in the Planet ODF feed.

Enjoy, and pass on the good news.

Filed Under: ODF

From the Statute of Frauds to WYSIWYS: Document Format Implications

2009/03/04 By Rob 11 Comments

I’d like to explore the topic of electronic documents, digital signatures, and what properties are required of them to be considered accurate and reliable written records. Since this is as much a social question as it is a technical one, we’ll start with some history.

“An Act for prevention of Frauds and Perjuryes” 29 Carol. II (1677), commonly called “The Statute of Frauds”, begins:

For prevention of many fraudulent Practices which are commonly endeavoured to be upheld by Perjury and Subornation of Perjury Bee it enacted by the Kings most excellent Majestie by and with the advice and consent of the Lords Spirituall and Temporall and the Commons in this present Parlyament assembled and by the authoritie of the same That from and after the fower and twentyeth day of June which shall be in the yeare of our Lord one thousand six hundred seaventy and seaven All Leases Estates Interests of Freehold or Termes of yeares or any uncertaine Interest of in to or out of any Messuages Mannours Lands Tenements or Hereditaments made or created by Livery and Seisin onely or by Parole and not putt in Writeing and signed by the parties soe makeing or creating the same or their Agents thereunto lawfully authorized by Writeing, shall have the force and effect of Leases or Estates at Will onely and shall not either in Law or Equity be deemed or taken to have any other or greater force or effect, Any consideration for makeing any such Parole Leases or Estates or any former Law or Usage to the contrary notwithstanding.

Or, to loosely paraphrase in modern English: “We’ve noticed that verbal agreements are being abused. So in certain specific important agreements you better put it in writing and sign it, otherwise don’t bother to bring any dispute to court.”

A few things to note about the Statute and its context:

  1. As the preface notes, frauds were being perpetrated, involving oral contracts and perjury. Before this Statute, oral testimony, even without any evidence of a written agreement, could be used to deprive a person of real or personal property.
  2. The Statute is concerned with private agreements. Although it was already well-established practice by this time for official acts, writs, etc., to be recorded in written form and sealed, literacy, even among tradesmen, was not high, and private agreements were made only orally.
  3. The imposition of a stamp duty, a tax to seal official documents, followed this Statute a few years later, ostensibly to raise funds to fight a war against France. But like all forms of taxation, it seems to have outlived its original intent, and exists even to the present day, even though England apparently is now at peace with France.

This Statute spread to the American Colonies, where in modified form it lives on in various state laws, and in the Uniform Commercial Code (UCC) today, in §2-201:

A contract for the sale of goods for the price of $5,000 or more is not enforceable by way of action or defense unless there is some record sufficient to indicate that a contract for sale has been made between the parties and signed by the party against which enforcement is sought or by the party’s authorized agent or broker.

I’d like to look a little at what it is about a written agreement that gives it its particular value. Why did they require it to be written? Why not just require witnesses to an oral agreement?

A few salient properties of a written agreement:

  1. A written agreement states the parties to the agreement, the terms of the agreement and is signed by the parties.
  2. Once signed, the agreement may not be altered but by mutual consent of the parties. In the judgement of Brett v. Ridgen, Plowd. Comm., 345, Lord Dyer wrote that “…men’s deeds and wills, by which they settle their estates, are the laws which private men are allowed to make, and they are not to be altered, even by the King in his court of law or conscience. We must take it as we find it.”
  3. The “mirror image” rule applies. Both parties must agree to the same terms. If party A makes an offer, and party B says they accept, but in fact adds to or qualifies the terms of the offer, then this is properly treated as a counter-offer. The agreement is not made until both parties agree to the same terms.
  4. The underlying mechanics and notation of the agreement are flexible, unless otherwise specified. Whether scribbled with a crayon on a napkin, sent by telegram, teletype, fax or email, these may all be considered written agreements.

The affordances of paper and ink, which lend themselves particularly well to the above concerns, include:

  1. Paper/ink expresses symmetric information. What you see is what I see and is what will be seen in court if we end up there some day.
  2. There is no invisible ink, no hidden pages. The text of the agreement does not say something different under the fluorescent lights at the courthouse versus the sunlight at the construction site. Although these things in theory could be done, via special inks and papers, the use of these techniques in an agreement would be prima facie evidence of fraud.
  3. Certainly, if it is poorly written, the terms of the agreement could be ambiguous and subject to various interpretations. Paper/ink cannot make you or your lawyer smarter. It only makes the agreement an accurate and reliable record. If a particular word is smudged or a number is crudely written, I can see this flaw and you can see this flaw and either of us can require the flaw to be fixed before we sign the agreement. If there is text that is unclear in meaning, I can ask my lawyer to explain it. I am able to understand the document perfectly should I take care to do so.
  4. Paper/ink is accurate and reliable over the time scale of personal and commercial contracts.
  5. A person’s signature or mark on an agreement, absent evidence of fraud or coercion, clearly indicates their assent to the terms of the agreement. We do not commonly write our signature unless we intend to express assent.

Jump ahead to the present day, with the increasing use of electronic documents and digital signatures. Digital signatures offer some of the same affordances we traditionally had with paper/ink. Provided the chain of certificates and keys have not been compromised, that the underlying applications have not been compromised and that the act of signing requires an affirmative and unambiguous action by the signer, a digital signature is evidence of:

  1. What was signed
  2. Who signed it
  3. The intention to sign, i.e., to give validity to the agreement

However, there is a weakness in electronic agreements, even when digitally signed. The weakness is in what is signed. When you sign an electronic document, you are signing the stream of bits and bytes that comprise that document in a particular document format. The average person lacks the ability to directly inspect or understand the underlying representation of an electronic document. They can only see what a particular software application running on a particular operating system running on a particular computer shows when loading the document. Will that signed document appear the same on a different computer, to a different person using a different software application or a different operating system? That is the critical question. Unfortunately, the affordances of paper/ink for symmetrical information, lack of hidden information, invariability over time and venue changes, etc., are not necessarily guaranteed with electronic documents.
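The gap between what is rendered and what is signed is easy to demonstrate. In the sketch below (with made-up markup fragments of my own), two documents that would display identical visible text differ in their underlying bytes, so any signature computed over those bytes differs too:

```python
import hashlib

# Two fragments that render identically to the reader...
doc_a = b'<text:p>I agree to pay $100.</text:p>'
# ...but differ in the underlying bytes (an XML comment is invisible on screen).
doc_b = b'<!-- draft 2 --><text:p>I agree to pay $100.</text:p>'

digest_a = hashlib.sha256(doc_a).hexdigest()
digest_b = hashlib.sha256(doc_b).hexdigest()

# A digital signature covers the byte stream, not the rendering, so these
# two "identical-looking" documents yield entirely different signed values.
print(digest_a == digest_b)  # False
```

The same reasoning applies in reverse: the identical byte stream can render differently in different applications, which is exactly the WYSIWYS problem discussed below.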

The digital signature guys call out an additional requirement needed for a digital signature to give the same guarantees as paper/ink agreements. It goes by the acronym WYSIWYS, or “What You See is What You Sign”.

So what is required for electronic documents to have the same affordances as paper/ink for use as accurate and reliable records? I suggest the following:

  1. The format used by the electronic document must be specified in an open standard.
  2. The format standard must define the characteristics of semantically equivalent documents and specify the format sufficiently so that implementations of the standard can display semantically equivalent renderings of the document. Semantic equivalence is not broken by minute differences in layout, so it should be possible to have semantically equivalent renderings on different devices, e.g., a laptop versus a smart phone versus a screen reader.
  3. The application used to view and sign the electronic document must conform to the standard, specifically to those parts of the standard necessary to render a semantically equivalent document.
  4. The document must be strictly conformant to the standard, with no extensions. Just as you would not physically sign a paper document that contained interpolated text in a language that you do not understand, you should not sign an electronic document that contains unknown extensions. Otherwise semantic equivalence is not guaranteed between the two parties, and a “mirror image” problem arises.
  5. Semantic equivalence must not rely on graphics. Although graphical content is permissible, such content must be redundant with respect to the text. Otherwise the “mirror image” problem is unresolvable between sighted and blind persons.

Further, I believe these criteria are of more general applicability. Although the Statute of Frauds may have been intended for marriage contracts and the like, the need to have accurate, reliable written records is a ubiquitous requirement for business and public administrations today. Wherever misunderstanding would be liability, where it is particularly important for multiple parties to be “on the same page” with respect to the contents and meaning of a document, these considerations apply.

For editable formats like ODF, I think it points out the need to describe a formal content model that describes the semantic content of a document, aside from its formatting and layout. So text + lists + tables + headers + footers + footnotes + images + captions, etc. Visual appearance is nice to have as well, but it is less robust when rendered on different devices, different operating systems, and is less likely to be robust when rendered on OpenOffice 10.0 in 2015. But the equivalence of the semantic content of an unextended ODF document should provide the same ability to have an accurate and reliable record in an electronic document as we have had traditionally with paper and ink.

Filed Under: ODF

Low-Fat ODF

2009/03/03 By Rob 2 Comments

Jack Sprat could eat no fat.
His wife could eat no lean.
And so betwixt them both, you see,
They licked the platter clean!

Is dietary fat good? Or is it bad? Without getting into a discussion of saturated versus unsaturated fats, or the virtues of omega-3 oils, let me make a few basic, reasonable observations:

  1. Individuals differ in their preferences and requirements for fat intake. There is no single answer for all people at all times.
  2. Experts differ in their recommendations for fat intake.
  3. Standards exist for how to measure and report the fat contents of food products.
  4. Standards also exist for the specific conditions under which a vendor may call their food products “low fat” or “light” or “fat-free”. For example, “low fat” products must have 3g or less fat per serving.
  5. The government requires vendors of retail packaged food to label the fat content in accordance with the standards of #3 and to make only claims regarding fat content that conform with the standards of #4.

The above system generally works. Food vendors have the freedom to add as much fat as they want to their products. If they want to sell deep-fried bacon-wrapped cheese, then fine. No problem. It is a free country. But this is balanced by the consumer’s ability to know the fat content of the products that they purchase. This gives control to the consumer, allowing informed choice.

But take away the standards, take away the reporting requirements, and the manufacturer has all of the control. Let’s imagine a world where there were no such fat content standards. Medical research would still progress and the long-term dangers of high-fat diets would still be known. But the consumer’s ability to control their fat intake would be vastly reduced. There would be no informed choice.

Imagine further that Company A, observing the medical research and consumer interest in healthy food, decides to offer a low-fat cheese. But if Company A sells their low-fat cheese, the label “low fat” itself would have no formal meaning. In this hypothetical, there are no standards. Nothing prevents Company B and Company C from also advertising their existing cheeses as “low fat”. Without standards there is no differentiation. Since consumers have no effective way to test the fat content of cheese on their own, they are at the mercy of the non-verifiable claims of vendors and the advertising agencies. Because there are no acknowledged standards for fat content, the market for low-fat cheese is stunted. The consumer does not benefit and the innovative Company A does not benefit. No one wins.

This is a general concern for markets where the consumer cannot directly verify the quality of the goods, because they are packaged and inaccessible to inspection, or because the consumer lacks the technical ability to determine the quality themselves. From fat content to auto gas mileage efficiency, this leads to standards for measuring and reporting qualities of interest to consumers.

So back to reality. We do have fat content standards, for measurement and reporting. Suppose that Company A sells its low-fat cheese and it is very popular, because it is what the consumer wants. Company B is envious of the higher margins on low-fat products, but it would take too long for them to revamp their production line to make a cheese with 3g or less fat per serving. They can only get it down to 5g per serving. What can they do? Well, they can hire a lobbyist, go to Washington, DC, and spread some influence around. They could try to get the FDA to change their definition of “low-fat” so it includes their higher-fat products as well. If you can’t change your product to meet the standards that consumers want, then dumb down the standards!

Sound far-fetched? This is actually happening all the time with certified organic food in the United States. Non-organic ingredients are routinely being allowed in organic food products based on requests from big food manufacturers. The consumer has very little visibility or voice in this process.

So what does this all have to do with ODF? Fair question. The analogy is to extensions of ODF, a topic currently being hotly debated on the OASIS ODF Technical Committee. Extensions are additions to an ODF document which are not defined by the ODF standard. They may be proprietary vendor extensions, or extensions using other open standards. But regardless, since their use in an ODF document is not defined by the ODF standard, they are difficult or impossible to use in an interoperable fashion, at least by those who do not know the secret details of the extension. However, such extended documents may be immensely useful in some situations.

So are extensions good? Are they bad? Are you more concerned with interoperability? Or with a particular use that requires the extension? There is no single answer for all people at all times. Because of this, it is important to put control firmly in the hand of the consumer of ODF products, so they can make the appropriate choice for themselves.

Similar to the mechanism of food labeling, putting control in the consumer’s hands requires that we:

  1. Have a formal definition of what an extended ODF document is versus an unextended ODF document.
  2. Have something like a reporting requirement, so it is clear to the consumer whether a particular document is extended or not.

The proper place to address these points is in the conformance clause of the ODF Standard. To that end, the current draft of ODF 1.2 defines two conformance classes, one for extended documents and one for unextended documents. The aim, in the end, is to give the consumer greater control and allow them to make a more intelligent choice. We can’t force vendors to implement one or the other conformance class. And we can’t force consumers to use one or the other. But we can formally define what an extended document is and let the free market operate based on the additional information made available.
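A first cut at the reporting side could be as simple as scanning a document’s XML for elements outside the ODF namespaces. This is only a sketch under my own assumptions (an abbreviated namespace list, a hypothetical foreign_elements helper and vendor namespace); the conformance clause itself is the authority:

```python
import xml.etree.ElementTree as ET

# Abbreviated list of ODF namespaces; the real set is larger.
ODF_NAMESPACES = {
    'urn:oasis:names:tc:opendocument:xmlns:office:1.0',
    'urn:oasis:names:tc:opendocument:xmlns:text:1.0',
    'urn:oasis:names:tc:opendocument:xmlns:table:1.0',
}

def foreign_elements(xml_text):
    """Return the namespace URIs of any elements outside the ODF namespaces."""
    foreign = set()
    for elem in ET.fromstring(xml_text).iter():
        if elem.tag.startswith('{'):
            uri = elem.tag[1:].split('}')[0]
            if uri not in ODF_NAMESPACES:
                foreign.add(uri)
    return foreign

# A document carrying a hypothetical vendor extension element:
doc = ('<office:document-content '
       'xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0" '
       'xmlns:acme="http://acme.example.com/ext">'
       '<acme:gadget/></office:document-content>')

print(foreign_elements(doc))  # {'http://acme.example.com/ext'}
```

An empty result would suggest the document falls in the unextended conformance class; any foreign URI flags it as extended and tells the consumer exactly whose extension is present.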

This is a small step and I know it doesn’t sound like much, but even this modest step provoked such paroxysms on the ODF TC that you would have thought I was splashing holy water at an exorcism. I suspect this means that I must be doing something right!

Filed Under: ODF

ODF Spreadsheet Interoperability: Theory and Practice

2009/03/01 By Rob 9 Comments

This is a follow up to some work we did at the ODF Interoperability Workshop in Beijing last November. We had good participation there: IBM, Sun, Google, Novell and Redflag from the big vendor side, as well as a good number of users. It was a full-day workshop and we covered a number of topics. One of them was spreadsheet formulas. I gave a short presentation on spreadsheet interoperability, specifically on the work we’ve done on OpenFormula for ODF 1.2. We also did a short exercise to look for spreadsheet formula bugs.

As many of you know, neither ODF 1.0 nor ODF 1.1 defines a spreadsheet formula language. They leave it implementation-defined. The specification makes only a few broad statements, such as a recommendation that formula attributes be qualified by namespace, that formulas begin with ‘=’, that cell addresses be surrounded by ‘[‘ and ‘]’ and that formula parameters be delimited by ‘;’. So in theory, this is a mess. But in practice it has worked out quite well, since implementations have played “follow the leader” and have nearly converged on interoperable spreadsheet formulas. With ODF 1.2, we’ll standardize the consensus on spreadsheet formulas, giving even greater certainty.
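Those broad recommendations can be illustrated with a toy syntactic check. This is my own sketch, not anything from the specification, and looks_like_odf_formula is a hypothetical helper:

```python
def looks_like_odf_formula(formula):
    """Sanity-check a formula string against the ODF 1.0/1.1 recommendations:
    begins with '=', cell addresses wrapped in '[' and ']',
    parameters delimited by ';'."""
    if not formula.startswith('='):
        return False
    # Every '[' opening a cell address should have a matching ']'.
    return formula.count('[') == formula.count(']')

# A typical stored formula: '=' prefix, bracketed addresses, ';' delimiters.
example = '=SUM([.A1];[.B2])'
print(looks_like_odf_formula(example))  # True
```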

Let’s see how this works in practice. I created a simple spreadsheet document in several ODF-supporting applications, including Microsoft Office using the various plugins. Here is what I tested:

  1. Microsoft Office 2003 with the Microsoft-sponsored CleverAge Add-in version 2.5
  2. Google Spreadsheets
  3. KOffice’s KSpread 1.6.3
  4. Lotus Symphony 1.1
  5. OpenOffice 2.4
  6. Microsoft Office 2003 with Sun’s ODF Plugin

I used what I had installed on my two machines, Windows and Ubuntu. There may be updates to some of these applications that do even better.

I created the same basic spreadsheet from scratch in each editor and saved it as ODF format. I then looked at each document to see how formulas were being stored in the XML:

  1. CleverAge stores it in the OpenOffice namespace (xmlns:oooc=”http://openoffice.org/2004/calc”)
  2. Google also uses the OpenOffice namespace.
  3. KSpread doesn’t use namespace-qualified formula attributes.
  4. Symphony also doesn’t use namespace-qualified formula attributes.
  5. OpenOffice uses the OpenOffice namespace.
  6. Sun’s Plugin also uses the OpenOffice namespace.

OK. So there is some variation in how the formulas are stored, with two approaches in use. How does this then impact interoperability? In theory it is horrible. In practice it works out pretty well.

I took each of the 6 spreadsheet documents and opened each one in each of the other 5 applications — 30 interoperability tests — to see whether the formulas were loaded and calculated correctly. Here is what I saw:

              Created In
Read In       CleverAge  Google  KSpread  Symphony  OpenOffice  Sun Plugin
CleverAge     OK         OK      Fail     Fail      OK          OK
Google        OK         OK      OK       OK        OK          OK
KSpread       OK         OK      OK       OK        OK          OK
Symphony      OK         OK      OK       OK        OK          OK
OpenOffice    OK         OK      OK       OK        OK          OK
Sun Plugin    OK         OK      OK       OK        OK          OK

So the formulas came through OK, in almost all instances. The only exception was the CleverAge add-in, which failed to process formulas from KSpread and Symphony. For example, loading the Symphony spreadsheet into Office 2003 results in cells with contents containing errors such as “=#REF!+#REF!-#REF!” which is tantamount to data loss.

I think we can do better than this with a few simple changes.

The Law of Robustness as stated in RFC 1122 is “Be liberal in what you accept, and conservative in what you send.” Adapting that principle to ODF spreadsheets, I recommend the following practice for ensuring interoperability using ODF 1.0 and ODF 1.1:

  1. When writing ODF 1.0 or ODF 1.1 spreadsheet documents, write formula attribute values using the OpenOffice namespace prefix: “http://openoffice.org/2004/calc”. All ODF spreadsheet applications I have tested accept and correctly process formulas in that namespace. Note that the CleverAge add-in is not doing the namespace checks in an XML-correct fashion. They are comparing only the text of the prefix, not resolving it to a namespace URI and comparing the URIs. So you should be sure to also use “oooc” as the namespace prefix.
  2. When reading ODF 1.0 or ODF 1.1 spreadsheet documents, be prepared to handle formulas with no namespace qualification as well as those with the OpenOffice namespace.
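Recommendation #2 amounts to a tolerant reader that strips an optional namespace prefix from the formula attribute value, while #1 always writes the prefix every tested application understands. A minimal sketch (the helper names are my own, not from any plugin):

```python
OOOC_PREFIX = 'oooc:'

def normalize_formula(value):
    """Accept a table:formula value with or without a namespace prefix:
    'oooc:=SUM([.A1];[.A2])' and '=SUM([.A1];[.A2])' are treated alike."""
    prefix, sep, rest = value.partition(':=')
    if sep and prefix.isalnum():      # a prefix like 'oooc' was present
        return '=' + rest
    return value

def write_formula(expr):
    """Write with the 'oooc' prefix, the form every tested reader accepts."""
    assert expr.startswith('=')
    return OOOC_PREFIX + expr

print(normalize_formula('oooc:=SUM([.A1];[.A2])'))  # =SUM([.A1];[.A2])
print(normalize_formula('=SUM([.A1];[.A2])'))       # =SUM([.A1];[.A2])
print(write_formula('=SUM([.A1];[.A2])'))           # oooc:=SUM([.A1];[.A2])
```

A fully correct implementation would resolve the prefix against the declared namespace URI rather than matching the literal text, but as noted above, matching the literal “oooc” is what at least one shipping reader actually does.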

Specifically, Symphony and KSpread should consider making changes to accommodate #1 and CleverAge should consider changes needed to do #2. In the CleverAge case, a trivial, one-line change to OdfConditionalPostProcessor.cs will quickly restore compatibility with Symphony and KSpread documents.

Now, if you are entirely satisfied with what I have said above, and have no lingering doubts, then you are not thinking enough. It is not enough to merely bring the spreadsheet formulas over intact. Interoperability also requires that we interpret the formulas in the same way.

So let’s look at that side of the equation (no pun intended). Fortunately, we are all quite close to what is being defined in ODF 1.2’s OpenFormula specification. This is not so surprising, since OpenFormula was based on actual spreadsheet practice, looking at a variety of spreadsheet applications. I did a quick test of the 6 ODF spreadsheet applications to see how well they fared against a test suite of 509 core tests that OpenFormula defines for spreadsheet functions. The results were:

  • CleverAge 455/509 = 89%
  • Google 457/509 = 90%
  • KSpread 472/509 = 93%
  • Symphony 487/509 = 96%
  • OpenOffice 493/509 = 97%
  • Sun Plugin 500/509 = 98%
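For anyone who wants to check the arithmetic, the percentages are simply pass counts over the 509 core tests, rounded to the nearest whole percent:

```python
# Pass counts from the OpenFormula core test suite runs above.
results = {
    'CleverAge': 455, 'Google': 457, 'KSpread': 472,
    'Symphony': 487, 'OpenOffice': 493, 'Sun Plugin': 500,
}
TOTAL = 509

for app, passed in results.items():
    print(f'{app}: {passed}/{TOTAL} = {passed / TOTAL:.0%}')
```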

So, we’re not yet perfect, but we’re getting pretty close. Interestingly, the lowest scores (CleverAge) and highest scores (Sun Plugin) are both for the same calculation engine (Excel).

Looking forward, we’ll continue to edit and refine OpenFormula and its test cases. You might look for it when it comes out for public review, hopefully in a couple of months. Unlike other parts of ODF 1.2, OpenFormula is essentially XML-free. It is a mini-expression language, defined by a BNF grammar and accompanied by hundreds of spreadsheet functions from mathematics, finance, engineering, statistics, etc. So review by subject matter experts in these disciplines is especially needed, even if they have zero XML experience. If you want to see the current OpenFormula Working Draft, currently in its 71st revision, take a look. Comments may be submitted to the ODF TC’s comment list.
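To give a flavor of the kind of mini-expression language involved, here is a drastically simplified evaluator for formulas like =SUM([.A1];[.A2])+1. The grammar is my own toy reduction, not the draft’s BNF: it handles only SUM, cell references, numeric literals and +/-:

```python
import re

# Tokens: numbers, function names, cell refs like [.A1], and ( ) ; + -
TOKEN = re.compile(r'\s*(\d+\.?\d*|[A-Z][A-Z0-9]*|\[\.[A-Z]+\d+\]|[();+-])')

def evaluate(formula, cells):
    """Evaluate a tiny OpenFormula-like expression such as
    '=SUM([.A1];[.A2])+1' against a dict of cell values."""
    tokens = TOKEN.findall(formula.lstrip('='))
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def expr():
        value = term()
        while peek() in ('+', '-'):
            if take() == '+':
                value += term()
            else:
                value -= term()
        return value

    def term():
        tok = take()
        if tok == 'SUM':              # SUM(expr; expr; ...)
            take()                    # consume '('
            args = [expr()]
            while peek() == ';':
                take()
                args.append(expr())
            take()                    # consume ')'
            return sum(args)
        if tok.startswith('[.'):
            return cells[tok[2:-1]]   # cell reference like [.A1]
        return float(tok)             # numeric literal

    return expr()

print(evaluate('=SUM([.A1];[.A2])+1', {'A1': 2, 'A2': 3}))  # 6.0
```

The real specification adds operator precedence, ranges, error values and hundreds of functions on top of a grammar of roughly this shape, which is why subject matter review of the function definitions matters more than XML expertise.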

I’m also looking forward to testing Office 2007 SP2’s ODF support when it comes out, to see how their ODF support is improving. Anything less than the 500/509 results that Excel 2003 gives with the Sun Plugin will be a disappointment. KOffice has a 2.0 version in beta I should look at. OpenOffice has their 3.0 update. Sun also has an updated ODF Plugin. I’ll lean on the Symphony team as well, and see if we can beat 500/509. Game on!

Filed Under: Interoperability, ODF

Whither ODF?

2009/02/25 By Rob 23 Comments

Whether ODF will wither or weather
depends on us as we work together.

The question is where we should go: whither?
The answer is clear at once.
The question of “whither” is not so dense,
and is easy to answer when we start with “whence?”.

Of the topic today
I will no longer delay nor dither to say
whether we will whither or weather
but will now give you my 2-cents.

Rob’s ODF-Next Rant

  1. The word processor and spreadsheet, as we have them today, are relics of the 1980’s, designed when the web did not exist and collaboration occurred predominantly by exchanging paper documents. If we were designing a document author and collaboration system to meet modern circumstances and capabilities, it would likely bear little resemblance to Word. So the question is how much do we let the sunk costs of yesterday continue to determine our future? How much longer do we paint speed stripes on a horse and pretend that it is a racing car?
  2. Products like Word and Excel have evolved via the uncritical accretion of functionality over the past decades to a point where the products are overly complex resource gluttons with a knack for having a critical security flaw reported in them every other week.
  3. Increasingly users are getting work done via email, wikis and blogs rather than using heavy-weight document editing solutions. Why is this so? Why is the modern word processor losing users rather than gaining them?
  4. WYSIWYG is a fine paradigm if you are doing all of your work targeting printed output. But it is a sub-optimal approach for creating documents for almost any other use.
  5. The revered Bold, Italics and Underline icons, along with the font selection drop down list, which define the modern editor GUI, should be forcibly removed from the user interface, stripped of rank, and put on trial for crimes against productivity. You are writing a document, not decorating a cake. You need to ask yourself “Why should this text be italics?” Is it a book title, a foreign phrase, a name of a movie, the name of a legal case? Then choose a named style that indicates why that text is special. Let the named style take care of how it is displayed.
  6. Unless you are designing a poster for a modern art gallery you should stick to the named styles in your template. Power users might define additional named styles. But direct application of random attributes to random text selections should be considered a form of data corruption.
  7. Few documents today are ever printed. They are born, live and die entirely in digital form. We should be optimizing for the most common cases, not just for what our parents or grandparents did with WordPerfect 1.0.
  8. The most common sources of reused content are other documents, PDF and HTML. Current cut & paste mechanisms make a mess of styles. Paste in the content with the styles of the source document? According to the styles of the destination document? Mapping to the nearest local style? All are wrong answers. The only correct answer is to give me the choice.
  9. PowerPoint is pure evil. It has elevated form over substance and turned every form of business communication into a “pitch”.
  10. I should be able to call spreadsheet functions using named parameters, like PV(rate=1%,periods=12,payment=$1000.00) rather than PV(0.01,12,1000), so my model is self-documenting and avoids errors from incorrect ordering of parameters.
  11. Security needs to be designed into the document authoring environment, including the format, not patched on as an afterthought.
  12. I want Greasemonkey for my word processor and my spreadsheet.
  13. Connections between documents may be as important as the documents themselves.
  14. The less control the user asserts over the appearance of a document during editing, the more flexibility he or she has over the final published appearance. In today’s multi-modal, multi-device world, it is essential that we do not prematurely commit our documents to a particular rendering. We need late binding of presentation to content, not early binding. If we had done this for the past decade, we would have perfect interoperability today between all word processors. If we start doing it now, we will have perfect interoperability among word processors going forward.
  15. Spreadsheets should have functions that access web-based data stores for common financial, economic, political and scientific data sets. Mathematica does something similar, presumably using local caching.
  16. Presentation should be a mode of displaying another document, not just a document type itself. For example, I should be able to take a report and push a button to enter a slide-show mode, where all images are shown as slides, with their captions, and each top-level section header becomes a slide with 2nd-level headers as bullet items. During the presentation I should be able to seamlessly drill down into the real document.
  17. I want to be able to share data ranges, text ranges and presentation slides with others and to subscribe to theirs via feeds. I rarely write a document from scratch. Reuse, reuse, reuse. But the tools only support this at a scavenger level.
  18. We lack high-level support for compositing or assembling a document from fragments. Once I cut & paste, my new document has lost all knowledge of the document I copied from. This is great if I am a professional plagiarist. But it is bad if I am a CIA analyst and my report has copied information claiming uranium production in Africa, and I never know when that information is repudiated, and I pass my flawed report on to the President. Very bad. When I cite an authority for an argument, my argument is only as good as the authority. I owe it to myself and my readers to make it easy to know whether the information I cited is still accurate and vouched for by that authority.
  19. Current tools are impoverished when it comes to the social side of documents. Review/comment reflects old, hierarchical thinking and doesn’t scale to the network. How can I have 100 people comment on my document? What if I want 100 people to jointly author a document? The Wiki knows where Word cannot go…
  20. Most user woes in modern word processors are caused by our attempts to remain compatible with the design choices made by Microsoft Office developers 15 years ago. It is time to move on and learn from past mistakes, not perpetuate them.
  21. I want to use the same text editor to edit documents, web pages, emails, blog posts, discussion forums and wikis. Why do I need a different brand of hammer for every nail?
  22. I want a spreadsheet function that can call a web service. It might look up a book title by ISBN, do currency conversions, or geocode data. There should be thousands of such spreadsheet functions, backed by web services, interoperable based on standard protocols. Some might be free, others fee-based. Some might be both, e.g., 20-minute delayed quotes for free, real-time for a fee.
  23. Spreadsheet functions express core analytic operations and should be usable in all tables, in word processors and presentations, not just in spreadsheets. They should also be usable in fields in forms and in text passages.
  24. The inability of word processors to output clean, readable and valid HTML or XHTML should be an embarrassment to all vendors.
  25. HTML + JS + XHR + HTML DOM = AJAX. ODF + JS + XHR + ODF DOM = ?
  26. We must define power as in “power user” based on results, on productivity. Power is as much about what a system allows you to ignore as what it allows you to control.
  27. Today trust is based on digital signatures and classical questions of authentication, integrity and non-repudiation, all backed by a chain of trust traceable back to some well-known certification authority. In some contexts, this hierarchical, binary view of trust is adequate. But the network sees trust based on reputation, rating, scoring, voting, reverse citation counts and other non-hierarchical values. How do we account for these?
  28. Spreadsheets are unnecessarily dangerous, based on a muddled view of data types which leads to silent errors and inconsistencies. This might have made sense in the memory and processor constrained systems of the 1980’s. But today, with our better sense of the errors and the cost of errors, we need a spreadsheet system that is type-safe, aware of measurement units, and which enforces consistency and accuracy. We obviously can’t prevent someone from making a stupid spreadsheet model for subprime mortgages, but we can at least ensure that they don’t make stupid cut & paste errors when creating that model.
  29. Spreadsheets should have intrinsic support for image, sound and geographic data. Not just embedded media, but as an intrinsic data type, so a function could take an image as input, or return an audio clip as a result.
  30. A grid in a spreadsheet provides a logical addressing scheme as well as a visual layout scheme. But what if I want the former without the latter? Why can’t I do a spreadsheet calculation in a text document? Why am I always stuck in a grid?
  31. Spreadsheets should have built-in support for sensitivity and risk analysis, perhaps via Monte Carlo methods. Yes, I know support is available via 3rd-party plugins, but this should be a core feature in the repertoire of every user. We might not be in the global financial mess we’re in now if spreadsheet users all could easily “stress test” their models.
  32. The Holy Trinity of Word, Excel and PowerPoint is only a convention, mainly enforced by Microsoft’s definition of their office suite. It is not a law of nature. Other application types should be considered part of the core desktop authoring environment, such as project management and mind maps.
  33. Outliners and other pre-draft tools have lagged far behind the core editing functions of a word processor. And what is the equivalent of an outliner for a spreadsheet?
  34. Microsoft is as much a prisoner to the predominant model of end-user productivity as the user is. Their need to support legacy documents constrains their freedom of action and has contributed to the relative lack of innovation in Microsoft Office over the past decade.
  35. An editor should allow a user to verify interoperability as easily as it lets them do a print preview.
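To make item 18 concrete: here is a hedged Python sketch of document fragments that retain provenance, so an assembled document can later be re-checked against its sources. The `Fragment` class, its field names and the URL are all invented for illustration, not any real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fragment:
    """A copied passage that remembers where it came from."""
    text: str
    source_uri: str   # where the fragment was copied from
    retrieved: str    # date of the copy, for later staleness checks

def assemble(fragments):
    """Compose a document body while keeping a provenance list."""
    body = "\n\n".join(f.text for f in fragments)
    sources = sorted({f.source_uri for f in fragments})
    return body, sources

claim = Fragment("Uranium production estimates...",
                 "https://example.org/intel/report-42", "2009-03-04")
body, sources = assemble([claim])
# If report-42 is later repudiated, every document listing it as a
# source can be flagged for review instead of silently staying stale.
```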
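Item 22's web-service-backed spreadsheet functions can be sketched as below. The endpoint URL and JSON shape are made up, and a stub stands in for the network call, so this shows only the shape of the idea, not a real service binding:

```python
import json

def make_currency_fn(fetch):
    """Wrap a fetcher (url -> JSON text) as a spreadsheet-style function."""
    def CONVERT(amount, frm, to):
        # Hypothetical endpoint; a real deployment would agree on a
        # standard protocol so thousands of such functions interoperate.
        url = f"https://example.com/rates?from={frm}&to={to}"
        rate = json.loads(fetch(url))["rate"]
        return round(amount * rate, 2)
    return CONVERT

# A stub fetcher standing in for the live web service:
CONVERT = make_currency_fn(lambda url: '{"rate": 0.79}')
```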
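A minimal illustration of the type safety item 28 asks for: cell values that carry a unit, so mixing metres with seconds raises an error instead of silently producing a number. The `Quantity` class is a toy, not an existing spreadsheet API:

```python
class Quantity:
    """A spreadsheet-style value that refuses unit-mismatched arithmetic."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

    def __repr__(self):
        return f"{self.value} {self.unit}"

total = Quantity(3, "m") + Quantity(4, "m")   # fine: same unit
try:
    Quantity(3, "m") + Quantity(4, "s")       # loud failure, not a silent sum
except TypeError as e:
    error = str(e)
```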
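Item 30's point, that logical addressing can be separated from the visual grid, can be sketched with named formulas resolved by recursion. The names and formulas below are invented for illustration:

```python
def evaluate(name, formulas, cache=None):
    """Resolve a named value, computing dependent formulas on demand."""
    cache = {} if cache is None else cache
    if name not in cache:
        expr = formulas[name]
        get = lambda n: evaluate(n, formulas, cache)
        cache[name] = expr(get) if callable(expr) else expr
    return cache[name]

# "Cells" addressed by meaning, not by row and column:
formulas = {
    "price": 250.0,
    "quantity": 4,
    "subtotal": lambda get: get("price") * get("quantity"),
    "total": lambda get: get("subtotal") * 1.08,   # 8% tax
}
total = evaluate("total", formulas)
```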
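And for item 31, a stress test by Monte Carlo sampling can be this small. The mortgage-pool model and its default-rate range are made up purely to show the shape of the technique:

```python
import random

def portfolio_value(default_rate):
    """Toy model: a $1M mortgage pool loses principal to defaults."""
    return 1_000_000 * (1 - default_rate)

def stress_test(model, low, high, trials=10_000, seed=42):
    """Sample an uncertain input instead of trusting one point value."""
    rng = random.Random(seed)
    samples = sorted(model(rng.uniform(low, high)) for _ in range(trials))
    # Report the median alongside a pessimistic 5th-percentile outcome.
    return samples[trials // 2], samples[trials // 20]

median, worst_5pct = stress_test(portfolio_value, 0.02, 0.20)
```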

Filed Under: ODF


Copyright © 2006-2026 Rob Weir · Site Policies