
An Antic Disposition


Standards

Sinclair’s Syndrome

2008/04/17 By Rob 10 Comments

A curious FAQ on MS-OOXML has been put up by an unnamed ISO staffer. Question #1 expresses concerns about Fast Tracking a 6,000-page specification, a concern which a large number of NBs also expressed during the DIS process. Rather than deal honestly with this question, the ISO FAQ says:

The number of pages of a document is not a criterion cited in the JTC 1 Directives for refusal. It should be noted that it is not unusual for IT standards to run to several hundred, or even several thousand pages.

Now certainly there are standards that are several thousand pages long. For example, Microsoft likes to bring up the example of ISO/IEC 14496, MPEG-4, at over 4,000 pages in length. But that wasn’t a Fast Track. And as Arnaud Lehors reminded us earlier, MPEG-4 was standardized in 17 parts over 6 years.

So any answer in the FAQ which attempts to consider what is usual and what is unusual must take account of past practice in JTC1 Fast Track submissions. That, after all, is the question the FAQ purports to address.

Ecma claims (PowerPoint presentation here) that there have been around 300 Fast Tracked standards since 1987 and Ecma has done around 80% of them. So looking at Ecma Fast Tracks is a reasonable sample. Luckily Ecma has posted all of their standards, from 1991 at least, in a nice table that allows us to examine this question more closely. Since we’re only concerned with JTC1 Fast Tracks, not ISO Fast Tracks or standards that received no approval beyond Ecma, we should look at only those which have ISO/IEC designations. “ISO/IEC” indicates that the standard was approved by JTC1.

So where did things stand on the eve of Microsoft’s submission of OOXML to Ecma?

At that point there had been 187 JTC1 Fast Tracks from Ecma since 1991, with basic descriptive statistics as follows:

  • mean = 103 pages
  • median = 82 pages
  • min = 12 pages
  • max = 767 pages
  • standard deviation = 102 pages
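
For those who want to check the figures, here is a minimal sketch of how these summary statistics could be computed. The page counts below are placeholders, not the actual 187 values from the Ecma table, which would be read from Ecma’s list of standards.

    # A minimal sketch for reproducing the summary statistics above.
    # The page_lengths values are placeholders, not the real 187 Ecma
    # Fast Track page counts.
    import statistics

    page_lengths = [28, 45, 82, 110, 767]  # hypothetical sample data

    print(f"mean   = {statistics.mean(page_lengths):.0f} pages")
    print(f"median = {statistics.median(page_lengths):.0f} pages")
    print(f"min    = {min(page_lengths)} pages")
    print(f"max    = {max(page_lengths)} pages")
    print(f"stdev  = {statistics.pstdev(page_lengths):.0f} pages")  # population std. dev.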

A histogram of the page lengths looks like this:

So the ISO statement that “it is not unusual for IT standards to run to several hundred, or even several thousand pages” does not ring true in the case of JTC1 Fast Tracks. A good question to ask anyone who says otherwise is, “In the time since JTC1 was founded, how many JTC1 Fast Tracks greater than 1,000 pages in length have been submitted?” Let me know if you get a straight answer.

Let’s look at one more chart. This shows the length of Ecma Fast Tracks over time, from the 28-page Ecma-6 in 1991 to the 6,045-page Ecma-376 in 2006.

Let’s consider the question of usual and unusual again, the question on which ISO is trying to inform the public. Do you see anything unusual in the above chart? Take a few minutes. It is a little tricky to spot at first, but with some study you will see that one of the standards plotted in the above chart is atypical. Keep looking for it. Focus on the center of the chart, let your eyes relax, clear your mind of extraneous thoughts.

If you don’t see it after 10 minutes or so, don’t feel bad. Some people and even whole companies are not capable of seeing this anomaly. As best as I can tell it is a novel cognitive disorder caused by taking money from Microsoft. I call it “Sinclair’s Syndrome” after Upton Sinclair who gave an early description of the condition, writing in 1935: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

To put it in more approachable terms, observe that Ecma-376, OOXML, at 6,045 pages in length, was 58 standard deviations above the mean for Ecma Fast Tracks. Consider also that the average adult American male is 5′ 9″ (175 cm) tall, with a standard deviation of 3″ (8 cm). For a man to be as tall, relative to the average height, as OOXML is to the average Fast Track, he would need to be 20′ 3″ (6.2 m) tall!
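
As a quick sanity check on that arithmetic, here is a short sketch using only the figures quoted above:

    # Checking the height analogy using the figures quoted in this post.
    mean_pages, sd_pages = 103, 102          # Ecma Fast Track statistics above
    ooxml_pages = 6045                       # Ecma-376
    z = (ooxml_pages - mean_pages) / sd_pages
    print(f"{z:.1f} standard deviations above the mean")                    # about 58

    mean_height_in, sd_height_in = 69, 3     # 5'9" average, 3" std. dev.
    equivalent_in = mean_height_in + z * sd_height_in
    print(f"{equivalent_in / 12:.1f} ft ({equivalent_in * 0.0254:.1f} m)")  # ~20.3 ft, ~6.2 m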

For ISO, in a public relations pitch, to blithely suggest that several thousand page Fast Tracks are “not unusual” shows an audacious disregard for the truth and a lack of respect for a public that is looking for ISO to correct its errors, not blow smoke at them in a revisionist attempt to portray the DIS 29500 approval process as normal, acceptable or even legitimate. We should expect better from ISO and we should express disappointment in them when they let us down in our reasonable expectations of honesty. We don’t expect this from Ecma. We don’t expect this from Microsoft. But we should expect this from ISO.


Filed Under: OOXML, Standards

New Paths in Standardization

2008/04/02 By Rob 20 Comments

The world should be pleased to note that, with the approval of ISO/IEC 29500, Microsoft’s Vector Markup Language (VML), after failing to be approved by the W3C in 1998 and after being neglected for the better part of a decade, is now also ISO-approved. Thus VML becomes the first and only standard that Microsoft Internet Explorer fully supports.

Congratulations are due to the Internet Explorer team for reaching this milestone!

Now that it has been demonstrated that pushing proprietary interfaces, protocols and formats through ISO is cheaper and faster than writing code to implement existing open standards, one assumes that the future is bright for more such boutique standards from Redmond. Open HTML, anyone?


Filed Under: Standards

Seeking Open Standards Activists

2008/03/25 By Rob 22 Comments

Some thoughts for Document Freedom Day 2008.

Back a few weeks ago in Geneva, OpenForum Europe hosted an evening of mini-talks and a discussion panel with various well-known personalities in our field: Vint Cerf, Bob Sutor, Andy Updegrove and Håkon Lie. I wasn’t able to comment on the event at the time, due to my self-imposed blog silence that week, but I’d like to take the opportunity today to carry forward one of the topics discussed then.

I’d like to take as my launching point the theme of Andy Updegrove’s talk, which was “Civil ICT Standards”. Andy treats this subject more fully on his blog, and also speaks to the topic in his taped interview with Groklaw’s Sean Daly.

Thus spake Updegrove:

But as the world becomes more interconnected, more virtual, and more dependent on ICT, public policy relating to ICT will become as important, if not more, than existing policies that relate to freedom of travel (often now being replaced by virtual experiences), freedom of speech (increasingly expressed on line), freedom of access (affordable broadband or otherwise), and freedom to create (open versus closed systems, the ability to create mashups under Creative Commons licenses, and so on).

This is where standards enter the picture, because standards are where policy and technology touch at the most intimate level.

Much as a constitution establishes and balances the basic rights of an individual in civil society, standards codify the points where proprietary technologies touch each other, and where the passage of information is negotiated.

In this way, standards can protect – or not – the rights of the individual to fully participate in the highly technical environment into which the world is now evolving. Among other rights, standards can guarantee:

  • That any citizen can use any product or service, proprietary or open, that she desires when interacting with her government.
  • That any citizen can use any product or service when interacting with any other citizen, and to exercise every civil right.
  • That any entrepreneur can have equal access to marketplace opportunities at the technical level, independent of the market power of existing incumbents.
  • That any person, advantaged or disadvantaged, and anywhere in the world, can have equal access to the Internet and the Web in the most available and inexpensive method possible.
  • That any owner of data can have the freedom to create, store, and move that data anywhere, any time, throughout her lifetime, without risk of capture, abandonment or loss due to dependence upon a single vendor.

Let us call these “Civil ICT Rights,” and pause a moment to ask: what will life be like in the future if Civil ICT Rights are not recognized and protected, as paper and other fixed media disappear, as information becomes available exclusively on line, and as history itself becomes hostage to technology?

This rings true to me. Technology, computer technology in particular, now permeates our lives. We interact with it daily, from the moment the internet-radio alarm clock goes off until day’s end, when we check our email “one last time” before going to bed.

Similarly, the standards that define the interfaces between these devices are also of increasing importance. There was once a time when standards dealt only with the “infrastructure”, the stuff in the walls and under the panel floor, or in that funny little locked door off the hallway, with all the cables and flashing lights, where strange men with clipboards would occasionally emerge, accompanied by a poof of cold air and the buzzing of machines.

But today, the technology and the standards that mediate the technology are now directly in front of your face. Think MP3 players. Think DVDs. Think DRM. Think cellular phones. Think web pages. Think encryption. Think privacy. Think documents. Think documents-privacy-security-DRM, your data and what you are allowed to do with it, and what others are allowed to do with it, and whether you control any bit of this in this mad world of ours.

Between you and the tasks you want to do today stand technology and the standards that mediate that technology. Standards are damn important.

Now, although the reach of technology and ICT standards has progressed over the years, the organizations and the processes that create these standards have not always kept up. In many cases standardization remains the creature of big industry, with little or no consumer input. It is a world of back-room discussions, where companies connive to see how many patents from their own portfolios they can encumber the standard with. A successful standard is one where no major company is left hungry. Consensus means everyone at the table has been fed. That is the traditional world of technology standards. It brings to mind the famous line from Adam Smith:

People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices — The Wealth of Nations (I.x.c.27)

Luckily, there is some hope. The proponents of “open standards” seek standards based on principles of open participation, consensus decision making, non-profit stewardship, royalty-free IP, and free access to standards. The web itself, with its underlying network protocol stack and the HTML family of formats with DOM and scripting APIs, is a shining example of what open standards can accomplish. Tim Berners-Lee says it best, in his FAQ:

Q: Do you have mixed emotions about “cashing in” on the Web?

A: Not really. It was simply that had the technology been proprietary, and in my total control, it would probably not have taken off. The decision to make the Web an open system was necessary for it to be universal. You can’t propose that something be a universal space and at the same time keep control of it.

But it is important to realize that “control” mechanisms in standards go well beyond IP and organizational issues. There are other important factors at play, and we need to address these as well. Knut Blind discusses some of these issues in a section called “Anti-Competitive Effects of Standards” in his The Economics of Standards (2004):

The negative impact of standards for competition are mostly caused by a biased endowment with resources available for the standardization process itself. Therefore, even when the consensus rule is applied, dominant large companies are able to manipulate the outcomes of the process, the specification of the standard, into a direction which leads to skewed distribution of benefits or costs in favor of their interests.

In other words, participation in standardization activities is time consuming and expensive, and large companies are much more able to make this kind of commitment than small companies, organizations or individuals. So, large companies rule the world.

This is especially true with standardization at the international level, where decisions are often made at meetings in very expensive international locations. JTC1 is still discussing what technologies would be required to allow participation in meetings without travel. (Hint — it’s called a “telephone”.) To put this in perspective, my week in Geneva cost $3687.52. I flew coach, ate most of my meals on the cheap, often just grabbing hors d’oeuvres at receptions, and I received negotiated IBM corporate rates for air and hotel. That was one JTC1 meeting. What if I wanted to be really active? Add in two SC34 Plenary meetings (Norway/Kyoto). Add in JTC1 Plenary meetings. Add in US NB meetings. Add in US NB membership fees, consortium fees, conferences, etc. It starts adding up, to around $40,000/year to participate actively in tech standards, and this doesn’t include the cost of my time.

How many small companies are going to pay this amount? How many non-profit organizations? How many individuals? Not many.

But in spite of the expense, in spite of the large company bias of the international standardization system, I saw reason for hope at the Geneva BRM. I saw younger participants, with fire in their bellies. I saw FOSS supporters from developing countries. I saw Linux on laptops. I saw participants from FOSSFA, SIUG, EFFI, ODF Alliance Brazil, COSS, etc. They joined their NBs, participated in their NB debates and were appointed to represent their countries in the BRM.

Sure, it is only a foot in the door. One in five BRM participants were Microsoft employees. But it was a hopeful sign. We’ve planted the seed. We must plant more. And we must see that they grow.

Strength in standards participation comes with time, with participation, with networking, with learning the rules (written and unwritten), with learning from others, and so on. Just as we have FOSS experts in software engineering, in law, in business, in training/education, we also need experts in standardization. Certainly the bread and butter participation will be from individual engineers, participating for the duration of a particular proposal or group of proposals. But we also need the institutional linchpin participants, those who have taken on leadership positions within standards organizations, and whose influence is broad and deep.

FOSS also needs a standards agenda. In a world of patent encumbered standards controlling the central networks, open source software dies, and dies quickly. We must protect and grow the open standards, for without them we cease to exist.

What standards are important? Which demand FOSS representation? Remember just a few weeks ago, when there was a lot of concern about how the DIS 29500 BRM added explicit mention of the patent-encumbered MP3 standard, but failed to mention Ogg Vorbis at all? Although I sympathize with this concern, the fact is the BRM could not have added Ogg Vorbis, because it is not an ISO standard. Are we willing to do more than lament about this? I tell you that if Ogg Vorbis had been an ISO standard it would have been explicitly added to OOXML at the BRM. Are we willing to do something about it?

What are the standards critical to FOSS, and what are we doing about it? What standards, existing or potential, should we be focusing on? I suggest the following for a start:

  1. Ogg Vorbis
  2. Ogg Theora
  3. PNG, ISO/IEC 15948
  4. ODF, ISO/IEC 26300
  5. PDF, ISO 32000
  6. Linux Standard Base (LSB), ISO/IEC 23360
  7. Most of the W3C Recommendations
  8. Most of the IETF RFCs

I’m sure you can suggest many others.

Let’s put it all together. Some ICT standards directly impact what we can do with our data and our digital lives. These are the Civil ICT Standards. We need to ensure that these standards remain open standards, so anyone can implement them freely. However, the standardization system, at both the national and international levels, is biased in favor of those large corporations best able to afford dedicated staff to work within those organizations and develop personal effectiveness and influence in the process. Showing up once a year is not going to work. If FOSS is going to maintain any level of influence in the formal standardization world, especially at the high-stakes international level, it needs to find a way to identify, nurture and support the participation of “Open Standards Activists”. The GNOME Foundation’s joining of Ecma, or KDE’s membership in OASIS, are examples of how this could work. Umbrella organizations like Digistan are also critical and can be a nucleus for standards activists. But what about taking this to the next level, to NB membership? Another example is the Linux Foundation’s Travel Fund, designed to sponsor attendance of FOSS developers at technical conferences. Imagine what could be done with a similar fund for attendance at standards meetings.

So that is my challenge to you on this first Document Freedom Day. We’re near the end of what promises to be one of many battles. The virtual networks of the future are just as lucrative as the railroad and telephone networks of the last century were. These include the network of compatible audio formats, or the network of IM users using a compatible protocol, or the network of users using a single open document format. If FOSS projects and organizations want to secure the value for their users that comes from being part of these networks, then FOSS projects must encourage the use of open standards, and must also encourage and nurture new talent for the next generation of open standards activists.

I’m looking forward to the day, soon, when I can search Google for “open standards activist” and not find a paid Microsoft shill among the listings on the first page.


Filed Under: Standards

What every engineer knows

2008/01/25 By Rob 25 Comments

Let’s work through a few hypothetical “what if” scenarios to illustrate some common engineering themes related to quality control and the inherent stresses between those who build, those who test, and those who sell. Every engineer is deeply familiar with these patterns, but I believe even the general reader will understand the dynamics better by reading these scenarios.

Let us start by imagining that a new bridge is being built in your area. The company that is building the bridge is very eager to have it open by a particular date. In fact, their contract calls for monetary penalties for every day the opening is delayed beyond that date. However, before it can be opened to traffic, it must be inspected to ensure that the welds conform to the applicable standard. For sake of argument let’s say the standard is the AASHTO/AWS D1.5M/D1.5:2002 Bridge Welding Code.

The inspectors may inspect all of the welds and find that they are all acceptable. What do you think of this, as someone who will soon ride over that bridge? Is this good news? Yes, if you trust the expertise and independence of the inspectors, and their testing process and equipment. If the inspectors do their job properly, and they find no defects, then this indeed is cause for celebration.

But what if the inspectors found a handful of defects, perhaps some welds that failed fatigue testing? If indeed the defects are few, and are localized, then they can be fixed and retested and we can still open the bridge on time. But it is critical that the changes are localized, that there are no far-reaching changes. A bridge is not just a collection of independent pieces of metal. The pieces all work together, and as a whole they have static and dynamic mechanical properties that relate to load capacity, stresses, thermal characteristics, resonance, etc. Although some fixes may be only localized in their impact, meaning only the area changed needs to be retested, other fixes may have a larger impact and require that everything be retested.

In any complex system, some defects are expected. A sign of a good engineering process is that larger, structural defects are detected or prevented at the earliest possible moment, when they are easiest and least expensive to fix. Where this is not accomplished, large design defects may first be detected at final inspection time, and costly and pervasive rework and retesting may be required, or in the extreme, the bridge may need to be torn down.

The engineering maxim is “fail early”. Now this may seem like an odd thing to say. Shouldn’t we always try to prevent failure or at least delay it as long as possible? Certainly, if you can prevent failure, then do so. But it is rarely the case that all defects can be prevented. As engineers, though, we can design systems and testing procedures so that flaws become evident as early in the process as possible, when they can be fixed in architecture and design documents rather than in built structures, or at least be found as early in the construction process as possible. This is a frequent source of stress between those who build and those who sell. The important thing for all to understand is that failing early is actually a form of risk reduction. The sooner you fail, the sooner you can fix the defect and start again.

Back to the analogy.

Let’s build another bridge. Along comes MegaCorp, who wants to build a bigger bridge, a much bigger bridge than any attempted previously, a MegaBridge. There is nothing wrong with that per se. The history of engineering is the history of making bigger pyramids, wider vaulted ceilings, taller skyscrapers and longer bridges.

Of course, the fact that MegaBridge is right down the street from the new bridge that just opened last week is a bit odd. But MegaCorp tells us that is OK. We’re not required to use their bridge if we don’t want to.

Further suppose MegaCorp also wants to construct this MegaBridge in record time, faster than others have constructed bridges even a fraction of its size. This is certainly ambitious, but there is no law against ambition. Progress is made by those who are ambitious. We learn from their successes as well as their failures. The important thing is that an ambitious MegaBridge is held to the same standards as any other bridge, that proper inspections are carried out and that quality criteria are satisfied.

Months later, the construction of MegaBridge is complete. Time for inspection. But one problem — the MegaBridge is so large that it is impossible to carry out an inspection in the scheduled time. There are simply not enough inspectors available to carry out the task and complete it by the targeted opening time.

What should we do?

It is useful at this time to consider another engineering maxim, “fail safe”. If a system is overloaded, or detects an error condition, it should fail to a safe state, a state least likely to cause damage. We see this applied in many of the systems we use every day. Traffic lights fail safe to flashing red, GFCI circuits fail safe by switching off current if a ground fault is detected, and train air brakes fail safe by applying the brakes if air pressure is lost.

The concept of a “fail safe” applies to processes as well as mechanical systems. A committee, by having a quorum requirement, ensures that it fails to a harmless, inactive state if a snowstorm prevents a representative portion of the committee from attending a meeting. A criminal trial, by presuming innocence and requiring a unanimous verdict to convict, ensures that in case of deadlock, the defendant is let free. Similarly, a bridge quality inspection protocol should include a fail safe provision, that if the inspection cannot be completed, the bridge should not be certified as fit for use. The inspection process should fail safe to non-certification.
Ordinarily, engineering practice would be to take whatever time is necessary to inspect the bridge fully, or fail the inspection.
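
The same idea shows up in software as a fail-safe default: when a check cannot be completed, the permissive outcome is never returned by accident. Here is a minimal sketch, with illustrative names not drawn from any real inspection system:

    # A fail-safe default, sketched in Python: if the inspection cannot be
    # completed within the time budget, the result is non-certification.
    def certify(welds, inspect, time_budget):
        """Certify only if every weld is inspected and passes within the budget."""
        for weld in welds:
            if time_budget <= 0:
                return False      # inspection incomplete: fail safe to "not certified"
            passed, cost = inspect(weld)
            time_budget -= cost
            if not passed:
                return False      # any defect blocks certification
        return True

    # Ten welds, one hour of inspection each, but only five hours available:
    print(certify(range(10), lambda w: (True, 1), time_budget=5))   # False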

(Here our tale diverges from standard engineering practice and starts to relay, by analogy, the increasingly bizarre tale of OOXML’s exploits in and of ISO.)

But MegaCorp wants the MegaBridge to open on time. They force the inspection to continue, even though the inspectors claim there is not enough time. In order to “help” the inspection and despite the obvious conflict of interest, MegaCorp instructs a large number of its own employees, qualified or unqualified, to volunteer as bridge inspectors. They further recruit employees from subsidiaries and suppliers to become inspectors as well. In at least one case, MegaCorp tells a supplier, newly-minted as an inspector, “Don’t worry if you know nothing about bridges. We’ll tell you what to say. All you need to do is say that the bridge is safe. You’ll be rewarded later for helping us here.”

So the bridge inspectors go out, old and new, qualified and unqualified and come back with their individual preliminary reports. The older, more experienced inspectors are critical in their evaluation:

The bridge is full of defects. Although, as we mentioned earlier, the mandated schedule did not permit us to test all of the critical welds, of the ones we did test, we found numerous defects. In fact, the number of defects we report is artificially low, since it was limited by our available inspection time. If we had been able to complete a full inspection, we would have detected and reported many more problems.

We further found pervasive structural problems. This bridge is unsound. We can not certify it. We further question why it is necessary to open up a new toll bridge at all, when we just opened up a new free bridge down the street.

The newly-minted inspectors, who for the most part are economically dependent on MegaCorp, were more supportive:

Although some minor problems were indicated, we believe these can all be fixed during routine maintenance. We are not concerned about the time permitted for inspection. We did what the process required. And when you count all the new inspectors that MegaCorp has brought to the process, no bridge has been more inspected. Considering the number of defects reported, this is the most-inspected bridge in history. We recommend that MegaBridge be certified and opened as scheduled.

Of course, from a quality control perspective, this is seriously flawed. The checks and balances between those who build, those who test and those who sell have been eliminated. Although it would not be unusual for some MegaCorp inspectors to be involved in the inspection process, the late arrival of so many unqualified, newly-minted inspectors, and the shift of balance to MegaCorp’s hand-picked inspectors, call into question the independence and technical sufficiency of the entire inspection process.

The inspectors are polled to see whether the bridge can be certified. The vote is close, but the answer is no, the MegaBridge cannot be certified in its current condition. The inspectors, mainly the older, more experienced ones, record a report of 3,522 specific defects in the MegaBridge, far more defects than have ever been found in any other bridge.

MegaCorp is irate. They blast the experienced inspectors in the press, while simultaneously reassuring their stockholders that this setback is just the next step forward to success. They give their engineers the inspection report and demand a quick response. “We must open the bridge on time!” they yell. The MegaCorp engineers work day and night, over weekends, over the holidays even, in order to develop written proposals to address each of the reported flaws in the bridge.

The inspectors are given the proposals and asked whether they believe the proposals are sufficient to allow the MegaBridge to be certified. Although the newly-minted inspectors are quick to affirm the adequacy of the proposal, the old-timers just shake their heads in disbelief, with one stating to the press:

You could fix every last defect in that report and the MegaBridge would still not be sound. Since we never inspected all of the critical welds in the first place, fixing only the defects we reported is insufficient. It is not enough for us to merely retest the ones we reported as defective. We need to test all of them.

Also, you are making pervasive changes to the road surface, the suspension materials and the pillar diameters, far-reaching design changes which were clearly rushed and have not gone through normal review procedures. I’m afraid that all of our previous tests are now invalidated as well.

Additionally, many of your proposals either avoid addressing the flaws, paper around the flaws, or even introduce new flaws. We need to re-certify the new design before we can even think about retesting the bridge.

However, considering the huge number of defects reported, the even larger number of defects undetected because of lack of inspection time, the questionable competency of the newly-minted inspectors, and the overt corruption of the process by MegaCorp, my recommendation would be to tear this thing down before it falls over and hurts someone.

Thus ends the tale of what every engineer knows.



Filed Under: OOXML, Standards

A Brief History of Open

2008/01/03 By Rob Leave a Comment

Circa 1700 BC, the Babylonian king Hammurabi ordered the laws of his kingdom to be engraved on a black stone slab and displayed in the city center for all to see. This was mostly for show, since the number of people who could read Akkadian cuneiform was probably as small then as now. But the symbolism was clear: the Law is fixed (indeed carved in stone) and not improvised at the whim of the magistrate. Of course, the Law was not popularly determined, and certainly all were not treated equally, but still this was progress.

1215 A.D., along the Thames in Surrey, at the meadow called Runnymede, a group of English Barons joined their powers together to force their monarch, King John, to affix his seal to Magna Carta, establishing that even the King himself was bound by the Law.

Philadelphia, the hot summer of 1787, representatives from 12 of the 13 American states met in Convention, at first with the aim of enhancing their current loose affiliation, but eventually agreeing upon a much more ambitious Federal form of government in their proposed Constitution. This document was far from perfect and needed ten major additions before it was considered ready for use (thus the Bill of Rights). Over the next 200 years additional problems were detected and fixed (via further Amendments) according to a process that emphasized openness and the participation of all concerned parties.

These of course are three examples of the progress of openness. All can be called “open” but they are not all of the same degree of “openness”.

For example, the Code of Hammurabi was open in a sense, since it was publicly documented for all to read. But this openness is of small consequence to the builder’s son who could be legally stoned to death if a house his father built collapsed and killed its owner’s son. The Code was “open” but still allowed crimes committed by one person to be judicially imputed onto another party.

Similarly, the U.S. Constitution was even more open, since it was publicly documented and also well-deliberated and formed as part of a consensus process. But it still allowed slavery and denied the majority of its population the right to vote.

At the risk of falling into a teleological argument that sees all of human history leading inexorably to modern America, it does seem that the general flow of history has been:

  1. A move from undocumented or improvised laws to laws that are fixed and publicly documented.
  2. A move from laws created by a single entity to laws formed as part of a deliberative, multilateral, consensual process.
  3. A move toward increasing inclusiveness as to whose interests are considered.

So we should never stop at a claim of “openness” and say that, with the mere application of this label, all diligence has been performed. You need to ask yourself always: whose interests have been taken into account? All? Many? Few? One?

There seems to me to be a natural parallel here with the “open standard” moniker. Is it a single fixed and unitary concept that admits of no degrees? Or are there a wide range of standards which share the concept “open” to one degree or another? How thinly can the concept be diluted? Can it be homeopathically prepared, with one drop enough to inoculate gallons?

I think the key is to move away from the mere consideration of the process of standardization and to also consider the content of the standard. Just as a Constitution that held that women could not vote was far from open, even though it was drafted in an open committee process, a standard that does not facilitate use by competitors is not open, regardless of the process that created it. We need to move beyond strictly process-oriented definitions of openness and bring in considerations of content and results. A standard can be per-se non-open if its content violates important principles of openness.


Filed Under: Standards


