An Antic Disposition


Microsoft

Yet Another Browser Choice Fail

2010/04/18 By Rob 14 Comments

A few weeks ago I wrote about Microsoft’s “browser choice” ballot page in Europe, which in its debut used a flawed algorithm when attempting to perform a “random shuffle” of the browser choices, a feature specifically called for in their agreement with the EU.  This bug was fixed soon after it was reported.  But I recently received an email from a correspondent going by the name “Skoon” who reported a more serious bug, but one that is seen only in the Polish-language translation of the ballot choice screen.

You can go directly to this version of the page via this URL: www.browserchoice.eu/BrowserChoice/browserchoice_pl.htm.  Try loading it a few times.  Does it look random to you?  I tried it in Internet Explorer, Firefox, Chrome and Opera and got the same result each time.  The order is unchanging: Internet Explorer always first, followed by Firefox, Opera, Chrome and Safari, in that order.  There is no shuffling going on at all.

I won’t bore you with the details of why this is so.  Let’s just say that this is a JavaScript error involving a failure to properly escape embedded quotations in one of the browser descriptions.  Because of the error, the script aborts and the randomization routine is never called.
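For illustration only — this is not the actual Polish string from the page — the failure mode looks something like this:

// Broken: the unescaped inner quotes end the string literal early, so the
// parser rejects the entire script and the randomization routine never runs.
var label = "Przeglądarka "Firefox" jest szybka";

// Fixed: escape the embedded quotes (or use single quotes around the string).
var label = "Przeglądarka \"Firefox\" jest szybka";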

See if you can find the bug.  Hint: turn on JavaScript error checking in your browser (e.g., Tools > Error Console in Firefox) and the error will pop out immediately.

If you can detect this error in 30 seconds by enabling Internet Explorer’s own JavaScript error detection facility — and I believe you can — then we can assume that anyone could have done this, even Microsoft.  The odd thing is that evidently no one at Microsoft bothered to check this page for JavaScript errors, or even check the page to see if it actually worked.  We’re not talking about sophisticated statistical testing here.  Any QA on the page, any at all, would have found this error.


Filed Under: Microsoft

The New & Improved Microsoft Shuffle

2010/03/06 By Rob 27 Comments

A quick update on my post from last week on the “Microsoft Shuffle“, where I looked at how Microsoft’s “random” browser ballot was far from random.

First, I’d like to thank those who commented on that post, or sent me notes, offering additional analysis. I think we nailed this one. Within a few days of my report Microsoft updated their JavaScript on the browserchoice.eu website, fixing the error. But more on that in a minute.

Some random observations

Several commenters mentioned that if you search Google for “javascript random array sort” the first link returned will be a JavaScript tutorial that has the same offending code as Microsoft’s algorithm. This is not surprising. As I said in my original post, this is a well-known mistake. But it is no less a mistake. If you use Google Code Search for the query “0.5 - Math.random()” lang:javascript you will find 50 or so other instances of the faulty algorithm. So if anyone else is using this same algorithm, they should evaluate whether it is really sufficiently random for their needs. In some cases, such as a children’s game, it might be fine. But know that there are better and faster algorithms available that are not much more complicated to code.
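If you want to reproduce the effect without hunting down a test page, a rough sketch of such a tally follows; the function and variable names here are mine, not taken from any of the pages mentioned in this post:

function tallyMicrosoftShuffle(size, iterations)
{
    // counts[i][j] = how often the element that started at index i ended up at position j
    var counts = [];
    for (var i = 0; i < size; i++)
    {
        counts[i] = [];
        for (var j = 0; j < size; j++) counts[i][j] = 0;
    }
    for (var n = 0; n < iterations; n++)
    {
        var a = [];
        for (var k = 0; k < size; k++) a[k] = k;                     // start from the identity ordering
        a.sort(function () { return 0.5 - Math.random(); });         // the faulty "random" comparator
        for (var pos = 0; pos < size; pos++) counts[a[pos]][pos]++;  // tally where each element landed
    }
    return counts;
}

// For example, counts[0][size - 1] / iterations is how often the original
// first element ends up in last place; on some engines it is far above 1/size.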

Another thing to note is that the Microsoft Shuffle algorithm is bad enough with 5 elements in the array, but the non-randomness gets more pronounced as you increase the length of the array. Regardless of the size of the array, it appears that on Internet Explorer the 1st element will end up in last place 50% of the time. There are other pronounced patterns as well. You can see this for yourself with this test file, which allows you to specify the size of the array as well as the number of iterations. Try a 50-element array for 10,000 iterations to get a good sense of how non-random the results can be.

I used that script to run a large test of 1,000,000 iterations of a 1024-element array. The raw results are here. I took that table, and using R’s image() function produced a rendering of that matrix. You can see here the clear over-representation at some positions, including (in the lower left) the flip of the first position to last place. (I’m not quite satisfied with this rendering. Maybe someone can get a better-looking visualization of this same data.)

Evaluating Microsoft’s new shuffle

Sometime last week — I don’t know the exact date — Microsoft updated the code for the browser choice website with a new random shuffle algorithm. You can see the code, in situ, here. The core of it is in this function:

function ArrayShuffle(a)
{
    var d, c, b=a.length;
    while(b)
    {
        c=Math.floor(Math.random()*b);
        d=a[--b];
        a[b]=a[c];
        a[c]=d
     }
}

This looks fine to me. I created a new test driver for this routine, which you can try out here. Aside from being much faster, it gives much better results. Here is a run with a million iterations:

 

Raw counts

Position I.E. Firefox Opera Chrome Safari
1 199988 200754 199944 199431 199883
2 200320 200016 199838 199752 200074
3 199702 199680 199911 200865 199842
4 200408 200286 199740 199861 199705
5 199582 199264 200567 200091 200496

Fraction of total

Position I.E. Firefox Opera Chrome Safari
1 0.2000 0.2008 0.1999 0.1994 0.1999
2 0.2003 0.2000 0.1998 0.1998 0.2001
3 0.1997 0.1997 0.1999 0.2009 0.1998
4 0.2004 0.2003 0.1997 0.1999 0.1997
5 0.1996 0.1993 0.2006 0.2001 0.2005

And the results of the Chi-square test:

X-squared = 18.9593, df = 16, p-value = 0.2708

Final thoughts

In the end I don’t think it is reasonable to expect every programmer to memorize the Fisher-Yates algorithm. These things belong in our standard libraries.   But what I would expect every programmer to know is:

  1. That the problem here is one that requires a “random shuffle”. If you don’t know what it is called, then it will be difficult to look up the known approaches. So this is partially a vocabulary problem. We, as programmers, have a shared vocabulary which we use to describe data structures and algorithms: binary searches, priority heaps, tries, and dozens of other concepts. I don’t blame anyone for not memorizing algorithms, but I would expect a programmer to know what types of algorithms apply to their work.
  2. How to research which algorithm to use in a specific context, including where to find reliable information, how to evaluate the classic trade-offs of time and space, etc.  There is almost always more than one way to solve a problem.
  3. That where randomized outputs are needed, the outputs should be statistically tested. I would not expect the average programmer to know how to do a chi-square test, or even to know what one is. But I would expect a mature programmer to know to either find this out or to seek help.

Filed Under: Microsoft

Doing the Microsoft Shuffle: Algorithm Fail in Browser Ballot

2010/02/27 By Rob 189 Comments

March 6th Update:  Microsoft appears to have updated the www.browserchoice.eu website and corrected the error I describe in this post.  More details on the fix can be found in The New & Improved Microsoft Shuffle.  However, I think you will still find the following analysis interesting.

-Rob


Introduction

The story first hit last week on the Slovakian tech site DSL.sk.  Since I am not linguistically equipped to follow the Slovakian tech scene, I didn’t hear about the story until it was brought up in English on TechCrunch.  The gist of these reports is this: DSL.sk did a test of the “ballot” screen at www.browserchoice.eu, used in Microsoft Windows 7 to prompt the user to install a browser.  It was a Microsoft concession to the EU, to provide a randomized ballot screen for users to select a browser.  However, the DSL.sk test suggested that the ordering of the browsers was far from random.

But this wasn’t a simple case of Internet Explorer showing up more in the first position.  The non-randomness was pronounced, but more complicated.  For example, Chrome was more likely to show up in one of the first 3 positions.  And Internet Explorer showed up 50% of the time in the last position.  This isn’t just a minor case of it being slightly non-random.  Try this test yourself: load www.browserchoice.eu in Internet Explorer and press refresh 20 times.  Count how many times the Internet Explorer choice is on the far right.  Can this be right?

The DSL.sk findings have led to various theories, based on the likely mistaken assumption that the non-randomness is intentional.  Does Microsoft have secret research showing that the 5th position is actually chosen more often?  Is the Internet Explorer random number generator not random?  There were also comments asserting that the tests proved nothing, and the results were just chance, and others saying that the results are expected to be non-random because computers can only make pseudo-random numbers, not genuinely random numbers.

Maybe there was cogent technical analysis of this issue posted someplace, but if there was, I could not find it.  So I’m providing my own analysis here, a little statistics and a little algorithms 101.  I’ll tell you what went wrong, and how Microsoft can fix it.  In the end it is a rookie mistake in the code, but it is an interesting mistake that we can learn from, so I’ll examine it in some depth.

Are the results random?

The ordering of the browser choices is determined by JavaScript code on the BrowserChoice.eu web site.  You can see the core function in the GenerateBrowserOrder function.  I took that function and its supporting functions, put them into my own HTML file, added some test driver code and ran it for 10,000 iterations on Internet Explorer.  The results are as follows:

Internet Explorer raw counts
Position I.E. Firefox Opera Chrome Safari
1 1304 2099 2132 2595 1870
2 1325 2161 2036 2565 1913
3 1105 2244 1374 3679 1598
4 1232 2248 1916 590 4014
5 5034 1248 2542 571 605
Internet Explorer fraction of total
Position I.E. Firefox Opera Chrome Safari
1 0.1304 0.2099 0.2132 0.2595 0.1870
2 0.1325 0.2161 0.2036 0.2565 0.1913
3 0.1105 0.2244 0.1374 0.3679 0.1598
4 0.1232 0.2248 0.1916 0.0590 0.4014
5 0.5034 0.1248 0.2542 0.0571 0.0605

This confirms the DSL.sk results.  Chrome appears more often in one of the first 3 positions and I.E. is most likely to be in the 5th position.

You can also see this graphically in a 3D bar chart:

But is this a statistically significant result?  I think most of us have an intuitive feeling that results are more significant if many tests are run, and if the results deviate greatly from an even distribution of positions.  On the other hand, we also know that a finite run of even a perfectly random algorithm will not give a perfectly uniform distribution.  It would be quite unusual if every cell in the above table was exactly 2,000.

This is not a question one answers with debate.  To go beyond intuition you need to perform a statistical test.  In this case, a good test is Pearson’s Chi-square test, which tests how well observed results match a specified distribution.  In this test we assume the null-hypothesis that the observed data is taken from a uniform distribution.  The test then tells us the probability that the observed results can be explained by chance.  In other words, what is the probability that the difference between observation and a uniform distribution was just the luck of the draw?  If that probability is very small, say less than 1%, then we can say with high confidence, say 99% confidence, that the positions are not uniformly distributed.   However, if the test returns a larger number, then we cannot disprove our null-hypothesis.  That doesn’t mean the null-hypothesis is true.  It just means we can’t disprove it.  In the end we can never prove the null hypothesis.  We can only try to disprove it.
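The p-values quoted in this post come from R, but the test statistic itself is simple arithmetic. Here is a sketch of what R’s chisq.test() computes when handed a contingency table like the ones above; the p-value still has to be looked up from a chi-square distribution with the given degrees of freedom, so this only reproduces the “X-squared” and “df” numbers:

function chiSquareTable(obs)
{
    // obs is an array of rows of counts, e.g. the 5x5 position-by-browser table above
    var r = obs.length, c = obs[0].length, total = 0;
    var rowSum = [], colSum = [];
    for (var i = 0; i < r; i++)
    {
        rowSum[i] = 0;
        for (var j = 0; j < c; j++)
        {
            rowSum[i] += obs[i][j];
            colSum[j] = (colSum[j] || 0) + obs[i][j];
            total += obs[i][j];
        }
    }
    var x2 = 0;
    for (i = 0; i < r; i++)
    {
        for (j = 0; j < c; j++)
        {
            var expected = rowSum[i] * colSum[j] / total;   // expected cell count
            x2 += (obs[i][j] - expected) * (obs[i][j] - expected) / expected;
        }
    }
    return { statistic: x2, df: (r - 1) * (c - 1) };        // df = 16 for a 5x5 table
}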

Note also that having a uniform distribution is not the same as having uniformly distributed random positions.  There are ways of getting a uniform distribution that are not random, for example, by treating the order as a circular buffer and rotating through the list on each invocation.  Whether or not randomization is needed is ultimately dictated by the architectural assumptions of your application.  If you determine the order on a central server and then serve out that order on each invocation, then you can use non-random solutions, like the rotating circular buffer.  But if the ordering is determined independently on each client, for each invocation, then you need some source of randomness on each client to achieve a uniform distribution overall.  But regardless of how you attempt to achieve a uniform distribution, the way to test it is the same: the Chi-square test.
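For what it is worth, the server-side rotation idea looks roughly like this sketch; the names are illustrative and a single shared counter is assumed:

var requestCounter = 0;

function rotatedOrder(browsers)
{
    var offset = requestCounter++ % browsers.length;
    // offset 2 on ["IE", "Firefox", "Opera", "Chrome", "Safari"]
    // yields ["Opera", "Chrome", "Safari", "IE", "Firefox"]
    return browsers.slice(offset).concat(browsers.slice(0, offset));
}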

Using the open source statistical package R, I ran the  chisq.test() routine on the above data.  The results are:

X-squared = 13340.23, df = 16, p-value < 2.2e-16

The p-value is much, much less than 1%.  So, we can say with high confidence that the results are not random.

Repeating the same test on Firefox gives results that are also non-random, but in a different way:

Firefox raw counts
Position I.E. Firefox Opera Chrome Safari
1 2494 2489 1612 947 2458
2 2892 2820 1909 1111 1268
3 2398 2435 2643 1891 633
4 1628 1638 2632 3779 323
5 588 618 1204 2272 5318
Firefox fraction of total
Position I.E. Firefox Opera Chrome Safari
1 0.2494 0.2489 0.1612 0.0947 0.2458
2 0.2892 0.2820 0.1909 0.1111 0.1268
3 0.2398 0.2435 0.2643 0.1891 0.0633
4 0.1628 0.1638 0.2632 0.3779 0.0323
5 0.0588 0.0618 0.1204 0.2272 0.5318

On Firefox, Internet Explorer is more frequently in one of the first 3 positions, while Safari is most often in last position.  Strange.  The same code, but vastly different results.

The results here are also highly significant:

X-squared = 14831.41, df = 16, p-value < 2.2e-16

So given the above, we know two things:  1) The problem is real.  2) The problem is not related to a flaw only in Internet Explorer.

In the next section we look at the algorithm and show what the real problem is, and how to fix it.

Random shuffles

The browser choice screen requires what we call a “random shuffle”.  You start with an array of values and return those same values, but in a randomized order. This computational problem has been known since the earliest days of computing.  There are 4 well-known approaches: 2 good solutions, 1 acceptable (“good enough”) solution that is slower than necessary, and 1 bad approach that doesn’t really work.  Microsoft appears to have picked the bad approach. But I do not believe there is some nefarious intent to this bug.  It is more in the nature of a “naive” algorithm, like the bubble sort, that inexperienced programmers will inevitably fall upon when solving a given problem.  I bet if we gave this same problem to 100 freshmen computer science majors, at least one of them would make the same mistake.  But with education and experience, one learns about these things.  And one of the things one learns early on is to reach for Knuth.


The Art of Computer Programming, Vol. 2, section 3.4.2 “Random sampling and shuffling” describes two solutions:

  1. If the number of items to sort is small, then simply put all possible orderings in a table and select one ordering at random.  In our case, with 5 browsers, the table would need 5! = 120 rows.
  2. “Algorithm P” which Knuth attributes to Moses and Oakford (1963), but is now known to have been anticipated by Fisher and Yates (1938) so it is now called the Fisher-Yates Shuffle.

Another solution, one I use when I need a random shuffle in a database or spreadsheet, is to add a new column, fill that column with random numbers and then sort by that column.  This is very easy to implement in those environments. However, sorting is an O(N log N) operation whereas the Fisher-Yates algorithm is O(N), so you need to keep that in mind if performance is critical.
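In JavaScript, the same “random column” trick can be sketched like this: pair each value with a random key, sort by the key, then strip the keys. It is O(N log N), but unlike sorting with a random comparator, each ordering is (for all practical purposes) equally likely. The function name is mine, used only for illustration:

function shuffleByRandomKey(a)
{
    return a.map(function (value) { return { value: value, key: Math.random() }; })
            .sort(function (x, y) { return x.key - y.key; })
            .map(function (pair) { return pair.value; });
}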

Microsoft used none of these well-known solutions in their shuffle.  Instead they fell for the well-known trap.  What they did is sort the array, but with a custom-defined comparison function or “comparator”.  JavaScript, like many other programming languages, allows a custom comparator function to be specified.  In the case of JavaScript, this function takes two values from the array and returns a number which is:

  • <0 if the first value should be sorted before the second value
  • 0 if the two values are equal, which is to say you are indifferent as to what order they are sorted in
  • >0 if the first value should be sorted after the second value

This is a very flexible approach, and allows the programmer to handle all sorts of sorting tasks, from making case-insensitive sorts to defining locale-specific collation orders, etc.
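For contrast with what follows, here is an ordinary, self-consistent comparator (a case-insensitive sort); given the same pair of values it always gives the same answer, which is exactly the property Microsoft’s comparator gives up:

["Opera", "chrome", "Safari", "firefox"].sort(function (a, b)
{
    var x = a.toLowerCase(), y = b.toLowerCase();
    if (x < y) return -1;   // first value sorts before the second
    if (x > y) return 1;    // first value sorts after the second
    return 0;               // order between the two does not matter
});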

In this case Microsoft gave the following comparison function:

function RandomSort (a,b)
{
    return (0.5 - Math.random());
}

Since Math.random() should return a random number chosen uniformly between 0 and 1, the RandomSort() function will return a random value between -0.5 and 0.5.  If you know anything about sorting, you can see the problem here.  Sorting requires a self-consistent definition of ordering. The following assertions must be true if sorting is to make any sense at all:

  1. If a<b then b>a
  2. If a>b then b<a
  3. If a=b then b=a
  4. if a<b and b<c then a<c
  5. If a>b and b>c then a>c
  6. If a=b and b=c then a=c

All of these statements are violated by the Microsoft comparison function.  Since the comparison function returns random results, a sort routine that depends on any of these logical implications would receive inconsistent information regarding the progress of the sort.  Given that, the fact that the results were non-random is hardly surprising.  Depending on the exact sort algorithm used, it may just do a few exchange operations and then prematurely stop.  Or, it could be worse.  It could lead to an infinite loop.

Fixing the Microsoft Shuffle

The simplest approach is to adopt a well-known and respected algorithm like the Fisher-Yates Shuffle, which has been known since 1938.  I tested with that algorithm, using a JavaScript implementation taken from the Fisher-Yates Wikipedia page.
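The implementation I used is essentially equivalent to the following sketch (not a verbatim copy of the Wikipedia code): walk the array from the end, swapping each element with a randomly chosen element at or below it.

function fisherYatesShuffle(a)
{
    for (var i = a.length - 1; i > 0; i--)
    {
        var j = Math.floor(Math.random() * (i + 1));   // pick a random index from 0..i
        var tmp = a[i];                                // swap a[i] and a[j]
        a[i] = a[j];
        a[j] = tmp;
    }
    return a;
}

Running it for 10,000 iterations in Internet Explorer gives the following results: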

Internet Explorer raw counts
Position I.E. Firefox Opera Chrome Safari
1 2023 1996 2007 1944 2030
2 1906 2052 1986 2036 2020
3 2023 1988 1981 1984 2024
4 2065 1985 1934 2019 1997
5 1983 1979 2092 2017 1929
Internet Explorer fraction of total
Position I.E. Firefox Opera Chrome Safari
1 0.2023 0.1996 0.2007 0.1944 0.2030
2 0.1906 0.2052 0.1986 0.2036 0.2020
3 0.2023 0.1988 0.1981 0.1984 0.2024
4 0.2065 0.1985 0.1934 0.2019 0.1997
5 0.1983 0.1979 0.2092 0.2017 0.1929

Applying Pearson’s Chi-square test we see:

X-squared = 21.814, df = 16, p-value = 0.1493

In other words, these results are not significantly different than a truly random distribution of positions.  This is good.  This is what we want to see.

Here it is, in graphical form, to the same scale as the “Microsoft Shuffle” chart earlier:

Summary

The lesson here is that getting randomness on a computer cannot be left to chance.  You cannot just throw Math.random() at a problem and stir the pot, and expect good results.  Random is not the same as being casual.  Getting random results on a deterministic computer is one of the hardest things you can do with a computer and requires deliberate effort, including avoiding known traps.  But it also requires testing.  Where serious money is on the line, such as with online gambling sites, random number generators and shuffling algorithms are audited, tested and subject to inspection.  I suspect that the stakes involved in the browser market are no less significant.  Although I commend DSL.sk for finding this issue in the first place, I am astonished that the bug got as far as it did.  This should have been caught far earlier, by Microsoft, before this ballot screen was ever made public.  And if the EC is not already demanding a periodic audit of the aggregate browser presentation orderings, I think that would be a prudent thing to do.

If anyone is interested, you can take a look at the file I used for running the tests.  You type in an iteration count and press the execute button.  After a (hopefully) short delay you will get a table of results, using the Microsoft Shuffle as well as the Fisher-Yates Shuffle.  With 10,000 iterations you will get results in around 5 seconds.  Since all execution is in the browser, use larger numbers at your own risk.  At some large value you will presumably run out of memory, time out, hang, or otherwise get an unsatisfactory experience.


Filed Under: Microsoft, Popular Posts Tagged With: algorithm, chi square test, internet explorer, javascript, random number generator, shuffling, statistics

Monopoly Freedom Day

2009/01/23 By Rob 5 Comments

Each year the Tax Foundation, a 70-year-old nonpartisan tax research group based in Washington, D.C., issues a press release on Tax Freedom Day, the day in the year by which the average American worker has earned enough to pay their taxes for the year. In 2008 Tax Freedom Day was April 23rd. Of course, this is a rhetorical device, since we really pay taxes throughout the year, at various rates, and not all at once, but it is a useful device that illustrates the relationship between wages and taxes.

So, this got me thinking whether this same analysis could be applied to what I’ve been calling the “Monopoly Tax”, the excess price we pay for products from a monopolist when adequate open source alternatives are available.

Let me take a stab at the analysis. First, let’s look at the price of two entry-level PC’s, identical except that one comes pre-installed with Ubuntu and OpenOffice, while the other comes pre-installed with Microsoft Windows and Microsoft Office. I’ll use Dell’s Inspiron 530/530n as an example. Same chip, same RAM, same monitor, same drive, same graphics, everything the same, except one comes with Linux and the other has the Microsoft software.

  • Dell Inspiron 530 with Microsoft Vista/Microsoft Office = $818.00
  • Dell Inspiron 530N with Ubuntu/OpenOffice = $428.00

So the “monopoly tax” in this case is $390, or 48% of the total cost of the system. Now that amount is probably not going to crush you or me. But for a student, a small town library strapped for funds, the recently unemployed, or a family in the developing world, this is a huge difference.

We can further quantify this by calculating the date of “Monopoly Freedom Day” for countries around the world, based on per-capita income. If you purchase a new PC on January 1st, you will work up until Monopoly Freedom Day just to pay the excess cost of the non-open source software. Up until Monopoly Freedom Day the fruits of your labors are not going to you, your family or your community. Your wages are going to Redmond, to fatten the stock portfolio of the wealthiest man in America. Think about it. You know that Microsoft has. With the poorest countries in the world being the ones who would benefit most from using open source to avoid paying the Monopoly Tax, Microsoft has started a new “Scramble for Africa” in order to lock them into a costly cycle of technological dependency in a new colonialist campaign.
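I won’t show the full spreadsheet, but the calculation behind the table below is presumably something like this sketch: divide the $390 excess by a country’s per-capita income per day, and count that many days forward from January 1st. The function name is mine, and the income figure in the example is made up, not a sourced statistic:

function monopolyFreedomDay(annualPerCapitaIncome)
{
    var monopolyTax = 390;                                          // the Vista/Office premium from the Dell example above
    var daysOfIncome = Math.round(monopolyTax / (annualPerCapitaIncome / 365));
    var date = new Date(2009, 0, 1);                                // start from January 1st
    date.setDate(date.getDate() + daysOfIncome);
    return date;
}

// Example with a made-up income figure:
// monopolyFreedomDay(46000) lands in the first week of January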

Country                  Monopoly Freedom Day
Luxembourg               Jan 02
Ireland                  Jan 04
Norway                   Jan 04
United States            Jan 04
Switzerland              Jan 04
Qatar                    Jan 04
Austria                  Jan 04
Denmark                  Jan 04
Netherlands              Jan 04
Finland                  Jan 04
United Kingdom           Jan 04
Canada                   Jan 04
Belgium                  Jan 04
Singapore                Jan 04
United Arab Emirates     Jan 05
Greece                   Jan 05
Australia                Jan 05
Japan                    Jan 05
Israel                   Jan 05
France                   Jan 05
Germany                  Jan 05
Italy                    Jan 05
Cyprus                   Jan 05
Spain                    Jan 05
New Zealand              Jan 06
Slovenia                 Jan 06
Korea                    Jan 06
Czech Republic           Jan 06
Portugal                 Jan 06
Malta                    Jan 07
Kuwait                   Jan 07
Barbados                 Jan 07
Trinidad and Tobago      Jan 08
Argentina                Jan 09
Saudi Arabia             Jan 09
Poland                   Jan 09
Croatia                  Jan 10
Mauritius                Jan 11
South Africa             Jan 11
Chile                    Jan 11
Russia                   Jan 11
Uruguay                  Jan 12
Malaysia                 Jan 12
Costa Rica               Jan 12
Mexico                   Jan 12
Romania                  Jan 13
Bulgaria                 Jan 13
Kazakhstan               Jan 14
Brazil                   Jan 14
Belarus                  Jan 15
Bosnia and Herzegovina   Jan 15
Turkey                   Jan 15
Thailand                 Jan 15
Tunisia                  Jan 15
Panama                   Jan 16
Iran                     Jan 16
Colombia                 Jan 17
China                    Jan 17
Ukraine                  Jan 17
Azerbaijan               Jan 17
Venezuela                Jan 18
Peru                     Jan 20
Serbia                   Jan 20
Fiji                     Jan 24
Morocco                  Jan 24
Lebanon                  Jan 24
Jordan                   Jan 24
Sri Lanka                Jan 25
Armenia                  Jan 25
Philippines              Jan 25
Egypt                    Jan 27
Ecuador                  Jan 29
Jamaica                  Jan 31
Syrian Arab Republic     Feb 01
India                    Feb 04
Cuba                     Feb 04
Vietnam                  Feb 08
Ghana                    Feb 18
Pakistan                 Feb 18
Uzbekistan               Feb 26
Zimbabwe                 Mar 01
Bangladesh               Mar 04
Côte d’Ivoire            Mar 24
Kenya                    Apr 08
Nigeria                  Apr 22
Congo                    Jun 09
Tanzania                 Jun 13


Filed Under: Microsoft, Open Source

Embrace the Reality and Logic of Choice

2008/04/30 By Rob 9 Comments

Another neo-colonialist press release from Microsoft’s CompTIA lobbying arm, this time inveighing against South Africa’s adoption of ODF as a national standard. One way to point out the absurdity of their logic is to replace the reference to ODF with references to any other useful standard that a government might adopt, like electrical standards.

When we do this, we end up with the following.


South Africa Electrical Current Adoption Outdated

South Africa’s recent adoption of the 230V/50Hz residential electrical standard represents a tack that will blunt innovation, much needed for their developing economy. The policy choice – which actually reduces electrical current choice – runs contrary to worldwide policy trends, where multiple electrical standards rule, thus threatening to separate South Africa from the wealth creating abilities of the global electrical industry.

For MonPrevAss, the Monopoly Preservation Association, the overall concern for the global electrical industry is to ensure that lawmakers adopt flexible policies and set policy targets rather than deciding on fixed rules, technologies and different national standards to achieve these targets. Such rigid approaches pull the global electrical market apart rather than getting markets to work together and boost innovation for consumers and taxpayers. “The adoption sends a negative signal to a highly innovative sector” says I.M. Atool, MonPrevAss’s Group Director, Public Policy EMEA.

The “South African Bureau of Standards” (SABS) approved the 230V/50Hz residential electrical standard on Friday 18 April as an official national standard. This adoption, if implemented, will reduce choice, decrease the benefits of open competition and thwart innovation. The irony here is that South Africa is moving in a direction which stands in stark relief to the reality of the highly dynamic market, with some 40 different electrical current conventions available today.

“Multiple co-existing electrical standards as opposed to only one standard should be favoured in the interest of users. The markets are the most efficient in creating electrical standards and it should stay within the exclusive hands of the market”, I.M. Atool explains.

In light of the recent ISO/IEC adoption of the Microsoft 240V/55Hz electrical standard, the South African decision will not lead to improvements in the electrical sector. MonPrevAss urges Governments to allow consumers and users to decide which electrical standards are best. We fear that the choice of just one electrical standard runs the risk of being outdated before it is even implemented, as well as being prohibitively costly to public budgets and taxpayers.

Governments should not restrict themselves to working with one electrical standard, and should urge legislators to refrain from any kind of mandatory regulation and discriminatory interventions in the market. The global electrical industry recommends Governments to embrace the reality and logic of choice and to devote their energies to ensuring interoperability through this choice.


Of course, this is just a rehash of an old logical fallacy, related to the old “Broken Windows” fallacy. It is like saying heart disease is a good thing because you have such a wide choice of therapies to treat it. We would all agree that it is far preferable to be healthy and have a wide choice of activities that you want to do, rather than a wide choice of solutions to a problem that you never asked for and don’t want.

Consumers don’t want a bag of adapters to convert between different formats and protocols. That is giving consumers a choice in a solution to an interoperability problem they didn’t ask for and don’t want. Consumers want a choice of goods and services.

Observe the recent standards war with Blu-ray and HD DVD. Ask yourself:

  1. Did consumers want a choice in formats, or did they want a wider choice in players and high definition movies?
  2. Did movie studios want a choice in formats and either the uncertainty over choosing the winner, or the expense of supporting both formats? Or did they really just want a single format that would allow them to reach all consumers?
  3. Did the uncertainty around the existence of two competing high definition formats help or hurt the adoption of high definition technologies in general?
  4. Did consumers who made the early choice to go with HD DVD, say Microsoft XBox owners, benefit from having this choice?

If every private individual, and every private business has the right to adopt technology standards according to their needs, why should governments be denied that same right? Why should they be forced to take the only certain losing side of every standards war — implementing all standards indiscriminately — a choice that no rational business owner would make?

How many spreadsheet formats does Microsoft use internally to run its own business? Why should governments be denied choice in the same field where Microsoft itself exercises its right to choose?


Filed Under: Microsoft, Standards

