An Antic Disposition


Numismatics

Essential and Accidental in Standards

2007/02/25 By Rob 16 Comments

The earliest standards were created to support the administration of the government, which in antiquity primarily consisted of religion, justice, taxation and warfare. Crucial standards included the calendar, units of length, area, volume and weight, and uniform coinage.

Uniform coinage in particular was a significant advance. Previously, financial transactions occurred only by barter or by exchanging lumps of metal of irregular purity, size and shape, called “aes rude”. With the mass production of coins of uniform purity and weight, imprinted with the emperor’s portrait, money could now be exchanged by simply counting, a vast improvement over having to determine the purity and weight of an arbitrary lump of metal. Standards reduced the friction of commercial transactions.

Cosmas Indicopleustes, a widely-traveled merchant, later a monk, writing in the 6th Century, said:

The second sign of the sovereignty which God has granted to the Romans is that all nations trade in their currency, and in every place from one end of the world to the other it is acceptable and envied by every other man and every kingdom

“You can see a lot just by observing,” as Yogi Berra once said. A coin can be read much like a book. So, what can you see by reading a coin, and what does this tell us about standards?

To the left are examples from my collection of a single type of coin. The first picture shows the obverse of one instance, followed by the reverse of eight copies of the same type.

The legend on the obverse is “FLIVLCONSTANTIVSNOBC”. The text is highly abbreviated and there are no breaks between the words, as is typical of classical inscriptions, whether on coins or monuments. Brass or marble was expensive, so space was not wasted. We can expand this inscription to “Flavius Julius Constantius Nobilissimus Caesar”, which translates to “Flavius Julius Constantius, Most Noble Caesar”.

So this is a coin of Constantius II (317-361 A.D.), the middle son of Constantine the Great. The fact that he is styled “Caesar” rather than “Augustus” indicates that this coin dates from his days as heir-designate (324-337), prior to his father’s death. We know from other evidence that this type of coin was current around 330-337 A.D.

There is not much else of interest on the obverse. Imperial portraits had become so stylized by this period that you cannot really tell one emperor from another purely by the portrait.

The reverse is a bit more interesting. When you consider that such coins were produced by the millions and circulated to the far corners of the empire, it is clear that they could have propaganda as well as monetary value. In this case, the message is clear. The legend reads “Gloria Exercitus”, or “The Glory of the Army”. Since the army’s favor was usually the deciding factor in determining succession, a young Caesar could never praise the army too much. Not coincidentally, Constantius’s brothers, also named as Caesars, also produced coins praising the army before their father’s death.

At the bottom of the reverse, in what is called the “exergue”, we find the mint marks, telling where the coin was minted, and even which group or “officina” within the mint produced the coin. From the mint marks, we see that these particular coins were minted in Siscia (now Sisak, Croatia), Antioch (Antakya, Turkey), Cyzicus (Kapu-Dagh, Turkey), Thessalonica (Thessaloníki, Greece) and Constantinople (Istanbul, Turkey).

The image on the reverse shows two soldiers, each holding a spear and shield, with two standards, or “signa militaria”, between them. The standard was of vital importance on the battlefield, providing a common point by which the troops could orient themselves in their maneuvers. These standards appear to be of the type used by the centuries (units of 100 men) rather than the legionary standard, which would have the imperial eagle on top. You can see a modern recreation of a standard here.

If you look closely, you’ll notice that the soldiers on these coins are not holding the standards (they already have a spear in one hand and a shield in the other), and they lack the animal-skin headpiece traditional to a standard bearer or “signifer”. This tells us that these are merely rank-and-file soldiers in camp, with the standards planted in the ground.

If you compare the coins carefully you will note some differences, for example:

  • Where the breaks in the legend occur. Some break the inscription into “GLOR-IAEXERC-ITVS”, while others have “GLORI-AEXER-CITVS”. Note that neither break matches the word boundaries.
  • The uniforms of the soldiers differ, in particular the helmets. Also the 2nd coin has the soldiers wearing a sash that the other coins lack. This may reflect legionary or regional differences.
  • The standards themselves have differences, in the number of disks and in the shape of the topmost ornament.
  • The stance of the soldiers varies. Compare the orientation of their spear arm, the forward foot and their vertical alignment.

There are also differences in quality. Consider the mechanics of coin manufacture. These were struck, not cast. The dies were hand engraved in reverse (intaglio) and hand struck with hammers into bronze planchets. The engravers, called “celators”, varied in their skills. Some were clearly better at portraits. Others were better at inscriptions. (Note the serifs in the ‘A’ and ‘X’ of the 4th coin.) Some made sharp, deep designs that would last many strikes. Others had details that were too fine and wore away quickly. Since these coins are a little under 20 mm in diameter, and the dies were engraved by hand, without optical aids, there is considerable skill demonstrated here, even though this period is pretty much the artistic nadir of classical numismatics.

Despite the above differences, to an ancient Roman, all of these coins were equivalent. They all would have been interchangeable, substitutable, all of the same value. Although they differed slightly in design, they matched the requirements of the type, which we can surmise to have been:

  • obverse: portrait of the emperor with the prescribed legend
  • reverse: two soldiers with standards, spears and shields, with the prescribed legend
  • mint mark indicating which mint and officina made the coin
  • specified purity and weight

I’d like to borrow two terms from metaphysics: “essential” and “accidental”. The essential properties are those which an object must necessarily have in order to belong to a particular category. Other properties, those which are not necessary to belong to that category, are termed “accidental” properties. These coins are all interchangeable because they all share the same essential properties, even though they differ in many accidental properties.
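To make the distinction concrete, here is a minimal, purely illustrative sketch in Python (the class, the field names and the weight tolerance are hypothetical, not taken from any standard): two coins are judged interchangeable on their essential properties alone, while the accidental properties are free to vary.

from dataclasses import dataclass

@dataclass
class Coin:
    obverse_legend: str     # essential
    reverse_legend: str     # essential
    mint_mark: str          # essential that one is present; which mint does not matter
    weight_g: float         # essential, within a tolerance
    legend_break: str = ""  # accidental
    helmet_style: str = ""  # accidental

def interchangeable(a: Coin, b: Coin, weight_tol: float = 0.2) -> bool:
    # Equivalence is decided by the essential properties only; the
    # accidental properties (legend breaks, helmet style) are ignored.
    return (a.obverse_legend == b.obverse_legend
            and a.reverse_legend == b.reverse_legend
            and bool(a.mint_mark) and bool(b.mint_mark)
            and abs(a.weight_g - b.weight_g) <= weight_tol)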

As another example, take corrective eyeglasses. A definition in terms of the essential properties might be, “Corrective eyeglasses consist of two transparent lenses, held in a frame, to be worn on the face to correct the wearer’s vision.” Take away any essential property and they are no longer eyeglasses. Accidental properties might include the material used to make the lenses, the shape and color of the frame, whether the lenses are single-vision or bifocal, the exact curvature of the lenses, and so on.

The distinction between the essential and the accidental is common wherever precision in words is required: in legislation, in regulations, in contracts, in patent applications, in specifications and in standards. There are risks in failing to specify an essential property, as well as in specifying an accidental one. Under-specification leads to lower interoperability, while over-specification leads to increased implementation cost with no additional benefit.

Technical standards have dealt with this issue in several ways. One is through the use of tolerances. I talked about light bulbs in a previous post and the Medium Edison Screw. The one I showed, type A21/E26, has an allowed length range of 125.4-134.9 mm. An eccentricity of up to 3 degrees is allowed along the base axis. The actual wattage may be as much as 4% plus 0.5 watts greater than specified. Is this just an example of a sloppy standard? Why would anyone use it if it allows 4% errors? Why not have a standard that says exactly how large the bulb should be?

The point is that bulb sockets are already designed to accept this level of tolerance. Making the standard more precise would do nothing but increase manufacturing costs, while providing zero increase in interoperability. The reason why we have cheap and abundant lamps and light bulbs is that their interconnections are standardized to the degree necessary, but no more so.
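To put numbers on it, here is a small illustrative check using the tolerances quoted above (the figures come from the post; the function itself is just a sketch):

def within_tolerance(length_mm, rated_watts, measured_watts):
    # Length must fall in the allowed 125.4-134.9 mm range, and the actual
    # wattage may exceed the rated value by at most 4% plus 0.5 W.
    length_ok = 125.4 <= length_mm <= 134.9
    wattage_ok = measured_watts <= rated_watts * 1.04 + 0.5
    return length_ok and wattage_ok

print(within_tolerance(130.0, 100.0, 104.5))  # True: a nominal 100 W bulb may draw up to 104.5 W
print(within_tolerance(130.0, 100.0, 105.0))  # False: outside the allowed tolerance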

There is often a sweet spot in standardization that gives optimal interoperability at minimal cost. Specify less than this and interoperability suffers. Specify more and implementation costs increase, with diminishing returns in interoperability.

An allowance for implementation-dependent behaviors is another technique a standard can use to find that sweet spot. A standard can define some constraints, but explicitly state that others are implementation-dependent, or even implementation-defined. (Implementation-defined goes beyond implementation-dependent in that not only can an implementation choose its own behavior in this area, but it should also document the behavior it has implemented.) For example, in the C and C++ programming languages the size of an integer is not specified. It is declared to be implementation-defined. Because of this, C/C++ programs are not as interoperable as, say, Java or Python programs, but they are better able to adapt to a particular machine architecture, and in that way achieve better performance. And even Java specifies that some threading behavior is implementation-dependent, knowing that runtime performance would be significantly enhanced if implementations could directly use native OS threads. Even with these implementation-dependent behaviors, C, C++ and Java have been extremely successful.
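As a small illustration of the C case, the following sketch uses Python’s ctypes module simply to query the platform’s C types; the reported widths are implementation-defined and differ across platforms and compilers.

import ctypes

# The width of C integer types is implementation-defined, so the same
# query can give different answers on different platforms.
print("sizeof(int)  =", ctypes.sizeof(ctypes.c_int))   # typically 4 bytes
print("sizeof(long) =", ctypes.sizeof(ctypes.c_long))  # 4 bytes on 64-bit Windows, 8 on 64-bit Linux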

Let’s apply this line of reasoning to document file formats. Whether you are talking about ODF or OOXML, there are pieces left undefined, important ones. For example, neither format specifies the exact pixel-perfect positioning of text. There is a common acceptance that issues of text kerning and font rasterization do not belong in the file format, but are decisions best deferred to the application and operating environment, which can decide based on factors such as the availability of fonts and the desired output device. Similarly, a color can be specified, in RGB or another color space, but these are not exact spectral values. Red may appear one way on a CRT under fluorescent light, another way on an LCD monitor in darkness, and yet another way on color laser output read under tungsten lighting. An office document does not specify this level of detail.

In the end, a standard is defined as much by what it does not specify as by what it does specify. A standard that specifies everything can easily end up being merely the DNA sequence of a single application.

A standard provides interoperability within certain tolerances and with allowances for implementation-dependent behaviors. A standard can be evaluated on how well it handles such concerns. Microsoft’s Brian Jones has criticized ODF for having a flexible framework for storing application-specific settings. He lists a range of settings that the OpenOffice application stores in its documents, and compares that to OOXML, where such settings are part of the standardized schema. But this makes me wonder: where, then, does one store application-dependent settings in OOXML? For example, when Novell completes support of OOXML in OpenOffice, where would OpenOffice store its application-dependent settings? The Microsoft-sponsored ODF Add-in for Word project has made a nice list of ODF features that cannot be expressed in OOXML. These will all need to be stored someplace, or else information will be lost when down-converting from ODF to OOXML. So how should OpenOffice store these when saving to OOXML format?

There are other places where OOXML seems to have considered the needs of Microsoft Office but not those of other implementors. For example, section 2.15.2.32 of the WordprocessingML Reference defines an “optimizeForBrowser” element which allows the notation of optimization for Internet Explorer, but no provision is made for Firefox, Opera or Safari.

Section 2.15.1.28 of the same reference specifies a “documentProtection” element:

This element specifies the set of document protection restrictions which have been applied to the contents of a WordprocessingML document. These restrictions shall be enforced by applications editing this document when the enforcement attribute is turned on, and should be ignored (but persisted) otherwise.

This “protection” relies on storing a hashed password in the XML and comparing it to the hash of the password the user enters, a familiar technique. But rather than using a secure hash algorithm, SHA-256 for example, or any other FIPS-compliant algorithm, OOXML specifies a legacy algorithm of unknown strength. Now, I appreciate the need for Microsoft to have legacy compatibility. They fully acknowledge that the protection scheme they provide here is not secure and is there only for compatibility purposes. But why isn’t the standard flexible enough to allow an implementation to use a different algorithm, one that is secure? Where is the allowance for innovation and flexibility?
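For comparison, here is a minimal sketch of the kind of hash-and-compare protection the standard could allow if it permitted a modern algorithm. It uses SHA-256 purely as an illustration; this is not the algorithm OOXML actually specifies, and a production scheme would likely add a deliberately slow key-derivation function.

import hashlib
import hmac
import os

def protect(password):
    # Hash the password with a random salt; the salt and digest would be
    # persisted with the document (illustrative only).
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return salt, digest

def may_edit(attempt, salt, stored):
    # Recompute the hash of the attempted password and compare in constant time.
    digest = hashlib.sha256(salt + attempt.encode("utf-8")).digest()
    return hmac.compare_digest(digest, stored)

salt, stored = protect("let-me-edit")
print(may_edit("let-me-edit", salt, stored))    # True
print(may_edit("wrong-password", salt, stored)) # False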

What makes this worse is that Microsoft’s DRM-based approach to document protection, from Office 2003 and Office 2007, is entirely undocumented in the OOXML specification. So we are left with a standard with a broken protection feature that we cannot replace, while the protection that really works is in Microsoft’s proprietary extensions to OOXML that we are not standardizing. How is this a good thing for anyone other than Microsoft?

Section 2.15.3.54 defines an element called “uiCompat97To2003” which is specified simply as, “Disable UI functionality that is not compatible with Word97-2003”. But what use is this if I am using OOXML in OpenOffice or WordPerfect Office? What if I want to disable UI functionality that is not compatible with OpenOffice 1.5? Or WordPerfect 8? Or any other application? Where is the ability for other implementations to specify their preferences?

It seems to me that OOXML in fact does have application-dependent behaviors, but only for Microsoft Office, and that Microsoft has hard-coded these application-dependent behaviors into the XML schema, without tolerance or allowance for any other implementation’s settings.

Something does not cease to be application-dependent just because you write it down. It ceases to be application-dependent only when you generalize it and accommodate the needs of more than one application.

Certainly, any application that stores document content or styles or layout as application-dependent settings, rather than in the defined XML standard, should be faulted for doing so. But I don’t think anyone has really demonstrated that OpenOffice does this. It would be easy enough to demonstrate if it were true. Delete the settings.xml from an ODF document (and the reference to it from the manifest) and show that the document renders differently without it. If it does, then submit a bug report against OpenOffice or (since this is open source) submit a patch to fix it. A misuse of application settings is that easy to fix.
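The experiment is easy to script. Here is a rough sketch, assuming an ODF file named document.odt (the filename is hypothetical); it copies the package, dropping settings.xml and its manifest entry, so the two copies can then be opened side by side and compared.

import re
import zipfile

SRC, DST = "document.odt", "document-nosettings.odt"

with zipfile.ZipFile(SRC) as src, zipfile.ZipFile(DST, "w") as dst:
    for item in src.infolist():
        if item.filename == "settings.xml":
            continue  # drop the application-settings part entirely
        data = src.read(item.filename)
        if item.filename == "META-INF/manifest.xml":
            # Strip the <manifest:file-entry> that references settings.xml.
            # (A robust version would edit the manifest with an XML parser.)
            text = data.decode("utf-8")
            text = re.sub(r'<manifest:file-entry[^>]*settings\.xml[^>]*/>\s*', "", text)
            data = text.encode("utf-8")
        dst.writestr(item, data)  # preserves each entry's compression settings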

But a standard that mistakes the accidental, application-dependent properties of a single vendor’s application for essential properties that everyone should implement, and does so without tolerance or allowance for other implementations, is certainly a step back for the choice and freedom of applications to innovate in the marketplace. Better to use ODF, the file format that has multiple implementations, acknowledges the propriety of sensible application-dependent behaviors, and provides a framework in which to record such settings.


Update, 3/16/2007: added the Cosmas Indicopleustes quote.


Filed Under: Numismatics, OOXML, Standards

Ptolemy IV, Philopator

2006/01/15 By Rob 2 Comments

Coin of Ptolemy IV, Philopator (photograph of the obverse and reverse)

The word is “deaccession”, a fancy way of saying a museum is selling off a portion of its collection. This can happen for many reasons. For example, the museum may have received a bequest that was outside the parameters of its collection, it may have acquired items in the past that are now outside of the museum’s mission, or it may sell off some lesser works in order to raise funds to acquire a more significant piece of art. In any case, the Boston Museum of Fine Arts deaccessioned a number of ancient coins, one of which I was able to acquire, and which you see above. A close-up view is here.

The obverse of this coin features a bust of Zeus/Ammon facing right. This was a composite god, conflating the chief Egyptian and Greek deities, under the Alexandrian practice of syncretism.

The reverse shows an eagle grasping a thunderbolt, with a cornucopia under the rear wing. The legend reads ΠΤΟΛΕΜΑΙΟΥ ΒΑΣΙΛΕΩΣ, or “of Ptolemy, (the) King”.

You may notice the small central pit on each side of the coin. This is called the “centration dimple”, and is an artifact of the production method.

This particular coin is a well-known type from Ptolemy IV, who reigned from 221 BC to 204 BC. The Ptolemies were the Greek/Macedonian rulers of Egypt in the period after the death of Alexander the Great and before the rise of Roman control. By all accounts Ptolemy IV was an ineffectual leader, more interested in orgiastic rituals than in ruling the nation. His one notable military victory was at the Battle of Raphia in 217 BC, where he defeated the troops of the Seleucid king Antiochus III. In this battle, Ptolemy supplemented his Greek army with native Egyptians trained as phalangites. This infantry advantage proved decisive, and Ptolemy won the day, keeping Syria and Palestine in Egyptian hands. This was a mixed blessing, since the Ptolemaic dynasty, foreign rulers of Egypt, now had to contend with the newly armed and trained Egyptians, as well as with external threats.

The reign of Ptolemy IV, including the Battle of Raphia, as seen from the perspective of an Egyptian Jew, is told in 3 Maccabees, a book canonical to the Bibles of Eastern Orthodox churches, though apocryphal to others.

One last close look at the coin. Notice the “Ε” between the eagle’s legs? This is the “regnal date”, giving the year of the ruler’s reign. Epsilon (Ε) is the 5th letter of the Greek alphabet, so this is regnal year 5; counting from Ptolemy IV’s accession in 221 BC, that corresponds to 217 BC, the year of the Battle of Raphia. So, a nice piece of history.


Filed Under: Numismatics Tagged With: Battle of Raphia, Boston Museum of Fine Arts, Egypt, Ptolemaic dynasty
