The AMQP 1.0.0 draft of 2011-07-04 contains the following language defining the meaning of its 64-bit “timestamp” data type:
Encodes a point in time using a 64 bit signed integer representing milliseconds since Midnight Jan 1, 1970 UTC. For the purpose of this representation, milliseconds are taken to be (1/(24*60*60*1000))th of a day.
This language is broken.
The first sentence there makes it a fixed offset from TAI, which is good. The second sentence there is totally bogus and can have no purpose beyond confusing the crap out of anyone trying to work from this definition. It’d be massively improved by just removing the second sentence:
Encodes a point in time using a 64 bit signed integer representing milliseconds since Midnight Jan 1, 1970 UTC.
For clarity, you might want to add the word “elapsed” and also reconfirm that it’s TAI being discussed here:
Encodes a point in time using a 64 bit signed integer representing elapsed milliseconds since Midnight Jan 1, 1970 UTC. Note that time stamps change in lockstep with TAI, not UTC nor Unix time.
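As a sanity check on the representation itself, a 64-bit signed millisecond count has plenty of headroom. A quick back-of-the-envelope calculation (a sketch, using 365.25-day years):

```python
# Rough check of the range a 64-bit signed millisecond count gives.
# Assumes the AMQP "timestamp" type quoted above; 365.25-day years.
MS_PER_YEAR = 1000 * 60 * 60 * 24 * 365.25

max_ms = 2**63 - 1                 # largest signed 64-bit value
range_years = max_ms / MS_PER_YEAR
print(round(range_years))          # roughly 292 million years either side of 1970
```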
Using a TAI-based metric instead of Unix time (time_t) is good, because the Unix time_t definition is a bad design: see this page. Furthermore, UTC, which time_t is an encoding of, cannot be used to talk precisely about moments beyond 6 months in the future, because of the leap-second scheduling that it needs: leap seconds are only announced about six months in advance.
TAI-based metrics are almost perfect, except that they're damned awkward to use right on most computer systems out there, because most systems use Unix time. Worse, using TAI but offset so that second zero falls at the time_t epoch gives a number currently within 30s or so of time_t, which could lead to a lot of confusion.
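To make that confusion concrete, here is a sketch; the leap-second offset is an assumed constant roughly right for 2011, not a maintained table, and the function name is mine:

```python
# Sketch of the confusion: a TAI count whose second zero sits at the
# time_t epoch runs ahead of time_t by the accumulated leap-second
# offset. LEAP_OFFSET is an assumption frozen at roughly the 2011
# value, not a maintained leap-second table.
LEAP_OFFSET = 34  # seconds; approximately TAI - UTC around 2011

def tai_epoch1970_from_unix(unix_seconds):
    """Approximate a 1970-epoch TAI count from a time_t value."""
    return unix_seconds + LEAP_OFFSET

unix_now = 1309737600                       # a time_t value (2011-07-04)
tai_now = tai_epoch1970_from_unix(unix_now)
print(tai_now - unix_now)                   # 34 -- close enough to be mistaken for time_t
```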
Choosing a representation of an instant for use in a new protocol is hard. You get a tradeoff between:
- time_t: ambiguous representation and lack of accuracy when referring to moments more than 6 months in the future; on the plus side, wide support and low potential for making mistakes (other than those caused by the inherent flaws in the definition).
- UTC, but not encoded as a count of seconds since some epoch: unambiguous representation, but still lacks accuracy with future dates, and comes with poor library support plus the hassle of defining some new representation.
- TAI, offset to be based at the time_t epoch: unambiguous representation, fully accurate, but looks so similar to Unix time that people will probably make mistakes. (Will the mistakes be harmful, though? How harmful is being off by 30s? You might ask the Mars Climate Orbiter.)
- 2000-01-01T00:00:00Z, in lockstep with TAI: unambiguous representation, fully accurate, looks dissimilar enough that it won't be mistaken for Unix time, but jolly weird and still needs a leap-second table for display purposes, like any other TAI-based metric.
- Chronological Julian Days in GMT: unambiguous representation, fully accurate, but difficult to convert back and forth to Unix time without a full calendar package.
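For the 2000-01-01 option, a conversion back to time_t might look like the following sketch. The epoch offset is plain calendar arithmetic; the hard-coded leap count and the function name are my assumptions, standing in for the leap-second table a real implementation would need:

```python
# Sketch: converting a "seconds since 2000-01-01T00:00:00Z, in lockstep
# with TAI" timestamp to time_t. EPOCH_2000_AS_UNIX is exact calendar
# arithmetic; LEAPS_SINCE_2000 is an assumed constant (valid for times
# around 2011) where real code needs a maintained leap-second table.
EPOCH_2000_AS_UNIX = 946684800   # 2000-01-01T00:00:00Z as a time_t value
LEAPS_SINCE_2000 = 2             # assumption: leap seconds inserted 2000..2011

def unix_from_tai2000(tai_seconds):
    """Approximate time_t for a TAI-lockstep count since 2000-01-01."""
    return tai_seconds + EPOCH_2000_AS_UNIX - LEAPS_SINCE_2000
```

Note the direction of the correction: the TAI-lockstep count ticks through leap seconds that time_t does not, so the elapsed leap seconds must be subtracted out.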
Additionally, if you ever want to simply subtract one timestamp from another to get some idea of the interval between them, Julian Days and Unix time don’t work; TAI-based metrics are the only ones that give accurate results for this use.
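A concrete illustration, using the leap second inserted at the end of 2008 (the time_t values below are the standard encodings of those instants):

```python
# Sketch of why subtracting time_t values misstates intervals: across
# the leap second at the end of 2008, three real (SI) seconds elapse
# while time_t advances only two, because time_t repeats a second.
unix_before = 1230767998        # 2008-12-31T23:59:58Z
unix_after  = 1230768000        # 2009-01-01T00:00:00Z
elapsed_unix = unix_after - unix_before   # 2

# A TAI-based count keeps ticking through the leap second:
elapsed_tai = elapsed_unix + 1            # 3 real SI seconds
print(elapsed_unix, elapsed_tai)
```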
Ideally we want these timestamps to be accurate up to and beyond 50 years or so in the future. If it were me, I'd use TAI, either with its own built-in epoch, 1958-01-01T00:00:00Z, or with a suitable easy-to-remember epoch such as