bounds(f) := [^(f-1)16 × 2^533, ^f16 × 2^533 - 1]
scale(t, s) := b_fst + (b_snd - b_fst) × t ÷ (2^533 - 1) where b = bounds(s)
Here, ^k16 denotes a power tower of k sixteens (^116 = 16, ^216 = 16^16, and so on, with ^016 = 1 and ^(-1)16 = 0 by convention), and b_fst and b_snd are the two ends of the pair returned by bounds.
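Since these operations are just arbitrary-precision integer arithmetic, they translate directly into code. Below is a minimal Python sketch (the names tetrate, bounds, and scale are mine, mirroring the definitions above; Python's unbounded ints handle the 533-bit values comfortably, though anything past the second scaling factor is intractable):

```python
BITS = 533  # width of the time field, in bits

def tetrate(base, height):
    """Power tower of `height` copies of `base`; tetrate(16, 2) == 16**16.

    By convention, a tower of height 0 is 1 and height -1 is 0, which
    makes bounds(0) start at zero as expected.
    """
    if height < 0:
        return 0
    result = 1
    for _ in range(height):
        result = base ** result
    return result

def bounds(f):
    """The representable range for scaling factor f, in Planck times."""
    return (tetrate(16, f - 1) * 2**BITS, tetrate(16, f) * 2**BITS - 1)

def scale(t, s):
    """Map a raw 533-bit value t onto the range selected by factor s."""
    lo, hi = bounds(s)
    return lo + (hi - lo) * t // (2**BITS - 1)

assert bounds(0) == (0, 2**BITS - 1)      # scaling factor 0: the base range
assert bounds(1) == (2**533, 2**537 - 1)  # factor 1: 4 extra bits of range
```

Note that calling bounds(3) would try to materialize ^316 outright, which is exactly the intractability discussed below.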
These definitions afford us several significant benefits:
The table below demonstrates the effect this scaling has on our representable range:

  scaling factor | range (in tP)                  | precision per step
  ---------------+--------------------------------+-------------------
  0              | [0, 2^533 - 1]                 | 1 tP
  1              | [2^533, 2^537 - 1]             | ~15 tP
  2              | [2^537, 2^597 - 1]             | just under 1 ys
  3              | [2^597, 16^16^16 × 2^533 - 1]  | ?
I.e., when the scaling factor is 0, we have the same range we have discussed so far. But when the scaling factor is 1, our range is now [2^533, 2^537 - 1], effectively adding 4 bits of range while decreasing our precision to about 15 tP per step. Scaling up again extends the top of our range to 2^597 - 1 and decreases our precision to just under a yoctosecond per step. With the third scaling, the multiplier is so large that new_distance(3)'s value is dominated by it:
((16^16^16 × 2^533 - 1) - (16^16 × 2^533)) ÷ 2^533 ≈ 16^16^16.
These numbers are large enough that most computers (including mine) are not easily able to work with them, so filling in the final column (even very roughly) is quite difficult. In some sense, this is a drawback because it makes it very difficult to actually specify the precision available at the higher scalings. However, we know we can represent times far enough out into the future to cover all of the points-of-interest mentioned so far. The largest height we needed to cover (in the base-10 system measuring years) was 5. Our format has a base of 16 (and so scales far faster), measures tP, and supports a height up to 7. Even if height 5 doesn't quite reach what we had hoped, height 6 would far surpass it. What's more, although it is impractical to talk about the unit and value of each step at the higher scaling factors, we can still use the 533 bits as a fraction to declare how far through the new period the time-of-interest occurs.
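Even though the third scaling's multiplier cannot be materialized, its logarithm can be, so we can still put rough numbers on the missing cells. A quick Python sketch (the tP value of roughly 5.39 × 10^-44 seconds is an assumed constant here):

```python
PLANCK_TIME = 5.391e-44  # seconds per tP (approximate CODATA value)

# Per-step precision at the tractable scaling factors, in tP; the width
# of each range divided by its ~2^533 steps collapses to the multipliers.
step_1 = 16 - 1        # ~15 tP per step at scaling factor 1
step_2 = 16**16 - 16   # ~1.8e19 tP per step at scaling factor 2
print(step_2 * PLANCK_TIME)  # just under 1e-24 s: about a yoctosecond

# 16^16^16 itself has too many digits to store, but its bit-length is
# easy to state: log2(16^(16^16)) = 4 × 16^16 = 2^66 bits.
print(4 * 16**16)  # 73786976294838206464 bits, roughly 7.4e19
```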
Before wrapping up, I want to take a brief tangent to discuss something we've left out so far: coordinating across great distances requires some method of adjusting for travel time.
On Earth, timezones enable coordination between people all over the planet.14 In particular, they give us a common language to declare an offset from an agreed-upon reference point (mitigating the difference in solar time between locations). However, our time format is far less Terrestrially-focused.
A similar system could be established for our format (though it would need a stable point to use as the reference frame); for Earth's timezones, this takes the form of UTC (and, earlier, GMT). An obvious point-of-reference we could leverage would be the galactic center (perhaps also separating the galactic plane into sections). This also lends itself to a reasonable extension to other galaxies: add an intergalactic exchange to specify which galactic center you're referencing (which could roughly be keyed by the distance from the Milky Way's galactic center to the specified galaxy's center). However, that system is rather Milky Way-centric; and, unfortunately, there is no accepted center of the universe that we can leverage as our reference point. It also introduces an interesting oddity: because planets, stars, and galaxies all move, this timezone marker will change over time (requiring an updating formula to calculate the time offset between galaxies).
Obviously, this is its own rabbit hole, but we can (thankfully?) ignore it. Such a time offset, as mentioned above, is really just a distance calculation between two points (a calculation made very frequently by those in the field15). Should someone communicating a specified time wish to disambiguate their frame-of-reference, they can include the 3-dimensional coordinate using Earth (or any other location known to both parties) as the origin.
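As a concrete (and entirely hypothetical) illustration of that disambiguation, the offset between two frames is just the straight-line distance divided by the speed of light. A minimal Python sketch:

```python
import math

SPEED_OF_LIGHT = 299_792_458  # m/s (exact, by definition of the metre)

def light_travel_offset(a, b):
    """Seconds of light-travel time between two 3-D points, in metres."""
    return math.dist(a, b) / SPEED_OF_LIGHT

# Hypothetical example: Earth at the origin and a point one astronomical
# unit away along the x-axis (roughly the Earth-Sun distance).
AU = 1.495978707e11  # metres
print(light_travel_offset((0, 0, 0), (AU, 0, 0)))  # ~499 s, i.e. ~8.3 min
```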
Phew… Well, dear reader, I don't know about you, but I'm quite excited for hardware manufacturers to start adding 536-bit registers so we can start using this format in our computers!
This article is, of course, an exercise in absurdity, taking a human-scale concept and attempting to extrapolate out to the scale of the universe while hoping that the rickety van we built along the way doesn't collapse under the weight of the duct tape we used. However, writing this piece has felt like a deep exploration of just how small we are. I recognize that it is clichéd, but the people who originally specified UNIX Epoch Time as a 32-bit integer weren't wrong or foolish to do so (nor were those who reasonably proposed, accepted, and implemented 64-bit UNIX Epoch Time). They were simply operating on the practical scale that faced them. The format I've explored above is still impractical at the time-of-writing; imagine how Ken Thompson and Rob Pike (the co-creators of UTF-8) would have reacted if someone had proposed a 536-bit-wide format for every timestamp.
Despite the obvious impracticality of such a format, I hope it was at least a fraction as interesting for anyone who's made it to the end as it was for me to write. The vastness of the universe is incomprehensible, but maybe we've helped to nail it down to slightly more tractable bounds.