On Wed, Nov 21, 2012 at 3:46 AM, Jochen Topf <[email protected]> wrote:
> On Tue, Nov 20, 2012 at 09:17:59PM -0600, Scott Crosby wrote:
> > Not quite. The granularity of timestamps can go down to the milliseconds.
> >
> > https://github.com/DennisOSRM/OSM-binary/blob/master/src/osmformat.proto#L96
>
> Ugh. Yes. That was always somewhat of a problem in the protocol IMHO. Nobody
> needs more granularity than seconds because the main database doesn't have it.
> Similar for the latitude/longitude granularity. Nobody uses that. And it just
> makes all the code reading PBF files a bit more complex and a bit slower.

Today the database lacks those features, but the future can be different. The
trivial complexity this feature adds to readers allows many possible future
features without a breaking format change. The ones I had in mind were:

- Lower granularity makes it easy to create lower-precision excerpts that are
  smaller to send and easier to store.
- It lets OSM tooling handle contour lines, or other grid-specified data,
  where making the granularity match the grid size can lead to vastly
  improved compression.
- It supports future higher-precision data, e.g. data generated from GPS
  Block III satellites.
- Millisecond timestamps are much easier to use as unique changeset IDs than
  second-granularity timestamps.

The runtime cost of this is a couple of multiplications that loop-invariant
code motion can remove, about 30 nanoseconds for each 8000-entity block, and
it is much, much cheaper than the branch mispredictions incurred by VarInt
decoding; a sketch of that decoding follows below.

Scott
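For concreteness, here is a minimal Java sketch of the decoding those
multiplications perform, following the formulas documented in osmformat.proto
(latitude = 1e-9 * (lat_offset + granularity * lat); timestamp in
milliseconds = timestamp * date_granularity). The class and method names are
illustrative only, not taken from any particular reader:

    // Applies a block's granularity fields to raw, delta-decoded values.
    // Field names mirror osmformat.proto; the class itself is hypothetical.
    final class GranularityDecoder {
        private final long granularity;     // nanodegrees per coordinate unit, default 100
        private final long latOffset;       // nanodegrees, default 0
        private final long lonOffset;       // nanodegrees, default 0
        private final long dateGranularity; // milliseconds per timestamp unit, default 1000

        GranularityDecoder(long granularity, long latOffset, long lonOffset,
                           long dateGranularity) {
            this.granularity = granularity;
            this.latOffset = latOffset;
            this.lonOffset = lonOffset;
            this.dateGranularity = dateGranularity;
        }

        // Latitude in degrees: 1e-9 * (lat_offset + granularity * raw value).
        double latitude(long rawLat) {
            return 1e-9 * (latOffset + granularity * rawLat);
        }

        // Longitude in degrees, same formula with lon_offset.
        double longitude(long rawLon) {
            return 1e-9 * (lonOffset + granularity * rawLon);
        }

        // Timestamp in milliseconds since the 1970 epoch.
        long timestampMillis(long rawTimestamp) {
            return rawTimestamp * dateGranularity;
        }
    }

All four fields are constant across a block, so in a loop over a block's
entities the granularity loads are loop-invariant; that is where the
per-block cost estimate above comes from.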
_______________________________________________
dev mailing list
[email protected]
http://lists.openstreetmap.org/listinfo/dev

