Yes, I am currently storing high-res timestamps in a plain long long field. This works, but it isn't great: things like BSONObj::toString and BSONObj::jsonString don't know that the value is meant to be a timestamp, and just print it as a raw long integer. That means that if I want to use b2json, say in a shell pipeline, I would need to write some sort of post-processor that knows that certain values (identified by key?) that look like 64-bit integers are really timestamps, and mangle them appropriately to get readable output. That is rather fragile.
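To make the fragility concrete, here is a rough sketch of the kind of post-processor I mean. This is not real tooling: the field names and the assumption of nanosecond resolution are made up for illustration, and any timestamp field not listed in the table silently stays a raw integer, which is exactly the problem.

```python
import json
from datetime import datetime, timezone

# Hypothetical: field names we "know" hold nanosecond-resolution timestamps.
# Anything not listed here is left as a raw integer -- hence fragile.
TIMESTAMP_KEYS = {"ts", "created_ns"}

def prettify(obj):
    """Recursively rewrite known int64 timestamp fields as ISO-8601 strings."""
    if isinstance(obj, dict):
        out = {}
        for key, value in obj.items():
            if key in TIMESTAMP_KEYS and isinstance(value, int):
                # Convert nanoseconds since the epoch to a readable UTC string.
                out[key] = datetime.fromtimestamp(
                    value / 1_000_000_000, tz=timezone.utc
                ).isoformat()
            else:
                out[key] = prettify(value)
        return out
    if isinstance(obj, list):
        return [prettify(item) for item in obj]
    return obj

# Example: one JSON document as it might come out of a b2json-style pipeline.
record = {"ts": 1_000_000_000, "n": 7}
print(json.dumps(prettify(record)))
# prints {"ts": "1970-01-01T00:00:01+00:00", "n": 7}
```

A tool like this has to be kept in sync with every schema that stores timestamps this way, which is what I'd like to avoid.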
If BSONObj::jsonString were aware that the value was a high-res timestamp, it could choose some more useful representation, and I wouldn't need to build a context-dependent tool.
Thinking about this some more: is there any way, with the current BSONObj::toString or BSONObj::jsonString, to specify how UTC datetime values should be formatted, especially in the 'pretty' case of jsonString? Since I haven't used the UTC datetime type, due to its limited precision, I don't actually know how it gets printed.
Anyway, I understand your point about keeping the BSON type space small, and that adding new types is painful. However, I imagine I'm not the only one who wants something finer-grained than milliseconds when representing timestamps.