I am observing strange behaviour from mongoc_gridfs_file_get_upload_date() in a 32-bit chroot. On a 64-bit system it returns credible results; on a 32-bit system, the returned values barely resemble "milliseconds since the UNIX epoch", if at all.
The code:
#include <mongoc.h>
#include <inttypes.h> /* PRId64, to print int64_t portably */
#include <stdio.h>

int main(void) {
    mongoc_client_t *client;
    mongoc_gridfs_t *gridfs;
    mongoc_gridfs_file_t *file;
    bson_error_t error;
    int64_t t;

    mongoc_init();

    client = mongoc_client_new("mongodb://127.0.0.1");
    BSON_ASSERT(client);
    gridfs = mongoc_client_get_gridfs(client, "test-gridfs", NULL, &error);
    BSON_ASSERT(gridfs);

    /* Create an (empty) file; its upload date is set on creation. */
    file = mongoc_gridfs_create_file(gridfs, NULL);
    BSON_ASSERT(file);

    /* Documented to return milliseconds since the UNIX epoch. */
    t = mongoc_gridfs_file_get_upload_date(file);
    printf("upload_date: %" PRId64 "\n", t);
    BSON_ASSERT(t > 0); /* This fails on "bad days" :-) */

    mongoc_gridfs_file_destroy(file);
    mongoc_gridfs_destroy(gridfs);
    mongoc_client_destroy(client);
    mongoc_cleanup();
    return 0;
}
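I compile it with nothing unusual (repro.c is just my name for the file above; adjust the pkg-config paths for your chroot as needed):

$ gcc repro.c $(pkg-config --cflags --libs libmongoc-1.0)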
The output on a 64-bit system:
$ ./a.out
upload_date: 1486175015000
$ ./a.out
upload_date: 1486175016000
$ ./a.out
upload_date: 1486175017000
$ ./a.out
upload_date: 1486175018000
Seems legit: an ascending 64-bit millisecond timestamp on every run of the program.
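As a quick sanity check (a standalone snippet; the value is hard-coded from the first run above), decoding one of those values gives a believable wall-clock time:

#include <stdio.h>
#include <time.h>

int main(void) {
    /* First value printed by the 64-bit run, in ms since the UNIX epoch. */
    long long upload_ms = 1486175015000LL;
    time_t secs = (time_t)(upload_ms / 1000);
    char buf[32];

    /* Prints "2017-02-04 02:23:35 UTC": a plausible upload time. */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&secs));
    printf("%s UTC\n", buf);
    return 0;
}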
The output on a 32-bit system:
$ ./a.out
upload_date: 116451584
$ ./a.out
upload_date: 116452584
$ ./a.out
upload_date: 116454584
$ ./a.out
upload_date: 116457584
$ ./a.out
upload_date: 116458584
$ ./a.out
upload_date: 116460584
$ ./a.out
upload_date: 116461584
This doesn't look like an ascending 64-bit millisecond timestamp at all. On "bad days", the values even come out negative, as if some internal bit mangling were taking place.
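In fact, the numbers look like the low 32 bits of the expected timestamp. Splicing the high 32 bits of a known-good 64-bit value onto the first 32-bit result (a standalone check; both constants are hard-coded from the runs above) reconstructs a sane timestamp, about two minutes after the 64-bit runs:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int64_t observed = 116451584;         /* first value from the 32-bit run */
    int64_t high = 1486175015000LL >> 32; /* high word of a 64-bit run: 346 */

    /* (346 << 32) | 116451584 == 1486175136000, i.e. 121 seconds after the
       64-bit runs, consistent with the 32-bit runs happening a couple of
       minutes later. */
    int64_t reconstructed = (high << 32) | (uint32_t) observed;
    printf("%" PRId64 "\n", reconstructed);
    return 0;
}

So the upload date appears to be truncated to 32 bits somewhere in the driver on this platform; sign-extending such a truncated value would also explain the occasional negative results.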