[CDRIVER-2010] ISO8601 test fails with GCC 7 Created: 31/Jan/17  Updated: 03/May/17  Resolved: 22/Mar/17

Status: Closed
Project: C Driver
Component/s: libbson
Affects Version/s: 1.5.3
Fix Version/s: 1.7.0

Type: Bug Priority: Major - P3
Reporter: A. Jesse Jiryu Davis Assignee: Backlog - C Driver Team
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: Text File libbson-1.5.0-rc3-Fix-signed-integer-wrap-in-time2sub.patch     Text File libbson-1.6.0-Fix-signed-integer-wrap-in-time2sub.patch    

 Description   

From https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=853483, "libbson: ftbfs with GCC-7"

The package fails to build in a test rebuild on at least amd64 with
gcc-7/g++-7, but succeeds to build with gcc-6/g++-6. The
severity of this report may be raised before the buster release.
There is no need to fix this issue in time for the stretch release.

In the Debian build log, the last test that succeeds is "/endian/swap64", so the test that fails must be the next one, "/bson/iso8601/utc".

I rewrote the ISO8601 parser for CDRIVER-1962, but I don't know whether the test would have passed with GCC 7 before my rewrite.

GCC 7 is not yet released. I think we just need to fix this before Debian Buster, or any other distribution shipping GCC 7, arrives. It's also possible this is a new GCC bug that will be resolved without action on our part.



 Comments   
Comment by Petr Pisar [ 13/Feb/17 ]

This is the patch I use for 1.6.0. I think you can then remove the CHAR_BIT and TYPE_BIT macros.

Comment by Petr Pisar [ 09/Feb/17 ]

The attached patch fixes it for the time_t type as seen in libbson-1.5.0. With int64_t in libbson-1.6.0 you can use INT64_MAX and INT64_MIN from stdint.h directly.

Comment by A. Jesse Jiryu Davis [ 08/Feb/17 ]

Brilliant Petr, thank you!

Comment by Petr Pisar [ 08/Feb/17 ]

This is caused by a new GCC optimization that finds a possible signed integer overflow in this piece of code:

  /*
   ** Do a binary search.
   */
   lo = 1;
   for (i = 0; i < (int64_t) TYPE_BIT (int64_t) - 1; ++i)
       lo *= 2;

and thus optimizes it out, deletes all your data, or invokes whatever undefined behavior you can imagine. This can be identified by compiling the code with the -fsanitize=undefined compiler option, which then reports (output for libbson-1.5.3 with signed time_t, but it should be the same for int64_t):

    { "status": "PASS", "test_file": "/bson/append_int32", "seed": "2751394709", "start": 26100.175135173, "end": 26101.052242048, "elapsed": 0.877106875 },
src/bson/bson-timegm.c:644:7: runtime error: signed integer overflow: 4611686018427387904 * 2 cannot be represented in type 'long int'
    { "status": "PASS", "test_file": "/bson/iso8601/utc", "seed": "1303037762", "start": 26100.226295036, "end": 26101.076501063, "elapsed": 0.850206027 },

The for-loop doubles the lo variable 63 times, which means it reaches 1*2^63, but the maximal representable value is 2^63-1.

Generated at Wed Feb 07 21:13:52 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.