[SERVER-27236]  [initandlisten] Fatal Assertion 34433 when downgrading from 3.2.10 to 3.2.6 Created: 30/Nov/16  Updated: 30/Jan/17  Resolved: 30/Jan/17

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Bryan Cantwell Assignee: Kelsey Schubert
Resolution: Cannot Reproduce Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
is duplicated by WT-3043 Fatal Assertion 34433 Closed
Operating System: ALL
Participants:

 Description   

Previously we ran a sharded MongoDB 3.2.6 cluster.
After upgrading the binaries to 3.2.10 we ran into issues, and even after reverting to 3.2.6 mongod now fails to start with:

2016-11-30T19:20:12.360+0000 I CONTROL  [main]  SERVER RESTARTED 
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten] MongoDB starting : pid=15061 port=30000 dbpath=<removed> 64-bit host=<removed>
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten] db version v3.2.6
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten] git version: 05552b562c7a0b3143a729aaa0838e558dc49b25
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten] modules: none
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten] build environment:
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten]     distmod: rhel62
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten]     distarch: x86_64
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2016-11-30T19:20:12.391+0000 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1,fsprddb1c02.<removed>.com", port: 30000 }, operationProfiling: { mode: "off", slowOpThresholdMs: 10000 }, processManagement: { fork: true, pidFilePath: "/var/run/mongodb/mongodb.pid" }, security: { authorization: "enabled", keyFile: "/<removed>/system_config/mongo_auth.key" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/<removed>/mongo/db/", directoryPerDB: true, engine: "wiredTiger", wiredTiger: { collectionConfig: { blockCompressor: "snappy" }, engineConfig: { cacheSizeGB: 13, journalCompressor: "snappy" } } }, systemLog: { destination: "file", logAppend: true, logRotate: "rename", path: "/<removed>/logs/mongodb.log" } }
2016-11-30T19:20:12.414+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=13G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2016-11-30T19:20:12.458+0000 I -        [initandlisten] Fatal Assertion 34433
2016-11-30T19:20:12.459+0000 I -        [initandlisten]
 
***aborting after fassert() failure
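
For readability, the parsed options line above corresponds to an /etc/mongod.conf along these lines. This is a reconstruction from the log, not the reporter's actual file; the <removed> placeholders are kept as they appear in the log:

  net:
    bindIp: 127.0.0.1,fsprddb1c02.<removed>.com
    port: 30000
  operationProfiling:
    mode: "off"    # quoted so YAML does not read it as a boolean
    slowOpThresholdMs: 10000
  processManagement:
    fork: true
    pidFilePath: /var/run/mongodb/mongodb.pid
  security:
    authorization: enabled
    keyFile: /<removed>/system_config/mongo_auth.key
  sharding:
    clusterRole: shardsvr
  storage:
    dbPath: /<removed>/mongo/db/
    directoryPerDB: true
    engine: wiredTiger
    wiredTiger:
      collectionConfig:
        blockCompressor: snappy
      engineConfig:
        cacheSizeGB: 13
        journalCompressor: snappy
  systemLog:
    destination: file
    logAppend: true
    logRotate: rename
    path: /<removed>/logs/mongodb.log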



 Comments   
Comment by Kelsey Schubert [ 30/Jan/17 ]

Hi bcantwell@firescope.com,

We haven’t heard back from you for some time, so I’m going to mark this ticket as resolved. Unfortunately, we were unable to reproduce the issue:

  1. Start a 3.2.6 standalone and insert some data.
  2. Restart with 3.2.10 and insert some data into a new collection.
  3. Restart with 3.2.6: the fassert did not reproduce, and all data was accessible (see the sketch after this list).
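
For reference, a minimal shell sketch of those steps. The ~/mongodb-3.2.6 and ~/mongodb-3.2.10 install paths and the /tmp/repro dbpath are hypothetical, not taken from the reporter's deployment:

  # Step 1: start 3.2.6 on a fresh dbpath and insert a document.
  mkdir -p /tmp/repro
  ~/mongodb-3.2.6/bin/mongod --dbpath /tmp/repro --port 30000 --fork --logpath /tmp/repro/mongod.log
  ~/mongodb-3.2.6/bin/mongo --port 30000 --eval 'db.a.insert({x: 1})'
  ~/mongodb-3.2.6/bin/mongod --dbpath /tmp/repro --shutdown

  # Step 2: restart the same dbpath with 3.2.10 and write to a new collection.
  ~/mongodb-3.2.10/bin/mongod --dbpath /tmp/repro --port 30000 --fork --logpath /tmp/repro/mongod.log
  ~/mongodb-3.2.10/bin/mongo --port 30000 --eval 'db.b.insert({x: 1})'
  ~/mongodb-3.2.10/bin/mongod --dbpath /tmp/repro --shutdown

  # Step 3: downgrade back to 3.2.6. The report says this startup hit
  # fassert 34433; in our testing it started cleanly and both writes were readable.
  ~/mongodb-3.2.6/bin/mongod --dbpath /tmp/repro --port 30000 --fork --logpath /tmp/repro/mongod.log
  ~/mongodb-3.2.6/bin/mongo --port 30000 --eval 'printjson(db.a.findOne()); printjson(db.b.findOne())'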

If this is still an issue for you, please provide additional information and we will reopen the ticket.

Regards,
Thomas

Comment by Kelsey Schubert [ 16/Dec/16 ]

Hi bcantwell@firescope.com,

We still need additional information to diagnose the problem as we were not able to reproduce this behavior. If this is still an issue for you, would you please answer my previous questions?

Thank you,
Thomas

Comment by Kelsey Schubert [ 30/Nov/16 ]

Hi bcantwell@firescope.com,

I have a few questions to get a better understanding of what is going on here.

  • Would you clarify whether you first encountered this fassert on 3.2.10, or only after downgrading to 3.2.6 in response to other issues? In either case, what were those other issues?
  • My understanding is that your upgrade and downgrade process was completed by replacing the binaries on top of the same dbpath. Is this correct? Could you describe the complete upgrade and downgrade process for the sharded cluster? Which nodes were taken down in which order?
  • Are all the nodes in the sharded cluster affected?

Thank you,
Thomas
