[SERVER-23097] Segfault on drop of source collection during MapReduce Created: 11/Mar/16  Updated: 19/Nov/16  Resolved: 23/Mar/16

Status: Closed
Project: Core Server
Component/s: Aggregation Framework
Affects Version/s: 3.2.4, 3.3.3
Fix Version/s: 3.2.5, 3.3.4

Type: Bug Priority: Major - P3
Reporter: pavan Assignee: James Wahlin
Resolution: Done Votes: 0
Labels: code-only
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: File mongod_error.log.gz    
Issue Links:
Related
related to SERVER-20050 uassert in IDHackStage if on failure ... Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL
Backport Completed:
Steps To Reproduce:

1) run aggregation on a collection in parallel on multiple keys
2) drop the target collection after completion
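
A minimal mongo shell sketch of these steps (collection, field, and output names are illustrative, not taken verbatim from the reporter's workload):

// Run from several shells in parallel, one per grouped key. Each pipeline
// writes its per-key result with $out, and the target collection is dropped
// once the results have been read and merged elsewhere.
var key = "admin_organization";                                  // one of the grouped fields
var outName = "whois." + key + "_56e37ec5f2867caf306845d0s";     // per-key target collection
db.domains.aggregate([
    { $match: { scan_id: ObjectId("56e37ec5f2867caf306845d0") } },
    { $group: { _id: "$whois." + key, count: { $sum: 1 } } },
    { $out: outName }
], { allowDiskUse: true });
// ... read the results and merge them into the unified collection, then:
db.getCollection(outName).drop();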

Sprint: Query 12 (04/04/16)
Participants:
Linked BF Score: 0

 Description   

The mongod process sporadically hits segmentation faults while running the aggregation framework and/or right after the aggregation pipeline completes, most notably while dropping the temp collections created as part of the aggregation workflow.

We run aggregations on multiple keys of a collection because we need unique counts for each of the field values. Several aggregation pipelines are run in parallel, and the target collections are dropped after reading the results and writing the outputs to a unified collection (done via the map-reduce framework).

The errors seem to happen quite frequently, in both dev and QA.

Platform specs: MongoDB version 3.2.1
Storage Engine: WiredTiger
OS: Linux version 3.13.0-74-generic (buildd@lcy01-07) (gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1))



 Comments   
Comment by pavan [ 01/Apr/16 ]

The slowness reported during map-reduce operations is part of a separate open issue, which can be closed as of now: https://jira.mongodb.org/browse/SERVER-23456

Apparently map-reduce holds a global lock, in contrast to the collection-level locking design that is new in 3.2. Passing/setting the nonAtomic flag to true on map-reduce jobs frees the db up for other queries (see the sketch below). Is there any reason why map-reduce requires a global lock on a database? A lock on the oplog/local collection makes sense; I'm just not sure why map-reduce needs a global lock when the operations are on non-interfering collections within the database.
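
For reference, the flag in question is passed in the out specification of the mapReduce call. A minimal shell sketch (the map/reduce bodies are simplified placeholders, not our actual functions):

// nonAtomic applies to the "merge" and "reduce" output actions: with
// nonAtomic: true the post-processing (merge) phase does not hold the
// exclusive database lock for its whole duration, so other clients can
// read (possibly intermediate) output while it runs.
db.getCollection("whois.billing_stateprovince_56cc44bbf2e5301119d51e37s").mapReduce(
    function () { emit(this._id, { colors: this.colors }); },
    function (key, values) { return values[values.length - 1]; },
    { out: { reduce: "domainstats", nonAtomic: true }, jsMode: true, sort: { _id: 1 } }
);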

Comment by pavan [ 31/Mar/16 ]

An observation we would like to share: when running heavy map-reduce jobs, mongod seems to respond very slowly to other queries. The server is a 4-core box, and 1 core is fully owned by the map-reduce. The map-reduce logic operates entirely on different collections - temp collections, whose data is merged into yet another collection. The queries being issued are totally isolated from the map-reduce's collections; even so, reads intermittently take a long time to come back - quite visible on the front-end Node app - up to 16 seconds in our case (the Global lock acquire wait of 16598608 µs in the trace accounts for essentially all of the 16707ms count). In the trace below the slow operation is

'[conn5599] command dim3.scans command: count'

2016-03-31T12:45:25.851-0700 I COMMAND [conn5720] command dim3.domainstats command: mapReduce { mapreduce: "whois.billing_stateprovince_56cc44bbf2e5301119d51e37s", map: "function () {
var key = this._id;
var value= {'colors':this.colors};
if(value.colors){
//coming from aggregate collections
va...", reduce: "function (key, values) {
//printjson(values)
var reduce ={colors:{whois:{}}};
values.forEach(function(value){
var properties = Ob...", verbose: true, out: { reduce: "domainstats" }, jsMode: true, sort: { _id: 1 }, readPreference: "primary" } planSummary: COUNT keyUpdates:0 writeConflicts:0 numYields:961 reslen:218 locks:{ Global: { acquireCount: { r: 432022, w: 245061, W: 61263 } }, Database: { acquireCount: { r: 61752, w: 245055, R: 1097, W: 8 }, acquireWaitCount: { W: 4 }, timeAcquiringMicros: { W: 66963 } }, Collection: { acquireCount: { r: 61752, w: 122530 } }, Metadata: { acquireCount: { w: 122527 } }, oplog: { acquireCount: { w: 122527 } } } protocol:op_query 31020ms
2016-03-31T12:45:25.880-0700 I COMMAND [conn5599] command dim3.scans command: count { count: "scans", query: { $and: [ { scan_status: "Filter Domains" } ], brand_id: { $in: [ ObjectId('56cd9f5898fd55053830554d'), ObjectId('56cea82d8a0e5d7952142b5e'), ObjectId('56cea94366e6e6a354584d97'), ObjectId('56d5e2203c6909fb46e08883'), ObjectId('56e122cc2d661b6302c47ced'), ObjectId('56e1233f2d661b6302c47cf2'), ObjectId('56e1239b2d661b6302c47cf3'), ObjectId('56e124142d661b6302c47cf4'), ObjectId('56e13eca2d661b6302c47d0d'), ObjectId('56e13f182d661b6302c47d13'), ObjectId('56e167562d661b6302c47d45'), ObjectId('56e167a02d661b6302c47d46'), ObjectId('56e167ba2d661b6302c47d47'), ObjectId('56e7b8fc0ae6fbaa5e3907a2'), ObjectId('56f3b357b168e8630b701bf5'), ObjectId('56fabd55a0949280749497bb') ] } } } planSummary: IXSCAN { scan_status: 1.0 } keyUpdates:0 writeConflicts:0 numYields:5 reslen:62 locks:{ Global: { acquireCount: { r: 12 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 16598608 } }, Database: { acquireCount: { r: 6 }, acquireWaitCount: { r: 4 }, timeAcquiringMicros: { r: 18213 } }, Collection: { acquireCount: { r: 6 } } } protocol:op_query 16707ms
2016-03-31T12:45:25.903-0700 I COMMAND [conn5600] command dim3.scans command: count { count: "scans", query: { brand_id: { $in: [ ObjectId('56cd9f5898fd55053830554d'), ObjectId('56cea82d8a0e5d7952142b5e'), ObjectId('56cea94366e6e6a354584d97'), ObjectId('56d5e2203c6909fb46e08883'), ObjectId('56e122cc2d661b6302c47ced'), ObjectId('56e1233f2d661b6302c47cf2'), ObjectId('56e1239b2d661b6302c47cf3'), ObjectId('56e124142d661b6302c47cf4'), ObjectId('56e13eca2d661b6302c47d0d'), ObjectId('56e13f182d661b6302c47d13'), ObjectId('56e167562d661b6302c47d45'), ObjectId('56e167a02d661b6302c47d46'), ObjectId('56e167ba2d661b6302c47d47'), ObjectId('56e7b8fc0ae6fbaa5e3907a2'), ObjectId('56f3b357b168e8630b701bf5'), ObjectId('56fabd55a0949280749497bb') ] } } } planSummary: COLLSCAN keyUpdates:0 writeConflicts:0 numYields:7 reslen:62 locks:{ Global: { acquireCount: { r: 16 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 15101649 } }, Database: { acquireCount: { r: 8 }, acquireWaitCount: { r: 4 }, timeAcquiringMicros: { r: 22334 } }, Collection: { acquireCount: { r: 8 } } } protocol:op_query 15232ms
2016-03-31T12:45:25.918-0700 I COMMAND [conn5722] CMD: drop dim3.tmp.mr.whois.registrar_56cc44bbf2e5301119d51e37s_3850
2016-03-31T12:45:26.060-0700 I NETWORK [initandlisten] connection accepted from 172.31.32.189:37107 #5824 (358 connections now open)
2016-03-31T12:45:28.005-0700 I - [conn5722] M/R: (1/3) Emit Progress: 54900/61061 89%
2016-03-31T12:45:31.006-0700 I - [conn5722] M/R: (3/3) Final Reduce Progress: 56600/61061 92%
2016-03-31T12:45:34.016-0700 I - [conn5722] M/R Reduce Post Processing Progress: 6400/61061 10%
2016-03-31T12:45:37.016-0700 I - [conn5722] M/R Reduce Post Processing Progress: 13300/61061 21%
2016-03-31T12:45:40.040-0700 I - [conn5722] M/R Reduce Post Processing Progress: 20200/61061 33%
2016-03-31T12:45:41.222-0700 I NETWORK [initandlisten] connection accepted from 172.31.32.189:37108 #5825 (359 connections now open)
2016-03-31T12:45:41.224-0700 I NETWORK [conn5823] end connection 172.31.32.189:37106 (358 connections now open)
2016-03-31T12:45:43.039-0700 I - [conn5722] M/R Reduce Post Processing Progress: 27000/61061 44%
2016-03-31T12:45:46.021-0700 I - [conn5722] M/R Reduce Post Processing Progress: 33700/61061 55%
2016-03-31T12:45:49.030-0700 I - [conn5722] M/R Reduce Post Processing Progress: 40700/61061 66%
2016-03-31T12:45:52.013-0700 I - [conn5722] M/R Reduce Post Processing Progress: 47500/61061 77%
2016-03-31T12:45:55.039-0700 I - [conn5722] M/R Reduce Post Processing Progress: 54400/61061 89%
2016-03-31T12:45:58.000-0700 I COMMAND [conn5722] CMD: drop dim3.tmp.mr.whois.registrar_56cc44bbf2e5301119d51e37s_3850
2016-03-31T12:45:58.001-0700 I COMMAND [conn5824] command local.oplog.rs command: getMore { getMore: 117817015601, collection: "oplog.rs", maxTimeMS: 5000, term: 17, lastKnownCommittedOpTime: { ts: Timestamp 1459453531000|4444, t: 17 } } cursorid:117817015601 keyUpdates:0 writeConflicts:0 exception: operation exceeded time limit code:50 numYields:0 reslen:74 locks:{ Global: { acquireCount: { r: 2 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 26779529 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } protocol:op_command 26779ms
2016-03-31T12:45:58.002-0700 I NETWORK [conn5824] end connection 172.31.32.189:37107 (357 connections now open)
2016-03-31T12:45:58.002-0700 I NETWORK [conn5825] end connection 172.31.32.189:37108 (356 connections now open)

Comment by Githook User [ 23/Mar/16 ]

Author:

{u'username': u'benety', u'name': u'Benety Goh', u'email': u'benety@mongodb.com'}

Message: SERVER-23097 fixed lint in MapReduce
Branch: master
https://github.com/mongodb/mongo/commit/7bd2ffaaadae400efc05d642d43436210aa835b3

Comment by Githook User [ 23/Mar/16 ]

Author:

{u'username': u'jameswahlin', u'name': u'James Wahlin', u'email': u'james.wahlin@10gen.com'}

Message: SERVER-23097 Fix segfault on invalid BSONObj reference in MapReduce
Branch: master
https://github.com/mongodb/mongo/commit/bcac0c80bcf1e4c6b7a55e165f9d08336338068d

Comment by Githook User [ 23/Mar/16 ]

Author:

{u'username': u'jameswahlin', u'name': u'James Wahlin', u'email': u'james.wahlin@10gen.com'}

Message: SERVER-23097 Improve killed executor handling in MapReduce

(cherry picked from commit 200b4f971b021f792194489c8ffbc95b9f9cba35)
Branch: v3.2
https://github.com/mongodb/mongo/commit/20bafb68ae90b11883258d25a2c380843d01a1d7

Comment by Githook User [ 23/Mar/16 ]

Author:

{u'username': u'jameswahlin', u'name': u'James Wahlin', u'email': u'james.wahlin@10gen.com'}

Message: SERVER-23097 Handle PlanExecutor error in MapReduce

(selective cherry pick of mr.cpp changes from commit
a12f3f807900829a36f97dc777f98bebe74ad591)
Branch: v3.2
https://github.com/mongodb/mongo/commit/03f73c9d8f3b593b2bb3b6abf6135cde78163b99

Comment by James Wahlin [ 23/Mar/16 ]

Reopening to address concurrency suite test failure.

Comment by Githook User [ 21/Mar/16 ]

Author:

{u'username': u'jameswahlin', u'name': u'James Wahlin', u'email': u'james.wahlin@10gen.com'}

Message: SERVER-23097 Improve killed executor handling in MapReduce
Branch: master
https://github.com/mongodb/mongo/commit/200b4f971b021f792194489c8ffbc95b9f9cba35

Comment by James Wahlin [ 15/Mar/16 ]

Thanks Pavan! We believe we understand where the issue lies (our 3.2 MapReduce code is not handling collection drops adequately), but having this information may help confirm. We will proceed with our patch either way and will supplement if needed.

Comment by pavan [ 15/Mar/16 ]

Hello James, I increased the log level to 2 on dev/qa/prod. I tried to simulate the problem on my Mac by running concurrent operations on the collection that feeds the aggregation logic - bulk inserts and fetches - while running the aggregation at the same time; no errors so far. On the Mac it was a 2-node replica set. The logging is enabled now; I will share the logs and crash report as soon as we see the issue again - Thx.

Comment by James Wahlin [ 14/Mar/16 ]

Hi ppeddada,

Thanks for sharing details on how your application is triggering this. I will be working to investigate this issue and will update once I have information to share.

If possible, can you increase the logging level to 2 on your primary and trigger this issue? Log level 2 will write operations to the log file before execution and will provide details on the operation that is triggering the segmentation fault. If you can, please attach the log output to this ticket, including the crash and the preceding operation for the given connection (so for your example above, the preceding operation would contain "[conn1601]").
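
For reference, one way to raise the verbosity from the mongo shell (this sets the global log level on the node you are connected to; revert it once the crash has been captured):

// Log level 2 causes operations to be written to the log before execution.
db.setLogLevel(2);
// Equivalent server parameter form: db.adminCommand({ setParameter: 1, logLevel: 2 })
// Revert afterwards with: db.setLogLevel(0)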

Thanks,
James

Comment by pavan [ 14/Mar/16 ]

We had this error again in our QA environment today; it is pretty consistent whenever the aggregation pipeline is run. The cluster/replica set is a 3-node system. We drop the aggregation output collections right after the pipeline execution, and I am not sure whether the oplog/sync process between the nodes has problems with the visibility of dropped collections - e.g. trying to access a collection that has already been dropped? Something to note on this specific use case: the Node layer launches the aggregation command and, on completion, drops the output collections.
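
Roughly, that application-side pattern looks like the following (a sketch against the 2.x Node.js driver; the connection string, names, and pipeline are illustrative, not our actual code):

// Launch the aggregation with $out, read the per-key results, then drop the
// output collection. The drop is issued as soon as the pipeline's callback
// fires, so it can overlap with server-side work that is still running.
var MongoClient = require('mongodb').MongoClient;
var ObjectID = require('mongodb').ObjectID;

MongoClient.connect('mongodb://qa-replset-host:27017/dim3', function (err, db) {
    if (err) throw err;
    var outName = 'whois.admin_organization_56e37ec5f2867caf306845d0s';
    db.collection('domains').aggregate(
        [ { $match: { scan_id: new ObjectID('56e37ec5f2867caf306845d0') } },
          { $out: outName } ],
        { allowDiskUse: true }
    ).toArray(function (err) {
        if (err) throw err;
        // ... read outName and merge it into the unified collection, then:
        db.dropCollection(outName, function (err) {
            if (err) throw err;
            db.close();
        });
    });
});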

2016-03-14T02:22:52.081-0700 I COMMAND  [conn1703] command dim3.domains command: aggregate { aggregate: "domains", pipeline: [ { $match: { scan_id: ObjectId('56e37ec5f2867caf306845d0') } }, { $group: { _id: { admin_organization: "$whois.admin_organization" }, count: { $sum: 1 }, keys: { $push: "$_id" }, scan_id: { $first: "$scan_id" } } }, { $match: { count: { $gt: 1 } } }, { $project: { _id: 0, keys: "$keys", scan_id: "$scan_id", colors: { whois: { admin_organization: { value: "$_id.admin_organization", count: "$count" } } } } }, { $unwind: "$keys" }, { $project: { _id: "$keys", scan_id: "$scan_id", colors: "$colors" } }, { $sort: { _id: 1 } }, { $out: "whois.admin_organization_56e37ec5f2867caf306845d0s" } ], allowDiskUse: true } keyUpdates:0 writeConflicts:0 numYields:1 reslen:68 locks:{ Global: { acquireCount: { r: 23, w: 8, W: 1 }, acquireWaitCount: { r: 2, w: 1, W: 1 }, timeAcquiringMicros: { r: 181655, w: 108645, W: 121857 } }, Database: { acquireCount: { r: 6, w: 6, R: 1, W: 2 }, acquireWaitCount: { r: 4, R: 1, W: 2 }, timeAcquiringMicros: { r: 106549, R: 43136, W: 175094 } }, Collection: { acquireCount: { r: 6, w: 1 } }, Metadata: { acquireCount: { w: 102 } }, oplog: { acquireCount: { w: 5 } } } protocol:op_query 777ms
2016-03-14T02:22:52.081-0700 I COMMAND  [conn1728] command dim3.domains command: aggregate { aggregate: "domains", pipeline: [ { $match: { scan_id: ObjectId('56e37ec5f2867caf306845d0') } }, { $group: { _id: { created_date: "$whois.created_date" }, count: { $sum: 1 }, keys: { $push: "$_id" }, scan_id: { $first: "$scan_id" } } }, { $match: { count: { $gt: 1 } } }, { $project: { _id: 0, keys: "$keys", scan_id: "$scan_id", colors: { whois: { created_date: { value: "$_id.created_date", count: "$count" } } } } }, { $unwind: "$keys" }, { $project: { _id: "$keys", scan_id: "$scan_id", colors: "$colors" } }, { $sort: { _id: 1 } }, { $out: "whois.created_date_56e37ec5f2867caf306845d0s" } ], allowDiskUse: true } keyUpdates:0 writeConflicts:0 numYields:1 reslen:68 locks:{ Global: { acquireCount: { r: 23, w: 8, W: 1 }, acquireWaitCount: { r: 2, w: 1, W: 1 }, timeAcquiringMicros: { r: 141075, w: 108388, W: 158985 } }, Database: { acquireCount: { r: 6, w: 6, R: 1, W: 2 }, acquireWaitCount: { r: 4, R: 1, W: 2 }, timeAcquiringMicros: { r: 71893, R: 37179, W: 217667 } }, Collection: { acquireCount: { r: 6, w: 1 } }, Metadata: { acquireCount: { w: 100 } }, oplog: { acquireCount: { w: 5 } } } protocol:op_query 767ms
2016-03-14T02:22:52.081-0700 I COMMAND  [conn1721] command dim3.domains command: aggregate { aggregate: "domains", pipeline: [ { $match: { scan_id: ObjectId('56e37ec5f2867caf306845d0') } }, { $group: { _id: { billing_fax: "$whois.billing_fax" }, count: { $sum: 1 }, keys: { $push: "$_id" }, scan_id: { $first: "$scan_id" } } }, { $match: { count: { $gt: 1 } } }, { $project: { _id: 0, keys: "$keys", scan_id: "$scan_id", colors: { whois: { billing_fax: { value: "$_id.billing_fax", count: "$count" } } } } }, { $unwind: "$keys" }, { $project: { _id: "$keys", scan_id: "$scan_id", colors: "$colors" } }, { $sort: { _id: 1 } }, { $out: "whois.billing_fax_56e37ec5f2867caf306845d0s" } ], allowDiskUse: true } keyUpdates:0 writeConflicts:0 numYields:1 reslen:68 locks:{ Global: { acquireCount: { r: 23, w: 8, W: 1 }, acquireWaitCount: { r: 2, w: 1, W: 1 }, timeAcquiringMicros: { r: 203895, w: 109771, W: 98989 } }, Database: { acquireCount: { r: 6, w: 6, R: 1, W: 2 }, acquireWaitCount: { r: 4, R: 1, W: 2 }, timeAcquiringMicros: { r: 64782, R: 52577, W: 215537 } }, Collection: { acquireCount: { r: 6, w: 1 } }, Metadata: { acquireCount: { w: 104 } }, oplog: { acquireCount: { w: 5 } } } protocol:op_query 770ms
2016-03-14T02:22:52.081-0700 F -        [conn1601] Invalid access at address: 0
2016-03-14T02:22:52.082-0700 I COMMAND  [conn1921] command local.oplog.rs command: getMore { getMore: 787104288244, collection: "oplog.rs", maxTimeMS: 5000, term: 9, lastKnownCommittedOpTime: { ts: Timestamp 1457947371000|556, t: 9 } } cursorid:787104288244 keyUpdates:0 writeConflicts:0 numYields:16 nreturned:2107 reslen:422173 locks:{ Global: { acquireCount: { r: 34 }, acquireWaitCount: { r: 3 }, timeAcquiringMicros: { r: 420899 } }, Database: { acquireCount: { r: 17 } }, oplog: { acquireCount: { r: 17 } } } protocol:op_command 426ms
2016-03-14T02:22:52.082-0700 I COMMAND  [conn1885] command local.oplog.rs command: getMore { getMore: 790194289981, collection: "oplog.rs", maxTimeMS: 5000, term: 9, lastKnownCommittedOpTime: { ts: Timestamp 1457947371000|556, t: 9 } } cursorid:790194289981 keyUpdates:0 writeConflicts:0 numYields:13 nreturned:1739 reslen:347305 locks:{ Global: { acquireCount: { r: 28 }, acquireWaitCount: { r: 3 }, timeAcquiringMicros: { r: 418458 } }, Database: { acquireCount: { r: 14 } }, oplog: { acquireCount: { r: 14 } } } protocol:op_command 427ms
2016-03-14T02:22:52.089-0700 F -        [conn1601] Got signal: 11 (Segmentation fault).

Comment by Charlie Swanson [ 11/Mar/16 ]

Backtrace from the logs:

2016-03-11T18:56:00.944+0000 F -        [conn29677] Got signal: 11 (Segmentation fault).
 
 0x12f14b2 0x12f0609 0x12f0988 0x7f074e3b4340 0xafd0de 0xaccda8 0xb495c5 0xba5449 0xba6106 0xb02330 0xcaf525 0xcb1db6 0x9968cc 0x129ee6d 0x7f074e3ac182 0x7f074e0d947d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"EF14B2","s":"_ZN5mongo15printStackTraceERSo"},{"b":"400000","o":"EF0609"},{"b":"400000","o":"EF0988"},{"b":"7F074E3A4000","o":"10340"},{"b":"400000","o":"6FD0DE","s":"_ZN5mongo27CollectionIndexUsageTracker17recordIndexAccessENS_10StringDataE"},{"b":"400000","o":"6CCDA8","s":"_ZN5mongo19CollectionInfoCache13notifyOfQueryEPNS_16OperationContextERKSt3setISsSt4lessISsESaISsEE"},{"b":"400000","o":"7495C5","s":"_ZN5mongo2mr16MapReduceCommand3runEPNS_16OperationContextERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderE"},{"b":"400000","o":"7A5449","s":"_ZN5mongo7Command3runEPNS_16OperationContextERKNS_3rpc16RequestInterfaceEPNS3_21ReplyBuilderInterfaceE"},{"b":"400000","o":"7A6106","s":"_ZN5mongo7Command11execCommandEPNS_16OperationContextEPS0_RKNS_3rpc16RequestInterfaceEPNS4_21ReplyBuilderInterfaceE"},{"b":"400000","o":"702330","s":"_ZN5mongo11runCommandsEPNS_16OperationContextERKNS_3rpc16RequestInterfaceEPNS2_21ReplyBuilderInterfaceE"},{"b":"400000","o":"8AF525"},{"b":"400000","o":"8B1DB6","s":"_ZN5mongo16assembleResponseEPNS_16OperationContextERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE"},{"b":"400000","o":"5968CC","s":"_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE"},{"b":"400000","o":"E9EE6D","s":"_ZN5mongo17PortMessageServer17handleIncomingMsgEPv"},{"b":"7F074E3A4000","o":"8182"},{"b":"7F074DFDF000","o":"FA47D","s":"clone"}],"processInfo":{ "mongodbVersion" : "3.2.3", "gitVersion" : "b326ba837cf6f49d65c2f85e1b70f6f31ece7937", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "3.13.0-74-generic", "version" : "#118-Ubuntu SMP Thu Dec 17 22:52:10 UTC 2015", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "C1CD0F405485844DA016C6B5275C8BEF3D68DB7A" }, { "b" : "7FFD17EBB000", "elfType" : 3, "buildId" : "DC075B751E9FB361F14CD59BD81300A6BB5CB377" }, { "b" : "7F074F5C9000", "path" : "/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "D08DD65F97859C71BB2CBBF1043BD968EFE18AAD" }, { "b" : "7F074F1EE000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "F86FA9FB4ECEB4E06B40DBDF761A4172B70A4229" }, { "b" : "7F074EFE6000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "92FCF41EFE012D6186E31A59AD05BDBB487769AB" }, { "b" : "7F074EDE2000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "C1AE4CB7195D337A77A3C689051DABAA3980CA0C" }, { "b" : "7F074EADE000", "path" : "/usr/lib/x86_64-linux-gnu/libstdc++.so.6", "elfType" : 3, "buildId" : "4BF6F7ADD8244AD86008E6BF40D90F8873892197" }, { "b" : "7F074E7D8000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "1D76B71E905CB867B27CEF230FCB20F01A3178F5" }, { "b" : "7F074E5C2000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "36311B4457710AE5578C4BF00791DED7359DBB92" }, { "b" : "7F074E3A4000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "9318E8AF0BFBE444731BB0461202EF57F7C39542" }, { "b" : "7F074DFDF000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "30C94DC66A1FE95180C3D68D2B89E576D5AE213C" }, { "b" : "7F074F828000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "9F00581AB3C73E3AEA35995A0C50D24D59A01D47" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x12f14b2]
 mongod(+0xEF0609) [0x12f0609]
 mongod(+0xEF0988) [0x12f0988]
 libpthread.so.0(+0x10340) [0x7f074e3b4340]
 mongod(_ZN5mongo27CollectionIndexUsageTracker17recordIndexAccessENS_10StringDataE+0x62E) [0xafd0de]
 mongod(_ZN5mongo19CollectionInfoCache13notifyOfQueryEPNS_16OperationContextERKSt3setISsSt4lessISsESaISsEE+0x38) [0xaccda8]
 mongod(_ZN5mongo2mr16MapReduceCommand3runEPNS_16OperationContextERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderE+0xAF5) [0xb495c5]
 mongod(_ZN5mongo7Command3runEPNS_16OperationContextERKNS_3rpc16RequestInterfaceEPNS3_21ReplyBuilderInterfaceE+0x3F9) [0xba5449]
 mongod(_ZN5mongo7Command11execCommandEPNS_16OperationContextEPS0_RKNS_3rpc16RequestInterfaceEPNS4_21ReplyBuilderInterfaceE+0x406) [0xba6106]
 mongod(_ZN5mongo11runCommandsEPNS_16OperationContextERKNS_3rpc16RequestInterfaceEPNS2_21ReplyBuilderInterfaceE+0x1F0) [0xb02330]
 mongod(+0x8AF525) [0xcaf525]
 mongod(_ZN5mongo16assembleResponseEPNS_16OperationContextERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x696) [0xcb1db6]
 mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE+0xEC) [0x9968cc]
 mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x26D) [0x129ee6d]
 libpthread.so.0(+0x8182) [0x7f074e3ac182]
 libc.so.6(clone+0x6D) [0x7f074e0d947d]
-----  END BACKTRACE  -----

Comment by Ramon Fernandez Marina [ 11/Mar/16 ]

Thanks for your report ppeddada, the Query team is investigating this issue.
