[SERVER-35133] non-"raw" stack traces full of "Cannot access memory" errors Created: 21/May/18  Updated: 27/Oct/23  Resolved: 18/Dec/18

Status: Closed
Project: Core Server
Component/s: Testing Infrastructure
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Randolph Tan Assignee: Randolph Tan
Resolution: Gone away Votes: 0
Labels: former-toolchain-epic, sharding-wfbf-day, tig-hanganalyzer
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Depends
Related
Sprint: Sharding 2018-12-31
Participants:
Linked BF Score: 17

 Description   

link to task: https://evergreen.mongodb.com/task/mongodb_mongo_master_enterprise_rhel_72_s390x_inmem_concurrency_sharded_replication_with_balancer_cf339b8a8d8708e8b28747fe0cafee7cc79fe9a6_18_05_16_00_03_26/0

Example output:

Writing raw stacks to debugger_mongod_7400_raw_stacks.log.
Redirecting output to debugger_mongod_7400_raw_stacks.log.
Done logging to debugger_mongod_7400_raw_stacks.log.
MongoDB GDB commands loaded, run 'mongodb-help' for list of commands
MongoDB GDB pretty-printers loaded
MongoDB Lock analysis commands loaded
Thread 70: "signalP.gThread" (Thread 0x3ff8eeff910 (LWP 7410)) Error occurred in Python command: Cannot access memory at address 0x6a9aa67f880
 
...
 
Thread 1: "mongod" (Thread 0x3ff92a62aa0 (LWP 7400)) Error occurred in Python command: Cannot access memory at address 0x6a9ae1e2a10
Thread 70: "signalP.gThread" (Thread 0x3ff8eeff910 (LWP 7410))
Traceback (most recent call last):
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo.py", line 174, in invoke
    self._dump_unique_stacks(stacks)
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo.py", line 233, in _dump_unique_stacks
    print(stack['output'])
KeyError: 'output'
Error occurred in Python command: ('output',)
warning: target file /proc/7400/cmdline contained unexpected null characters
Saved corefile dump_mongod.7400.core
Running Hang Analyzer Supplement - MongoDBDumpLocks
Traceback (most recent call last):
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 387, in invoke
    self.mongodb_show_locks()
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 394, in mongodb_show_locks
    get_locks(graph=None, thread_dict=thread_dict, show=True)
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 348, in get_locks
    find_mutex_holder(graph, thread_dict, show)
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 272, in find_mutex_holder
    frame = find_frame(r'std::mutex::lock\(\)')
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 254, in find_frame
    block = frame.block()
RuntimeError: Cannot locate object file for block.
Error occurred in Python command: Cannot locate object file for block.
Traceback (most recent call last):
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 412, in invoke
    self.mongodb_waitsfor_graph(arg)
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 421, in mongodb_waitsfor_graph
    get_locks(graph=graph, thread_dict=thread_dict, show=False)
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 348, in get_locks
    find_mutex_holder(graph, thread_dict, show)
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 272, in find_mutex_holder
    frame = find_frame(r'std::mutex::lock\(\)')
  File "/data/mci/1fcc8e1e1b801a663ff93ffc102ac6ea/src/buildscripts/gdb/mongo_lock.py", line 254, in find_frame
    block = frame.block()
RuntimeError: Cannot locate object file for block.
Error occurred in Python command: Cannot locate object file for block.
Running Print JavaScript Stack Supplement
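
For illustration only, a minimal sketch (not the actual buildscripts code) of how the two Python failures above could be tolerated: reading the per-thread 'output' entry with dict.get() so a thread whose backtrace could not be captured ("Cannot access memory at address ...") does not raise KeyError, and catching the RuntimeError that gdb.Frame.block() raises for frames without usable debug information. The function names mirror the tracebacks; the bodies and data shapes are assumptions.

# Runs under GDB's embedded Python, where the gdb module is available.
import re
import gdb


def dump_unique_stacks(stacks):
    """Print each collected stack, tolerating threads whose backtrace
    failed and therefore never had an 'output' entry populated."""
    for stack in stacks:  # assumes an iterable of dicts, one per unique stack
        # .get() avoids the KeyError: 'output' seen in the log above.
        print(stack.get('output', '<stack unavailable: backtrace failed>'))


def find_frame(function_name_pattern):
    """Walk frames from newest to oldest looking for a matching function,
    skipping frames whose debug info is unavailable instead of letting
    frame.block() raise "Cannot locate object file for block"."""
    frame = gdb.newest_frame()
    while frame is not None:
        try:
            block = frame.block()
        except RuntimeError:
            # No object file / debug info for this frame; keep walking.
            frame = frame.older()
            continue
        if block.function and re.search(function_name_pattern,
                                        block.function.name):
            return frame
        frame = frame.older()
    return None

Either guard would let the hang analyzer keep iterating over the remaining threads and frames instead of aborting the whole Python command partway through.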



 Comments   
Comment by Randolph Tan [ 18/Dec/18 ]

This particular issue doesn't appear to be occurring anymore. Test: https://evergreen.mongodb.com/task/mongodb_mongo_master_enterprise_rhel_72_s390x_inmem_concurrency_sharded_replication_patch_b37b5ef7ec0ec2e502423d53e6c0d6e86b343c27_5c1821112a60ed7a17c1b33f_18_12_17_22_20_47##%257B%2522compare%2522%253A%255B%257B%2522hash%2522%253A%2522b37b5ef7ec0ec2e502423d53e6c0d6e86b343c27%2522%257D%255D%257D

Comment by Randolph Tan [ 04/Dec/18 ]

I've placed this in the next sprint.

Comment by Andrew Morrow (Inactive) [ 04/Dec/18 ]

renctan - Can I put this in a sprint that you are working on so it is on your queue? Or an upcoming sprint?

Comment by Andrew Morrow (Inactive) [ 28/Nov/18 ]

renctan - I'm reassigning this ticket back to you because we have recently deployed a new version of GDB. Could you please test out the GDB found on the relevant spawn host and see if this issue is now fixed for you?

Comment by Randolph Tan [ 21/May/18 ]

max.hirschhorn - I didn't realize this was zSeries. Feel free to close if there is already a ticket for it.
