[SERVER-45034] [ftdc] serverStatus was very slow Created: 09/Dec/19  Updated: 11/Feb/20  Resolved: 11/Feb/20

Status: Closed
Project: Core Server
Component/s: WiredTiger
Affects Version/s: 4.0.9
Fix Version/s: None

Type: Question Priority: Major - P3
Reporter: Afonso Rodrigues Assignee: Dmitry Agranat
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File RetransSegs.png     PNG File co-located-data-and-journal.png     PNG File data-disk.png     PNG File data-disks.png     PNG File journal-disks.png     PNG File separated-data-journal-bottleneck.png     PNG File tickets.png    
Backwards Compatibility: Fully Compatible
Participants:

 Description   

I receive this error on my MongoDB server: [ftdc] serverStatus was very slow.

But I cannot identify the root cause.
I increased the size of my machine, but the error persists.



 Comments   
Comment by Dmitry Agranat [ 11/Feb/20 ]

Hi afonso.rodrigues@maxmilhas.com.br,

We haven’t heard back from you for some time, so I’m going to mark this ticket as resolved. If this is still an issue for you, please provide additional information and we will reopen the ticket.

Regards,
Dima

Comment by Dmitry Agranat [ 27/Jan/20 ]

Hi afonso.rodrigues@maxmilhas.com.br,

We still need additional information to diagnose the problem. If this is still an issue for you, would you please provide the requested information?

Thanks,
Dima

Comment by Dmitry Agranat [ 14/Jan/20 ]

Hi afonso.rodrigues@maxmilhas.com.br, we will need to collect a fresh set of data. After investigating the latest set, I've noticed that you've increased the number of WiredTiger read and write tickets from the default 128 to 5000. In general, we do not recommend changing the number of tickets, as it does not help remove any bottleneck. In this specific workload, I suspect this change might actually be causing additional bottlenecks we have not seen before.

Please set these two back to their default (128) on the server where data and journal are separated, let it run for a couple of days, and re-upload the usual set of data.
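For reference, a minimal sketch of restoring the defaults at runtime from the mongo shell (assuming the values were raised via setParameter rather than in the config file):

db.adminCommand( { setParameter: 1, wiredTigerConcurrentReadTransactions: 128 } )
db.adminCommand( { setParameter: 1, wiredTigerConcurrentWriteTransactions: 128 } )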

Thanks,
Dima

Comment by Dmitry Agranat [ 13/Jan/20 ]

Thanks afonso.rodrigues@maxmilhas.com.br, I am going to look into the latest data soon.

Comment by Afonso Rodrigues [ 09/Jan/20 ]

Hi Dima,

Thanks for your question.

After changing the server, I received new incidents.
I did not remember to change the tcp_mem parameter, and after the change this error occurred again.

I changed the server as a troubleshooting step, but it is not ideal.

I have three servers that the application switches between, but I will send the files only for the server with separate devices for log, journal, and data.
Much of the time this server is not in use because the job runs on the other one.

Comment by Dmitry Agranat [ 07/Jan/20 ]

Hi afonso.rodrigues@maxmilhas.com.br, did you notice new occurrences of "serverStatus was very slow" events after separating data and journal? Could you upload the fresh set of diagnostic.data for us to review?

The graphs here are being generated by our internal tools.

Comment by Afonso Rodrigues [ 03/Jan/20 ]

Hi Dima,
Thanks for the update.

I modified another instance to separate the data and journal storage.
I changed the instance to facilitate the migration.

I mounted the filesystems on separate devices like this:

/dev/xvda1 on / type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)
/dev/md5 on /var/lib/mongo type xfs (rw,noatime,nodiratime,attr2,inode64,sunit=1024,swidth=4096,noquota)
/dev/md6 on /var/lib/mongo/journal type xfs (rw,noatime,nodiratime,attr2,inode64,sunit=1024,swidth=4096,noquota)
 
/dev/md5        7.0T  186G  6.8T   3% /var/lib/mongo
/dev/md6        7.0T  4.0G  7.0T   1% /var/lib/mongo/journal

I will observe my workload for a few days.
Thanks.

Also, how did you create the graphics below?

Comment by Dmitry Agranat [ 02/Jan/20 ]

Hi afonso.rodrigues@maxmilhas.com.br, thanks for the update.

After the profiling was disabled, we are now able to see the next bottleneck.

This bottleneck is related to co-locating data and journal and your access pattern, which we do not recommend.

  • There are several occurrences of operational stalls over the time period you've provided. However, I've chosen just a short period of time with only 2 such events for better visibility.
  • The log entries here indicate occurrences of slow serverStatus messages in the mongod log. We can see these messages being logged right after the operational stalls are over (right after the gaps in the graph).
  • During these operational stalls, which are associated with update operations, the server is trying to write simultaneously about 350 MB/s for the checkpoint (see the block-manager bytes written for checkpoint and transaction checkpoint currently running metrics) as well as about 2.4 GB for the journal (see the log bytes written metric).
  • The impact on the journal is confirmed by the active slot closed metric, which is symptomatic of an I/O bottleneck affecting the journal.
  • Additional metrics confirm this; for example, we can see long fsync calls to the OS in service of the journal (see the log sync time duration metric), which is indicative of an I/O performance impact on journaling.

Separating data and journal into different storage devices should address this bottleneck.
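As an illustration only (the device name is a placeholder, the dbpath /var/lib/mongo is assumed, and mongod must be shut down cleanly first), moving the journal onto its own device typically looks like:

systemctl stop mongod
mkfs.xfs /dev/<journal-device>                      # hypothetical dedicated device
mount /dev/<journal-device> /var/lib/mongo/journal  # add to /etc/fstab to persist
chown -R mongod:mongod /var/lib/mongo/journal
systemctl start mongod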

Thanks,
Dima

Comment by Afonso Rodrigues [ 02/Jan/20 ]

Hi Dima,

I uploaded the files with the information.

Thanks so much.

Comment by Afonso Rodrigues [ 02/Jan/20 ]

Hi Dima,
Thanks for your update.

Happy new year.

I will upload the diagnostic.data.
But in this interval I had incidents where sometimes restarting the connections resolved the problem and sometimes I had to restart the mongod process.

Comment by Dmitry Agranat [ 02/Jan/20 ]

Hi afonso.rodrigues@maxmilhas.com.br, can you update us on the progress and upload the diagnostic.data for us to review the current status?

Comment by Afonso Rodrigues [ 24/Dec/19 ]

Hi Dima,

Thanks for your update.

I am waiting a few days to send the new metrics.

Thanks so much.

Comment by Dmitry Agranat [ 24/Dec/19 ]

Hi afonso.rodrigues@maxmilhas.com.br, after checking the uploaded data, I can see that profiling was still on during both of the mentioned incidents, up until the process restart. After the restart, profiling shows as off and no incidents were observed. Please let it run for a few days and upload a fresh set of data.

Comment by Afonso Rodrigues [ 23/Dec/19 ]

Hi Dima,

Unfortunately, there were two incidents today.
One before changing the profiling level and the other after.
For the first, I restarted the application and the database came back to normal.
For the second, I had to restart the database.

I am sending the current metrics, but I don't have the perf metrics.
I will observe my environment for a few days.

Thanks so much.

Comment by Afonso Rodrigues [ 23/Dec/19 ]

Hi Dima,

Thanks for your update.

I changed the value on 23/12/2019 at 10:55 UTC.

> db.setProfilingLevel(0, { slowms: 15000,sampleRate: 0.01 })
{ "was" : 1, "slowms" : 100, "sampleRate" : 1, "ok" : 1 }
> db.getProfilingStatus()
{ "was" : 0, "slowms" : 15000, "sampleRate" : 0.01 }
> 

I am waiting a few days to send the new metrics.

Thanks so much.

Comment by Dmitry Agranat [ 23/Dec/19 ]

Hi afonso.rodrigues@maxmilhas.com.br, to disable profiling, use the following helper in the mongo shell:

db.setProfilingLevel(0)

To view the profiling level, issue the following from the mongo shell:

db.getProfilingStatus()

Example output:

{ "was" : 0, "slowms" : 100, "sampleRate" : 1.0, "ok" : 1 }

The was field indicates the current profiling level, so if it shows "0", it means profiling is disabled.

You can review our documentation to see how to change other parameters, such as slowms and sampleRate.
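For example, a sketch that keeps profiling off while raising the slow-operation threshold and sampling only 1% of slow operations (the specific values are only illustrative):

db.setProfilingLevel(0, { slowms: 15000, sampleRate: 0.01 })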

Thanks,
Dima

Comment by Afonso Rodrigues [ 22/Dec/19 ]

Hi Dima

I am testing the configuration to stop the profiling on another instance.

This is the current configuration on my server:

	> db.getProfilingStatus()
	{ "was" : 1, "slowms" : 100, "sampleRate" : 1 }
	> db.getProfilingLevel()
	1
	> 

Below is the test using a server with the same version: 4.0.9

 
	operationProfiling:
   		mode: slowOp
   		slowOpThresholdMs: 100
   		slowOpSampleRate: 1
 
 
	wget https://dl.dropboxusercontent.com/s/gxbsj271j5pevec/trades.json
	mongoimport --db trade --collection trades --type json --file ./trades.json
 
	mongo ->
 
		> db.getProfilingStatus()
		{ "was" : 1, "slowms" : 100, "sampleRate" : 1 }
		> 	
		1
 
		use trade
		> db.trades.find({"price":{"gte":200}}).count()
		0
 
	2019-12-22T19:33:55.656+0000 I COMMAND  [conn13] command trade.trades appName: "MongoDB Shell" command: count { count: "trades", query: { price: { gte: 200.0 } }, fields: {}, lsid: { id: UUID("24bc2742-1510-42fc-b713-ed967ee51665") }, $db: "trade" } planSummary: COLLSCAN keysExamined:0 docsExamined:1000001 numYields:7840 reslen:45 locks:{ Global: { acquireCount: { r: 7841 } }, Database: { acquireCount: { r: 7841 } }, Collection: { acquireCount: { r: 7841 } } } storage:{ data: { bytesRead: 97087629, timeReadingMicros: 44730 }, timeWaitingMicros: { cache: 274 } } protocol:op_msg 953ms
 
	operationProfiling:
   		mode: off
   		#slowOpThresholdMs: 100
   		#slowOpSampleRate: 1
 
	mongo ->
 
		> db.getProfilingStatus()
		{ "was" : 0, "slowms" : 100, "sampleRate" : 1 }
		> db.geProfilingLevel()
		2019-12-22T16:37:46.326-0300 E QUERY    [js] TypeError: db.geProfilingLevel is not a function :
		@(shell):1:1
 
		use trade
		> db.trades.find({"price":{"gte":200}}).count()
		0
 
		2019-12-22T19:38:23.385+0000 I COMMAND  [conn2] command trade.trades appName: "MongoDB Shell" command: count { count: "trades", query: { price: { gte: 200.0 } }, fields: {}, lsid: { id: UUID("3c08479f-d06b-4865-8410-f75f3f486286") }, $db: "trade" } planSummary: COLLSCAN keysExamined:0 docsExamined:1000001 numYields:7848 reslen:45 locks:{ Global: { acquireCount: { r: 7849 } }, Database: { acquireCount: { r: 7849 } }, Collection: { acquireCount: { r: 7849 } } } storage:{ data: { bytesRead: 239358516, timeReadingMicros: 250457 }, timeWaitingMicros: { cache: 34065 } } protocol:op_msg 1299ms

The same query is logged in the log file whether the mode is slowOp or off.
In this case, should I change the mode to off and slowms to 15000?
Is this configuration correct?

operationProfiling:
   mode: off
   slowOpThresholdMs: 15000
   slowOpSampleRate: 0.01

Comment by Dmitry Agranat [ 22/Dec/19 ]

Thanks afonso.rodrigues@maxmilhas.com.br for uploading all the collected perf data. We suspect the erratic behavior you've reported is related to the fact that the profiler is enabled all the time in your system. Specifically, this is happening when we need to truncate the system.profile collection (since it's a capped collection).

Can you disable profiling, let the workload run for a few days and upload a fresh set of diagnostic.data?

Thanks,
Dima

Comment by Afonso Rodrigues [ 20/Dec/19 ]

Hi Dima,

I uploaded the file collect-20191219.tar.gz with the files from 2019-12-19.
On that day, the message "serverStatus was very slow" occurred at 2019-12-19T10:26 UTC.

Thanks so much.

Comment by Afonso Rodrigues [ 20/Dec/19 ]

Hi Dima,

Fortunately, I have not had a critical incident in the last few days.
But I still receive the message "serverStatus was very slow".

As my last attempt to work around the problem, I scheduled a restart of the server every 4 hours; this was not effective.

But I added an anti-bot WAF to my application layer on 17/12/2019.
Since 18/12/2019 it has not been necessary to restart the server due to a critical incident.

I will upload the files for 19/12/2019 with the perf files and the mongod files.

Thanks

Comment by Afonso Rodrigues [ 18/Dec/19 ]

Hi Dima,

I started collecting the information with perf.

But in my session I received these messages:

Message from syslogd@mm-mongodb-availability-02-2 at Dec 18 17:05:56 ...
 kernel:Uhhuh. NMI received for unknown reason 31 on CPU 6.
 
Message from syslogd@mm-mongodb-availability-02-2 at Dec 18 17:05:56 ...
 kernel:Do you have a strange power saving mode enabled?
 
Message from syslogd@mm-mongodb-availability-02-2 at Dec 18 17:05:56 ...
 kernel:Dazed and confused, but trying to continue

The collection is active. I am waiting for the next incident to send the files to you.

Comment by Afonso Rodrigues [ 18/Dec/19 ]

Hi Dima,

Thanks for your answer.

I will collect the information.
I will send the metrics after the next incident.
These incidents are occurring daily.

Comment by Dmitry Agranat [ 18/Dec/19 ]

Thanks afonso.rodrigues@maxmilhas.com.br, the information provided was very helpful.

In order for us to better understand what's going on, we'd like to collect some perf call stacks. This will require installing the Linux perf tool.

Execute this during your typical workload for an hour or two. It is important to catch at least several events when "serverStatus" is reported as slow. This command will capture call stack samples in separate files of 60 seconds each:

while true; do perf record -a -g -F 99 -o perf.data.$(date -u +%FT%TZ) sleep 60; done

Then run perf script on the subset of files of interest:

for fn in ...; do perf script -i $fn >$fn.txt; done

Once completed, please upload all of the generated .txt files and a fresh archive of the diagnostic.data and mongod log.

Note that it is important to run perf script on the same node where perf.data was generated so that it can be correctly symbolized using the addresses on that machine.
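As an illustration only (assuming the file-naming pattern produced by the capture loop above), the conversion could be run over every captured file like this:

for fn in perf.data.*; do perf script -i $fn >$fn.txt; done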

Thanks,
Dima

Comment by Afonso Rodrigues [ 17/Dec/19 ]

Hi Dima,

The instance type is an i3.16xlarge.

Comment by Dmitry Agranat [ 17/Dec/19 ]

Thanks afonso.rodrigues@maxmilhas.com.br, I'll have a look at the latest information and will report back with my findings. Could you clarify which AWS instance type you are using for the mm-mongodb-availability-02-2 node?

Comment by Afonso Rodrigues [ 16/Dec/19 ]

Hi Dima,
How are you?

Is the file collect-20191212.tar.gz correct?

My apologies for the last message; I did not include the information about when I changed TCP mem.
I changed it on 11/12 at 16:00 UTC-03.

Thanks

Comment by Afonso Rodrigues [ 13/Dec/19 ]

Hi Dima,
Thanks for your update.

My environment is a single instance.
I'm not using replicaSet.

I uploaded the file collect-20191212.tar.gz with the latest metrics and files.

My timezone is UTC-03 but the timezone on my server is UTC.

I'm not using a replica set because my server persists ephemeral data with a TTL of 1 hour.

Comment by Dmitry Agranat [ 12/Dec/19 ]

Thanks for the update afonso.rodrigues@maxmilhas.com.br. No need to manually collect serverStatus.

Please upload when possible:

  • The exact time when TCP mem was fixed/changed
  • The AWS instance type, whether all replica set members are running on separate instances, and whether all replica set members are deployed on the same instance type
  • An archive (tar or zip) of the mongod.log files and the $dbpath/diagnostic.data directory (see the sketch after this list)
  • A fresh dmesg.log (human version) after the TCP mem was changed
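A minimal sketch of collecting these (the log path is an assumption; the dbpath /var/lib/mongo matches the mount output in this ticket):

dmesg -T > dmesg.log                       # human-readable timestamps
tar czf mongodb-diagnostics.tar.gz dmesg.log /var/log/mongodb/mongod.log* /var/lib/mongo/diagnostic.data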

Thanks,
Dima

Comment by Afonso Rodrigues [ 12/Dec/19 ]

Hi Dima,

Thanks for your answer.

I changed the value of the parameter "net.ipv4.tcp_mem" from "65536 131072 262144" to "5888262 7851019 11776524".

I took the value from another machine running Amazon Linux with the same instance type (comparison attached).

After changing the value, the error "TCP: out of memory -- consider tuning tcp_mem" has not been received again.
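For reference, a change like this is typically applied and persisted with sysctl (a sketch; the exact commands used may have differed):

sysctl -w net.ipv4.tcp_mem="5888262 7851019 11776524"
echo "net.ipv4.tcp_mem = 5888262 7851019 11776524" >> /etc/sysctl.conf   # persist across reboots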

 
But my server is still degraded, with this message:

2019-12-12T12:36:55.309+0000 I COMMAND  [conn6676] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after freeMonitoring: 0, after globalLock: 1810, after locks: 1810, after logicalSessionRecordCache: 1810, after network: 1810, after opLatencies: 1810, after opReadConcernCounters: 1810, after opcounters: 1810, after opcountersRepl: 1810, after repl: 1810, after security: 1810, after storageEngine: 1810, after tcmalloc: 1820, after transactions: 1820, after transportSecurity: 1820, after wiredTiger: 1820, at end: 1820 }
2019-12-12T12:36:55.310+000

The alternative resolution of the problem was to restart all the applications. There was no need to restart the mongod process.

I will attach the log files later in the day.

Is it valid for me to set up a process that collects serverStatus information and information about blocked connections?

These would be the commands:

db.serverStatus()

db.adminCommand({
    aggregate: 1,
    pipeline: [
        {$currentOp: {}},
        {$match: 
            {$or: [
                {secs_running: {$gt: 1}},
                {WaitState: {$exists: true}}]}},
        {$project: {_id:0, opid: 1, secs_running: 1}}],
    cursor: {}
});

Thank you so much.

Comment by Dmitry Agranat [ 11/Dec/19 ]

Hi afonso.rodrigues@maxmilhas.com.br, thank you for uploading all the requested information, it was very helpful.

I've correlated the occurrences of "serverStatus was very slow" messages with the events on the server.

There's a moderately high rate of packet loss: up to 20-30 RetransSegs, TCPLossProbes, and DelayedACKLost per second, so I suspect that network packet loss could be the reason for this issue.

  • During A-B, there is a spike of operations, mainly aggregations and updates.
  • This results in a rather high amount of data (~2 GB, possibly compressed) being sent over the network by this mongod instance.
  • This leads to a high rate of packet loss.

I am not sure whether you are simply hitting the limit of your network bandwidth, whether those "TCP out of memory" errors are the cause (unfortunately we do not have a dmesg.log (human version) after Dec 10 03:24 to correlate with the mongod.log, which only starts at Dec 10 10:50), or whether both are related to this issue. Could you investigate these network issues and come back with your findings? I am eager to re-evaluate these slow messages once the network issue is addressed.
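For reference, these counters can be sampled directly on the host (a sketch; they are cumulative since boot, so sample twice a few seconds apart and diff to estimate a per-second rate):

grep Tcp: /proc/net/snmp        # includes the RetransSegs column
grep TcpExt: /proc/net/netstat  # includes the TCPLossProbes and DelayedACKLost columns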

Thanks,
Dima

Comment by Afonso Rodrigues [ 11/Dec/19 ]

Thank you so much for waiting.
I uploaded the data as requested.
The file collect.tar.gz contains all the information.

Comment by Afonso Rodrigues [ 10/Dec/19 ]

I will upload the diagnostic.data file later.

Comment by Afonso Rodrigues [ 10/Dec/19 ]

I am collecting the data and will upload it to the portal 10gen-httpsupload.s3.amazonaws.com.

Comment by Afonso Rodrigues [ 10/Dec/19 ]

Hi Dima,
Thanks so much.

Regarding my problem with MongoDB, I cannot determine whether it is a bug or a performance issue.

The symptoms I see are:
* The queries in my application become very slow
* On my server, load and context switches increase
* In MongoDB, active and queued reads/writes increase

At the moment of the incident, the workload on the database does not change.
But the number of commands drops and the in/out throughput drops.

In the mongostat output I see the columns "insert query update delete" showing *0.
And the columns "net_in net_out" change value from ~150m to ~30kb.

I attached the graphs for the incident on 08/12 between 10:00 and 10:44 UTC-03.
I restarted my mongodb process at 10:37 and it started again at 10:43.

Comment by Dmitry Agranat [ 10/Dec/19 ]

Hi afonso.rodrigues@maxmilhas.com.br,

In order to better understand what's going on, we'll need to collect some additional information.

Would you please archive (tar or zip) the mongod.log files and the $dbpath/diagnostic.data directory (the contents are described here) and upload them to this support uploader location?

In addition, please upload dmesg, syslog and messages logs.

Files uploaded to this portal are visible only to MongoDB employees and are routinely deleted after some time.

Thanks,
Dima

Comment by Dmitry Agranat [ 10/Dec/19 ]

Hi afonso.rodrigues@maxmilhas.com.br,

As per our documentation, this is the time, in microseconds, since the database last started and created the globalLock. This is roughly equivalent to total server uptime.
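For reference (assuming the field in question is globalLock.totalTime), the value can be read directly from the mongo shell:

db.serverStatus().globalLock.totalTime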

Could you please elaborate on what MongoDB bug you might be pointing to in this ticket?

Thanks,
Dima

Comment by Afonso Rodrigues [ 10/Dec/19 ]

Hi Dima,
Thanks for your update.

After many operations with "serverStatus was very slow" messages, do the operations accumulate on my server and cause operations to stack up?
Or is the message "serverStatus was very slow" only an indication of degradation of my server?

This is an example of the first "serverStatus was very slow" message after restarting mongodb:

 
2019-12-10T12:17:53.451+0000 I COMMAND [ftdc] serverStatus was very slow: 
 { after basic: 0, 
 after asserts: 0, 
 after backgroundFlushing: 0, 
 after connections: 0, 
 after dur: 0, 
 after extra_info: 0, 
 after freeMonitoring: 0, 
 after globalLock: 7420, 
 after locks: 7420, 
 after logicalSessionRecordCache: 7420, 
 after network: 7420, 
 after opLatencies: 7420, 
 after opReadConcernCounters: 7420, 
 after opcounters: 7420, 
 after opcountersRepl: 7420, 
 after repl: 7420, 
 after security: 7420, 
 after storageEngine: 7420, 
 after tcmalloc: 7450, 
 after transactions: 7450, 
 after transportSecurity: 7450, 
 after wiredTiger: 7450, 
 at end: 7450 
 }

The numbers "after globalLock: 7420" the reference a number of document is impacted or in locking?

My version:
mongod --version
db version v4.0.9

Comment by Dmitry Agranat [ 10/Dec/19 ]

Hi afonso.rodrigues@maxmilhas.com.br,

This is not an error; the serverStatus command is run at a regular interval to collect statistics about the instance. If the response to this command is slow, we log the output, listing all the important MongoDB sub-components and the time each took. In your example, the first non-zero component is globalLock:

after globalLock: 990

You can correlate the operations in the mongod.log executed at this time to understand which operations were responsible for taking the global lock.
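For example (a sketch; the log path is an assumption), the operations logged in the same second can be pulled out with:

grep "2019-12-09T21:22" /var/log/mongodb/mongod.log | grep -v "serverStatus was very slow"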

I would definitely suggest addressing this device driver level error:

TCP: out of memory -- consider tuning tcp_mem

I am not sure there is a relationship between the TCP: out of memory errors and the serverStatus outputs, as the timestamps of the two do not match.

Thanks,
Dima

Comment by Afonso Rodrigues [ 09/Dec/19 ]

What are the important files for analysis?

How do I find the root cause of "[ftdc] serverStatus was very slow"?

 

Thanks

Comment by Afonso Rodrigues [ 09/Dec/19 ]

My server is a single instance with disks in RAID 0 for performance.

I'm checking the dmesg.log messages from today:
 
"
[Dec 9 03:44] Process accounting resumed
[Dec 9 17:07] TCP: out of memory -- consider tuning tcp_mem
[Dec 9 17:08] TCP: out of memory -- consider tuning tcp_mem
[ +52.144719] TCP: out of memory -- consider tuning tcp_mem
[Dec 9 17:09] TCP: out of memory -- consider tuning tcp_mem
[  +7.878032] TCP: out of memory -- consider tuning tcp_mem
[ +22.131282] TCP: out of memory -- consider tuning tcp_mem
[Dec 9 17:10] TCP: out of memory -- consider tuning tcp_mem
[ +22.137551] TCP: out of memory -- consider tuning tcp_mem
[ +22.185096] TCP: out of memory -- consider tuning tcp_mem
[Dec 9 17:11] perf: interrupt took too long (25672 > 24886), lowering kernel.perf_event_max_sample_rate to 7000
[  +5.636714] perf: interrupt took too long (32744 > 32090), lowering kernel.perf_event_max_sample_rate to 6000
"
 
My sysctl config for net.ipv4.tcp_mem is:
net.ipv4.tcp_mem = 65536	131072	262144
 
These values correspond to (tcp_mem is counted in 4 KB pages):
min: 256 MB
pressure: 512 MB
max: 1024 MB
 
Maybe it is possible to increase the max value of tcp_mem?
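As a quick check (tcp_mem is counted in pages, and the usual 4 KB page size is assumed here):

echo $((65536  * 4096 / 1024 / 1024))   # min      -> 256 MB
echo $((131072 * 4096 / 1024 / 1024))   # pressure -> 512 MB
echo $((262144 * 4096 / 1024 / 1024))   # max      -> 1024 MB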

 

In my mongod.log I receive these messages:

2019-12-09T21:22:10.008+0000 I COMMAND  [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after freeMonitoring: 0, after globalLock: 990, after locks: 990, after logicalSessionRecordCache: 990, after network: 990, after opLatencies: 990, after opReadConcernCounters: 990, after opcounters: 990, after opcountersRepl: 990, after repl: 990, after security: 990, after storageEngine: 990, after tcmalloc: 1000, after transactions: 1000, after transportSecurity: 1000, after wiredTiger: 1000, at end: 1010 }
2019-12-09T21:22:10.016+0000 I COMMAND  [conn4082] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after freeMonitoring: 0, after globalLock: 1470, after locks: 1470, after logicalSessionRecordCache: 1470, after network: 1470, after opLatencies: 1470, after opReadConcernCounters: 1470, after opcounters: 1470, after opcountersRepl: 1470, after repl: 1470, after security: 1470, after storageEngine: 1470, after tcmalloc: 1480, after transactions: 1480, after transportSecurity: 1480, after wiredTiger: 1490, at end: 1490 }

After restarting the server, the problem is solved temporarily.

 

 

Generated at Thu Feb 08 05:07:41 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.