[SERVER-23778] All user requests are queued because all foreground threads are stuck in __wt_evict() Created: 18/Apr/16  Updated: 08/Feb/23  Resolved: 08/Jun/16

Status: Closed
Project: Core Server
Component/s: Internal Code, WiredTiger
Affects Version/s: 3.2.3, 3.2.4, 3.2.5
Fix Version/s: None

Type: Bug Priority: Critical - P2
Reporter: 아나 하리 Assignee: Michael Cahill (Inactive)
Resolution: Duplicate Votes: 1
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File CacheUsage_vs_QueuedReaderWriters.png     PNG File WiredTiger-MetricGraphs.png     PNG File WiredTiger-WT2560-MetricGraphs.png     Text File collection.txt     File metrics.2016-04-18T09-51-46Z-00000.tar.gz     File mongod.conf     Text File mongostat.txt     Text File pstack_primary.txt     Text File pstack_secondary.txt     File wt-2560-mongodb-3.2.diff    
Issue Links:
Duplicate
duplicates WT-2560 Stuck trying to update oldest transac... Closed
is duplicated by SERVER-23777 and all user requests are All foregro... Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL
Steps To Reproduce:

I can't prepare a simple reproduce script.
But I think this can happen within 1~2 days under heavy read/write traffic.

Participants:

 Description   

I am doing a performance test with 4 shards (3 members in each replica set).
There are about 1K~2K queries/second and 500~1500 updates (upserts)/second of user requests on each shard.

But some shard (primary) gets stuck within 10~24 hours after a restart and can't process user requests. Once this happens, user requests are blocked for a few hours or are never released; sometimes they are released in 10~30 minutes.

I found this happens when WiredTiger cache usage goes over 95%. During this time, all foreground threads that hold a "write ticket" are doing __wt_evict() => __wt_txn_update_oldest().

From reading the WiredTiger source code:
if cache usage goes over 95% (eviction_trigger), it looks like the eviction server and all foreground threads become responsible for LRU eviction. But only one thread at a time can update the oldest transaction, guarded by txn_global->scan_count. If a thread can't update it, it loops over all sessions looking for the oldest transaction again and again.

I am not sure whether suppressing cache usage at eviction_trigger is working as planned, but having all foreground threads and the eviction server start LRU eviction at the same time might be the trouble. (e.g. all ticket-holding foreground threads scan all sessions at the same time, and updating the oldest transaction becomes even more difficult because scan_count stays high, up to the maximum ticket count.)
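
To make the contention concrete, here is a small standalone simulation I put together (this is not WiredTiger code; only the scan_count / oldest_id roles are modelled on the __wt_txn_update_oldest() excerpt quoted in the comments below, and the thread/iteration counts are made up). Every thread may join the scan, but only a thread that finds itself the last remaining scanner can publish a new oldest ID, so with ~128 ticket holders the publish step almost never succeeds on the first try:

/*
 * scan_count_sim.c -- standalone simulation of the scan_count gate
 * (illustration only, not WiredTiger code).
 * Build: gcc -O2 -pthread scan_count_sim.c -o scan_count_sim
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NTHREADS 128                    /* roughly the write ticket count */
#define NITERS   10000

static volatile int32_t scan_count;     /* > 0: active scanners, -1: a publisher */
static volatile uint64_t oldest_id;     /* the value being published */
static volatile uint64_t publishes;     /* scans that ended in a publish */

static void *
worker(void *arg)
{
        int32_t count;
        int i;

        (void)arg;
        for (i = 0; i < NITERS; ++i) {
                /* Join the scanners (same shape as the CAS loop in the excerpt below). */
                do {
                        if ((count = scan_count) < 0)
                                continue;       /* a publisher is active: spin */
                } while (count < 0 ||
                    !__sync_bool_compare_and_swap(&scan_count, count, count + 1));

                /* ... the real code walks every session's state here ... */

                /* Only the last remaining scanner gets to publish. */
                if (__sync_bool_compare_and_swap(&scan_count, 1, -1)) {
                        oldest_id++;            /* publish the new oldest ID */
                        publishes++;            /* exclusive while scan_count == -1 */
                        scan_count = 0;         /* release the gate */
                } else
                        (void)__sync_fetch_and_sub(&scan_count, 1);
        }
        return (NULL);
}

int
main(void)
{
        pthread_t tid[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; ++i)
                pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NTHREADS; ++i)
                pthread_join(tid[i], NULL);

        /* With many concurrent scanners, only a small fraction of scans publish. */
        printf("scans: %d, publishes: %llu\n",
            NTHREADS * NITERS, (unsigned long long)publishes);
        return (0);
}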

Usually cache usage is around 80% and everything is fine, but once it goes over 80% the increase does not stop until 96~97%. After that, queued readers and writers keep increasing, and the active readers and writers are all scanning for the oldest transaction.

The attached pstack_primary.txt is a stack trace of the primary when this happens.
The attached pstack_secondary.txt is a stack trace of the secondary after the primary/secondary switch (stepdown). (pstack_primary.txt and pstack_secondary.txt are actually stack traces of the same server; only the replication role changed.)



 Comments   
Comment by Michael Cahill (Inactive) [ 08/Jun/16 ]

The original issue reported here was fixed in WT-2560.

Comment by Michael Cahill (Inactive) [ 30/May/16 ]

Hi matt.lee, here are binaries that I believe will work on your system:

https://s3.amazonaws.com/mciuploads/mongodb-mongo-v3.2/enterprise-rhel-62-64-bit-inmem/5c3534c9b4d621ff57b3dbca6a259ebd1b10322c/binaries/mongo-mongodb_mongo_v3.2_enterprise_rhel_62_64_bit_inmem_5c3534c9b4d621ff57b3dbca6a259ebd1b10322c_16_05_26_06_05_41.tgz

This is the tip of the MongoDB 3.2 branch with the latest WiredTiger changes added.

We have now updated MongoDB master with the latest WiredTiger changes, so if you have any trouble with those binaries, you could build the MongoDB master branch and see whether it resolves your issues. We expect to update the MongoDB 3.2 branch soon, but these eviction changes may not make it into MongoDB 3.2.7.

Again, apologies for the delay but I hope these changes address the performance issues you have found.

I would like to close the GitHub pull requests against MongoDB: if we need additional changes to WiredTiger, please follow up here and we can open pull requests against the WiredTiger repository.

Comment by 아나 하리 [ 26/May/16 ]

Michael Cahill

  1. cat /etc/redhat-release
    CentOS release 6.7 (Final)
  2. uname -a
    Linux shard01-mongo1 2.6.32-573.18.1.el6.centos.plus.x86_64 #1 SMP Wed Feb 10 18:09:24 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Thanks.

Comment by Michael Cahill (Inactive) [ 26/May/16 ]

matt.lee, I have just started the build that includes the latest WiredTiger changes with MongoDB 3.2.6. Can you tell me what Linux variant you run so I can make sure the right binaries are created?

Comment by 아나 하리 [ 25/May/16 ]

When do you expect I can get the development code to test?

Comment by 아나 하리 [ 25/May/16 ]

Hi Michael Cahill.
It's good to hear from you.

>> Would you be able to test that development code to see whether it addresses the performance issues you have reported?
Sure, I can test your development code anytime.

Comment by Michael Cahill (Inactive) [ 25/May/16 ]

matt.lee, my apologies that your requests went quiet. My whole team has been travelling for the past two weeks.

We have some active work in progress to improve eviction performance that I think should help with your workloads. There have been several changes to the eviction code since the version in mongodb:master. Also, our development process is to make changes first to WiredTiger (https://github.com/wiredtiger/wiredtiger), then update MongoDB when the WiredTiger tree is stable.

Given all of that, what I would like to suggest is that we produce MongoDB 3.2.x binaries updated to the latest version of WiredTiger. Would you be able to test that development code to see whether it addresses the performance issues you have reported?

Comment by 아나 하리 [ 05/May/16 ]

Hi Michael Cahill.

I think I found the reason why the original mongodb doesn't evict internal pages.
Please check this pull request.

https://github.com/mongodb/mongo/pull/1079

I've applied the patch and run our service traffic against it, and now WiredTiger cache usage and mongodb query processing look good.
(Also with a custom change to __evict_lru_walk() from https://github.com/mongodb/mongo/pull/1078.)

But I still think the candidate cutoff point should be more aggressive than in the original code.
I will test the mongodb you patched above with only https://github.com/mongodb/mongo/pull/1079 (without the other custom changes).

Thanks.
Matt.

Comment by 아나 하리 [ 05/May/16 ]

I have tested MongoDB 3.2.6 + WT-2560.

WiredTiger cache usage was not stable: it went up to 100% and throughput decreased.
But the difference is that mongodb kept processing user requests (slowly).

insert query update delete getmore command % dirty % used flushes vsize   res  qr|qw   ar|aw netIn netOut conn         set repl                      time
    *0   721    152     *0      22    41|0     0.0   99.9       0 50.7G 45.5G  1|202  14|128  278k    12m 1078 testreplset  PRI 2016-05-05T13:51:00+09:00
    *0   777     16     *0       9    14|0     0.1   99.9       0 50.7G 45.5G   0|98   1|128  260k    13m 1078 testreplset  PRI 2016-05-05T13:51:01+09:00
    *0   526     23     *0       3    10|0     0.1  100.0       0 50.7G 45.5G   0|73  18|128  162k     8m 1078 testreplset  PRI 2016-05-05T13:51:02+09:00
    *0   594     40      1      27    48|0     0.1  100.0       0 50.7G 45.5G  1|123  26|128  192k     7m 1078 testreplset  PRI 2016-05-05T13:51:03+09:00
    *0   831     69     *0       6    11|0     0.1  100.0       0 50.7G 45.5G  0|177  42|128  242k    12m 1078 testreplset  PRI 2016-05-05T13:51:04+09:00
    *0   804    144      2      45    80|0     0.1  100.0       0 50.7G 45.5G  0|268   1|128  284k    13m 1078 testreplset  PRI 2016-05-05T13:51:05+09:00
    *0   687     31     *0      59   112|0     0.1   99.9       0 50.7G 45.5G  1|180   2|128  272k    12m 1078 testreplset  PRI 2016-05-05T13:51:06+09:00
    *0   704     39      1       0     4|0     0.1   99.8       0 50.7G 45.5G   1|84   0|128  242k    12m 1078 testreplset  PRI 2016-05-05T13:51:07+09:00
    *0   721     55     *0      35    64|0     0.1   99.8       0 50.7G 45.5G   0|19   9|128  271k    11m 1078 testreplset  PRI 2016-05-05T13:51:08+09:00
    *0   573     82      2      49    80|0     0.1   99.8       0 50.7G 45.5G    0|0    4|69  261k     9m 1078 testreplset  PRI 2016-05-05T13:51:09+09:00
    *0   605     57     *0      36    61|0     0.1   99.9       0 50.7G 45.5G    0|0     6|3  243k     9m 1078 testreplset  PRI 2016-05-05T13:51:10+09:00
    *0   462     37     *0      24    36|0     0.1  100.0       0 50.7G 45.5G    0|0     7|1  158k     7m 1078 testreplset  PRI 2016-05-05T13:51:11+09:00
    *0   465     72      1      37    58|0     0.2  100.0       0 50.7G 45.5G    0|0     7|0  185k     7m 1078 testreplset  PRI 2016-05-05T13:51:12+09:00
    *0   503     95      1      39    66|0     0.2  100.0       0 50.7G 45.5G    0|0   54|44  177k     7m 1078 testreplset  PRI 2016-05-05T13:51:13+09:00

Still, there were very few internal page evictions even though cache usage went up to 100%.
Please check the metric graphs (WiredTiger-WT2560-MetricGraphs).

Matt.

Comment by 아나 하리 [ 04/May/16 ]

The same issue happens on another shard server.
Pages are evicted at 6~8K/second, but there are also almost no internal page evictions.

I'm still not sure that rare internal page eviction is the cause of this issue (I am mentioning it as one of the symptoms).

Thanks.

Comment by 아나 하리 [ 04/May/16 ]

I've done the test with the original mongod 3.2.5.
And I attached the metric graphs "WiredTiger-MetricGraphs.png".

In the graphs:
13:48 : Restarted mongodb with the original mongodb 3.2.5 (before 13:47 it was the patched mongod)
13:51 : Stepped up as primary
15:10 : Cache usage increasing over 80%
15:22 : Cache usage at 100%; all write threads are waiting on something and queued
16:07 : Restarted mongodb with the patched mongodb

Something weird is that there was almost no internal page eviction with the original mongodb, but with the patched mongodb there were about 2~3K internal page evictions at this time yesterday.
I don't know the reason yet. It looks like there's something I didn't expect.

insert query update delete getmore command % dirty % used flushes vsize   res  qr|qw   ar|aw netIn netOut conn         set repl                      time
    *0  1139    585      2     140   259|0     1.4   95.7       0 46.1G 43.5G    1|0     0|0  672k    18m  809 testreplset  PRI 2016-05-04T15:22:48+09:00
    *0  1141    551      5     129   248|0     1.4   95.8       0 46.1G 43.5G    0|0     0|0  658k    18m  809 testreplset  PRI 2016-05-04T15:22:49+09:00
    *0   968    565     12     114   216|0     1.4   95.8       0 46.1G 43.5G    0|0     0|0  589k    16m  809 testreplset  PRI 2016-05-04T15:22:50+09:00
    *0  1093    579      2     133   257|0     1.4   95.9       0 46.1G 43.5G    0|0     1|0  651k    18m  809 testreplset  PRI 2016-05-04T15:22:51+09:00
    *0  1096    604      5     140   274|0     1.4   95.9       0 46.1G 43.5G    0|0     0|0  680k    18m  809 testreplset  PRI 2016-05-04T15:22:52+09:00
    *0  1108    639      5     139   261|0     1.5   96.0       0 46.1G 43.5G    0|0     0|0  684k    18m  809 testreplset  PRI 2016-05-04T15:22:53+09:00
    *0   966    595      4      54   105|0     1.5   96.0       0 46.1G 43.5G    0|0   38|17  533k    15m  809 testreplset  PRI 2016-05-04T15:22:54+09:00
    *0  1153    573      9      34    64|0     1.5   96.1       0 46.1G 43.5G    0|0   23|32  592k    19m  809 testreplset  PRI 2016-05-04T15:22:55+09:00
    *0  1172    565      7      40    78|0     1.5   96.2       0 46.1G 43.5G    0|0   11|14  612k    19m  809 testreplset  PRI 2016-05-04T15:22:56+09:00
    *0  1049    640      8      37    72|0     1.5   96.3       0 46.1G 43.5G    6|0     7|7  595k    17m  809 testreplset  PRI 2016-05-04T15:22:57+09:00
    *0   987    550      4      37    72|0     1.5   96.3       0 46.1G 43.5G    0|0    1|65  520k    16m  809 testreplset  PRI 2016-05-04T15:22:58+09:00
    *0  1002    392      5      17    38|0     1.5   96.3       0 46.1G 43.5G  1|167   0|128  373k    15m  809 testreplset  PRI 2016-05-04T15:22:59+09:00
    *0   928     55     *0      29    53|0     1.5   96.4       0 46.1G 43.5G  0|192  42|128  275k    14m  809 testreplset  PRI 2016-05-04T15:23:00+09:00
    *0  1082     21     *0      38    73|0     1.5   96.5       0 46.1G 43.5G  0|173  10|128  349k    18m  809 testreplset  PRI 2016-05-04T15:23:01+09:00
    *0  1776      1     *0      25    44|0     1.4   96.7       1 46.1G 43.5G  0|131  16|128  515k    29m  809 testreplset  PRI 2016-05-04T15:23:02+09:00
    *0  1486     97     *0      16    32|0     0.6   96.6       0 46.1G 43.5G  0|120   1|128  472k    23m  809 testreplset  PRI 2016-05-04T15:23:03+09:00
    *0  1114    198      3      84   161|0     0.1   96.2       0 46.1G 43.5G  0|125  58|128  447k    17m  809 testreplset  PRI 2016-05-04T15:23:04+09:00
    *0   728     41      2       5     6|0     0.1   96.3       0 46.1G 43.5G  0|316   9|128  181k     9m  809 testreplset  PRI 2016-05-04T15:23:05+09:00
    *0   215     67     *0       0     6|0     0.1   96.2       0 46.1G 43.5G  4|328   1|128   88k     3m  811 testreplset  PRI 2016-05-04T15:23:06+09:00
    *0   232     55     *0       5    16|0     0.1   96.0       0 46.1G 43.5G  0|325   3|128   92k     4m  811 testreplset  PRI 2016-05-04T15:23:07+09:00

Thanks.
Matt.

Comment by 아나 하리 [ 04/May/16 ]

Hi Michael Cahill.

I thought about your question last night. Actually, I am not sure about this.
>> We have tried several different policies for eviction of internal pages. Do you have any sense of how important that part of the change was? If internal pages have to be evicted for the workload to run, is that because the data size is much larger than the cache size?

I have focused on internal page eviction,
but my patch also has the effect of increasing eviction throughput (it prepares more eviction candidates and evicts all candidates without the internal page threshold).

Patched MongoDB ::
Total evicted pages : 15~20K/second (Internal pages are 4.5K/second, Dirty pages are 3.8K/second)
Eviction failure is 1.5K/second
Eviction blocked by hazard pointer is 200/sec
Eviction walk 17M/second
Block reads 18K/second, Block writes 3.8K/second

But the original mongodb generates only about 30 candidates (out of the 300 entries from each __evict_walk call) per __evict_lru_walk() call.
With my patch all 300 entries are selected as candidates because of the code block below, and every candidate has a high chance of being evicted (because there is no internal page threshold).

                        float cache_usage_per_target = (bytes_inuse / cache->eviction_target * bytes_max);
                        if(cache_usage_per_target>=1) cache_usage_per_target = 1.0;
                        cache->evict_candidates = entries * cache_usage_per_target;

What I am wondering is whether the full eviction throughput of the original mongodb is simply not sufficient.
So I tried the original mongodb 3.2.5 again to compare its WiredTiger metrics against my patch.
I will share the results if there is anything notable.

FYI (I almost forgot about this),
I changed one more thing in my test version of mongodb. I did not think this was such a critical section.

void
__wt_txn_update_oldest(WT_SESSION_IMPL *session, bool force)
{
        WT_CONNECTION_IMPL *conn;
        WT_SESSION_IMPL *oldest_session;
        WT_TXN_GLOBAL *txn_global;
        WT_TXN_STATE *s;
        uint64_t current_id, id, last_running, oldest_id, prev_oldest_id;
        uint32_t i, session_cnt;
        int32_t count;
        bool last_running_moved;

        conn = S2C(session);
        txn_global = &conn->txn_global;

retry:
        current_id = last_running = txn_global->current;
        oldest_session = NULL;
        prev_oldest_id = txn_global->oldest_id;

        /*
         * For pure read-only workloads, or if the update isn't forced and the
         * oldest ID isn't too far behind, avoid scanning.
         */
        if (prev_oldest_id == current_id ||
            (!force && WT_TXNID_LT(current_id, prev_oldest_id + 100)))
                return;

        // Start of added code: avoid having every foreground thread scan for oldest_id
        // If more than two threads are already updating txn_global->oldest_id, skip it in this thread
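        // (Note: this reads scan_count without the CAS used below, so it is a
        //  best-effort check -- a stale value just means this thread either
        //  skips the update, leaving it to another scanner, or falls through
        //  to the normal scan path.)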
        if(txn_global->scan_count>2){
                return;
        }
        // End of added code: avoid having every foreground thread scan for oldest_id

        /*
         * We're going to scan.  Increment the count of scanners to prevent the
         * oldest ID from moving forwards.  Spin if the count is negative,
         * which indicates that some thread is moving the oldest ID forwards.
         */
        do {
                if ((count = txn_global->scan_count) < 0)
                        WT_PAUSE();
        } while (count < 0 ||
            !__wt_atomic_casiv32(&txn_global->scan_count, count, count + 1));

Comment by Michael Cahill (Inactive) [ 04/May/16 ]

matt.lee, I have attached the patch for WT-2560 against the MongoDB 3.2 tree. Can you please test this change and let me know if it resolves the issue for you? We plan to backport this fix to 3.2 soon.

Comment by Michael Cahill (Inactive) [ 03/May/16 ]

matt.lee, thanks, I understand the pull request, and it is certainly interesting to look at making more pages available for eviction.

However, in the common case, your change will mean that there is very little selectivity on LRU. In other words, we will be selecting pages from cache more or less at random. For some workloads that may help, but for others it will lead to lower performance due to eviction of hot pages (such as _id index pages).

We have tried several different policies for eviction of internal pages. Do you have any sense of how important that part of the change was? If internal pages have to be evicted for the workload to run, is that because the data size is much larger than the cache size?

The changes from WT-2560 are listed in the ticket. In particular, they are:

https://github.com/wiredtiger/wiredtiger/commit/2389dbb24ccad4a4eb06bd9418fc56d0da51f58c
https://github.com/wiredtiger/wiredtiger/commit/197fafa2043dd41164be41bf24f9c6c82e3a1318

These need to be applied to src/third_party/wiredtiger in a MongoDB tree. I will turn them into a MongoDB patch and run it through testing overnight.

Comment by 아나 하리 [ 03/May/16 ]

Hi Michael Cahill.

In my thread stacks, all foreground threads were running __wt_txn_update_oldest(), and __wt_txn_update_oldest() certainly should be optimized.
But what I focused on is the reason why all foreground threads were doing eviction in the first place.
So I removed the internal pages' eviction threshold and changed the way the eviction target is determined in __evict_lru_walk().
The more internal pages there are in the cache, the more page eviction throughput decreases because of the internal page eviction threshold.

I have seen the reason you added the internal page eviction threshold, but that addresses a performance issue, while my issue is the server hanging for a long time.
(So I tried removing that threshold; see the toy example below.)
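
Just to illustrate the effect I mean, here is a standalone toy program (not WiredTiger code; the 300-entry walk matches what I see in my tests, but the 10% cap is a made-up number for the sketch): with a cap on internal pages, the more internal pages a walk encounters, the fewer pages it can queue.

/*
 * internal_cap_toy.c -- illustration only, not WiredTiger code.
 * Build: gcc -O2 internal_cap_toy.c -o internal_cap_toy
 */
#include <stdbool.h>
#include <stdio.h>

#define WALK_ENTRIES 300        /* pages looked at per walk, as in my tests */

/* Queue at most max_internal internal pages out of the entries walked. */
static int
pages_queued(int entries, int internal_in_walk, int max_internal)
{
        int queued = 0, internal_seen = 0, i;

        for (i = 0; i < entries; ++i) {
                bool is_internal = i < internal_in_walk;

                if (is_internal && ++internal_seen > max_internal)
                        continue;       /* skipped: over the internal-page cap */
                ++queued;
        }
        return (queued);
}

int
main(void)
{
        int internal;

        /* Compare a 10% internal-page cap against no cap at all. */
        for (internal = 0; internal <= WALK_ENTRIES; internal += 100)
                printf("internal in walk: %3d  queued with cap: %3d  "
                    "queued without cap: %3d\n",
                    internal,
                    pages_queued(WALK_ENTRIES, internal, WALK_ENTRIES / 10),
                    pages_queued(WALK_ENTRIES, internal, WALK_ENTRIES));
        return (0);
}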

>> Note that the fix for WT-2560 has not yet been applied to MongoDB 3.2: we are still putting it through internal testing. If you are interested in testing that patch, let me know and I will prepare it
Anyway, I am glad to hear this, and I think I can test your recent changes. (Could I also see the changes, if possible?)

And I ran into one more issue last night.
I am not sure whether last night's issue (SERVER-24019) is related to this one; anyway, I reported it as a separate issue.

Matt.

Comment by Michael Cahill (Inactive) [ 03/May/16 ]

matt.lee, thanks again for reporting this issue and supplying all the information.

As alexander.gorrod said earlier, from our analysis, this looks like a duplicate of WT-2560 where under extremely high loads, threads can spin inside __wt_txn_update_oldest for an excessive time.

In pstack_primary.txt, we see:

   1 __wt_txn_update_oldest,__evict_server,start_thread,clone

and in pstack_secondary:

   1 __wt_txn_update_oldest,??,start_thread,clone

(where the "??" is most likely also __evict_server).

That means the thread responsible for queuing pages for eviction is stuck due to WT-2560.

While I can see where you are going with the patch, I suspect it may be masking the problem by making more progress each time eviction runs, rather than fixing the underlying issue.

Note that the fix for WT-2560 has not yet been applied to MongoDB 3.2: we are still putting it through internal testing. If you are interested in testing that patch, let me know and I will prepare it for you.

Comment by 아나 하리 [ 02/May/16 ]

I have signed the contributor agreement.
Thanks.

Matt.

Comment by Ramon Fernandez Marina [ 02/May/16 ]

matt.lee, please note that before we can consider your pull request you'll need to sign the contributor agreement.

Thanks,
Ramón.

Comment by 아나 하리 [ 02/May/16 ]

Hi Ramon,

I think I found a somewhat sloppy solution,
and I made a small pull request with my changes.

https://github.com/mongodb/mongo/pull/1078

Comment by 아나 하리 [ 19/Apr/16 ]

Hi Alexander,
Thanks for the link, but I think it's a slightly different case.

According to my observation, very few threads were processing user requests.
So all user update operations were logged in mongod.log; they took over 50 seconds, but they were not stuck forever.

And every time this case happens,
UPDATE operations get stuck first and eventually FIND operations get stuck as well. (Maybe this is because FIND operations don't need to call __wt_txn_begin().)

A slightly weird thing is the result of db.currentOp().
I don't know why, but db.currentOp() reports write locks on Global, Database, Collection and Metadata for a pure FIND query. I don't know why it took write locks rather than read locks.
In particular, the Metadata lock waiting time is way too long. I thought the Metadata lock was related to the sharding config servers, but apparently not (the config servers were perfectly healthy at that time).
What does the Metadata lock mean in this currentOp() result? Is it related to this case?

db.currentOp()

...
                {
                        "desc" : "conn2328",
                        "threadId" : "140435698865920",
                        "connectionId" : 2328,
                        "client" : "192.x.x.x:55792",
                        "active" : true,
                        "opid" : 89412790,
                        "secs_running" : 505,
                        "microsecs_running" : NumberLong(505055671),
                        "op" : "query",
                        "ns" : "story.notifications",
                        "query" : {
                                "find" : "notifications",
                                "filter" : {
                                        "pid" : 123456
                                },
                                "projection" : {
                                        "$sortKey" : {
                                                "$meta" : "sortKey"
                                        }
                                },
                                "sort" : {
                                        "guid" : -1
                                },
                                "limit" : NumberLong(50),
                                "shardVersion" : [
                                        Timestamp(30, 838),
                                        ObjectId("570f67ad411434d1e478de6f")
                                ]
                        },
                        "planSummary" : "IXSCAN { pid: 1.0, guid: 1.0 }",
                        "numYields" : 1,
                        "locks" : {
                                "Global" : "w",
                                "Database" : "w",
                                "Collection" : "w",
                                "Metadata" : "W"
                        },
                        "waitingForLock" : true,
                        "lockStats" : {
                                "Global" : {
                                        "acquireCount" : {
                                                "r" : NumberLong(5),
                                                "w" : NumberLong(1)
                                        }
                                },
                                "Database" : {
                                        "acquireCount" : {
                                                "r" : NumberLong(2),
                                                "w" : NumberLong(1)
                                        }
                                },
                                "Collection" : {
                                        "acquireCount" : {
                                                "r" : NumberLong(2),
                                                "w" : NumberLong(1)
                                        }
                                },
                                "Metadata" : {
                                        "acquireCount" : {
                                                "W" : NumberLong(1)
                                        },
                                        "acquireWaitCount" : {
                                                "W" : NumberLong(1)
                                        },
                                        "timeAcquiringMicros" : {
                                                "W" : NumberLong(458050450)
                                        }
                                }
                        }
                }
...

Comment by Alexander Gorrod [ 19/Apr/16 ]

This appears very similar to WT-2560, I'll link the tickets.

Comment by 아나 하리 [ 18/Apr/16 ]

Hi Ramon,

I attached "metrics.2016-04-18T09-51-46Z-00000.tar.gz" which is today metrics file of diagnostics.data/.

And I changed wiredtiger engine config string from "2016-04-18T09-51-46".

eviction_trigger=85
eviction_target=80
eviction_dirty_trigger=50
eviction_dirty_target=30
eviction=(threads_min=4,threads_max=8)
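
(For reference, a minimal standalone sketch of the same settings expressed as a plain WiredTiger open-config string; the home directory, the "create" flag and the cache size are placeholders I made up, and in mongod these values are of course passed through storage.wiredTiger.engineConfig.configString instead.)

/* wt_open_sketch.c -- illustration only; build: gcc wt_open_sketch.c -lwiredtiger */
#include <stdio.h>
#include <stdlib.h>
#include <wiredtiger.h>

int
main(void)
{
        WT_CONNECTION *conn;
        int ret;

        /* "create", the home directory and cache_size are placeholders. */
        if ((ret = wiredtiger_open("/data/wt-test", NULL,
            "create,cache_size=30GB,"
            "eviction_trigger=85,eviction_target=80,"
            "eviction_dirty_trigger=50,eviction_dirty_target=30,"
            "eviction=(threads_min=4,threads_max=8)",
            &conn)) != 0) {
                fprintf(stderr, "wiredtiger_open: %s\n",
                    wiredtiger_strerror(ret));
                return (EXIT_FAILURE);
        }
        return (conn->close(conn, NULL) == 0 ? EXIT_SUCCESS : EXIT_FAILURE);
}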

Comment by Ramon Fernandez Marina [ 18/Apr/16 ]

matt.lee, can you please upload the contents of the diagnostic.data directory for the affected node(s)? That should help us better understand what's going on.

Thanks,
Ramón.
