[SERVER-5931] Secondary reads in sharded clusters need stronger consistency Created: 25/May/12  Updated: 06/Apr/23  Resolved: 31/Jul/17

Status: Closed
Project: Core Server
Component/s: Querying, Replication, Sharding
Affects Version/s: None
Fix Version/s: 3.5.11

Type: Improvement Priority: Major - P3
Reporter: Kay Agahd Assignee: Dianna Hohensee (Inactive)
Resolution: Done Votes: 41
Labels: setShardVersion
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

ubuntu lucid 64 bit


Issue Links:
Depends
Duplicate
is duplicated by SERVER-14644 Retrieving duplicate records with the... Closed
is duplicated by SERVER-31663 Inconsistent query results between pr... Closed
is duplicated by SERVER-8948 Count() can be wrong in sharded colle... Closed
is duplicated by SERVER-9858 After a chunk migration, requests on ... Closed
is duplicated by SERVER-21650 Duplicate _id when reading from secon... Closed
is duplicated by SERVER-6563 Improve consistency of non-primary re... Closed
Related
related to SERVER-8598 Add command to cleanup orphaned data ... Closed
related to SERVER-20782 Support causal consistency with secon... Closed
related to SERVER-23917 splitVector can't be run against seco... Closed
is related to SERVER-30708 _id index returning more than one doc... Closed
is related to SERVER-3645 Sharded collection counts (on primary... Closed
is related to SERVER-8598 Add command to cleanup orphaned data ... Closed
Backwards Compatibility: Fully Compatible
Sprint: Sharding 2017-08-21
Participants:
Case:

 Description   

Secondary reads in MongoDB are only eventually consistent - the state of the system may not reflect the latest changes. When balancing, the state of the cluster is changing implicitly, and so secondary reads are inconsistent. This means that duplicate, stale, or missing data can be observed while balancing operations are active, along with orphaned data left behind by aborted balancer operations.

Issues with orphaned data affecting results from primary reads are different problems - see SERVER-3645 for example.
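As a hedged illustration of the symptom (an editorial sketch reusing the collection and filter from the original report below; itcount() is used because count() has a separate orphan problem, SERVER-3645), the same query can return different results depending on whether secondary reads are enabled:

mongos> db.offer.find({shopId: 100}).itcount()
## primary reads: the shards filter documents by chunk ownership, so orphans are excluded
mongos> rs.slaveOk()
mongos> db.offer.find({shopId: 100}).itcount()
## secondary reads: orphaned or mid-migration copies may be returned and double-counted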

Original description:

Mongo may return too many documents in a sharded system. This may occur when a document is located on more than one shard. We don't know yet why some documents are located on more than one shard, because we never access shards directly; we always access MongoDB through mongos (the router). Perhaps these documents result from a failed chunk migration?

In any case, even if these documents exist on more than one shard, MongoDB should be clever enough to return only those that are tracked by the config servers.

Let me show you a test case (documents are sharded by _id):

mongos> db.offer.find({shopId:100}).count()
0
## no doc with shopId:100 exists yet, so let's add one through the router:
mongos> db.offer.save({"_id" : 100, "shopId" : 100, "version": 1})
mongos> exit
bye
## let's add a document on another shard (this time by accessing it directly, to be able to reproduce)
> mongo localhost:20017/offerStore
MongoDB shell version: 2.0.5
connecting to: localhost:20017/offerStore
PRIMARY> db.offer.find({shopId:100}).count()
0
PRIMARY> db.offer.save({"_id" : 100, "shopId" : 100, "version": 2})
PRIMARY> db.offer.find({shopId:100}).count()
1
PRIMARY> exit
bye
## let's check how many docs with shopId:100 mongos thinks it has:
MongoDB shell version: 2.0.5
connecting to: localhost:20021/offerStore
mongos> db.offer.find({shopId:100}).count()
2
## this is a bug, because mongos should find only 1 doc since the 2nd doc is an orphan, not being referenced by the config servers:
mongos> db.printShardingStatus(true)
--- Sharding Status --- 
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
	{  "_id" : "shard1",  "host" : "shard1/localhost:20017" }
	{  "_id" : "shard2",  "host" : "shard2/localhost:20018" }
	{  "_id" : "shard3",  "host" : "shard3/localhost:20019" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "offerStore",  "partitioned" : true,  "primary" : "shard1" }
		offerStore.offer chunks:
				shard3	6
				shard1	7
				shard2	7
	{ "_id" : { $minKey : 1 } } -->> { "_id" : NumberLong(538697491) } on : shard3 { "t" : 4000, "i" : 2 }
	{ "_id" : NumberLong(538697491) } -->> { "_id" : NumberLong(538748351) } on : shard3 { "t" : 4000, "i" : 4 }
	{ "_id" : NumberLong(538748351) } -->> { "_id" : NumberLong(538827239) } on : shard3 { "t" : 5000, "i" : 4 }
	{ "_id" : NumberLong(538827239) } -->> { "_id" : NumberLong(538893516) } on : shard3 { "t" : 6000, "i" : 2 }
	{ "_id" : NumberLong(538893516) } -->> { "_id" : NumberLong(591546899) } on : shard3 { "t" : 6000, "i" : 3 }
	{ "_id" : NumberLong(591546899) } -->> { "_id" : NumberLong(647519529) } on : shard1 { "t" : 6000, "i" : 1 }
	{ "_id" : NumberLong(647519529) } -->> { "_id" : NumberLong(660087036) } on : shard1 { "t" : 3000, "i" : 2 }
	{ "_id" : NumberLong(660087036) } -->> { "_id" : NumberLong(675320121) } on : shard1 { "t" : 3000, "i" : 6 }
	{ "_id" : NumberLong(675320121) } -->> { "_id" : NumberLong(691204023) } on : shard1 { "t" : 3000, "i" : 7 }
	{ "_id" : NumberLong(691204023) } -->> { "_id" : NumberLong(706454221) } on : shard1 { "t" : 3000, "i" : 4 }
	{ "_id" : NumberLong(706454221) } -->> { "_id" : NumberLong(751548202) } on : shard1 { "t" : 3000, "i" : 5 }
	{ "_id" : NumberLong(751548202) } -->> { "_id" : NumberLong(799095936) } on : shard1 { "t" : 7000, "i" : 0 }
	{ "_id" : NumberLong(799095936) } -->> { "_id" : NumberLong(844050111) } on : shard2 { "t" : 7000, "i" : 1 }
	{ "_id" : NumberLong(844050111) } -->> { "_id" : NumberLong(896132956) } on : shard2 { "t" : 6000, "i" : 8 }
	{ "_id" : NumberLong(896132956) } -->> { "_id" : NumberLong(937716362) } on : shard2 { "t" : 6000, "i" : 10 }
	{ "_id" : NumberLong(937716362) } -->> { "_id" : NumberLong(960061623) } on : shard2 { "t" : 6000, "i" : 11 }
	{ "_id" : NumberLong(960061623) } -->> { "_id" : NumberLong(995515056) } on : shard2 { "t" : 5000, "i" : 2 }
	{ "_id" : NumberLong(995515056) } -->> { "_id" : NumberLong(1021076450) } on : shard2 { "t" : 6000, "i" : 4 }
	{ "_id" : NumberLong(1021076450) } -->> { "_id" : NumberLong(1035798084) } on : shard2 { "t" : 6000, "i" : 5 }
	{ "_id" : NumberLong(1035798084) } -->> { "_id" : { $maxKey : 1 } } on : shard3 { "t" : 5000, "i" : 0 }
 
mongos> db.offer.find({shopId:100})
{ "_id" : 100, "shopId" : 100, "version" : 1 }
## this is correct (only 1 doc found) BUT see the next one:
mongos> rs.slaveOk()
mongos> db.offer.find({shopId:100})
{ "_id" : 100, "shopId" : 100, "version" : 2 }
{ "_id" : 100, "shopId" : 100, "version" : 1 }
## this is a bug, since mongos queries all shards without ever checking whether they return orphan docs or not
mongos> db.offer.find({_id:100})
{ "_id" : 100, "shopId" : 100, "version" : 1 }
## When searching by the shard key, mongos gets it right.



 Comments   
Comment by Kaloian Manassiev [ 26/Feb/19 ]

Hello lucasoares,

This feature is already available in the 3.6.0 and later releases. Please take a look at the documentation for more information.

Best regards,
-Kal.
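(Editorial note, not part of the original comment: on a 3.6+ cluster, routed secondary reads filter orphaned documents by default, and readConcern "available" opts back into the older, unfiltered behaviour. A hedged sketch, reusing the collection from this ticket:)

db.offer.find({shopId: 100}).readPref("secondary")                            // orphans filtered
db.offer.find({shopId: 100}).readConcern("available").readPref("secondary")   // no shard filtering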

Comment by Lucas [ 26/Feb/19 ]

Hello! Will this land in an upcoming 3.6 release?

Thank you.

Comment by Spencer Brody (Inactive) [ 17/Sep/15 ]

jblackburn, you may also be interested in watching SERVER-4935

Comment by Andy Schwerin [ 17/Sep/15 ]

jblackburn, this ticket describes a different feature request from the one in your comment. Your request is more akin to SERVER-4936, where you have also commented.

Comment by James Blackburn [ 15/Sep/15 ]

It's currently possible for a SECONDARY to be infinitely lagged w.r.t. the PRIMARY (in version 3.0.6). We saw an issue where large packets (>1500 bytes) were lost by the network. Heartbeats still worked, but replication stopped. However, the SECONDARY never stopped servicing queries (CS-24224).

Ideally the SECONDARY should move to RECOVERING if it becomes too stale, or be configured to fail altogether.

We have a reasonable tolerance for stale secondaries, but not for hours or days...

Comment by Adam Flynn [ 06/Nov/14 ]

Wanted to add a +1 to this ticket and describe how it's manifested itself in our deployment (see CS-11107 for more details). We have a workaround now, but I want to make the engineering team aware of a use case where this can be really bad.

First, I understand the complexity of the issues involved in this one and that it can't be fixed haphazardly. It's probably too late for 2.8, but I'd love to see this be a priority in 3.0. I also think it's important to advertise the symptoms of this limitation more widely (internally & externally) so people can avoid painting themselves into corners until this is fixed.

We use tagged secondary reads pretty aggressively in our app. With a high read/write ratio, tolerance for eventual consistency (in the form of replication lag, anyway), and a high redundancy requirement that makes us carry a lot of secondaries anyway, a secondary read preference makes a lot of sense. We also split between analytics queries and real-time queries. Having analytics & real-time loads on the same node causes problems, so we use tags to route to different secondaries. Great feature for that! But - the high read-write ratio means a long time can pass between a moveChunk finishing and the primary being hit (especially in analytics tools which often do no writes).

The behaviour here is that documents seem to disappear and all kinds of queries fail. Until Andrew helped us understand the details (hitting a primary of a shard that knows about the migration refreshes metadata), our only operational fix was flushRouterConfig everywhere. So, when we added new shards and had aggressive balancing, every night or two our error rate would spike way up from "missing" documents. Someone would wake up, flush or restart all mongos, error rate goes down, back to sleep. Random data disappearing until you restart mongos is pretty scary.
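(Editorial aside for readers hitting the same symptom - a hedged illustration of the flushRouterConfig workaround mentioned above, run against the admin database of each mongos:)

mongos> db.adminCommand({ flushRouterConfig: 1 })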

We have 17 shards (68 mongod) now and add shards monthly, so we're constantly doing a lot of balancing, even with well-distributed writes. Adding shards implied accepting a bunch of things would be transiently broken for a week or two. That obviously isn't scalable.

We first opened the ticket about this back in March and the support engineer wasn't able to nail down what was happening (and we couldn't reproduce reliably to get logs). After we figured out how to reproduce this and got a high logLevel capture, Andrew was able to nail down that it was a case of this ticket.

To be clear: I think MongoDB support is excellent and we've got a reasonable (if messy) workaround from CS-11107. But, my concern here is that such a big caveat in MongoDB's consistency semantics isn't mentioned in the docs and 2 support engineers weren't initially aware of its symptoms. If it was clear in docs or support discussions that this particular behavior existed, I probably would have made different architectural decisions over the last couple years... but having it come up as a "gotcha" during fast growth is a big concern.

Comment by Greg Studer [ 27/Aug/14 ]

This issue has had a lot of discussion - summarizing here:

Secondary reads in MongoDB are only eventually consistent - the state of the system may not reflect the latest changes. When balancing, the state of the cluster is changing implicitly, and so secondary reads are inconsistent. This means that duplicate, stale, or missing data can be observed while balancing operations are active, along with orphaned data left behind by aborted balancer operations.

Issues with orphaned data affecting results from primary reads are different problems - see SERVER-3645 for example.

Filtering results to remove duplicate data is only half of the problem - data that has not yet been replicated to certain secondaries on a TO-shard but has been removed on a FROM-shard may be invisible temporarily. A full fix is nontrivial and requires tracking differing sets of chunk metadata per-node, integrated with replication, and new targeting logic.

A partial fix may be possible by forcing migrations to become fully replicated using secondary throttling - this would allow filtering to work, if a primary was online, at the cost of slow migrations. We're still considering the options here.
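(Editorial sketch, not part of the original comment, assuming a 2.4+ cluster: the secondary throttle referred to above can be enabled cluster-wide through the balancer settings document, so that each migrated document replicates to secondaries before the migration proceeds, at the cost of slower chunk moves:)

mongos> use config
mongos> db.settings.update({ _id: "balancer" }, { $set: { _secondaryThrottle: true } }, { upsert: true })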

Comment by Thomas Rueckstiess [ 29/Jul/14 ]

Hi Nic,

To clarify your last question: compact rewrites (and thus compacts) the extents of a collection but does not free up disk space. This is because extents from different collections are kept within the same db file. The space is added to the free list and is available to newly inserted data, but you won't see more available space at the OS level after a compact.

Regards,
Thomas

Comment by Vincent [ 29/Jul/14 ]

@Nic > Probably, but I'm not sure it frees up disk space (the docs only state it's similar to repairDatabase, but for a collection). I didn't have enough disk space to repair my big database either, so I "repaired" the small databases first to make enough room to repair the big one, but I don't know if you can do the same. Let me know.

Comment by Nic Cottrell (Personal) [ 29/Jul/14 ]

I don't seem to have the required available disk space for an entire copy of the whole db, so will running compact on individual collections give me the same index space savings? Maybe I should just rebuild the entire secondary from the primary data...

Comment by Nic Cottrell (Personal) [ 29/Jul/14 ]

@Vincent - I did run it on the two mongod primaries directly, but haven't run the repairs yet. Will give it a try. Many thanks!

Comment by Vincent [ 29/Jul/14 ]

@Nic Cottrell > Did you make sure you ran the script on a mongod and not on a mongos?
I'm using this piece of code, which works pretty well: http://pastebin.com/VvPpeH8j
Also, don't forget to run "db.getSiblingDB("MyDB1").repairDatabase();" on your secondary(ies) once the script is done, then step down the primary and repair it too. In my experience it reduces index size A LOT.
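(Editorial sketch of the sequence described above; "MyDB1" is just the placeholder from the comment:)

## on each secondary, once the cleanup script has finished:
SECONDARY> db.getSiblingDB("MyDB1").repairDatabase()
## then step the primary down, so it can be repaired as a secondary once a new primary is elected:
PRIMARY> rs.stepDown()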

Comment by Nic Cottrell (Personal) [ 29/Jul/14 ]

@agahd When running the script from the mongodb.org docs page, it sat there running for hours. I didn't see any printouts saying that chunks were cleaned up, but some of our errors disappeared. That may, however, have been due to a change in read preferences in some subroutine.

We have the same problem where we are on the edge of fitting in to RAM.

I'm a bit wary of running chunk-checker.js since it's from 2010, many versions of Mongo ago. I don't want to mess up sharding further. Can anyone from MongoDB confirm that the script on the cleanupOrphaned page has been tested and confirmed to work?

Comment by Kay Agahd [ 27/Jul/14 ]

niccottrell, I've tried the script snippet at http://docs.mongodb.org/manual/reference/command/cleanupOrphaned/ without luck. The script took a long time, but the unreferenced chunks remained on the shard - at least, no space was freed up.
However, after executing the script https://github.com/mongodb/mongo-snippets/blob/master/sharding/chunk-checker.js, multiple GBs were reclaimed. This is very important for us because our db needs to fit completely in RAM for best performance.

It seems to me that balancing and cleaning up moved chunks is even worse in v2.6 than ever before, because it may get stuck and may even crash mongod. Maybe these threads are related:
https://jira.mongodb.org/browse/SERVER-14389
https://jira.mongodb.org/browse/SERVER-14261
https://jira.mongodb.org/browse/SERVER-11299
https://jira.mongodb.org/browse/SERVER-14375

Comment by sam flint [ 22/Jul/14 ]

You can also use explain() to capture the correct count, and this is much faster than itcount(). We changed our client-side application to call explain().n.

As you can see, it is accurate and faster than itcount().

"cursor" : "BtreeCursor client_id_1_lists_1_order_1",
"n" : 5487153,
"nChunkSkips" : 17072,
"nYields" : 11907,
"nscanned" : 5672905,
"nscannedAllPlans" : 5672905,
"nscannedObjects" : 5672905,
"nscannedObjectsAllPlans" : 5672905,
"millisShardTotal" : 69749,
"millisShardAvg" : 9964,
"numQueries" : 7,
"numShards" : 7,
"millis" : 18282
}
mongos> db.profile.find({client_id : 3762}, {client_id : 1}).count()
5503724
mongos> db.profile.find({client_id : 3762}, {client_id : 1}).itcount()
5487153
mongos> db.profile.find({client_id : 3762}, {client_id : 1}).explain().n
5487153

Comment by Asya Kamsky [ 21/Jul/14 ]

niccottrell I just realized that your test does not look at documents, but rather at count(), which will be wrong whenever there is a migration in progress (or if there are orphans) due to SERVER-3645. To check the actual count of matched documents you can use itcount(), which actually iterates over the fetched documents rather than taking the shortcut that count() does.
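(A minimal editorial illustration of the difference, reusing the collection from this ticket:)

mongos> db.offer.find({shopId: 100}).count()
## count() is computed per shard without chunk-ownership filtering, so orphans and in-flight migrations can inflate it
mongos> db.offer.find({shopId: 100}).itcount()
## itcount() iterates the documents actually returned, so (on primary reads) orphans filtered by the shards are not counted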

Comment by Asya Kamsky [ 21/Jul/14 ]

niccottrell if your queries are going to the primaries, then it can't be because of this bug as primaries filter out documents that don't belong to their chunk ranges.

It's possible that you are seeing a different bug. If you can confirm the queries are being routed to *primaries* and you're seeing duplicates, could you open a new SERVER bug?

If the queries are being routed to *secondaries* when you are requesting primary, then it could be either a Morphia or Java driver bug, or a mongos bug, which we would also like to track, triage and fix.

Comment by Nic Cottrell (Personal) [ 21/Jul/14 ]

Thanks @agahd! I guess with 2.6+ you could borrow this snippet from http://docs.mongodb.org/manual/reference/command/cleanupOrphaned/ :

var nextKey = {};   // start from MinKey on the first iteration
while ( nextKey = db.runCommand( {
                      cleanupOrphaned: "test.user",
                      startingFromKey: nextKey
                  } ).stoppedAtKey ) {
   printjson(nextKey);
}

to do the actual cleanup steps.
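(Editorial caveat, based on the command's documentation rather than this comment: cleanupOrphaned has to be issued against the admin database of each shard's primary mongod directly - it is not available through mongos. A hedged single invocation for the collection in this ticket would look roughly like:)

shard-primary> use admin
shard-primary> db.runCommand({ cleanupOrphaned: "offerStore.offer" })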

Comment by Nic Cottrell (Personal) [ 21/Jul/14 ]

Definitely querying against the primary (at least that's what I'm instructing the Morphia/Java driver to do), and definitely got duplicate objects with the same ObjectId _id field. Ran the cleanupOrphaned loop example from the MongoDB docs and am now no longer getting these duplicates - no changes to our code, so it really looks like a mongos bug remains.

Comment by Asya Kamsky [ 21/Jul/14 ]

2.6 supports a cleanupOrphaned command, so you should be using it rather than any scripts.

However, if you are querying against primaries only, this ticket does not apply and if you're seeing incorrect results, it's not because of the issue this ticket is tracking.

You might want to post your case on mongodb-user google group.

Comment by Kay Agahd [ 20/Jul/14 ]

Nic, I've used this one, slightly modified:
https://github.com/mongodb/mongo-snippets/blob/master/sharding/chunk-checker.js

Comment by Nic Cottrell (Personal) [ 20/Jul/14 ]

Unfortunately this seems to be a problem. We have a sharded setup with both the shard nodes and mongos running 2.6.3. The collection has a shard key which doesn't include the _id field at all, and we do a query with readPref=primary. We're using the Morphia interface, i.e.

      final Datastore datastore = getDatastore(aClass);
      final Query<K> query = datastore.get(aClass, ids);
      query.useReadPreference(ReadPreference.primary());
      final List<K> results = query.asList();
      assert results.size() <= ids.size() :
          "More results than unique IDs (" + ids.size() + RARR + results.size() + "): "
          + ids + RARR + results + ", with query=" + query;

And this assert regularly fails in our unit tests. Is there a nice script in 2.6 to clean away these orphan documents automatically?

Comment by Kay Agahd [ 23/Oct/13 ]

Thank you Scott for clarifying. Right now, we have removed all orphans from our db so the export works as expected. As soon as we have new orphans and detect duplicates in our exported dataset, we will let you know. Thanks for your patience.

Comment by Scott Hernandez (Inactive) [ 23/Oct/13 ]

If you do any query (find, not count - see below or the linked issues) on the mongos, with or without the shard key, that goes only to the primaries, then all non-owned/orphan docs will be removed from the results. MongoDB has a number of tests which verify this.

Here is part of the explain output showing "nChunkSkips", which counts skipped documents that were not returned because they are not owned by that shard - resulting in no orphan docs being returned:

// output from https://gist.github.com/scotthernandez/becf47ae9d0c33eac6d6
// collection "foo.bar" is sharded on {_id:1}, with value from -50...50 split between shards, with a manually inserted doc {_id:10, x: 10} on both shards.
// The test then queries for {x:10}, which is not the shard key, and only gets back one document (with one of the shards skipping the orphaned doc).
m30001| 2013-10-23T13:06:56.834-0400 [conn3] query foo.bar query: { query: { x: 10.0 }, $explain: true } ntoreturn:0 ntoskip:0 keyUpdates:0 locks(micros) r:1470 nreturned:1 reslen:332 1ms
{
	"clusteredType" : "ParallelSort", "shards" : {
		"localhost:30000" : [ {
				"n" : 0,
				"nscannedObjects" : 51,
...
				"nChunkSkips" : 1, }],
		"localhost:30001" : [ {
				"n" : 1,
				"nscannedObjects" : 50,
...
				"nChunkSkips" : 0,}]},
	"n" : 1,
	"nChunkSkips" : 1,
...
	"numQueries" : 2,
	"numShards" : 2,
	"millis" : 2
}

In your example above you are using count(), which has issues on the primary similar to the ones secondaries have with queries (via find). That issue is SERVER-3645 - it affects counts, with or without a query predicate, even on the primaries.

In your example above, if you use itcount(), it will run the query and count the returned documents, which filters out orphans.

I have also run mongoexport/dump to verify that orphans are not returned when the primary is used. If you see otherwise we need to create a new issue and follow up there.

You may also be interested in the new cleanupOrphaned command in the next version (2.6), which can be used to remove orphans in cases where a failure leaves them around: SERVER-8598

Comment by Kay Agahd [ 23/Oct/13 ]

Scott, we are aware that running with slaveOk=true might return inconsistent results. However, what we have experienced and what I've shown above demonstrate that one may receive inconsistent results, such as duplicates, even when using slaveOk=false while going through mongos, if the queried field is NOT the shard key.

Please review my steps above to reproduce the problem. The reason I connected to a mongod there, instead of to a mongos, was to be able to create some orphan documents in order to demonstrate how MongoDB behaves when you run a query against a system that has orphan documents. If MongoDB did NOT create orphan documents (probably resulting from a broken chunk move), OR if MongoDB were able to automatically remove orphan documents as soon as possible, OR if MongoDB read the sharding status upon a query request to know whether the found documents really belong to the shard that returned them, then we would NOT have this problem at all.

Just connecting through mongos with slaveOk=false option does NOT solve the problem.

Comment by Scott Hernandez (Inactive) [ 23/Oct/13 ]

agahd, you must use the primary (the default behavior for all languages/drivers, but not all tools) and connect through mongos - not directly to the shards, and not with slaveOk enabled. All direct connections are unaware of sharding state (even on the primary) and are considered administrative/maintenance connections, which allow full access to data independent of sharding state (and therefore return "duplicates"). This is why it is always required to change your application/tools to only connect to the mongos servers and never directly to the shards for user data operations.

Did you run mongoexport against mongos with the "--slaveOk false" option? By default it will try to read from a non-primary unless the option is set explicitly.

$ mongoexport --help
Export MongoDB data to CSV, TSV or JSON files.
 
options:
  --help                                produce help message
 ...
  -k [ --slaveOk ] arg (=1)             use secondaries for export if 
                                        available, default true
 
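(Editorial example, with host and file names made up for illustration - forcing primary reads during the export:)

$ mongoexport --host mongos1.example.net --port 27017 --db offerStore --collection offer --slaveOk false --out offer.json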

Comment by Kay Agahd [ 23/Oct/13 ]

Scott, your suggested workaround "that you must use the primary for accurate results" is wrong. Even using the primary returns orphaned documents if the query does not contain the shard key (see my steps to reproduce above, please).

Btw., this also happens when using mongoexport, which is a big pain for us because we can't always use the shard key to export some datasets, and thus we often have to fight duplicates in the exported data. It would be nice if this bug could be fixed. Thanks!

Comment by Kay Agahd [ 13/Feb/13 ]

@Holger, during migrations, documents must be on both the source and the destination server. They will/should be deleted from the source server only after the migration. If your query hits the servers during a migration, it's normal that it will find both (with slaveOk on). A job which removes orphan documents wouldn't help in this case.
However, the router should be clever enough to read the sharding status to know whether the found documents really belong to the shard which returned them. If they belong to another shard, mongos should refuse to add them to the result set.

Comment by Holger Morch [ 13/Feb/13 ]

We are facing the same issue. Since we are a read-heavy application, we normally read with secondary preferred, and so it happens that users see orphan objects in the response.
I think it is important that the secondaries implement the same behavior as the primaries and don't return such orphan objects. But even with this in place, the data is still present on the shard. I think it should be possible, if one of the hosts recognizes orphan objects, to start a job that removes those objects to free the space again.

Comment by Christian Tonhäuser [ 13/Feb/13 ]

Do you think this might make it into 2.3.X?
We're facing this problem in our production systems and it's a real hassle.

Comment by Scott Hernandez (Inactive) [ 31/May/12 ]

There is no full workaround for this until the underlying bug/system is fixed.

The workaround is that you must use the primary for accurate results.

Comment by Kay Agahd [ 31/May/12 ]

I understand that orphaned documents may exist, but they shouldn't have any impact on query results, even when querying non-primaries (slaveOk). Can you suggest a better workaround than querying primaries (since this would hurt MongoDB performance), or will this issue be fixed soon? Thanks!

Comment by Scott Hernandez (Inactive) [ 29/May/12 ]

Yes, this is a known problem, and I've linked the count-related issue, which is not related to non-primary queries. In addition, as you have noted, non-primary queries can return documents which are not owned by that shard - orphaned ones. These documents can get there from failed migrations, and of course during migrations, for example, so it is expected that this will happen at times.

Comment by Kay Agahd [ 25/May/12 ]

sorry, the formatting is a bit weird
