|
Thanks a lot, David, for the detailed explanation. The example you provide is indeed one I didn't think of, and I see why this is difficult to implement. It's only a psychological thing, but it feels so much better to at least properly understand why functionality you rely upon is suddenly removed; that takes away a lot of the frustration. I'll try to write a query converter in the meantime so we can upgrade, and I'll wait for this ticket to be resolved someday...
|
|
Hi rgpublic,
To follow up, here's a more detailed explanation of what's happening here and the rationale for the change made in SERVER-15235.
The query planner is responsible for analyzing the regular expression and classifying it as either "simple" or "non-simple" (see the code here). As originally designed, a "simple regex" is a left-anchored regular expression with no "|" character. Such regular expressions can use an index efficiently by scanning the index for all entries that begin with the regex's prefix and then keeping only those index entries that actually match.
It was never the intention of the implementer to allow regexes with "|" to use an index. In the example you provide, the non-simple regex /^example(1|2)/ can indeed be answered by finding all index entries with the prefix example. However, this can lead to a correctness problem for other non-simple regular expressions. For instance, it is incorrect to answer a query using a regex such as /^a(a|$)|^b/ by finding all index entries with the prefix a: that query plan would miss matching documents whose string begins with b.
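To make this concrete, here is a small, made-up illustration (the collection name and documents are invented for this comment):

// The regex matches strings starting with "aa", the string "a" by itself, or anything
// starting with "b".
db.demo.insert({str: "aardvark"});   // matches, and begins with "a"
db.demo.insert({str: "banana"});     // matches, but does NOT begin with "a"
// A plan that only scans the index range for the prefix "a" (roughly
// {str: {$gte: "a", $lt: "b"}}) would return "aardvark" and silently drop "banana".
db.demo.find({str: /^a(a|$)|^b/});   // the correct answer must contain both documents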
The fix for the correctness problem described by SERVER-15235 was to classify all regexes containing the "|" character as non-simple. You are correct that, rather than preventing such non-simple regexes from using an index at all, MongoDB could do some more detailed analysis in order to extract the most efficient index bounds. For example, the bounds for /^a(a|$)|^b/ could be [a, c) in order to find all documents that begin either with a or with b. However, this is an improvement request, not a bug. Implementing it in a non-hacky way requires parsing the regular expression and analyzing the parse tree in order to compute the bounds.
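As a rough illustration of that improvement request (a sketch for discussion only, not the server's implementation), the bounds could be derived by taking the literal prefix of each top-level alternative and covering them with a single range; over-covering is harmless because the regex is still re-applied as a filter afterwards:

// Hypothetical helper: given the literal prefixes of the top-level alternatives,
// return one index range that covers all of them.
function boundsForAlternation(prefixes) {
  var sorted = prefixes.slice().sort();
  var lower = sorted[0];
  var largest = sorted[sorted.length - 1];
  // Bump the last character of the largest prefix to get an exclusive upper bound.
  var upper = largest.slice(0, -1) +
              String.fromCharCode(largest.charCodeAt(largest.length - 1) + 1);
  return {lower: lower, upper: upper};   // scan [lower, upper), then re-check the regex
}
// For /^a(a|$)|^b/ the alternatives have literal prefixes "a" and "b":
// boundsForAlternation(["a", "b"])  =>  {lower: "a", upper: "c"}   i.e. [a, c)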
We are now treating this ticket as exactly the improvement request described above. Please continue to watch it for progress updates.
Best,
Dave
|
|
Thanks, Dan, I see. I'm currently considering doing this: run all of our queries through a preprocessor that determines the literal prefix (i.e. the part after the "^" and up to the first meta-character, "example" in this case) and then converts this:
{type:'folder', folder:/^example(1|2)(3|4)xyz/}

into this:

{type:'folder',
 $and:[
   {folder:/^example(1|2)(3|4)xyz/},
   {folder:{$gte:'example'}},
   {folder:{$lte:'examplf'}}
 ]}
In other words: if some query value is a regex starting with "^", replace it with a three-part $and query consisting of the original regex predicate and the determined range.
I wonder: would this cause MongoDB to reconsider e.g. the type_folder index and use the specified range? I somehow need a "recipe" to do this automatically.
BTW - if this works: what I don't quite understand is... why doesn't MongoDB do something like this itself? It wouldn't need a full regex parser. You'd only need to check whether a regex string starts with "^" (trivial), know which characters have a special meaning (*, [, ], ?, (, ., etc.), implement escaping ("\"), and simply stop at the first unescaped special character. Granted, not every exotic corner case would be handled, but this would cover 99% of the common queries that would otherwise be hit by this regression.
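For what it's worth, here's a rough sketch of the kind of preprocessor I have in mind (the function names are mine, this is not MongoDB code, and it deliberately only handles the simple case of a left-anchored, case-sensitive regex with a plain literal prefix):

// Stop at the first meta-character (including "\" itself, to stay conservative).
var META = "\\^$.|?*+()[]{}";

function literalPrefix(re) {
  var src = re.source;
  if (src.charAt(0) !== "^" || re.ignoreCase) return null;   // only plain left-anchored regexes
  var prefix = "";
  for (var i = 1; i < src.length; i++) {
    var c = src.charAt(i);
    if (META.indexOf(c) !== -1) break;
    prefix += c;
  }
  return prefix || null;
}

function addRangeForRegex(query, field) {
  var re = query[field];
  var prefix = (re instanceof RegExp) ? literalPrefix(re) : null;
  if (!prefix) return query;                                  // nothing we can do safely
  // "example" -> "examplf": bump the last character to get the upper end of the range.
  var upper = prefix.slice(0, -1) +
              String.fromCharCode(prefix.charCodeAt(prefix.length - 1) + 1);
  var out = {};
  for (var k in query) { if (k !== field) out[k] = query[k]; }
  var regexCond = {}, lowerCond = {}, upperCond = {};
  regexCond[field] = re;
  lowerCond[field] = {$gte: prefix};
  upperCond[field] = {$lte: upper};
  out.$and = [regexCond, lowerCond, upperCond];
  return out;
}

// addRangeForRegex({type: 'folder', folder: /^example(1|2)(3|4)xyz/}, 'folder')
// => {type: 'folder', $and: [{folder: /^example(1|2)(3|4)xyz/},
//                            {folder: {$gte: 'example'}}, {folder: {$lte: 'examplf'}}]}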
|
|
rgpublic, I very much feel your pain, but there was no viable alternative that would fix the correctness issue while maintaining the performance you're expecting in the general case. I don't like asking users to change their queries to adjust to a breaking change in the database, but in this case we had no choice. If you have questions about converting more complex queries, please reach out.
Thanks
Dan
|
|
Ugh. Sorry to say this, but this outcome is a huge shock and disappointment for me. With every new version of MongoDB I'm looking forward to queries becoming faster - not slower. And, let's be honest, "query is not using an index" with even a medium-sized database basically means "cannot be queried", because the query takes so long that it's just not acceptable for user-centric web applications.

Another aspect I'd like to note here: for me, as a database user, databases are all about storing stuff, and it's then the database's business to get it back in the fastest way possible when I ask for it. With many databases I feel there's sometimes a very detrimental tendency to shift responsibility back to the user from version to version. One can always point to the users and recommend hints or tell them to rewrite their queries, but IMHO that's basically nothing other than changing the query language. It's the same as, for instance, removing $or. And as with any other programming language, people rely on the stability and compatibility of a query language when upgrading.

Simply ripping out such important functionality that was working perfectly for us for a long time is just horrible. We have these types of queries all over our applications, and it's far from trivial to rewrite them, because they are not always as simple as the one given in the example. These types of queries are very common when you try to store any kind of hierarchical data. This basically removes any possibility of upgrading our database in the near future. It's as simple as that. And this at a time when we were in dire need of a performance boost and really looking forward to Mongo 2.8. Now we're stuck in the past. Very bad. Sigh.
|
|
While implementing SERVER-15235, we did some research into whether PCRE exports any regex parsing functionality that we could use to extract the regex prefix for bounds calculation, but came up with nothing. We can file an improvement ticket (or turn this into an improvement ticket) to either do further research into third-party libraries that will parse regexes for us, or to try to extend our hand-rolled parser to handle the '|' character (which is rather difficult to do correctly).
As noted in SERVER-15235, users can no longer expect regular expression predicates containing the '|' character to be indexed (such as {folder: /^example(1|2)/}). We should note this in the query subsection of the 2.8 release notes. There are workarounds for users who currently rely on these bounds: for example, either of the following rewrites of the reporter's query will generate bounds on both 'type' and 'folder' for the {type: 1, folder: 1} index (a quick way to verify the resulting bounds is sketched after the list):
- {type: 'folder', folder: /^example[12]/}
- {$or: [{type:'folder', folder:/^example1/}, {type:'folder', folder:/^example2/}]}
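One suggested sanity check for such rewrites (using the reporter's {type: 1, folder: 1} index) is to look at the 'folder' entry of indexBounds in the explain output; each branch should show a tight prefix range rather than the catch-all ["", {}) seen in this ticket:

// With the $or rewrite, each branch can use a tight prefix range on 'folder',
// e.g. something like ["example1", "example2") instead of ["", {}).
db.fs.find({$or: [
  {type: 'folder', folder: /^example1/},
  {type: 'folder', folder: /^example2/}
]}).explain(true);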
|
|
Thanks, rgpublic. It looks like this is a change introduced in 2.8.0-rc1 by SERVER-15235, so the information you provided was enough for us to track this down.
|
|
Yes, Ramon, this is exactly what I suspect as well. I'm only a user and not a database expert, but I would explain it in layman's terms like this: when I search for things starting with "example" followed by anything else, the previous version was clever enough to assume that if a result exists at all, it must lie between "example" and "examplf". Therefore it had to scan vastly fewer entries than the current version does.
Unfortunately, Dan, I don't have 2.6.x easily available right now anywhere. The whole 2.6.x series had so many problems with slow-running queries that we skipped that version altogether.
Perhaps the explain(true) results from 2.4 are helpful to you anyway. Here they are, followed by the 2.8 explain(true) results:
MongoDB shell version: 2.4.12
connecting to: edison
> db.fs.find({type:'folder',folder:/^example(1|2)/}).explain(true);
{
    "cursor" : "BtreeCursor type_folder multi",
    "isMultiKey" : false,
    "n" : 0,
    "nscannedObjects" : 0,
    "nscanned" : 1,
    "nscannedObjectsAllPlans" : 0,
    "nscannedAllPlans" : 1,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "millis" : 0,
    "indexBounds" : {
        "type" : [ [ "folder", "folder" ] ],
        "folder" : [ [ "example", "examplf" ], [ /^example(1|2)/, /^example(1|2)/ ] ]
    },
    "allPlans" : [
        {
            "cursor" : "BtreeCursor type_folder multi",
            "n" : 0,
            "nscannedObjects" : 0,
            "nscanned" : 1,
            "indexBounds" : {
                "type" : [ [ "folder", "folder" ] ],
                "folder" : [ [ "example", "examplf" ], [ /^example(1|2)/, /^example(1|2)/ ] ]
            }
        }
    ],
    "server" : "kelvin:27017"
}
>
== 2.8 ==
> db.fs.find({type:'folder',folder:/^example(1|2)/}).explain(true);
{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "diesel.fs",
        "parsedQuery" : {
            "$and" : [ { "type" : { "$eq" : "folder" } }, { "folder" : /^example(1|2)/ } ]
        },
        "winningPlan" : {
            "stage" : "FETCH",
            "inputStage" : {
                "stage" : "IXSCAN",
                "filter" : { "folder" : /^example(1|2)/ },
                "keyPattern" : { "type" : 1, "folder" : 1 },
                "indexName" : "type_folder",
                "isMultiKey" : false,
                "direction" : "forward",
                "indexBounds" : {
                    "type" : [ "[\"folder\", \"folder\"]" ],
                    "folder" : [ "[\"\", {})", "[/^example(1|2)/, /^example(1|2)/]" ]
                }
            }
        },
        "rejectedPlans" : [
            {
                "stage" : "KEEP_MUTATIONS",
                "inputStage" : {
                    "stage" : "FETCH",
                    "filter" : { "type" : { "$eq" : "folder" } },
                    "inputStage" : {
                        "stage" : "IXSCAN",
                        "filter" : { "folder" : /^example(1|2)/ },
                        "keyPattern" : { "folder" : 1 },
                        "indexName" : "folder",
                        "isMultiKey" : false,
                        "direction" : "forward",
                        "indexBounds" : {
                            "folder" : [ "[\"\", {})", "[/^example(1|2)/, /^example(1|2)/]" ]
                        }
                    }
                }
            }
        ]
    },
    "executionStats" : {
        "executionSuccess" : true,
        "nReturned" : 0,
        "executionTimeMillis" : 2883,
        "totalKeysExamined" : 2444679,
        "totalDocsExamined" : 0,
        "executionStages" : {
            "stage" : "FETCH",
            "nReturned" : 0, "executionTimeMillisEstimate" : 1550, "works" : 2444681,
            "advanced" : 0, "needTime" : 2444679, "needFetch" : 0,
            "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "invalidates" : 0,
            "docsExamined" : 0, "alreadyHasObj" : 0,
            "inputStage" : {
                "stage" : "IXSCAN",
                "filter" : { "folder" : /^example(1|2)/ },
                "nReturned" : 0, "executionTimeMillisEstimate" : 1500, "works" : 2444679,
                "advanced" : 0, "needTime" : 2444679, "needFetch" : 0,
                "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "invalidates" : 0,
                "keyPattern" : { "type" : 1, "folder" : 1 },
                "indexName" : "type_folder",
                "isMultiKey" : false,
                "direction" : "forward",
                "indexBounds" : {
                    "type" : [ "[\"folder\", \"folder\"]" ],
                    "folder" : [ "[\"\", {})", "[/^example(1|2)/, /^example(1|2)/]" ]
                },
                "keysExamined" : 2444679,
                "dupsTested" : 0, "dupsDropped" : 0, "seenInvalidated" : 0, "matchTested" : 0
            }
        },
        "allPlansExecution" : [
            {
                "nReturned" : 0,
                "executionTimeMillisEstimate" : 1220,
                "totalKeysExamined" : 2444680,
                "totalDocsExamined" : 0,
                "executionStages" : {
                    "stage" : "KEEP_MUTATIONS",
                    "nReturned" : 0, "executionTimeMillisEstimate" : 1220, "works" : 2444680,
                    "advanced" : 0, "needTime" : 2444680, "needFetch" : 0,
                    "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "invalidates" : 0,
                    "inputStage" : {
                        "stage" : "FETCH",
                        "filter" : { "type" : { "$eq" : "folder" } },
                        "nReturned" : 0, "executionTimeMillisEstimate" : 1180, "works" : 2444680,
                        "advanced" : 0, "needTime" : 2444680, "needFetch" : 0,
                        "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "invalidates" : 0,
                        "docsExamined" : 0, "alreadyHasObj" : 0,
                        "inputStage" : {
                            "stage" : "IXSCAN",
                            "filter" : { "folder" : /^example(1|2)/ },
                            "nReturned" : 0, "executionTimeMillisEstimate" : 1170, "works" : 2444680,
                            "advanced" : 0, "needTime" : 2444680, "needFetch" : 0,
                            "saveState" : 0, "restoreState" : 0, "isEOF" : 0, "invalidates" : 0,
                            "keyPattern" : { "folder" : 1 },
                            "indexName" : "folder",
                            "isMultiKey" : false,
                            "direction" : "forward",
                            "indexBounds" : {
                                "folder" : [ "[\"\", {})", "[/^example(1|2)/, /^example(1|2)/]" ]
                            },
                            "keysExamined" : 2444680,
                            "dupsTested" : 0, "dupsDropped" : 0, "seenInvalidated" : 0, "matchTested" : 0
                        }
                    }
                }
            },
            {
                "nReturned" : 0,
                "executionTimeMillisEstimate" : 1550,
                "totalKeysExamined" : 2444679,
                "totalDocsExamined" : 0,
                "executionStages" : {
                    "stage" : "FETCH",
                    "nReturned" : 0, "executionTimeMillisEstimate" : 1550, "works" : 2444680,
                    "advanced" : 0, "needTime" : 2444679, "needFetch" : 0,
                    "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "invalidates" : 0,
                    "docsExamined" : 0, "alreadyHasObj" : 0,
                    "inputStage" : {
                        "stage" : "IXSCAN",
                        "filter" : { "folder" : /^example(1|2)/ },
                        "nReturned" : 0, "executionTimeMillisEstimate" : 1500, "works" : 2444679,
                        "advanced" : 0, "needTime" : 2444679, "needFetch" : 0,
                        "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "invalidates" : 0,
                        "keyPattern" : { "type" : 1, "folder" : 1 },
                        "indexName" : "type_folder",
                        "isMultiKey" : false,
                        "direction" : "forward",
                        "indexBounds" : {
                            "type" : [ "[\"folder\", \"folder\"]" ],
                            "folder" : [ "[\"\", {})", "[/^example(1|2)/, /^example(1|2)/]" ]
                        },
                        "keysExamined" : 2444679,
                        "dupsTested" : 0, "dupsDropped" : 0, "seenInvalidated" : 0, "matchTested" : 0
                    }
                }
            }
        ]
    },
    "serverInfo" : {
        "host" : "diesel",
        "port" : 27017,
        "version" : "2.8.0-rc3",
        "gitVersion" : "2d679247f17dab05a492c8b6d2c250dab18e54f2"
    },
    "ok" : 1
}
|
|
|
Thanks for the additional information, rgpublic. It looks like this behavior change was introduced in 2.8.0-rc1. In versions up to and including 2.8.0-rc0 the index bounds for the regex field are:

"indexBounds" : {
    "type" : [ "[\"folder\", \"folder\"]" ],
    "folder" : [ "[\"example\", \"examplf\")", "[/^example(1|2)/, /^example(1|2)/]" ]
},

and the operation completes instantaneously. Starting with 2.8.0-rc1 the index bounds become:

"indexBounds" : {
    "type" : [ "[\"folder\", \"folder\"]" ],
    "folder" : [ "[\"\", {})", "[/^example(1|2)/, /^example(1|2)/]" ]
},

and the operation takes a few seconds.
|
|
Can you run the same query on 2.6 using explain(true) and include the verbose execution stats for both 2.6 and 2.8?
|
|
PS (sorry for so many comments, but it's a highly confusing issue for me): The performance seems to vary, but I can say that even without the sort the query is way slower than it should be. On Mongo 2.4 the query returns almost instantaneously. IMHO the index bounds are not calculated correctly when using a regex of the form ^prefix(alternative1|alternative2).
== 2.4 ==
> db.fs.find({type:'folder',folder:/^example(1|2)/}).explain();
{
    "cursor" : "BtreeCursor type_folder multi",
    "isMultiKey" : false,
    "n" : 0,
    "nscannedObjects" : 0,
    "nscanned" : 1,
    "nscannedObjectsAllPlans" : 0,
    "nscannedAllPlans" : 1,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "millis" : 0,
    "indexBounds" : {
        "type" : [ [ "folder", "folder" ] ],
        "folder" : [ [ "example", "examplf" ], [ /^example(1|2)/, /^example(1|2)/ ] ]
    },
    "server" : "kelvin:27017"
}
== 2.8 ==
> db.fs.find({type:'folder',folder:/^example(1|2)/}).explain();
{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "diesel.fs",
        "parsedQuery" : {
            "$and" : [ { "type" : { "$eq" : "folder" } }, { "folder" : /^example(1|2)/ } ]
        },
        "winningPlan" : {
            "stage" : "FETCH",
            "inputStage" : {
                "stage" : "IXSCAN",
                "filter" : { "folder" : /^example(1|2)/ },
                "keyPattern" : { "type" : 1, "folder" : 1 },
                "indexName" : "type_folder",
                "isMultiKey" : false,
                "direction" : "forward",
                "indexBounds" : {
                    "type" : [ "[\"folder\", \"folder\"]" ],
                    "folder" : [ "[\"\", {})", "[/^example(1|2)/, /^example(1|2)/]" ]
                }
            }
        },
        "rejectedPlans" : [
            {
                "stage" : "KEEP_MUTATIONS",
                "inputStage" : {
                    "stage" : "FETCH",
                    "filter" : { "type" : { "$eq" : "folder" } },
                    "inputStage" : {
                        "stage" : "IXSCAN",
                        "filter" : { "folder" : /^example(1|2)/ },
                        "keyPattern" : { "folder" : 1 },
                        "indexName" : "folder",
                        "isMultiKey" : false,
                        "direction" : "forward",
                        "indexBounds" : {
                            "folder" : [ "[\"\", {})", "[/^example(1|2)/, /^example(1|2)/]" ]
                        }
                    }
                }
            }
        ]
    },
    "serverInfo" : {
        "host" : "diesel",
        "port" : 27017,
        "version" : "2.8.0-rc3",
        "gitVersion" : "2d679247f17dab05a492c8b6d2c250dab18e54f2"
    },
    "ok" : 1
}
|
|
|
I've found some more clues: I just became aware that the problem only showed up when querying via RockMongo. This is because RockMongo, by default, adds a sort on "_id". After adding this in the mongo shell, the problem shows up there as well:
db.fs.find({type:'folder', folder:/^example(1|2)/}).sort({_id:1}).explain();

takes WAY more time than

db.fs.find({type:'folder', folder:/^example1/}).sort({_id:1}).explain();
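If it helps quantify the difference, one option (just a suggestion, reusing the explain(true) output already posted in this ticket) is to compare totalKeysExamined for the two queries:

// The alternation form scans the whole 'folder' range under the new bounds, so its
// totalKeysExamined should be dramatically larger than for the plain-prefix form.
var withAlt = db.fs.find({type:'folder', folder:/^example(1|2)/}).sort({_id:1}).explain(true);
var plain   = db.fs.find({type:'folder', folder:/^example1/}).sort({_id:1}).explain(true);
print("alternation:  " + withAlt.executionStats.totalKeysExamined);
print("plain prefix: " + plain.executionStats.totalKeysExamined);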
|
|
[Sorry, needed to edit this. Was on the wrong database.]
> db.fs.getIndexes();
[
    { "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "diesel.fs" },
    { "v" : 1, "key" : { "filename" : 1 }, "name" : "filename", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "folder" : 1 }, "name" : "folder", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "binary_id" : 1 }, "name" : "binary_id", "background" : 1, "ns" : "diesel.fs" },
    { "v" : 1, "key" : { "sessionid" : 1 }, "name" : "sessionid", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "type" : 1, "folder" : 1 }, "name" : "type_folder", "background" : 1, "ns" : "diesel.fs" },
    { "v" : 1, "key" : { "username" : 1 }, "name" : "username", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "eventtype" : 1 }, "name" : "eventtype", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "email" : 1 }, "name" : "email", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "clientid" : 1 }, "name" : "clientid", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "time" : 1 }, "name" : "time", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "campaign" : 1 }, "name" : "campaign", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "messageid" : 1 }, "name" : "messageid", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "running" : 1 }, "name" : "running", "background" : 1, "ns" : "diesel.fs" },
    { "v" : 1, "key" : { "timestamp" : -1 }, "name" : "timestamp", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "enabled" : 1 }, "name" : "enabled", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "send_date" : 1 }, "name" : "send_date", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "company" : 1 }, "name" : "company", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "invoiceid" : 1 }, "name" : "invoiceid", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "creditid" : 1 }, "name" : "creditid", "background" : 1, "ns" : "diesel.fs" },
    { "v" : 1, "key" : { "fingerprint" : 1 }, "name" : "fingerprint", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "archive" : 1 }, "name" : "archive", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "client" : 1 }, "name" : "client", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "options.orderid" : 1 }, "name" : "options_orderid", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "campaign_date_started" : 1 }, "name" : "campaign_date_started", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "fulltext" : 1 }, "name" : "fulltext", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "to.email" : 1 }, "name" : "to_email", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "options.trackingid" : 1 }, "name" : "options_trackingid", "background" : 1, "ns" : "diesel.fs" },
    { "v" : 1, "key" : { "event_timestamp" : 1 }, "name" : "event_timestamp", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "last_event" : 1 }, "name" : "last_event", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "parentid" : 1 }, "name" : "parentid", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "category" : 1 }, "name" : "category", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "sessionid" : 1, "eventtype" : 1 }, "name" : "sessionid_eventtype", "ns" : "diesel.fs", "background" : 1 },
    { "v" : 1, "key" : { "realm" : 1, "type" : 1, "folder" : 1 }, "name" : "realm_type_folder", "ns" : "diesel.fs", "background" : 1 }
]
|
|
|
Can you please post the output of getIndexes() for this collection?
|