[SERVER-18207] Allow Queries for limit String sizes Created: 25/Apr/15 Updated: 16/May/15 Resolved: 15/May/15 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Querying |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | New Feature | Priority: | Major - P3 |
| Reporter: | Yair Lenga | Assignee: | Ramon Fernandez Marina |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: | |
| Participants: | |
| Description |
|
Please add an option to limit the maximum size of retrieved strings. The limit should apply to any string (or BLOB) field. Motivation: this change will reduce the amount of data that needs to be fetched on the initial query. Currently, up to 100X the needed data is transferred from MongoDB, since there is no way to cap the size of a CLOB, which could appear anywhere in the document. |
| Comments |
| Comment by Yair Lenga [ 16/May/15 ] |
|
Posted comment to reconsider on |
| Comment by Ramon Fernandez Marina [ 15/May/15 ] |
|
yair.lenga@gmail.com, this looks like a subset of the functionality requested in the linked ticket. Regards, |
| Comment by Yair Lenga [ 25/Apr/15 ] |
|
Situation similar to: For the application to use '$substr', it needs to know the location and the name of each attribute that will contain large strings. When attributes are added on a regular basis, taking advantage of MongoDB's ability to extend documents dynamically, the reading application has no way of knowing which attributes will contain large strings. The application is forced to query the whole document just to find out what data is available. Having the ability to find out where the large arrays are located, and their sizes, would make it possible for the application to identify them and decide which subset of arrays/indices to retrieve. In my specific case, data can be inserted into a MongoDB document, raising its size to >2 MB. Our application is expected to fetch the data sets and show the user a list of the available data items/documents in a grid; the user can then choose which item to expand. We have to fetch ~400 MB of data (200 rows x 2 MB) just to find out what data is available. Having this cap would allow us to reduce the queried data to < 1 MB. Comparing with JDBC, it is equivalent to running "sp_columns" to find the column attributes and then forming a "select a1, a2, a3, ...", as opposed to "select *". |
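One way to approximate the "find out where the large values are and their sizes" step described above is a probe projection that returns only string lengths, not contents. This is a sketch, not something available at the time of the ticket: the `$strLenCP` operator only exists in MongoDB 3.4 and later, and the collection (`items`) and field (`notes`) names are hypothetical.

```javascript
// Sketch of a "probe" projection that returns only the size of a
// potentially large field, never its contents. The "notes" field and
// the "items" collection are assumed names for illustration.
// $strLenCP (MongoDB 3.4+) returns the string length in code points;
// $ifNull guards documents where the field is absent.
const lengthProbe = {
  $project: {
    _id: 1,
    notesLen: { $strLenCP: { $ifNull: ["$notes", ""] } },
  },
};

// Intended use (requires a live mongod):
//   db.items.aggregate([lengthProbe])
// The client inspects notesLen and decides which documents (and which
// fields) are worth fetching in full, instead of pulling ~2 MB rows.
```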
| Comment by Ramon Fernandez Marina [ 25/Apr/15 ] |
|
yair.lenga@gmail.com, I believe what you're looking for is the $substr aggregation operator. Please try it out and let us know if it works for you. Thanks, |
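A minimal sketch of the workaround suggested in this comment: a `$substr` projection that caps a potentially large string at a fixed size, so only a prefix is shipped to the client. The collection (`items`) and field (`notes`) names are assumptions for illustration; `$substr` takes the string expression, a start index, and a byte count.

```javascript
// Cap the hypothetical "notes" field at its first 256 bytes so the
// full CLOB is never transferred. $substr's arguments are
// (string expression, start index, length in bytes).
const cappedProjection = {
  $project: {
    _id: 1,
    notes: { $substr: ["$notes", 0, 256] },
  },
};

// Intended use (requires a live mongod):
//   db.items.aggregate([cappedProjection])
```

As the reporter notes in the comment above, this only helps when the application already knows which attributes hold the large strings.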