[JAVA-307] The low speed of range query with two constraints Created: 25/Mar/11 Updated: 11/Sep/19 Resolved: 04/Sep/12 |
|
| Status: | Closed |
| Project: | Java Driver |
| Component/s: | API |
| Affects Version/s: | 2.5 |
| Fix Version/s: | None |
| Type: | Task | Priority: | Critical - P2 |
| Reporter: | shawn yang | Assignee: | Unassigned |
| Resolution: | Done | Votes: | 1 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: |
network card: 1000Mbit/s, memory: 8GB, MongoDB 1.8, MongoDB Java driver 2.5 |
||
| Description |
|
I use a single Java client to do a range query with two constraints, and I have created an index on the query key. The read speed only reaches about 6 MB/s (reading back ~6 MB takes nearly 1.1 seconds), but my network card is 1000Mbit/s and all of the query data is already in memory (the data size < memory size), so why is it this slow? stats info: "objects" : 2936299. The query is equivalent to:

select report_path from table where visit_time > start_visit_time and visit_time < end_visit_time

BasicDBObject query = new BasicDBObject();
BasicDBObject keys = new BasicDBObject();
DBCursor cur = ct.find(query, keys); |
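For readers following along, here is a minimal sketch of how such a two-constraint range query is typically built with the 2.x driver API, based on the SQL equivalent in the description. The host, database and collection names and the range bounds are assumptions for illustration, not values from the report, and the actual query/keys contents are elided in the ticket.

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;

public class RangeQuerySketch {
    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost");                           // hypothetical host
        DBCollection ct = mongo.getDB("test").getCollection("table");   // hypothetical db/collection names

        long startVisitTime = 0L;              // placeholder range bounds
        long endVisitTime = Long.MAX_VALUE;    // placeholder range bounds

        // The two range constraints: visit_time > start AND visit_time < end
        BasicDBObject query = new BasicDBObject("visit_time",
                new BasicDBObject("$gt", startVisitTime).append("$lt", endVisitTime));

        // Project only report_path, as in the SQL equivalent above
        BasicDBObject keys = new BasicDBObject("report_path", 1).append("_id", 0);

        DBCursor cur = ct.find(query, keys);
        while (cur.hasNext()) {
            cur.next();   // read back every matching document
        }
        mongo.close();
    }
}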
| Comments |
| Comment by Jeffrey Yemin [ 04/Sep/12 ] |
|
Apologies for letting this sit so long without a response. Please re-open if you want to pursue it further. |
| Comment by shawn yang [ 25/Mar/11 ] |
|
Updated the test case, adding the detailed time consumed in each part (returning the DBCursor vs. reading back the data):

BasicDBObject range = new BasicDBObject();
BasicDBObject query = new BasicDBObject();
BasicDBObject keys = new BasicDBObject();
long st1 = System.currentTimeMillis();
DBCursor cur = ct.find(query, keys).batchSize(1000);
long st = System.currentTimeMillis();
while (cur.hasNext()) {
    cur.next();
}
long ed = System.currentTimeMillis(); |
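A hedged, self-contained version of this split-timing measurement as a method (the query and keys construction is elided in the comment, so their contents are assumed; the character counting via toString() mirrors what the reporter describes in the next comment):

import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;

public class QueryTimingSketch {
    // Times (a) obtaining the DBCursor and (b) iterating all results with batch size 1000.
    static void timeQuery(DBCollection ct, DBObject query, DBObject keys) {
        long st1 = System.currentTimeMillis();
        DBCursor cur = ct.find(query, keys).batchSize(1000);
        long st = System.currentTimeMillis();

        int records = 0;
        long chars = 0;
        while (cur.hasNext()) {
            DBObject doc = cur.next();
            records++;
            chars += doc.toString().length();   // the toString() the reporter later removes
        }
        long ed = System.currentTimeMillis();

        System.out.println("find() returned cursor in " + (st - st1) + " ms");
        System.out.println("read back " + records + " docs (" + chars + " chars) in " + (ed - st) + " ms");
    }
}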
| Comment by shawn yang [ 25/Mar/11 ] |
|
1) Read back: the record number is 217611, the total data size is 6734899 bytes
---------------------------------------------------
When I remove the toString, it consumes 964ms, slightly less. |
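For scale, the figures above work out to roughly 31 bytes per document (6734899 / 217611) and about 6-7 MB/s (6734899 bytes in roughly 1.0-1.1 seconds), i.e. on the order of 50 Mbit/s, well under the 1000 Mbit/s link, which suggests per-document handling on the client rather than raw network throughput is the limiting factor.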
| Comment by Antoine Girbal [ 25/Mar/11 ] |
|
How many records are you reading back?
What do you mean by: |
| Comment by shawn yang [ 25/Mar/11 ] |
|
I opened MongoDB profiling and got the following info:

> db.system.profile.find()
}, fields: { visit_time: 1, _id: 0 }} reslen:64 294ms", "millis" : 294 }

It seems the server processing time is only 294ms. Using mongostat to check %lock, its value is 0. The client and server are on the same machine, so the connection goes over the local loopback. From the test case:
---------------------- |
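As a side note, the same profiling data the reporter reads from the shell can also be pulled through the Java driver; a minimal sketch, assuming ct is the DBCollection used above and that profiling is enabled on its database:

import com.mongodb.DBCollection;
import com.mongodb.DBCursor;

public class ProfileDumpSketch {
    // Prints every document currently in the database's system.profile collection.
    static void dumpProfile(DBCollection ct) {
        DBCollection profile = ct.getDB().getCollection("system.profile");
        DBCursor cur = profile.find();
        while (cur.hasNext()) {
            System.out.println(cur.next());
        }
    }
}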
| Comment by shawn yang [ 25/Mar/11 ] |
|
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
If I don't calculate the total size of the read-back data, the CPU percentage is only 9.7%, so I wonder whether it is the Java driver's problem? |
| Comment by shawn yang [ 25/Mar/11 ] |
|
1) Sure, I have used iostat to check it; the disk is idle.
2) The CPU usage only reaches ~22%.
So the bottleneck isn't disk I/O, CPU, or memory. |
| Comment by Eliot Horowitz (Inactive) [ 25/Mar/11 ] |
|
Can you verify the disk is idle during this? |