[JAVA-27] buffer overrun for large documents causes an exception for DBCursor.hasNext() Created: 01/Sep/09 Updated: 02/Oct/09 Resolved: 15/Sep/09 |
|
| Status: | Closed |
| Project: | Java Driver |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Jason Sachs | Assignee: | Eliot Horowitz (Inactive) |
| Resolution: | Cannot Reproduce | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: |
WinXP, Java SE 6u13, Mongo Java driver mongo-0.7.jar |
||
| Description |
|
I have some large documents (representing the contents of files in the 250K-1MB range) that cannot be read with the Java driver. The failure happens on the first call to DBCursor.hasNext():

public void catalog() { ... }

When I run this on my database collection, the call to find() succeeds and the cursor's count() method works, but it bombs on the first hasNext() call; it looks like some kind of buffer overrun. Output: Found 10 objects. |
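Since the body of catalog() was lost in this export, here is a minimal sketch of the scenario as described, using the legacy driver API of that era (Mongo/DB/DBCollection/DBCursor). The database and collection names ("test", "files") are placeholder assumptions, not from the report:

```java
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

public class Java27Repro {

    // Helper that formats the count line quoted in the report.
    static String describeCount(int n) {
        return "Found " + n + " objects.";
    }

    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost");          // default port 27017
        DB db = mongo.getDB("test");                   // placeholder name
        DBCollection coll = db.getCollection("files"); // placeholder name

        DBCursor cursor = coll.find();                 // succeeds per the report
        System.out.println(describeCount(cursor.count())); // count() also works
        while (cursor.hasNext()) {                     // reported failure point
            DBObject doc = cursor.next();
            System.out.println(doc.get("_id"));
        }
    }
}
```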
| Comments |
| Comment by Eliot Horowitz (Inactive) [ 15/Sep/09 ] |
|
Seems to be a PHP issue. |
| Comment by Jason Sachs [ 04/Sep/09 ] |
|
I'll try to code up a simple test case. The problem seems to involve PHP and Java interoperating on DBObjects that contain binary data. |
| Comment by Eliot Horowitz (Inactive) [ 04/Sep/09 ] |
|
I'm having trouble reproducing this. |
| Comment by Jason Sachs [ 01/Sep/09 ] |
|
Well, I can't seem to get it to fail if I use Java to write the data in, rather than PHP. The following program seems to read/write fine:

public void testbugJava27(String charsetName) throws UnknownHostException, MongoException, UnsupportedEncodingException {
    String s1 = bigString.toString();
    StringBuilder sb = new StringBuilder();
    for (int k = 0; k < daysOfChristmas.length; ++k)
        sb.append(s1);
    String s = sb.toString();
    ...
    obj.put("data", data);
    DBCursor cursor = coll.find();
    ...
} |
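The string-assembly part of testbugJava27 can be sketched stand-alone. The contents of bigString and daysOfChristmas were lost in this export, so the unit size and the repeat count of 12 below are illustrative assumptions only, chosen to land in the 250K-1MB range the report describes:

```java
public class PayloadSketch {

    // Repeat a unit string to build a large payload, as the test does.
    public static String buildPayload(String unit, int repeats) {
        StringBuilder sb = new StringBuilder(unit.length() * repeats);
        for (int k = 0; k < repeats; ++k) {
            sb.append(unit);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Assume a ~25 KB unit repeated 12 times (the 12 days of Christmas).
        String unit = new String(new char[25 * 1024]).replace('\0', 'x');
        String payload = buildPayload(unit, 12);
        System.out.println(payload.length()); // 307200
    }
}
```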
| Comment by Jason Sachs [ 01/Sep/09 ] |
|
Hmmm. My database seems to have UTF-8 issues, so maybe this isn't a buffer overrun issue per se. (see |
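One common way UTF-8 issues surface is char-count versus byte-count confusion, and silent corruption when bytes are decoded with the wrong charset. This stdlib-only sketch (the sample string is an illustrative assumption, not the reporter's data) shows both effects:

```java
import java.io.UnsupportedEncodingException;

public class Utf8Check {
    public static void main(String[] args) throws UnsupportedEncodingException {
        String s = "naïve café";          // contains two 2-byte UTF-8 characters
        byte[] utf8 = s.getBytes("UTF-8");
        // 10 chars but 12 bytes once encoded: ï and é each take 2 bytes,
        // so byte lengths and string lengths diverge for non-ASCII data.
        System.out.println(s.length() + " chars, " + utf8.length + " bytes");
        // Decoding with the wrong charset corrupts the text without throwing,
        // which can look like driver-level corruption.
        String wrong = new String(utf8, "ISO-8859-1");
        System.out.println(s.equals(wrong)); // false
        String right = new String(utf8, "UTF-8");
        System.out.println(s.equals(right)); // true
    }
}
```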
| Comment by Jason Sachs [ 01/Sep/09 ] |
|
p.s. from the Red Herring Clarification Department: those lines about it succeeding when I used additional arguments to find() are because my document contains a small "metadata" sub-object and a large "data" field, so if I download only the "metadata" field it works fine. |
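The workaround described above, fetching only the small sub-object, can be sketched with the legacy driver's two-argument find(query, fields) form; the collection handle and field names mirror the comment, but this is an illustrative sketch rather than the reporter's code:

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;

public class MetadataOnly {

    // Fetch only the small "metadata" sub-object, skipping the large
    // "data" field, so the cursor never has to materialize the big blobs.
    public static DBCursor metadataOnly(DBCollection coll) {
        DBObject query = new BasicDBObject();               // match everything
        DBObject fields = new BasicDBObject("metadata", 1); // include metadata only
        return coll.find(query, fields);
    }
}
```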