[JAVA-339] Send/Receive buffer limits on mongod server/Java driver communication Created: 29/Apr/11  Updated: 11/Sep/19  Resolved: 10/Mar/12

Status: Closed
Project: Java Driver
Component/s: Performance
Affects Version/s: 2.4
Fix Version/s: None

Type: Task Priority: Minor - P4
Reporter: santosh kumar kancha Assignee: Unassigned
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: Red Hat Linux

 Description

We have set up a MongoD instance in replica-set mode, with the Java driver on an application server in the same data center. (A second set of application servers is in another data center.)

Our payload between server and client is around 300K.
With smaller payloads, around 25K, the time taken to transfer data between the app server and mongod is under a millisecond (a single pass between the two).

With the bigger payload (~300K), client/server communication takes about 16 ms. The delay comes from multiple rounds of data transfer between the two.
A lookup from the other data center takes around 300 ms (again due to the multiple rounds of data transfer).

In MySQL there is an option to increase the send/receive buffer (so all the data is sent in one shot instead of waiting for an ACK from the client at each step of the transfer).

Is there any such parameter available for MongoD?

Thanks & Regards

  • santhu


 Comments
Comment by Jeffrey Yemin [ 10/Mar/12 ]

Please re-open if you have any more feedback.

Comment by Jeffrey Yemin [ 15/Dec/11 ]

Hi Santosh,

Are we good to close this? We haven't heard back from you since Antoine's last comment.

Comment by Antoine Girbal [ 02/May/11 ]

I don't know how to set it on Windows; something tells me it's not as easy as on Linux.
If you can, try running the client on a Linux box too, since the Java code is portable.

Often, setting this value higher will not speed up the transfer much.
The connection speed is usually limited more by the size of the TCP congestion window.
That is why you should do several transfers in a row on the same connection to see the real speed.
Try doing a requestStart(), then issuing the request several times, and see if it gets faster; for example:
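
(A minimal sketch against the legacy 2.x Java driver; host, database, and collection names are placeholders.)

import com.mongodb.*;

public class WarmConnectionTest {
    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("mongod-host");       // placeholder host
        DB db = mongo.getDB("test");
        db.requestStart();                            // pin this thread to a single socket
        try {
            for (int i = 0; i < 5; i++) {
                long start = System.nanoTime();
                db.getCollection("docs").findOne();   // the same ~300K read each pass
                long ms = (System.nanoTime() - start) / 1000000L;
                System.out.println("pass " + i + ": " + ms + " ms");
            }
        } finally {
            db.requestDone();                         // release the socket
        }
        mongo.close();
    }
}

If the later passes are much faster than the first, the congestion window rather than the buffer limit is the bottleneck.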

There are different settings you can try to get the congestion window higher (example commands follow this list):

  • use a different congestion algorithm (tcp_congestion_control)
  • some OSes let you bump up the initial congestion window
  • the congestion window tends to drop on an idle connection (turn off tcp_slow_start_after_idle)
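
For example, on Linux (a sketch; the algorithm name and the value shown are illustrative):
$ cat /proc/sys/net/ipv4/tcp_congestion_control                    # which algorithm is active
cubic
$ echo 0 | sudo tee /proc/sys/net/ipv4/tcp_slow_start_after_idle   # don't shrink the window after idle
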
Comment by santosh kumar kancha [ 02/May/11 ]

Hi Antoine,

Thanks for the comment.
I will try this on the server machine.
My Java client is running on Windows, though.
Is there anything I should be setting up on Windows for this?

Thanks & Regards

  • santosh
Comment by Antoine Girbal [ 02/May/11 ]

Hi Santosh,
Most recent OSes are very smart about socket buffer sizes.
On Linux the buffer typically starts at a medium size and then grows as needed (up to 4 MB by default):
$ cat /proc/sys/net/ipv4/tcp_rmem   # min / default / max receive buffer, in bytes
4096 87380 4194304

You can set the buffer limits higher with OS commands like:
$ echo "87380 4194304 8388608" | sudo tee /proc/sys/net/ipv4/tcp_rmem

You would need to set it higher on both the client and the server side, for both wmem (send) and rmem (receive).
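The send-side counterpart would look like this (a sketch; the limits shown are the same illustrative values as above):
$ echo "87380 4194304 8388608" | sudo tee /proc/sys/net/ipv4/tcp_wmem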
Could you try it and see if you get better performance, in which case we can push for this new option?
Thanks

Comment by Scott Hernandez (Inactive) [ 29/Apr/11 ]

The server will respond with a single packet (within your network's limits) unless you have specified a smaller batchSize. By default the server will return 4/16 MB of data per batch (in a query). Are you using a limit in your find?
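
A minimal sketch of checking this with the legacy 2.x Java driver (host and names are placeholders; the tiny batchSize is deliberate, to show the cost of extra round trips):

import com.mongodb.*;

public class BatchSizeCheck {
    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("mongod-host");        // placeholder host
        DBCollection coll = mongo.getDB("test").getCollection("docs");
        // Left at the default, the server fills each reply batch; a small
        // batchSize (or a limit) forces additional round trips per cursor.
        DBCursor cursor = coll.find().batchSize(2);    // deliberately small
        while (cursor.hasNext()) {
            cursor.next();                             // each exhausted batch triggers a getMore
        }
        mongo.close();
    }
}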
