Details
Type: Improvement
Resolution: Done
Priority: Major - P3
Affects Version/s: 2.6.5
Description
We use $slice to cap users' activity feeds on our website. The limit is currently 10,000 entries, which is enough data that the stored array totals approximately 1350 KB. While running a replica set we ran into the problem of heavy traffic to the secondary nodes: log analysis showed that when $slice is used, the object is always transferred in full over the network.
2014-10-06T10:12:33.796+0000 [repl writer worker 2] warning: log line attempted (1349k) over max size (10k), printing beginning and end ... applying op: { ts: Timestamp 1412590353000|6, h: -4609969471965806062, v: 2, op: "u", ns: "dm_social.feed_activity", o2: { _id: 1 }, o: { $set: { log2: [ { time: "1410172122", user_id: 100850514, name: "NNNN", sex: 2, username: "praisss", age: 26, country: "NNNN", city: "NNNN" }, { time: "1410172123", user_id: 100918283, name: "NNNN", sex: 2, username: "lydok0708", age: 35, country: "NNNN", city: "NNNN" }, { time: "1410172123", user_id: 101119576, name: "NNNN", sex: 2, username: "delfin51", age: 59, country: "NNNN", city: "NNNN" }, { time: "1410172129", user_id: 100908179, name: "NNNN", sex: 2, username: "lav13", age: 54, country: "NNNN", city: "NNNN" },
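For context, a minimal pymongo sketch of the capping pattern described above, assuming the feed is maintained with the $push/$each/$slice modifiers; the database, collection, and field names are taken from the log excerpt, while the helper name and connection string are hypothetical:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["dm_social"]["feed_activity"]

def append_activity(user_doc_id, entry, limit=10_000):
    """Append one feed entry and keep only the newest `limit` entries."""
    coll.update_one(
        {"_id": user_doc_id},
        {"$push": {"log2": {"$each": [entry], "$slice": -limit}}},
    )

As the oplog entry above shows, such an update is replicated to the secondaries as a $set of the entire log2 array (~1350 KB) rather than as the incremental push, which is what generates the traffic.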
I would like to ask that $slice be improved so that, when it is used, replication traffic to the secondaries is the same as it would be without $slice.
For now, we have worked around the issue by applying $slice with a probability of 1/100, as sketched below.
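A minimal sketch of that 1/100 workaround, under the same assumptions as the previous snippet (the exact client code is not shown in the ticket):

import random
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["dm_social"]["feed_activity"]

def append_activity_workaround(user_doc_id, entry, limit=10_000):
    """Push without $slice most of the time; trim only ~1 in 100 writes,
    so the full-array $set replication occurs for roughly 1% of updates."""
    push_spec = {"$each": [entry]}
    if random.random() < 0.01:        # trim with probability 1/100
        push_spec["$slice"] = -limit  # keep only the newest `limit` entries
    coll.update_one({"_id": user_doc_id}, {"$push": {"log2": push_spec}})

The trade-off is that the array can temporarily grow somewhat beyond the 10,000-entry limit between trims.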