[SERVER-19688] TTL Index is having more Latency Created: 31/Jul/15  Updated: 10/Aug/15  Resolved: 10/Aug/15

Status: Closed
Project: Core Server
Component/s: TTL
Affects Version/s: 3.0.4
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Kingsly J Assignee: Ramon Fernandez Marina
Resolution: Duplicate Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Duplicate
duplicates SERVER-19334 TTL index deletions cannot always kee... Closed
Operating System: ALL
Participants:

 Description   

Hi Team,
We are using mongodb V 3.0.4 and storage engine of "WiredTiger". In one of our collections', we are storing the security/auth tokens with TTL index on token expiry time. We are aware that, as per our mongo documentation, we will have a latency of around 1 min. But in our system we are seeing a latency of around 15 - 20 mins. As we are planning to implement authentication based on the same, it would be a major security concern.
Another interesting fact we found here is, even after 15 mins, the TTL logic is only deleting the oldest record and not all the expired records. Say for example, if TTL indexed column is having timestamps, 10:25am, 10:26am, 10:28 am etc... and while we check at around 10:40am only record with 10:25am is getting deleted and not all the expired data, similarly say at 10:45 am it is deleting only record with 10:26am and not all the expired data.
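(For reference, a minimal sketch of the kind of setup and check described above, assuming pymongo and hypothetical database, collection, and field names; the TTL value of 900 seconds is an assumption for illustration only.)

    from datetime import datetime, timedelta, timezone
    from pymongo import MongoClient

    tokens = MongoClient("mongodb://localhost:27017")["auth"]["tokens"]  # hypothetical names

    # TTL index on the token timestamp: documents become eligible for deletion
    # roughly expireAfterSeconds after the indexed date (value is an assumption).
    tokens.create_index("tokenIssuedAt", expireAfterSeconds=900)

    # Count documents that are already past expiry but still present, i.e.
    # waiting for the background TTL monitor to remove them.
    cutoff = datetime.now(timezone.utc) - timedelta(seconds=900)
    stale = tokens.count_documents({"tokenIssuedAt": {"$lt": cutoff}})
    print("expired-but-not-yet-deleted:", stale)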
We checked if any option to reduce the sync time, but as per mongo documentation, we couldn't find a one.
Kindly guide us on the same.

Regards,
Kingsly J



 Comments   
Comment by Ramon Fernandez Marina [ 10/Aug/15 ]

Thanks for the additional details jebas, sorry to hear that expiring docs at a specific clock time didn't work well for you either. I'm afraid that the only suitable workaround at this time may be to use a regular collection and do the expiration manually at the application level.
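(A minimal sketch of that kind of application-level expiry, assuming pymongo and hypothetical collection/field names; the application would run the purge periodically and, importantly for the security concern above, also filter out expired tokens on every lookup so a not-yet-purged document can never authenticate.)

    from datetime import datetime, timezone
    from pymongo import MongoClient

    tokens = MongoClient("mongodb://localhost:27017")["auth"]["tokens"]  # hypothetical names

    def purge_expired_tokens():
        """Delete every token whose 'expiresAt' date is already in the past."""
        result = tokens.delete_many({"expiresAt": {"$lt": datetime.now(timezone.utc)}})
        return result.deleted_count

    def find_valid_token(token_id):
        """Return the token only if it has not yet expired."""
        return tokens.find_one({"_id": token_id,
                                "expiresAt": {"$gte": datetime.now(timezone.utc)}})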

I am going to post your results in SERVER-19334 and close this ticket as a duplicate. Feel free to vote for SERVER-19334 and to watch it to receive updates.

Regards,
Ramón.

Comment by Kingsly J [ 04/Aug/15 ]

Hi Ramon,
Thanks for your reply.
We have also tried the option of expiring docs at a specific clock time, and we see the same issue. We had trouble testing with different time zones or a different storage engine, as that would affect our development process; a few other instances are running on the same machine.
Regarding the workload, the TTL-indexed collection will not have more than 500 records at any point in time. The overall workload on the server is a little bit higher.

Regards,
Kingsly J

Comment by Ramon Fernandez Marina [ 03/Aug/15 ]

Hi jebas; after further discussion we believe this ticket is a duplicate of SERVER-19334, which we want to address in the current development cycle. It is much less likely that this is a bug in timezone handling.

Can you provide us more details on the overall load on this server as well as the load on the affected TTL collection? There may be ways to optimize this server's performance to lower the latency of document deletion. Also, have you tried expiring documents at a specific clock time as proposed above?

Thanks,
Ramón.

Comment by Ramon Fernandez Marina [ 01/Aug/15 ]

Thanks for your report jebas. I see your time zone is IST, which is currently UTC+5:30. I wonder if there's a bug in handling time zones in mongod, so here are some options to investigate this:

  • would you be able to set the time zone to a value that has full hour differences with UTC and repeat the experiment? For example, YEKT (UTC+5) or NOVT (UTC+6).
  • would you be able to repeat the experiment with the MMAPv1 storage engine for IST and YEKT/NOVT? In other words, if there's a bug, is it also present in MMAPv1? That would help determine the next step.

A possible alternative you may want to try as well is to slightly modify your application to expire documents at a specific clock time. This feature might give you the fine-grained control you need in your application.
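(For reference, a minimal sketch of that pattern, assuming pymongo and hypothetical names: the TTL index is created with expireAfterSeconds=0 and each document stores its own expiry date, so a document becomes eligible for deletion as soon as its 'expiresAt' time has passed, subject to the TTL monitor's run interval.)

    from datetime import datetime, timedelta, timezone
    from pymongo import MongoClient

    tokens = MongoClient("mongodb://localhost:27017")["auth"]["tokens"]  # hypothetical names

    # expireAfterSeconds=0 means the indexed date itself is the expiry time.
    tokens.create_index("expiresAt", expireAfterSeconds=0)

    tokens.insert_one({
        "token": "abc123",                                                # hypothetical value
        "expiresAt": datetime.now(timezone.utc) + timedelta(minutes=15),  # expire 15 min from now
    })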

Thanks,
Ramón.
