WiredTiger / WT-2663

Measure the overhead of obtaining high-resolution time on AWS virtual instances

    • Type: Task
    • Resolution: Done
    • Priority: Minor - P4
    • Fix Version/s: None
    • Affects Version/s: None
    • Component/s: None
    • Labels: None

      A while ago it was observed that measuring time on a virtual instance has huge overhead: several milliseconds. At the WT meeting in NYC we decided to investigate (1) whether measuring the time still has high overhead on modern instances, and (2) whether we can measure the time with less overhead by reading the CPU's tsc register directly on x86 platforms.

      I measured the overhead of time measurements on real hardware and on an AWS virtual instance. Here are the results:

      Method            Real: Intel X5365 3GHz    Virtual: Intel E5-2670 2.5GHz
      clock_gettime     470 ns                    90 ns
      tsc               30 ns                     10 ns
      tsc with mfence*  30 ns                     20 ns

      * Using the tsc instruction with a memory fence is recommended, to prevent the compiler and the hardware from reordering the instructions surrounding the timestamp probe.
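
      For reference, here is a minimal sketch of a fenced tsc read, assuming GCC or Clang on x86-64 (the function name is illustrative, not WiredTiger code):

          #include <stdint.h>
          #include <x86intrin.h>  /* __rdtsc(), _mm_mfence() */

          /*
           * Read the CPU timestamp counter, fencing on both sides so
           * that neither the compiler nor the hardware moves memory
           * operations across the probe.
           */
          static inline uint64_t
          read_tsc_fenced(void)
          {
              uint64_t tsc;

              _mm_mfence();    /* order memory accesses before the read */
              tsc = __rdtsc();
              _mm_mfence();    /* order memory accesses after the read */
              return (tsc);
          }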

      My conclusion is that on a modern AWS instance, measuring the time using clock_gettime does not have large overhead. Measuring the time via the CPU register has smaller overhead still, but the roughly 80-nanosecond difference probably does not matter for our purposes.

      My real machine has higher time-measurement overhead despite its higher clock rate; I suspect this is because the machine is very old.

      I verified that the time-measurement methods are consistent with one another, so I am fairly confident that the methods I tested are correct.
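
      As a sketch of the kind of overhead measurement and cross-check described above (assuming Linux on x86-64; CLOCK_MONOTONIC and the sample count are my choices here, not necessarily what was used; older glibc may need -lrt):

          #include <stdint.h>
          #include <stdio.h>
          #include <time.h>
          #include <x86intrin.h>  /* __rdtsc() */

          #define NSAMPLES 1000000

          int
          main(void)
          {
              struct timespec beg, end = {0, 0};
              uint64_t tsc_beg, tsc_end;
              int64_t elapsed_ns;
              int i;

              /*
               * Time back-to-back clock_gettime() calls, bracketing the
               * loop with tsc reads so both methods cover the same
               * interval and can be checked against each other.
               */
              tsc_beg = __rdtsc();
              (void)clock_gettime(CLOCK_MONOTONIC, &beg);
              for (i = 0; i < NSAMPLES; i++)
                  (void)clock_gettime(CLOCK_MONOTONIC, &end);
              tsc_end = __rdtsc();

              elapsed_ns = (int64_t)(end.tv_sec - beg.tv_sec) * 1000000000 +
                  (end.tv_nsec - beg.tv_nsec);
              printf("clock_gettime: %.1f ns/call\n",
                  (double)elapsed_ns / NSAMPLES);

              /*
               * Ticks per call: dividing by the nominal clock rate (e.g.
               * 2.5 ticks/ns on a 2.5GHz part) should roughly match the
               * clock_gettime figure if the two methods agree.
               */
              printf("tsc: %.1f ticks/call\n",
                  (double)(tsc_end - tsc_beg) / NSAMPLES);
              return (0);
          }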

            Assignee: Alexandra (Sasha) Fedorova
            Reporter: Alexandra (Sasha) Fedorova
            Votes: 0
            Watchers: 6
