pg_test_timing

pg_test_timing — measure timing overhead

Description

pg_test_timing is a tool to measure the
timing overhead on your system and confirm that the system time never
moves backwards. It simply reads the system clock over and over again
as fast as it can for a specified length of time, and then prints
statistics about the observed differences in successive clock readings.
Smaller (but not zero) differences are better, since they imply both
more-precise clock hardware and less overhead to collect a clock reading.
Systems that are slow to collect timing data can give less accurate
EXPLAIN ANALYZE results.
This tool is also helpful to determine if
the track_io_timing configuration parameter is likely
to produce useful results.
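The core measurement loop can be sketched as follows. This is a simplified Python illustration of the technique described above (read the clock repeatedly, record the differences between successive readings), not the tool's actual C implementation; the function name is invented for illustration.

```python
import time

def measure_deltas(duration_secs=0.1):
    """Read the clock in a tight loop and record the differences
    between successive readings, as pg_test_timing does."""
    deltas = []
    end = time.monotonic_ns() + int(duration_secs * 1e9)
    prev = time.monotonic_ns()
    while prev < end:
        cur = time.monotonic_ns()
        # A negative difference would mean the clock moved backwards.
        assert cur - prev >= 0, "system clock moved backwards"
        deltas.append(cur - prev)
        prev = cur
    return deltas

deltas = measure_deltas()
print(f"{len(deltas)} readings, min delta {min(deltas)} ns, max delta {max(deltas)} ns")
```

A system with fast, precise clock hardware will produce many small nonzero deltas; a slow clock source will produce fewer readings with larger gaps between them.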
Options

pg_test_timing accepts the following command-line options:
-d duration
--duration=duration
Specifies the test duration, in seconds. Longer durations
give slightly better accuracy, and are more likely to discover
problems with the system clock moving backwards. The default
test duration is 3 seconds.
-c cutoff
--cutoff=cutoff
Specifies the cutoff percentage for the list of exact observed
timing durations (that is, the changes in the system clock value
from one reading to the next). The list will end once the running
percentage total reaches or exceeds this value, except that the
largest observed duration will always be printed. The default
cutoff is 99.99.
-V
--version

Print the pg_test_timing version and exit.
-?
--help

Show help about pg_test_timing command line
arguments, and exit.
Usage

Interpreting Results
The first block of output has four columns, with rows showing a
shifted-by-one log2(ns) histogram of timing durations (that is, the
differences between successive clock readings). This is not the
classic log2(n+1) histogram as it counts zeros separately and then
switches to log2(ns) starting from value 1.
The columns are:

- nanosecond value that is >= the durations in this bucket
- percentage of durations in this bucket
- running-sum percentage of durations in this and previous buckets
- count of durations in this bucket
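The shifted-by-one bucketing scheme can be sketched in a few lines of Python. This is an illustration of the scheme as described above, with invented helper names, not code taken from the tool:

```python
def bucket_index(d):
    """Histogram bucket for a duration of d nanoseconds: zero
    differences get their own bucket 0; a nonzero value d falls
    into bucket floor(log2(d)) + 1."""
    return 0 if d == 0 else d.bit_length()  # bit_length == floor(log2(d)) + 1 for d >= 1

def bucket_label(i):
    """Largest nanosecond value that bucket i can hold, i.e. the
    first-column value printed for that histogram row."""
    return 0 if i == 0 else (1 << i) - 1
```

Under this scheme a 16-nanosecond reading lands in the bucket labeled 31, so durations from 16 through 31 nanoseconds share one histogram row.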
The second block of output goes into more detail, showing the exact
timing differences observed. For brevity this list is cut off when the
running-sum percentage exceeds the user-selectable cutoff value.
However, the largest observed difference is always shown.
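The cutoff logic for the second block can be sketched as follows. This is a hypothetical Python illustration of the rule just described (stop once the running percentage reaches the cutoff, but always keep the largest observed difference), not the tool's own code:

```python
from collections import Counter

def exact_durations(deltas, cutoff_pct=99.99):
    """List exact observed durations in ascending order, cutting the
    list off once the running percentage reaches cutoff_pct, while
    always including the largest observed difference."""
    counts = Counter(deltas)
    total = len(deltas)
    rows, running = [], 0.0
    for value in sorted(counts):
        running += 100.0 * counts[value] / total
        rows.append((value, round(running, 4), counts[value]))
        if running >= cutoff_pct:
            break
    largest = max(counts)
    if rows[-1][0] != largest:
        # The largest observed difference is always shown.
        rows.append((largest, 100.0, counts[largest]))
    return rows

rows = exact_durations([16] * 3 + [17] + [40000], cutoff_pct=50)
```

With a cutoff of 50, the sample above stops after the 16-nanosecond row but still reports the 40000-nanosecond outlier as a final row.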
For example, the results might show that 99.99% of timing loops took between
8 and 31 nanoseconds, with the worst case somewhere between 32768 and
65535 nanoseconds. In the second block, one might see that the typical loop
time is 16 nanoseconds and that the readings have full nanosecond
precision.
See Also

Wiki discussion about timing