$GETTIM and $GETTIM_PREC on modern systems
Posted: Thu Jul 07, 2022 2:35 pm
A question has arisen in the OpenSSL development community regarding the relative "entropy" offered by the resolutions of $GETTIM and $GETTIM_PREC. This is in the context of the number of bits of resolution offered per tick on particular systems. Use of $GETTIM_PREC adds a dependency to OpenSSL when built on VMS V8.4 which precludes execution on V8.3 and earlier. If there is no significant gain in "bits per tick" from $GETTIM_PREC, then why bother using it?
While it is imagined Alpha boxen are pretty much passe, and only the very latest Itanium systems might support higher tick rates, what is the situation on x86, and in particular under hypervisors? Is there any *real* advantage to $GETTIM_PREC (in an "entropy generation" context)?
While a work-around has been put in place so that an OpenSSL build made on V8.4 can execute under earlier versions, it would simplify the code if $GETTIM_PREC could be eliminated completely in favour of $GETTIM, with no real loss of precision bits.