You're misunderstanding what this is controlling. It does not control the rate of time updates.
> Since you agree that there are diminishing returns and setting it to 1ms for example wouldn't be productive
Setting it to 1ms wouldn't help, because network jitter makes the fetched time less precise than that anyway. It wouldn't hurt to set it to 1ms, though. It could even be set to 0 so that the clock is always updated after fetching the time; we could have done that, but it would have required testing to make sure the code handles it. Instead, we chose a threshold that is significantly larger than the difference introduced by any typical network jitter but still quite small.
> fewer updates going all the time
It doesn't control the rate network time is fetched.
> but if it was updating 3/sec nobody would even bother to change it
This doesn't control how often time is updated. You're misunderstanding what it controls.
> No reason to update 20 times in a single second, but that's just my opinion.
It never does this. Network time updates are very infrequent: many hours pass between checks. It's not clear why you think it's updating at a rate controlled by this setting. It's not a rate but a threshold. The code fetches network time on its own schedule, independent of this setting, and then decides whether to apply the fetched time to the system clock based on whether the difference exceeds the threshold.
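To make the distinction concrete, here is a minimal sketch of the threshold logic described above. The names (`UPDATE_THRESHOLD_S`, `maybe_update_clock`, `set_clock`) are hypothetical, not the actual code under discussion; the point is only that the setting gates *whether* a fetched time is applied, not *how often* time is fetched.

```python
import time

# Hypothetical threshold: only step the clock when the fetched network
# time differs from the local clock by more than this many seconds.
UPDATE_THRESHOLD_S = 0.1

def maybe_update_clock(network_time: float, set_clock) -> bool:
    """Apply the fetched network time only if the drift exceeds the threshold.

    The fetch itself happens elsewhere, on its own (infrequent) schedule;
    this function just decides whether the result is worth applying.
    """
    drift = abs(network_time - time.time())
    if drift > UPDATE_THRESHOLD_S:
        set_clock(network_time)  # e.g. a settimeofday()-style call on a real system
        return True
    return False
```

With a threshold like this, a fetch that comes back within jitter of the local clock is simply discarded, so lowering the threshold to 1ms (or even 0) changes how often the clock is *stepped*, never how often time is *fetched*.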