We try to predict the next headers the interface will ask for, and request them ahead of time, keeping them in the headers_cache. This trades a bit more memory usage (and in some cases more bandwidth) for saved network latency/round-trips. Note that due to PaddedRSTransport.WAIT_FOR_BUFFER_GROWTH_SECONDS, the latency saved here can be longer than the "real" network latency.

This speeds up
- binary search greatly,
- backwards search to a small degree (not more than that, as its algorithm would need some changes to become cache-friendly),
- catch-up greatly, if we are <10 blocks behind.

What remains is to speed up catch-up when we are behind by many thousands of blocks; that behaviour is left unchanged here. The issue there is that we request chunks sequentially, so e.g. 1 chunk (2016 blocks) per 1 second.
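As a rough illustration of the prefetching idea for the binary-search case, here is a minimal sketch. All names (HeaderPrefetcher, fetch_header, etc.) are hypothetical and do not match the actual headers_cache code; the point is only that both possible next midpoints are requested before the current probe's result is awaited, overlapping the round-trips:

```python
import asyncio

class HeaderPrefetcher:
    """Hypothetical sketch: predictively prefetch headers into a cache."""

    def __init__(self, fetch_header):
        self._fetch_header = fetch_header  # coroutine: height -> header
        self._cache = {}                   # height -> asyncio.Task

    def _prefetch(self, height):
        # Fire off the network request now; the result is awaited later,
        # so by the time the search probes this height it may already be in.
        if height not in self._cache:
            self._cache[height] = asyncio.ensure_future(self._fetch_header(height))

    async def get_header(self, height, lo=0, hi=None):
        if hi is not None:
            # Predict the two possible next probes of a binary search
            # over [lo, hi] and request both ahead of time.
            mid = (lo + hi) // 2
            self._prefetch((lo + mid) // 2)       # next probe if we descend left
            self._prefetch((mid + 1 + hi) // 2)   # next probe if we descend right
        self._prefetch(height)
        return await self._cache[height]
```

This mirrors why binary search benefits the most: each step has only two possible successors, so speculative requests are cheap and almost always useful, whereas the backwards-search access pattern is harder to predict without restructuring its algorithm.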