- interface.tip is the server's tip.
- consider this scenario:
- client has a chain of length 800_000 and is up to date
- client goes offline
- suddenly there is a short reorg
e.g. blocks 799_998, 799_999, 800_000 are reorged
- client was offline for a long time, and finally comes back online again
- server tip is 1_000_000; the tip_header does not connect to the client's local chain
- PREVIOUSLY, before this commit, the client would start a backwards search
- first it asks for header 800_001, which does not connect
- then the client asks for a header at ~600k, which checks out (it matches the local chain)
- the client then does a long binary search to find the forkpoint
- AFTER this commit, the client starts the backwards search
- first it asks for header 800_001, which does not connect
- then the client asks for header 799_999, etc.
- that is, previously, on average, the client did a short backwards search, followed by a long binary search
- now, on average, the client does a longer backwards search, followed by a shorter binary search
- this works much more nicely with the headers_cache
- (and thomasv said the old behaviour was not intentional)
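For illustration, the combined search roughly looks like the sketch below. This is a minimal sketch, not Electrum's actual code: get_header() and connects_locally() are hypothetical stand-ins for fetching a header from the server and checking it against the local chain, and the exact step schedule of the backwards search is an assumption.

    async def find_forkpoint(local_height: int) -> int:
        """Return the first height at which the server's chain diverges from ours."""
        # backwards search: walk back from just above our tip until a header connects
        bad = local_height + 1              # e.g. 800_001, known not to connect
        good = bad
        delta = 2
        while not connects_locally(await get_header(good)):
            bad = good
            good = max(0, good - delta)     # genesis (height 0) always connects
            delta *= 2
        # binary search: narrow the (good, bad) interval down to the exact forkpoint
        while bad - good > 1:
            mid = (good + bad) // 2
            if connects_locally(await get_header(mid)):
                good = mid
            else:
                bad = mid
        return bad                          # e.g. 799_998 in the scenario above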
note: print() statements and stderr logging don't have a consistent printing order.
Either can buffer log lines and flush them later, and the buffers are independent.
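For example (a minimal illustration, unrelated to the repo's code):

    import sys

    print("hello from stdout")                   # may sit in the stdout buffer for a while
    print("hello from stderr", file=sys.stderr)  # stderr is buffered independently
    # When both streams end up in the same terminal or captured test log,
    # the interleaving of the two lines above is not deterministic.
    # Explicit flushing narrows (but does not fully remove) the window:
    sys.stdout.flush()
    sys.stderr.flush()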
Just prior to this commit, test_fork_conflict and test_fork_noconflict were essentially identical copies.
The only difference was that test_fork_conflict set the global blockchain.blockchains,
but this no longer affected its behaviour.
Originally, when this test was added, we had the concept of chain forks conflicting with each other:
we could not handle three-way chain splits. That is, there could only be a single fork branching away
from the main chain at any given height.
see 7221fb3231
However, this restriction was later removed and the code generalised:
141ff99580
After that, the "test_fork_conflict" test no longer made sense.
We try to predict the next headers the interface will ask for,
and request them ahead of time, to be kept in the headers_cache
(a rough sketch of the idea follows the list below).
This saves network latency/round-trips, at the cost of a bit more memory usage
and, in some cases, more bandwidth.
Note that due to PaddedRSTransport.WAIT_FOR_BUFFER_GROWTH_SECONDS,
latency saved here can be longer than "real" network latency.
This speeds up
- binary search greatly,
- backwards search to a small degree
(although not by much, as its algorithm would need to change a bit to become cache-friendly)
- catch-up greatly, if we are <10 blocks behind
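As mentioned above, here is a rough sketch of the prefetching idea. The names (HeaderPrefetcher, fetch_header) are hypothetical, and the simple "window around the requested height" prediction is an assumption used for illustration, not the actual heuristic:

    import asyncio

    class HeaderPrefetcher:
        def __init__(self, fetch_header, window: int = 10):
            self._fetch = fetch_header       # async def fetch_header(height) -> header
            self._window = window
            self.headers_cache = {}          # height -> header
            self._tasks = set()              # keep refs to in-flight prefetch tasks

        async def get_header(self, height: int):
            if height not in self.headers_cache:
                self.headers_cache[height] = await self._fetch(height)
            # speculative part: guess that nearby heights will be asked for next
            # (e.g. by the backwards search, or by catch-up when only a few blocks behind)
            for h in range(max(0, height - self._window), height + self._window):
                if h not in self.headers_cache:
                    task = asyncio.create_task(self._prefetch(h))   # fire and forget
                    self._tasks.add(task)
                    task.add_done_callback(self._tasks.discard)
            return self.headers_cache[height]

        async def _prefetch(self, height: int):
            header = await self._fetch(height)
            self.headers_cache.setdefault(height, header)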
What remains is to speed up catch-up in case we are behind by many thousands of blocks.
That behaviour is left unchanged here. The issue there is that we request chunks sequentially,
so e.g. 1 chunk (2016 blocks) per second.
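For illustration, the current catch-up roughly behaves like the sequential loop below; fetch_chunk and connect_chunk are hypothetical placeholders for requesting a 2016-header chunk from the server and connecting it to the local chain:

    CHUNK_SIZE = 2016

    async def catch_up(local_height: int, server_tip: int, fetch_chunk, connect_chunk):
        height = local_height + 1
        while height <= server_tip:
            chunk = await fetch_chunk(height)   # one full round-trip per 2016 headers
            connect_chunk(chunk)                # validate and append to the local chain
            height += CHUNK_SIZE
    # e.g. being ~100_000 blocks behind means ~50 sequential round-trips,
    # so on the order of a minute if each one takes about a second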