- interface.tip is the server's tip.
- consider scenario:
- client has chain len 800_000, is up to date
- client goes offline
- suddenly there is a short reorg
e.g. blocks 799_998, 799_999, 800_000 are reorged
- client was offline for long time, finally comes back online again
- server tip is 1_000_000, tip_header does not connect to client's local chain
- PREVIOUSLY, before this commit, client would start a backwards search
- first it asks for header 800_001, which does not connect
- then client asks for header ~600k, which connects
- client will do long binary search to find the forkpoint
- AFTER this commit, client starts a backwards search
- first it asks for header 800_001, which does not connect
- then client asks for header 799_999, etc
- that is, previously, on average, the client did a short backwards search, followed by a long binary search
- now, on average, the client does a longer backwards search, followed by a shorter binary search (see the sketch below)
- this works much more nicely with the headers_cache
(- and thomasv said the old behaviour was not intentional)
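For illustration, a minimal self-contained sketch of the two-phase fork-point search (not Electrum's actual code; `connects(h)` stands in for requesting header `h` from the server and checking whether it connects to the local chain):
```python
def find_fork_point(tip: int, connects) -> int:
    """Return the last height at which the server chain connects to ours."""
    # backwards search: step back from the tip with growing steps
    # until some header connects again ("good")
    good, bad, step, h = 0, tip + 1, 1, tip
    while h > 0:
        if connects(h):
            good = h
            break
        bad = h
        h = max(0, h - step)
        step *= 2  # slower step growth => longer backwards search, shorter binary search
    # binary search between the last connecting and first non-connecting height
    while good + 1 < bad:
        mid = (good + bad) // 2
        if connects(mid):
            good = mid
        else:
            bad = mid
    return good

# toy usage: both chains agree up to height 799_997
print(find_fork_point(800_000, lambda h: h <= 799_997))  # -> 799997
```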
note: print() statements and stderr logging don't have a consistent printing order.
Either can buffer log lines and flush them later, and the buffers are independent.
Just prior to this commit, test_fork_conflict and test_fork_noconflict were essentially identical copies.
The only difference was that test_fork_conflict set the global blockchain.blockchains,
but this no longer even affected its behaviour.
Originally, when this test was added, we had the concept of chain forks conflicting with each other:
we could not handle three-way chain-splits. That is, there could only be a single fork branching away
from the main chain at any given height.
see 7221fb3231
However, this restriction was removed and generalised later:
141ff99580
After which the "test_fork_conflict" test did not make sense anymore.
We try to predict the next headers the interface will ask for,
and request them ahead of time, to be kept in the headers_cache.
This saves network latency/round-trips, at the cost of a bit more memory usage
and, in some cases, more bandwidth.
Note that due to PaddedRSTransport.WAIT_FOR_BUFFER_GROWTH_SECONDS,
latency saved here can be longer than "real" network latency.
This speeds up
- binary search greatly,
- backwards search to a small degree
(although not that much, as its algorithm would need some changes to become cache-friendly)
- catch-up greatly, if it's <10 blocks behind
What remains is to speed up catch-up in case we are behind by many thousands of blocks.
That behaviour is left unchanged here. The issue there is that we request chunks sequentially.
So e.g. 1 chunk (2016 blocks) per second.
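As a rough sketch of the prefetching idea (names assumed, not the actual implementation): during binary search between a known-connecting height `good` and a known-non-connecting height `bad`, the request after `mid` is one of two midpoints, so both can be requested speculatively and later awaited from the cache:
```python
import asyncio

class HeadersCacheSketch:
    def __init__(self, fetch_header):
        self._fetch = fetch_header  # coroutine: height -> header
        self._tasks = {}            # height -> in-flight or finished request

    def prefetch(self, heights):
        # fire off requests for headers we predict will be needed soon
        for h in heights:
            if h not in self._tasks:
                self._tasks[h] = asyncio.ensure_future(self._fetch(h))

    async def get(self, height):
        self.prefetch([height])
        return await self._tasks[height]

def predicted_next_heights(good: int, bad: int) -> list:
    # after testing mid, the binary search will ask for one of these two
    mid = (good + bad) // 2
    return [(good + mid) // 2, (mid + bad) // 2]
```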
Notably verifymessage and decrypt(message) were silently ignoring trailing garbage
or inserted non-base64 characters present in signatures/ciphertext.
(both the CLI commands and in the GUI)
I think it is much cleaner and preferable to treat such signatures/ciphertext as invalid.
In fact I find it surprising that base64.b64decode(validate=False) is the default.
Perhaps we should create a helper function for this that sets validate=True, and use that.
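Such a helper could look like this (name hypothetical); with the `validate=False` default, non-alphabet characters are silently discarded, while `validate=True` makes them an error:
```python
import base64

def b64decode_strict(data: str) -> bytes:
    # reject characters outside the base64 alphabet instead of
    # silently dropping them (which validate=False, the default, does)
    return base64.b64decode(data, validate=True)

base64.b64decode("aGVsbG8=!!!")   # -> b'hello' (trailing garbage silently ignored)
b64decode_strict("aGVsbG8=!!!")   # raises binascii.Error
```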
- CURRENT_WALLET is set when a single wallet is loaded in memory, and it
remains set after Electrum stops running.
- If several wallets are loaded at the same time, CURRENT_WALLET is unset,
and RPCs must specify the wallet explicitly (using --wallet for the CLI)
- The fallback to 'default_wallet' essentially only applies when
creating a new wallet file
If the full tx is missing, we should force mempool/confirmed txs to be LOCAL height,
however future txs should not be forced to LOCAL, they should remain FUTURE.
follow-up 197933debf
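In sketch form (the constant values match electrum's address_synchronizer; the surrounding logic is assumed):
```python
TX_HEIGHT_FUTURE = -3
TX_HEIGHT_LOCAL = -2

def effective_height(tx_height: int, have_full_tx: bool) -> int:
    if have_full_tx:
        return tx_height
    if tx_height == TX_HEIGHT_FUTURE:
        return TX_HEIGHT_FUTURE  # future txs stay FUTURE
    return TX_HEIGHT_LOCAL       # mempool/confirmed txs are forced to LOCAL
```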
This new `Transaction.verify_sig_for_txin` function is an instance method of `Transaction` instead of `PartialTransaction`.
It takes a complete txin, a pubkey and a signature, and verifies the signature.
- `get_preimage_script` is renamed to `get_scriptcode_for_sighash` and now effectively has two implementations:
- the old impl became `PartialTxInput.get_scriptcode_for_sighash`
- this assumes we are the ones constructing a spending txin and can have knowledge beyond what will be revealed onchain
- the new impl is in the base class, `TxInput.get_scriptcode_for_sighash`
- this assumes the txin is already "complete", and mimics a consensus-verifier by extracting the required fields
from the already complete witness/scriptSig and the scriptpubkey of the funding utxo
- `serialize_preimage` now does not require a PartialTransaction, it also works on the base class Transaction
-----
I intend to use this for debugging only atm: I noticed (by seeing that bitcoind rejects txs with
`mandatory-script-verify-flag-failed (Signature must be zero for failed CHECK(MULTI)SIG operation)`)
that TxBatcher sometimes creates invalid signatures.
However the txs in question have multiple txins, with some txins containing multiple signatures, and bitcoind does not tell us
which txin/signature is invalid. Knowing which signature is invalid would be a start, and I can now add some temp debug logging
to `serialize_preimage` to compare the message being signed with the message being verified.
As can be seen from the tests, the signature and the pubkey need to be manually extracted from the txin to be verified:
we still don't have a script interpreter so we don't have logic to "verify a txin". However this new code adds logic
to verify a signature for a txin/pubkey combo (which is a small part of an interpreter/verifier).
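A hypothetical usage sketch; the parameter names/order of `verify_sig_for_txin` and the witness accessor are assumptions based on the description above:
```python
tx = Transaction(raw_tx)               # a fully signed transaction
txin = tx.inputs()[0]
# for a p2wpkh input the witness stack is [sig, pubkey]; without a script
# interpreter we extract both manually (accessor name assumed):
sig, pubkey = txin.witness_elements()
ok = tx.verify_sig_for_txin(txin_index=0, pubkey=pubkey, sig=sig)
```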
I noticed certain ln payments become very unreliable. These payments are ~21k sat, sent between gossip-using nodes, over a direct, unannounced channel.
Due to the recent fix https://github.com/spesmilo/electrum/pull/9723, `LNPathFinder.get_shortest_path_hops()` will no longer use channels for the last hop of a route unless they are also passed to it in `my_sending_channels`:
```python
if edge_startnode == nodeA and my_sending_channels: # payment outgoing, on our channel
if edge_channel_id not in my_sending_channels:
continue
```
Conceptually this makes sense, as we only want to send through `my_sending_channels`. However, if the only channel between us and the receiver is a direct channel that we got from the r_tag, and it is not passed in `my_sending_channels`, we can no longer construct a route.
Previously this type of payment worked, as `get_shortest_path_hops()` knew of the direct channel between us and `nodeB` from the invoice's r_tag and then just used this channel as the route, even though it was (often) not contained in `my_sending_channels`.
`my_sending_channels`, passed to `LNWallet.create_routes_for_payment()`, is either a single channel or all our channels, depending on `is_multichan_mpp`:
```python
for sc in split_configurations:
is_multichan_mpp = len(sc.config.items()) > 1
```
This causes unreliable, random behavior, as `LNWallet.suggest_splits()` is supposed to `exclude_single_part_payments` if the amount is > `MPP_SPLIT_PART_MINAMT_SAT` (5000 sat).
As `suggest_splits()` in `mpp_split.py` selects channels randomly, it sometimes doesn't return a single usable configuration: it removes single part splits, and also removes multi part splits if any part is below 10 000 sat:
```python
if target_parts > 1 and config.is_any_amount_smaller_than_min_part_size():
continue
```
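To make the filtering concrete (the constant and helper are stand-ins for the real code):
```python
MIN_PART_SAT = 10_000  # assumed minimum part size

def is_any_amount_smaller_than_min_part_size(parts):
    return any(p < MIN_PART_SAT for p in parts)

# two candidate splits of a ~21k sat payment:
is_any_amount_smaller_than_min_part_size([10_500, 10_500])  # False -> kept
is_any_amount_smaller_than_min_part_size([11_001, 9_999])   # True  -> removed
```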
When no configurations survive, the code falls back to allowing single part payments:
```python
split_configurations = get_splits()
if not split_configurations and exclude_single_part_payments:
exclude_single_part_payments = False
split_configurations = get_splits()
```
Then the payment works as all our channels are passed as `my_sending_channels` to `LNWallet.create_routes_for_payment()`.
However, sometimes this fallback doesn't happen, because a few mpp configurations found in the first iteration of `suggest_splits` have been kept, e.g. [10500, 10500], while most others have been removed for crossing the limit, e.g. [11001, 9999] (which happens sometimes with payments ~20k sat). This makes `suggest_splits` return very few usable channels/configurations (sometimes just one or two, even with way more available channels).
This makes payments in this range unreliable, as we do not retry with newly generated split configurations if the subsequent path finding fails with `NoPathFound()`, and there is no single part configuration that lets the path finding access all channels. This affects not only direct-channel payments, but all gossip payments in this amount range.
There seem to be multiple ways to fix this. I think one simple approach is to just disable `exclude_single_part_payments` if the splitting loop already begins to filter out configs on the second iteration (the first actual split), as this indicates that the amount may be too small to split within the given limits; this prevents the issue of having only a few valid splits returned and never reaching the fallback. However, this also results in increased usage of single part payments.
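In sketch form, the suggested approach could look roughly like this (pure illustration; `propose_configs` is a stand-in and the real `suggest_splits` is structured differently):
```python
import random

def propose_configs(amount_sat, target_parts, n_candidates=8):
    # stand-in for the random splitting done in mpp_split.py
    configs = []
    for _ in range(n_candidates):
        cuts = sorted(random.randrange(amount_sat + 1) for _ in range(target_parts - 1))
        bounds = [0, *cuts, amount_sat]
        configs.append([b - a for a, b in zip(bounds, bounds[1:])])
    return configs

def suggest_splits_sketch(amount_sat, exclude_single_part_payments, min_part_sat=10_000):
    # if configs already get filtered out at the first actual split (2 parts),
    # the amount is likely too small to split within the limits, so stop
    # excluding single part payments up front
    if any(any(p < min_part_sat for p in c) for c in propose_configs(amount_sat, 2)):
        exclude_single_part_payments = False
    kept = []
    for target_parts in (1, 2, 3):
        for config in propose_configs(amount_sat, target_parts):
            if target_parts == 1 and exclude_single_part_payments:
                continue
            if target_parts > 1 and any(p < min_part_sat for p in config):
                continue
            kept.append(config)
    return kept
```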
the update_fee logic for lightning channels was not adapted to anchor
channels, causing us to send update_fee with an eta target of 2 blocks.
This causes force closes when there are mempool spikes, as the fees we
try to update to are a lot higher than e.g. eclair uses. Eclair will
force close if our fee is more than 10x theirs.