If the full tx is missing, we should force mempool/confirmed txs to LOCAL height;
however, future txs should not be forced to LOCAL, they should remain FUTURE.
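A minimal sketch of the intended behavior (illustrative stand-in constants and names, not the actual patch):
```python
TX_HEIGHT_FUTURE = -3  # stand-ins for electrum's TX_HEIGHT_* sentinel values
TX_HEIGHT_LOCAL = -2

def effective_height(stored_height: int, have_full_tx: bool) -> int:
    """If the full tx is missing, demote mempool/confirmed txs to LOCAL,
    but leave FUTURE txs untouched."""
    if have_full_tx:
        return stored_height
    if stored_height == TX_HEIGHT_FUTURE:
        return TX_HEIGHT_FUTURE  # future txs stay FUTURE
    return TX_HEIGHT_LOCAL       # mempool/confirmed txs get demoted
```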
follow-up 197933debf
This new `Transaction.verify_sig_for_txin` function is an instance method of `Transaction` instead of `PartialTransaction`.
It takes a complete txin, a pubkey and a signature, and verifies the signature.
- `get_preimage_script` is renamed to `get_scriptcode_for_sighash` and now effectively has two implementations:
  - the old impl became `PartialTxInput.get_scriptcode_for_sighash`
    - this assumes we are the ones constructing a spending txin and can have knowledge beyond what will be revealed onchain
  - the new impl is in the base class, `TxInput.get_scriptcode_for_sighash`
    - this assumes the txin is already "complete", and mimics a consensus verifier by extracting the required fields
      from the already complete witness/scriptSig and the scriptpubkey of the funding utxo (see the sketch after this list)
- `serialize_preimage` now does not require a PartialTransaction; it also works on the base class Transaction
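For intuition, a hedged sketch of what the base-class extraction amounts to for a p2wpkh input (illustrative, not the actual electrum code):
```python
import hashlib

def hash160(b: bytes) -> bytes:
    return hashlib.new('ripemd160', hashlib.sha256(b).digest()).digest()

def p2wpkh_scriptcode(witness: list, scriptpubkey: bytes) -> bytes:
    # p2wpkh scriptpubkey: OP_0 <20-byte pubkey hash>
    assert scriptpubkey[:2] == bytes([0x00, 0x14])
    pubkey = witness[-1]  # p2wpkh witness stack is [sig, pubkey]
    assert hash160(pubkey) == scriptpubkey[2:22]
    # per BIP-143, the scriptCode is the canonical p2pkh script of the hash:
    # OP_DUP OP_HASH160 <20 bytes> OP_EQUALVERIFY OP_CHECKSIG
    return bytes([0x76, 0xa9, 0x14]) + scriptpubkey[2:22] + bytes([0x88, 0xac])
```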
-----
I intend to use this for debugging only at the moment: I noticed that TxBatcher sometimes creates invalid signatures,
as bitcoind rejects its txs with `mandatory-script-verify-flag-failed (Signature must be zero for failed CHECK(MULTI)SIG operation)`.
However, the txs in question have multiple txins, some of which contain multiple signatures, and bitcoind does not tell us
which txin/signature is invalid. Knowing which signature is invalid would be a start, and I can now also add some temporary
debug logging to `serialize_preimage` to compare the message being signed with the message being verified.
As can be seen from the tests, the signature and the pubkey need to be manually extracted from the txin to be verified:
we still don't have a script interpreter, so we don't have logic to "verify a txin". However, this new code adds logic
to verify a signature for a txin/pubkey combo (which is a small part of an interpreter/verifier).
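A hedged usage sketch, modeled on the description above; the placeholder tx hex, the `witness_elements` accessor, and the exact argument names are assumptions, not the confirmed API:
```python
from electrum.transaction import Transaction

raw_tx_hex = "02000000..."  # placeholder: some complete, signed tx
tx = Transaction(raw_tx_hex)
txin = tx.inputs()[0]
# for a p2wpkh input the witness stack is [sig, pubkey]; other script
# types need their own extraction logic -- there is no script interpreter
sig, pubkey = txin.witness_elements()  # hypothetical accessor
assert tx.verify_sig_for_txin(txin, pubkey=pubkey, sig=sig)
```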
mpp split part configs
I noticed certain ln payments become very unreliable. These payments are ~21k sat, sent from one gossip node to another, over a direct, unannounced channel.
Due to the recent fix https://github.com/spesmilo/electrum/pull/9723, `LNPathFinder.get_shortest_path_hops()` no longer uses channels for the last hop of a route unless they are also passed to it in `my_sending_channels`:
```python
if edge_startnode == nodeA and my_sending_channels: # payment outgoing, on our channel
if edge_channel_id not in my_sending_channels:
continue
```
Conceptually this makes sense, as we only want to send through `my_sending_channels`. However, if the only channel between us and the receiver is a direct channel that we got from the r_tag, and it is not passed in `my_sending_channels`, no route can be constructed anymore.
Previously this type of payment worked, as `get_shortest_path_hops()` knew of the direct channel between us and `nodeB` from the invoice's r_tag and then simply used this channel as the route, even though it was (often) not contained in `my_sending_channels`.
`my_sending_channels`, passed to `LNWallet.create_routes_for_payment()`, is either a single channel or all our channels, depending on `is_multichan_mpp`:
```python
for sc in split_configurations:
is_multichan_mpp = len(sc.config.items()) > 1
```
This causes the unreliable, seemingly random behavior, as `LNWallet.suggest_splits()` is supposed to `exclude_single_part_payments` if the amount is > `MPP_SPLIT_PART_MINAMT_SAT` (5000 sat).
As `suggest_splits()` in `mpp_split.py` selects channels randomly and then filters the resulting configurations, it sometimes doesn't return a single usable configuration: it removes single part splits, and also removes multi part splits if any part is below 10,000 sat:
```python
if target_parts > 1 and config.is_any_amount_smaller_than_min_part_size():
continue
```
This will result in a fallback to allow single part payments:
```python
split_configurations = get_splits()
if not split_configurations and exclude_single_part_payments:
exclude_single_part_payments = False
split_configurations = get_splits()
```
Then the payment works, as all our channels are passed as `my_sending_channels` to `LNWallet.create_routes_for_payment()`.
However, sometimes this fallback doesn't happen: a few mpp configurations found in the first iteration of `suggest_splits` are kept, e.g. [10500, 10500], while at the same time most others are removed for crossing the limit, e.g. [11001, 9999] (which happens regularly with payments of ~20k sat). This makes `suggest_splits` return very few usable channels/configurations (sometimes just one or two, even with far more channels available).
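A tiny worked example of this filtering (hypothetical candidate splits; the 10,000 sat minimum part size is hardcoded for illustration):
```python
# candidate 2-part splits for a ~21k sat payment, as suggest_splits might
# randomly produce them across different channel pairs
candidates = [[10_500, 10_500], [11_001, 9_999], [12_000, 9_000]]
MIN_PART_SAT = 10_000
kept = [parts for parts in candidates
        if not (len(parts) > 1 and min(parts) < MIN_PART_SAT)]
print(kept)  # [[10500, 10500]] -- only one usable configuration survives
```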
This makes payments in this range unreliable, as we do not retry generating new split configurations if the subsequent path finding fails with `NoPathFound()`, and there is no single part configuration that would allow the path finding to access all channels. This affects not only direct channel payments, but all gossip payments in this amount range.
There seem to be multiple ways to fix this. I think one simple approach is to just disable `exclude_single_part_payments` if the splitting loop already begins to sort out configs on the second iteration (the first split), as this indicates that the amount may be too small to split within the given limits; it also prevents the issue of having only a few valid splits returned while never reaching the fallback. However, this also results in increased usage of single part payments.
the update_fee logic for lightning channels was not adapted to anchor
channels, causing us to send update_fee with an eta target of 2 blocks.
This causes force closes when there are mempool spikes, as the fees we
try to update to are a lot higher than e.g. what eclair uses. Eclair will
force close if our fee is more than 10x their fee.
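A hedged sketch of the idea (names and targets are assumptions, not the actual patch): with anchor channels the commitment tx can be fee-bumped via CPFP at broadcast time, so update_fee can use a much slower confirmation target.
```python
def get_update_fee_target_feerate(chan, fee_estimator):
    # hypothetical helper; `has_anchors` and `eta_target_to_fee` stand in
    # for whatever the real channel/fee-estimator API provides
    if chan.has_anchors():
        # anchors: fees get topped up via CPFP when we actually broadcast,
        # so a slow (low-fee) estimate is sufficient
        return fee_estimator.eta_target_to_fee(num_blocks=25)
    # legacy channels: the commitment tx must confirm with its built-in fee
    return fee_estimator.eta_target_to_fee(num_blocks=2)
```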
* Timelock Recovery Extension
* Timelock Recovery Extension tests
* Use fee_policy instead of fee_est
Following 3f327eea07
* making tx with base_tx
Following ab14c3e138
* move plugin metadata from __init__.py to manifest.json
* removing json large indentation
* timelock recovery icon
* timelock recovery plugin: fix typos
* timelock recovery plugin: use menu instead of status bar.
The status bar should be used for displaying status. For example,
hardware wallet plugins use it because their connection status is
changing and needs to be displayed.
* timelock recovery plugin: ask for password only once
* timelock recovery plugin: ask whether to create cancellation tx in the initial window
* remove unnecessary code.
(calling run_hook from a plugin does not make sense)
* show alert and cancellation address at the end.
skip unnecessary dialog
* timelock recovery plugin: do not show transactions one by one.
Set the fee policy in the first dialog, and use the same fee
policy for all txs. We could add 3 sliders to this dialog if
different fees are needed, but I don't think this is really
necessary.
* simplify default_wallet for tests
All the lightning-related stuff is irrelevant for
this plugin.
Also use a different destination address
for the test recovery-plan (an address
that does not belong to the same wallet).
* Fee selection should be above fee calculation
also show fee calculation result with "fee: " label.
* hide Sign and Broadcast buttons during view
* recalculate cancellation transaction
The checkbox could be clicked after the fee rate
has been set. Calling update_transactions() may seem
inefficient, but it's the simplest way to avoid such edge-cases.
Also set the context's cancellation transaction to None when the
checkbox is unset.
* use context.cancellation_tx instead of checkbox value
context.cancellation_tx will be None iff the checkbox was unset
* hide cancellation address if not used
* init monospace font correctly
* timelock recovery plugin: add input info at signing time.
Fixes trezor exception: 'Missing previous tx'
* timelock recovery: remove unused parameters
* avoid saving the tx in a separate var
fixing the assertions
* avoid caching recovery & cancellation inputs
* timelock recovery: separate help window from agreement.
move the agreement to the end of the flow, rephrase it
* do not cache alert_tx_outputs
* do not crash when not enough funds
not enough funds can happen when multiple addresses are specified
in payto_e, with an amount larger than the wallet has - so we set
the payto_e color to red.
It can also happen when the user selects a really high fee, but
this is not common in a "recovery" wallet with significant funds.
* If files not saved - ask before closing
* move the checkbox above the save buttons
people read the text from top to bottom and may not understand
why the buttons are disabled
---------
Co-authored-by: f321x <f321x@tutamail.com>
Co-authored-by: ThomasV <thomasv@electrum.org>
This change modifies create_trampoline_onion to only include as many
available r_tags as there is space left in the trampoline onion payload.
Previously we tried to include all passed invoice r_tags of legacy
trampoline payments in the payload, which caused a user-facing
exception and payment failure, as the onion can only store a max of 400
bytes.
A single single-hop r_tag is around 52 bytes, and the payload
without r_tags is already at ~280 bytes, so usually there is enough
space for 2 r_tags.
The implementation shuffles the r_tags on each call,
so the payment will try different route hints across attempts (fee level
increases or user retries).
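A simplified sketch of the trimming logic, using the approximate sizes from the measurements below (the real code presumably measures the serialized payload; names and constants here are illustrative):
```python
import random

MAX_ONION_PAYLOAD = 400  # max bytes the trampoline onion payload can hold
BASE_PAYLOAD_SIZE = 280  # approx. payload size without any r_tags
R_TAG_SIZE = 52          # approx. size of one single-hop r_tag

def select_r_tags(r_tags: list) -> list:
    r_tags = list(r_tags)
    random.shuffle(r_tags)  # try different route hints on each attempt
    max_tags = (MAX_ONION_PAYLOAD - BASE_PAYLOAD_SIZE) // R_TAG_SIZE  # == 2
    return r_tags[:max_tags]
```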
I have logged the following byte sizes of the trampoline onion with 2
trampoline onion hops and varying numbers of r_tags:
```
3 rtags:
payload size [0]: 113 (hop size: 81)
payload size [1]: 440 (hop size: 295) ( 52 bytes/rtag )
payload size [2]: 550 (hop size: 78)
2 rtags:
payload size [0]: 113 (hop size: 81)
payload size [1]: 386 (hop size: 241) ( 52 bytes/rtag )
payload size [2]: 496 (hop size: 78)
1 rtag:
payload size [0]: 113 (hop size: 81)
payload size [1]: 334 (hop size: 189) ( 52 bytes/rtag )
payload size [2]: 444 (hop size: 78)
0 rtags:
payload size [0]: 113 (hop size: 81)
payload size [1]: 282 (hop size: 137)
payload size [2]: 392 (hop size: 78)
```
As can be seen in the data, when using 2 trampoline hops there is not enough
space for even a single r_tag, which is why this option is being removed
too.
follow-up 70d1e1170e
```
=============================== warnings summary ===============================
tests/test_util.py::TestUtil::test_custom_task_factory
/tmp/cirrus-ci-build/tests/test_util.py:504: RuntimeWarning: coroutine 'TestUtil.test_custom_task_factory.<locals>.foo' was never awaited
self.assertEqual(foo().__qualname__, task.get_coro().__qualname__)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.
```
We added some code in 0b3a283586
to explicitly hold strong refs for all tasks/futures. At the time I was uncertain if that also solves
GC issues with asyncio.run_coroutine_threadsafe.
ref https://github.com/spesmilo/electrum/pull/9608#issuecomment-2703681663
Looks like it does. run_coroutine_threadsafe *is* going through the custom task factory.
See the unit test.
The somewhat confusing thing is that we need a few event loop iterations for the task factory to run,
due to how run_coroutine_threadsafe is implemented. And also, the task that we will hold as strong ref
in the global set is not the concurrent.futures.Future that run_coroutine_threadsafe returns.
So this commit simply "fixes" the unit test so that it showcases this, and removes related, older plumbing
from util.py that we now know is no longer needed because of this.
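A minimal, standalone sketch of the pattern (illustrative, not electrum's actual code): a task factory that holds a strong reference to every task until it is done, and a coroutine submitted from another thread via run_coroutine_threadsafe going through it.
```python
import asyncio
import threading

_tasks = set()

def strongref_task_factory(loop, coro, **kwargs):
    task = asyncio.Task(coro, loop=loop, **kwargs)
    _tasks.add(task)                        # strong ref, prevents GC mid-flight
    task.add_done_callback(_tasks.discard)
    return task

loop = asyncio.new_event_loop()
loop.set_task_factory(strongref_task_factory)
threading.Thread(target=loop.run_forever, daemon=True).start()

async def work():
    await asyncio.sleep(0.01)
    return 42

# Submitted from another thread: run_coroutine_threadsafe schedules the coro
# via ensure_future on the target loop, so it does go through the factory --
# but only after a few event loop iterations, and the Task stored in `_tasks`
# is a different object from the concurrent.futures.Future returned here.
fut = asyncio.run_coroutine_threadsafe(work(), loop)
assert fut.result() == 42
loop.call_soon_threadsafe(loop.stop)
```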
```
$ export ELECTRUM_LINTERS=E9,E101,E129,E273,E274,E703,E71,E722,F5,F6,F7,F8,W191,W29,B
$ export ELECTRUM_LINTERS_IGNORE=B007,B009,B010,B019,B036,F541,F841
$ flake8 . --count --select="$ELECTRUM_LINTERS" --ignore="$ELECTRUM_LINTERS_IGNORE" --show-source --statistics --exclude "*_pb2.py,electrum/_vendor/"
./electrum/commands.py:98:1: F811 redefinition of unused 'format_satoshis' from line 48
def format_satoshis(x):
^
./electrum/commands.py:437:9: F811 redefinition of unused 'Mnemonic' from line 62
from .mnemonic import Mnemonic
^
./electrum/gui/qt/wizard/wallet.py:37:5: F811 redefinition of unused 'Daemon' from line 14
from electrum.daemon import Daemon
^
./electrum/lntransport.py:14:1: F811 redefinition of unused 'Optional' from line 12
from typing import NamedTuple, List, Tuple, Mapping, Optional, TYPE_CHECKING, Union, Dict, Set, Sequence
^
./electrum/lntransport.py:14:1: F811 redefinition of unused 'TYPE_CHECKING' from line 12
from typing import NamedTuple, List, Tuple, Mapping, Optional, TYPE_CHECKING, Union, Dict, Set, Sequence
^
./electrum/plugin.py:966:13: F811 redefinition of unused 'hid' from line 593
import hid
^
./electrum/plugin.py:1040:13: F811 redefinition of unused 'hid' from line 593
import hid
^
./electrum/util.py:44:1: F811 redefinition of unused 'json' from line 26
import json
^
./electrum/util.py:46:1: F811 redefinition of unused 'NamedTuple' from line 29
from typing import NamedTuple, Optional
^
./electrum/util.py:46:1: F811 redefinition of unused 'Optional' from line 29
from typing import NamedTuple, Optional
^
./electrum/util.py:1456:56: F811 redefinition of unused 'traceback' from line 34
async def __aexit__(self, exc_type, exc_value, traceback):
^
./electrum/wallet_db.py:536:9: F811 redefinition of unused 'LOCAL' from line 46
LOCAL = 1
^
./electrum/wallet_db.py:537:9: F811 redefinition of unused 'REMOTE' from line 46
REMOTE = -1
^
./tests/test_bitcoin.py:28:1: F811 redefinition of unused 'bitcoin' from line 9
from electrum import crypto, constants, bitcoin
^
./tests/test_txbatcher.py:11:1: F811 redefinition of unused 'Transaction' from line 7
from electrum.transaction import Transaction, PartialTxInput, PartialTxOutput, TxOutpoint
^
./tests/test_wallet_vertical.py:20:1: F811 redefinition of unused 'Transaction' from line 10
from electrum.transaction import Transaction, PartialTxOutput, tx_from_any, Sighash
^
16 F811 redefinition of unused 'format_satoshis' from line 48
16
```
With `txbatcher.SLEEP_INTERVAL = 0.01`, on my laptop, the batcher called try_broadcasting() even before next_tx() had effectively awaited _tx_event.
This resulted in the test failing due to a timeout.
Basically, if txbatcher.SLEEP_INTERVAL was too low, the 2-4 event loop iterations needed to reach the await on _tx_event.wait() took longer than that interval.
(note that the exact number of event loop iterations needed depends on the python version and the OS)
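For illustration, a standalone sketch of the underlying timing effect: a freshly created task needs a few loop iterations before it is actually blocked on Event.wait() (the iteration count here is an assumption that varies by python version/OS):
```python
import asyncio

async def main():
    ev = asyncio.Event()
    task = asyncio.create_task(ev.wait())
    # the task is scheduled but has not started executing yet; give the
    # scheduler a handful of zero-delay iterations so it reaches wait()
    for _ in range(4):
        await asyncio.sleep(0)
    assert not task.done()  # now genuinely blocked on the event
    ev.set()
    await task

asyncio.run(main())
```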