Often when the wallet creates a tx, the flow is:
- create unsigned tx
- sign tx
- broadcast tx, but don't save it in history
- server sends notification that status of a subscribed address changed
- client calls scripthash.get_history
- client sees txid in scripthash.get_history response
- client calls blockchain.transaction.get to request missing tx
Instead, now when we broadcast a tx on an interface, we cache that tx *for that interface*,
and just before calling blockchain.transaction.get, we look it up in the cache.
This will often save a network request.
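A minimal sketch of the idea (not Electrum's actual code; `Interface`, `broadcast_transaction` and `get_transaction` are illustrative names): each interface keeps a small cache of the txs it broadcast, and consults it before sending a blockchain.transaction.get request.
```
from typing import Dict


class Interface:
    def __init__(self) -> None:
        # txid -> raw tx, filled when we broadcast through this interface
        self._broadcast_tx_cache: Dict[str, str] = {}

    async def broadcast_transaction(self, txid: str, raw_tx: str) -> None:
        # ... send via blockchain.transaction.broadcast ...
        self._broadcast_tx_cache[txid] = raw_tx

    async def get_transaction(self, txid: str) -> str:
        # Serve from the per-interface cache if possible; this often saves
        # one network round-trip for txs we broadcast ourselves.
        cached = self._broadcast_tx_cache.get(txid)
        if cached is not None:
            return cached
        return await self._request_transaction_from_server(txid)

    async def _request_transaction_from_server(self, txid: str) -> str:
        # ... blockchain.transaction.get request to the server ...
        raise NotImplementedError
```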
Adds a unit test to `test_wallet_vertical.py` verifying that the wallet does not use
the existing ln reserve utxo as a tx input if a reserve is still required (as
opposed to using it as an input and creating a new reserve as an output).
Adds a `closest_htlc_expiry_height` value to the `check_hold_invoice` cli command response.
This makes it possible to see the next absolute expiry height of the pending htlcs
of a payment. Note that htlcs will be failed before the actual expiry
height (if block_height + 144 > htlc.cltv_abs).
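A hedged sketch of how such a value could be computed; `pending_htlcs` and the `cltv_abs` attribute follow the naming used above, but the actual lnworker data structures are not reproduced here.
```
from typing import Iterable, Optional


def closest_htlc_expiry_height(pending_htlcs: Iterable) -> Optional[int]:
    """Return the lowest absolute CLTV expiry height among the pending htlcs."""
    expiries = [htlc.cltv_abs for htlc in pending_htlcs]
    return min(expiries) if expiries else None
```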
Adds an additional value to the `check_hold_invoice` cli command response: `invoice_amount_sat`, which contains the requested amount of the hold invoice.
Co-authored-by: ghost43 <somber.night@protonmail.com>
The cli command `check_hold_invoice` incorrectly assumes that
`lnworker.is_accepted_mpp(payment_hash)` is true for settled invoices.
However, it is not, as the received mpp entries are removed from
`lnworker.received_mpp_htlcs` shortly after the preimage is added to
lnworker (after the htlcs have been removed from the channel).
Also renames `amount_sat` in the `check_hold_invoice` response to
`amount_sat_received` to make it more obvious that this is the currently
received amount rather than the amount the invoice for `payment_hash` was
created with.
We often call str.format() on translated strings.
E.g. `_("time left: {} seconds").format(t1)`
If the translated string has a different format syntax, this can raise at runtime.
This PR adds some runtime checks that try to ensure the source string and the translated string
have a similar format syntax. If the checks fail, `_()` will "reject" the translation by
returning the source string.
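An illustrative sketch of the kind of check this describes (not the actual implementation): compare the str.format() placeholders of the source and the translated string, and fall back to the source string if they differ or cannot be parsed.
```
import string


def _format_fields(s: str) -> list:
    # Collect the field names of all "{...}" placeholders in s.
    return sorted(field for _, field, _, _ in string.Formatter().parse(s)
                  if field is not None)


def translate(source: str, translated: str) -> str:
    try:
        if _format_fields(source) != _format_fields(translated):
            return source  # "reject" the translation: placeholders differ
    except ValueError:
        return source  # the translated string has invalid format syntax
    return translated
```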
fixes https://github.com/spesmilo/electrum/issues/10010
ref https://github.com/spesmilo/electrum/issues/10007#issue-3203378250
Allowing hold invoices to be created by providing just a payment hash,
instead of requiring the preimage right from the beginning, enables additional
use cases where the recipient doesn't have access to the preimage when
creating the invoice.
"type_=float" behaves a bit weirdly. Was kinda broken before, still not fully "fixed" here.
With this commit, if used together with convert_setter, it at least behaves in a sane way.
```
$ ./run_electrum -o setconfig timeout 10
1.16 | E | __main__ | error running command (without daemon)
Traceback (most recent call last):
  File "/home/user/wspace/electrum/./run_electrum", line 593, in handle_cmd
    result = fut.result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
  File "/home/user/wspace/electrum/./run_electrum", line 268, in run_offline_command
    result = await func(*args, **kwargs)
  File "/home/user/wspace/electrum/electrum/commands.py", line 194, in func_wrapper
    return await func(*args, **kwargs)
  File "/home/user/wspace/electrum/electrum/commands.py", line 408, in setconfig
    self._setconfig(key, value)
  File "/home/user/wspace/electrum/electrum/commands.py", line 398, in _setconfig
    cv.set(value)
  File "/home/user/wspace/electrum/electrum/simple_config.py", line 126, in set
    self._config_var._set_config_value(self._config, value, save=save)
  File "/home/user/wspace/electrum/electrum/simple_config.py", line 89, in _set_config_value
    raise ValueError(
ValueError: ConfigVar.set type-check failed. key='timeout'. type=<class 'float'>. value=10
```
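A rough sketch of the resulting behaviour (simplified; not the exact ConfigVar implementation): with a convert_setter, the incoming value is coerced before the strict isinstance() type-check, so `setconfig timeout 10` stores 10.0 instead of raising.
```
class ConfigVarSketch:
    def __init__(self, key, *, type_, convert_setter=None):
        self.key = key
        self.type_ = type_
        self.convert_setter = convert_setter

    def set(self, value):
        # Convert first (if a converter is given), then type-check.
        if self.convert_setter is not None:
            value = self.convert_setter(value)
        if value is not None and not isinstance(value, self.type_):
            raise ValueError(
                f"type-check failed. key={self.key!r}. type={self.type_}. value={value!r}")
        return value  # the real code would persist this to the config


timeout = ConfigVarSketch('timeout', type_=float,
                          convert_setter=lambda v: float(v) if v is not None else None)
assert timeout.set(10) == 10.0  # coerced instead of raising
```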
- interface.tip is the server's tip.
- consider scenario:
- client has chain len 800_000, is up to date
- client goes offline
- suddenly there is a short reorg
e.g. blocks 799_998, 799_999, 800_000 are reorged
- client was offline for a long time, finally comes back online again
- server tip is 1_000_000, tip_header does not connect to client's local chain
- PREVIOUSLY before commit, client would start backwards search
- first it asks for header 800_001, which does not connect
- then client asks for header ~600k, which connects
- client will do long binary search to find the forkpoint
- AFTER commit, client starts backwards search
- first it asks for header 800_001, which does not connect
- then client asks for header 799_999, etc
- that is, previously, on average, the client did a short backwards search, followed by a long binary search
- now, on average, the client does a longer backwards search, followed by a shorter binary search (see the sketch after this list)
- this works much more nicely with the headers_cache
- (and thomasv said the old behaviour was not intentional)
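A purely illustrative sketch (not Electrum's code) of the behavioural difference: walking back one header at a time narrows the interval the subsequent binary search has to cover, which is ideal for short reorgs.
```
def backwards_search(connects, tip: int) -> int:
    """Walk back from `tip` until a server header that connects to our chain is found.

    `connects(height)` is an assumed callback that fetches the server's header
    at `height` and checks whether it connects to the local chain.
    """
    height = tip
    while height > 0 and not connects(height):
        height -= 1  # small steps: a short reorg is found after a few requests
    return height  # the binary search then only has to cover (height, tip]
```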
note: print() statements and stderr logging don't have a consistent printing order.
Either can buffer log lines and flush them later, and the buffers are independent.
Just prior to this commit, test_fork_conflict and test_fork_noconflict were essentially identical copies.
The only diff was that test_fork_conflict set the global blockchain.blockchains,
but this was not even affecting its behaviour anymore.
Originally, when this test was added, we had the concept of chain forks conflicting with each other:
we could not handle three-way chain-splits. That is, there could only be a single fork forking away
from the main chain at any given height.
see 7221fb3231
However, this restriction was later removed and the code generalised:
141ff99580
After that, the `test_fork_conflict` test no longer made sense.
We try to predict the next headers the interface will ask for,
and request them ahead of time, to be kept in the headers_cache.
This saves network latency/round-trips, in exchange for a bit more memory usage
and, in some cases, more bandwidth.
Note that due to PaddedRSTransport.WAIT_FOR_BUFFER_GROWTH_SECONDS,
the latency saved here can be longer than the "real" network latency.
This speeds up
- binary search greatly,
- backwards search to a small degree
(although not by much, as its algorithm would need to change a bit to become cache-friendly)
- catch-up greatly, if we are <10 blocks behind
What remains is to speed up catch-up in case we are behind by many thousands of blocks.
That behaviour is left unchanged here. The issue there is that we request chunks sequentially,
so e.g. 1 chunk (2016 blocks) per second.
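A rough sketch of the prefetching idea described above (the names here are assumptions, not Electrum's actual API): when the chain-sync code asks for a header, also fire off requests for the heights it is expected to ask for next, and keep the pending results in a cache.
```
import asyncio
from typing import Dict, Iterable


class HeadersCacheSketch:
    def __init__(self, fetch_header) -> None:
        # fetch_header(height) is an async callable doing the actual
        # blockchain.block.header request to the server.
        self._fetch_header = fetch_header
        self._pending: Dict[int, asyncio.Task] = {}

    def prefetch(self, heights: Iterable[int]) -> None:
        # Start requests for the predicted heights without awaiting them.
        for h in heights:
            if h not in self._pending:
                self._pending[h] = asyncio.create_task(self._fetch_header(h))

    async def get_header(self, height: int):
        # If the prediction was right, the response is already in flight (or
        # done) and we avoid a fresh round-trip; otherwise fetch it now.
        task = self._pending.pop(height, None)
        if task is not None:
            return await task
        return await self._fetch_header(height)
```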