disables the fee slider in the swap dialog for reverse swaps as the tx
fee for claiming is not configurable by the user. Also replaces calls to
`sm.get_swap_tx_fee()` with `sm.get_fee_for_txbatcher()` as this is the
correct fee estimate for claim transactions, instead of the estimate derived
from the config fee ETA that `get_swap_tx_fee()` uses.
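A minimal sketch of the dialog-side logic this implies; aside from the two fee getters named above, the widget and helper names here are hypothetical:
```
# sketch only: dialog.fee_slider / dialog.show_fee are hypothetical,
# only get_swap_tx_fee() / get_fee_for_txbatcher() come from the change above
def update_swap_fee_ui(dialog, sm, is_reverse: bool) -> None:
    if is_reverse:
        # the claim tx fee is not user-configurable, so disable the slider
        dialog.fee_slider.setEnabled(False)
        fee_sat = sm.get_fee_for_txbatcher()  # fee estimate used for claim txs
    else:
        dialog.fee_slider.setEnabled(True)
        fee_sat = sm.get_swap_tx_fee()        # forward swaps keep the config fee
    dialog.show_fee(fee_sat)
```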
We try to predict the next headers the interface will ask for,
and request them ahead of time, to be kept in the headers_cache.
This saves network latency/round-trips, for a bit more memory usage
and in some cases for more bandwidth.
Note that due to PaddedRSTransport.WAIT_FOR_BUFFER_GROWTH_SECONDS,
latency saved here can be longer than "real" network latency.
This speeds up
- binary search greatly,
- backwards search to a small degree
(though not by much, as its algorithm would need some changes to become cache-friendly)
- catch-up greatly, if it's <10 blocks behind
What remains is to speed up catch-up in case we are behind by many thousands of blocks.
That behaviour is left unchanged here. The issue there is that we request chunks sequentially.
So e.g. 1 chunk (2016 blocks) per 1 second.
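For illustration, a self-contained sketch of the prefetch idea (not the actual headers_cache code): when a height is requested during a binary search over `[lo, hi]`, also request the next few midpoints the search might probe, so they arrive in a single round-trip:
```
# sketch of prefetching likely binary-search probes into a headers cache;
# fetch_headers is a stand-in for the real network request
from typing import Dict, Set

class HeadersCacheSketch:
    def __init__(self, fetch_headers):
        # fetch_headers: callable taking an iterable of heights,
        # returning {height: raw_header}
        self._fetch_headers = fetch_headers
        self._cache: Dict[int, bytes] = {}

    def _predict_binary_search_probes(self, lo: int, hi: int, depth: int = 3) -> Set[int]:
        # the midpoints the next few binary-search steps over [lo, hi] could ask for
        probes: Set[int] = set()
        frontier = [(lo, hi)]
        for _ in range(depth):
            next_frontier = []
            for a, b in frontier:
                if a > b:
                    continue
                mid = (a + b) // 2
                probes.add(mid)
                next_frontier += [(a, mid - 1), (mid + 1, b)]
            frontier = next_frontier
        return probes

    def get_header(self, height: int, lo: int, hi: int) -> bytes:
        if height not in self._cache:
            wanted = {height} | self._predict_binary_search_probes(lo, hi)
            missing = [h for h in wanted if h not in self._cache]
            self._cache.update(self._fetch_headers(missing))  # one round-trip
        return self._cache[height]
```
With this, a binary search over the chain hits the cache for most of its probes instead of paying one network round-trip per probe.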
qml used the user's configured fee policy for the forward swap onchain funding
tx, which can be too high or too low depending on what transactions the user
previously did with the wallet. Setting it to eta:2 ensures that the
funding tx pays a sane fee.
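A hedged sketch of the intent; the `eta:2` descriptor is the one named above, while the `FeePolicy` import path and the wallet call shape are assumptions about Electrum's API:
```
# sketch: pin the forward-swap funding tx fee to a ~2-block ETA estimate,
# independent of whatever fee policy the user last configured.
# The import path and make_unsigned_transaction() kwargs are assumed here.
from electrum.fee_policy import FeePolicy

def make_forward_swap_funding_tx(wallet, outputs):
    return wallet.make_unsigned_transaction(
        outputs=outputs,
        fee_policy=FeePolicy('eta:2'),
    )
```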
- makes it consistent between the qml and qt guis that now both show the hex pubkey
- previously qml was showing npub
- don't truncate to first 10 chars, as that's still easy to bruteforce
- the qt gui has space to display the full pubkey (64 hex chars)
and can use the TreeWidget's columns to truncate as needed
- qml has less space, so truncate to 32 hex chars there (128 bits should be enough against bruteforce)
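A tiny sketch of that display rule (the function name is made up):
```
# sketch: qt shows the full 64-hex-char pubkey; qml truncates to the first
# 32 hex chars (128 bits), which is still infeasible to bruteforce, unlike
# a 10-char prefix
def format_pubkey_for_qml(pubkey_hex: str) -> str:
    assert len(pubkey_hex) == 64, "expected a 32-byte pubkey as 64 hex chars"
    return pubkey_hex[:32] + "..."
```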
changes qeswaphelper to share a single, long-lived transport instance
instead of opening new transports to do swaps and fetch offers.
This allows offers to be fetched continuously, so events which get returned
later by slow relays don't get missed and the fee values stay updated.
Also fixes a race causing the list to miss some swapservers, as the
current implementation fetches only until
`swap_manager.is_initialized()` is set, which will get set as soon as an
event of the configured swapserver is received. So if the event of the
configured swapserver is received first, all server events coming in
after it would be ignored.
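A self-contained sketch of the single long-lived transport pattern described here (not the actual QESwapHelper code; the transport interface is assumed):
```
# keep one transport open for the lifetime of the helper and keep consuming
# offer events from it, instead of opening a new transport per swap/offer
# fetch and stopping at the first matching server
class OfferFetcherSketch:
    def __init__(self, open_transport):
        # open_transport: async callable returning an object whose
        # get_offers() is an async iterator of offer events (assumed interface)
        self._open_transport = open_transport
        self.offers = {}  # server pubkey -> latest offer

    async def run(self):
        transport = await self._open_transport()
        async for offer in transport.get_offers():
            # do not stop once the configured server's event arrives: slow
            # relays may deliver other servers later, and fees change over time
            self.offers[offer.pubkey] = offer
```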
trezor==0.13.10 pulls in new dep "slip10", which relies on importlib magic
see 19561f0429/slip10/__init__.py (L6)
```
10.13 | E | plugins.trezor.trezor | error importing trezor plugin deps
Traceback (most recent call last):
File "importlib/metadata/__init__.py", line 397, in from_name
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "electrum/plugins/trezor/trezor.py", line 29, in <module>
from .clientbase import TrezorClientBase, RecoveryDeviceInputMethod
File "PyInstaller/loader/pyimod02_importers.py", line 450, in exec_module
File "electrum/plugins/trezor/clientbase.py", line 18, in <module>
import trezorlib.device
File "PyInstaller/loader/pyimod02_importers.py", line 450, in exec_module
File "trezorlib/device.py", line 27, in <module>
File "PyInstaller/loader/pyimod02_importers.py", line 450, in exec_module
File "slip10/__init__.py", line 6, in <module>
File "importlib/metadata/__init__.py", line 889, in version
File "importlib/metadata/__init__.py", line 862, in distribution
File "importlib/metadata/__init__.py", line 399, in from_name
importlib.metadata.PackageNotFoundError: No package metadata was found for slip10
```
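For context, a sketch of what slip10 effectively does at import time and why it fails when frozen: `importlib.metadata.version()` needs the package's `.dist-info` metadata, which PyInstaller does not bundle unless told to:
```
# roughly what slip10/__init__.py does at import time (per the traceback):
# it looks up its own installed version via importlib.metadata, which requires
# the package's .dist-info metadata to be present in the bundle
import importlib.metadata

try:
    version = importlib.metadata.version("slip10")
except importlib.metadata.PackageNotFoundError:
    # the failure seen in the frozen binary: PyInstaller bundled the module
    # but not its .dist-info metadata
    version = None

# a common fix (hedged: not necessarily what our build scripts do) is to
# bundle the metadata explicitly via a PyInstaller hook or the .spec file:
#   from PyInstaller.utils.hooks import copy_metadata
#   datas = copy_metadata("slip10")
```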
note: Python 3.12 just transitioned to security-only status,
so we can't bump the Python version in the win/mac binaries without switching to 3.13
(as we don't compile our own CPython for those)
note: some files have two versions in them, e.g.:
```
assert CffiRecipe._version == "1.15.1"
class CffiRecipePinned(util.InheritedRecipeMixin, CffiRecipe):
version = "1.17.1"
```
The assert is left there as I think it might be useful to get a failure if we rebase p4a
and the upstream recipe version changes. There might be substantial changes in the upstream
recipe that we need to adapt to. In the happy case, if we rebase p4a, we just have to manually
update these asserts to the new versions at that time.