ref https://github.com/spesmilo/electrum/pull/9189
```
Traceback (most recent call last):
File "/home/user/wspace/electrum/electrum/gui/qt/my_treeview.py", line 166, in on_commitData
self.tv.on_edited(idx, edit_key=edit_key, text=new_text)
File "/home/user/wspace/electrum/electrum/gui/qt/history_list.py", line 699, in on_edited
self.hm.update_fiat(index)
File "/home/user/wspace/electrum/electrum/gui/qt/history_list.py", line 371, in update_fiat
self.dataChanged.emit(idx, idx, [Qt.ItemDataRole.DisplayRole, Qt.ForegroundRole])
AttributeError: type object 'Qt' has no attribute 'ForegroundRole'
```
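Under PyQt6 the item data roles are only reachable through the scoped enum Qt.ItemDataRole; the traceback above shows the emit call already using Qt.ItemDataRole.DisplayRole while still referencing the old unscoped Qt.ForegroundRole, which no longer exists. A minimal sketch of the presumable one-line fix (the surrounding method body is assumed):
```
# Hypothetical sketch of the fix in HistoryModel.update_fiat (history_list.py).
# PyQt6 dropped the unscoped role names, so Qt.ForegroundRole raises
# AttributeError; both roles must go through Qt.ItemDataRole.
from PyQt6.QtCore import Qt


def update_fiat(self, idx):
    # ... recompute the fiat columns for the row behind `idx` ...
    self.dataChanged.emit(
        idx, idx,
        [Qt.ItemDataRole.DisplayRole, Qt.ItemDataRole.ForegroundRole],
    )
```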
add space
add gossip address field serialization, parsing and tests
fix linter
consolidate tests, fix indentation
refactor test in loops
add gossip address field serialization, parsing and tests
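The gossip address field here presumably refers to the BOLT 7 address descriptors carried in node announcements. As a rough illustration of what parsing that field involves (not Electrum's actual code; type/length values assumed per BOLT 7):
```
# Rough sketch of parsing BOLT 7 address descriptors, the kind of data a
# "gossip address field" carries. Not Electrum's implementation. Types/lengths
# assumed per BOLT 7: 1=IPv4 (4+2), 2=IPv6 (16+2), 3=Tor v2 (10+2, deprecated),
# 4=Tor v3 (35+2), 5=DNS hostname (1-byte len + hostname + 2).
import ipaddress

ONION_PAYLOAD_LEN = {3: 10 + 2, 4: 35 + 2}

def parse_addresses(blob: bytes) -> list:
    addrs, i = [], 0
    while i < len(blob):
        atype = blob[i]; i += 1
        if atype == 1:
            host, i = str(ipaddress.IPv4Address(blob[i:i+4])), i + 4
        elif atype == 2:
            host, i = str(ipaddress.IPv6Address(blob[i:i+16])), i + 16
        elif atype == 5:
            hlen = blob[i]; i += 1
            host, i = blob[i:i+hlen].decode('ascii'), i + hlen
        elif atype in ONION_PAYLOAD_LEN:
            i += ONION_PAYLOAD_LEN[atype] - 2  # skip onion payload, keep only the port
            host = None
        else:
            break  # unknown type: length unknown, stop parsing
        port = int.from_bytes(blob[i:i+2], 'big'); i += 2
        if host is not None:
            addrs.append((atype, host, port))
    return addrs
```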
Adds a new test case for an e2e trampoline payment where there are multiple
trampoline forwarders which themselves are not directly connected to each other.
This adds a regression test for https://github.com/spesmilo/electrum/pull/9431dffdebf778
Ideally, given an on-chain backup, after the remote force-closes, we should be able to spend our anchor output,
to CPFP the remote commitment tx (assuming the channel used OPTION_ANCHORS).
To spend the anchor output, we need to be able to sign with the local funding_privkey.
Previously we derived the funding_key from the channel_seed (which comes from os.urandom).
Prior to anchors, there was no use case for signing with the funding_key given a channel backup.
Now with anchors, we should make its derivation deterministic, in such a way that it can
still be derived given just an on-chain backup.
- one way would be to put some more data into the existing OP_RETURN
  - uses block space
  - the OP_RETURNs can be disabled via "use_recoverable_channels"
  - only the initiator can use OP_RETURNs (so what if the channel is in the incoming direction?)
- instead, a new scheme for our funding_key (see the sketch after this list):
  - we derive the funding_privkey from the lnworker root secret (derived from our bip32 seed)
  - for outgoing channels:
    - lnworker_root_secret + remote_node_id + funding_tx_nlocktime
  - for incoming channels:
    - lnworker_root_secret + remote_node_id + remote_funding_pubkey
- a check is added to avoid reusing the same key between channels:
  the user is not allowed to open more than one channel with the same peer in a single block
- only the first 16 bytes of the remote_node_id are used, as the on-chain backup OP_RETURNs only contain that much
- as the funding_privkey cannot be derived from the channel_seed anymore, it is included in the
imported channel backups, which in turn need a new version defined
- a wallet db upgrade is used to update already stored imported cbs
- alternatively we could keep the imported cbs as-is (no new version, no new funding_privkey field), as the field is somewhat redundant given that on-chain backups can reconstruct it
- however adding the field seems easier
- otherwise the existing code would try to derive the funding_privkey from the channel_seed
- also note: atm there is no field in the imported backups to distinguish anchor channels vs static-remotekey channels
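As a concrete illustration of the scheme above, here is a minimal sketch of such a derivation. The hash construction, tag and function names are assumptions for illustration only, not Electrum's actual code; the point is that every input besides the root secret is recoverable from the on-chain backup data.
```
# Illustrative sketch only -- not Electrum's actual derivation. It shows a
# funding_privkey computed purely from the lnworker root secret plus data
# recoverable given an on-chain backup: the truncated remote_node_id and,
# depending on channel direction, the funding tx nlocktime or the remote
# funding pubkey.
import hashlib

def derive_funding_privkey(lnworker_root_secret: bytes,
                           remote_node_id: bytes,
                           *,
                           funding_tx_nlocktime: int = None,     # outgoing channels
                           remote_funding_pubkey: bytes = None,  # incoming channels
                           ) -> bytes:
    assert (funding_tx_nlocktime is None) != (remote_funding_pubkey is None)
    # the OP_RETURN-based backups only carry the first 16 bytes of the node id
    material = lnworker_root_secret + remote_node_id[:16]
    if funding_tx_nlocktime is not None:
        material += funding_tx_nlocktime.to_bytes(4, 'little')
    else:
        material += remote_funding_pubkey
    # hypothetical tagged hash; the real scheme may use HMAC/BIP32-style derivation
    return hashlib.sha256(b'electrum/funding-key\x00' + material).digest()
```
(For outgoing channels this connects to the per-block check above: two channels to the same peer funded at the same nlocktime would otherwise derive the same key.)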
- debian 11 only has python 3.9, deb12 has py3.11
- "pip install pip" is no longer needed; atm apt has a new enough pip
- and on deb12, we started getting "error: externally-managed-environment"
- faketime does not seem to work properly on debian 12
(getting reproducibility issues for the tarball)
- so instead we untar, fix the timestamps manually, and re-tar (see the sketch below)
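Since the code sketches in these notes stick to Python, the untar/fix/re-tar step could look roughly like this hypothetical helper (not the actual build script): rewrite the tarball with every member's mtime pinned so the output no longer depends on packing time.
```
# Hypothetical helper illustrating "untar, fix the timestamps manually, re-tar":
# pin every member's mtime (and ownership) so the tar payload is reproducible
# without faketime. Compression is left to a separate step (e.g. gzip -n),
# since the gzip header embeds its own timestamp.
import os
import tarfile

def normalize_tarball(src_path: str, dst_path: str) -> None:
    epoch = int(os.environ.get('SOURCE_DATE_EPOCH', '0'))
    with tarfile.open(src_path) as src, tarfile.open(dst_path, 'w') as dst:
        for member in src.getmembers():
            member.mtime = epoch
            member.uid = member.gid = 0
            member.uname = member.gname = ''
            fileobj = src.extractfile(member) if member.isfile() else None
            dst.addfile(member, fileobj)
```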
The qtmultimedia-based qrreader has the concept of a "strong_count":
before the scanner returns a decoded qr code result, it waits until
it has seen at least "strong_count" (e.g. 10) frames in which the qr code was successfully decoded.
I think the idea might have been to reduce false positives, i.e. mis-decoding qr codes from bad frames.
However in practice it makes scanning even moderately sized qr codes really difficult for the user:
it takes several seconds (at least on my laptop cam) to obtain enough "clear" frames that count into the strong_count.
So I am lowering the strong_count to 2, down from CAMERA_FPS/3,
which makes it easier to scan, and I still haven't seen false positives even with this value.
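For reference, the strong_count logic amounts to a small counter over decoded frames; a simplified sketch of the idea (names are assumptions, and the real qrreader may not require the frames to agree exactly):
```
# Simplified sketch of a "strong_count" filter: only report a QR decode result
# once enough frames have produced the same payload, so a single mis-decoded
# bad frame cannot surface as a false positive. Illustrative, not the actual
# qtmultimedia qrreader code.
STRONG_COUNT = 2  # previously roughly CAMERA_FPS / 3

class QrResultFilter:
    def __init__(self) -> None:
        self._candidate = None
        self._hits = 0

    def on_frame_decoded(self, data: str):
        """Feed one frame's decode result; return it once it is 'strong enough'."""
        if data != self._candidate:
            self._candidate, self._hits = data, 0
        self._hits += 1
        return data if self._hits >= STRONG_COUNT else None
```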
Previously, calling add_transaction with a malformed Transaction object could
result in an exception late in the flow, after the walletdb had already been mutated.
Rollback of such side-effects is not implemented :/
but this small patch should at least prevent some common cases.
```
File "/opt/electrum/electrum/address_synchronizer.py", line 358, in add_transaction
self.db.add_transaction(tx_hash, tx)
File "/opt/electrum/electrum/json_db.py", line 42, in wrapper
return func(self, *args, **kwargs)
File "/opt/electrum/electrum/wallet_db.py", line 1434, in add_transaction
tx = tx_from_any(str(tx))
File "/opt/electrum/electrum/transaction.py", line 1339, in tx_from_any
raise SerializationError(f"Failed to recognise tx encoding, or to parse transaction. "
```
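A hedged sketch of the kind of early check the patch presumably adds: force a full parse of the transaction at the top of add_transaction, so a malformed Transaction object raises before any wallet-db state is touched. Only tx_from_any and the file names come from the traceback above; the rest is assumed.
```
# Sketch of failing fast on a malformed Transaction before mutating the db.
# tx_from_any is the parser seen in the traceback; the surrounding method body
# and the txid check are assumptions.
from electrum.transaction import tx_from_any

def add_transaction(self, tx_hash, tx):
    # Round-trip the tx up front: if this raises SerializationError, no
    # wallet-db side effects have happened yet, so nothing needs rolling back.
    parsed = tx_from_any(str(tx))
    assert tx_hash == parsed.txid(), "tx_hash does not match the given transaction"
    # ... only now proceed to mutate the wallet db ...
    self.db.add_transaction(tx_hash, parsed)
```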