This change modifies create_trampoline_onion to only include as many of the
available r_tags as there is space left for in the trampoline onion payload.
Previously we tried to include all invoice r_tags passed for legacy
trampoline payments in the payload, which caused a user-facing
exception and payment failure, as the onion can only store a max of 400
bytes.
A single single-hop r_tag is around 52 bytes and the payload
without r_tags is already at ~280 bytes, so there is usually enough
space for 2 r_tags.
The implementation shuffles the r_tags on each call,
so the payment will try different route hints across attempts (fee level
increase or user retry).
I have logged the following byte sizes of the trampoline onion with 2
trampoline onion hops and varying numbers of r_tags:
3 r_tags:
payload size [0]: 113 (hop size: 81)
payload size [1]: 440 (hop size: 295) (52 bytes/r_tag)
payload size [2]: 550 (hop size: 78)
2 r_tags:
payload size [0]: 113 (hop size: 81)
payload size [1]: 386 (hop size: 241) (52 bytes/r_tag)
payload size [2]: 496 (hop size: 78)
1 r_tag:
payload size [0]: 113 (hop size: 81)
payload size [1]: 334 (hop size: 189) (52 bytes/r_tag)
payload size [2]: 444 (hop size: 78)
0 r_tags:
payload size [0]: 113 (hop size: 81)
payload size [1]: 282 (hop size: 137)
payload size [2]: 392 (hop size: 78)
As can be seen in the data, with 2 trampoline hops there is not enough
space for even a single r_tag, which is why this option is being removed
too.
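A minimal sketch of the selection logic, not Electrum's actual create_trampoline_onion code: shuffle the route hints, then greedily add them while the encoded payload still fits. The `encode_payload` callable, the payload field name and the 400-byte constant are assumptions used for illustration.
```
import random

MAX_TRAMPOLINE_PAYLOAD_SIZE = 400  # assumed byte limit for the trampoline onion


def select_r_tags(r_tags: list, base_payload: dict, encode_payload) -> list:
    """Greedily pick as many route hints as fit into the size budget.

    `encode_payload` is a hypothetical helper that serializes the payload
    dict to bytes; `base_payload` is the payload without any route hints.
    """
    shuffled = list(r_tags)
    random.shuffle(shuffled)  # so retries try different route hints
    selected = []
    for tag in shuffled:
        candidate = dict(base_payload, invoice_routing_info=selected + [tag])
        if len(encode_payload(candidate)) > MAX_TRAMPOLINE_PAYLOAD_SIZE:
            break
        selected.append(tag)
    return selected
```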
Make path calculation check whether a channel is not in our sending channels but still uses our node ID as the starting node of the path.
I noticed an assertion error when trying to pay an invoice using a seed with which I have opened channels in different wallet instances (same seed, different wallet).
Because the channel seemed suitable for sending the payment, path finding placed the channel in the first position of the route, but then
in pay_to_route the channel for route[0] could not be found, as it is not included in our channel list, causing the assert and the payment to fail.
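A hedged sketch of the kind of check this adds (attribute names such as `start_node`, `short_channel_id` and `my_sending_channels` are illustrative, not necessarily Electrum's exact API):
```
def first_hop_is_usable(first_edge, my_node_id: bytes, my_sending_channels: dict) -> bool:
    """Reject paths whose first hop starts at our node but does not correspond
    to a channel we can actually send through (e.g. a channel opened by another
    wallet instance using the same seed)."""
    if first_edge.start_node != my_node_id:
        return True  # first hop does not start at us; nothing to check here
    return first_edge.short_channel_id in my_sending_channels
```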
- Allow plugins saved as zipfiles in the user data dir
- plugins are authorized with a user-chosen password
- a pubkey derived from the password is saved with admin permissions (see the sketch below)
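A hedged sketch of the authorization idea, assuming a scrypt KDF and the third-party `ecdsa` package; this is not necessarily how Electrum derives or stores the key:
```
import hashlib
import ecdsa


def pubkey_from_password(password: str, salt: bytes) -> bytes:
    """Derive a deterministic secp256k1 pubkey from a user-chosen password.

    Only the pubkey would be stored (with admin permissions); the private key
    can be re-derived from the password whenever authorization is needed.
    """
    # slow KDF so brute-forcing the password is expensive
    secret = hashlib.scrypt(password.encode("utf-8"), salt=salt,
                            n=2 ** 14, r=8, p=1, dklen=32)
    sk = ecdsa.SigningKey.from_string(secret, curve=ecdsa.SECP256k1)
    return sk.get_verifying_key().to_string()  # 64-byte raw encoding
```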
follow-up 70d1e1170e
```
=============================== warnings summary ===============================
tests/test_util.py::TestUtil::test_custom_task_factory
/tmp/cirrus-ci-build/tests/test_util.py:504: RuntimeWarning: coroutine 'TestUtil.test_custom_task_factory.<locals>.foo' was never awaited
self.assertEqual(foo().__qualname__, task.get_coro().__qualname__)
Enable tracemalloc to get traceback where the object was allocated.
See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info.
```
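A minimal, self-contained illustration of the warning and one way to silence it; this is a guess at what the follow-up does, and the actual fix may differ:
```
async def foo():
    pass


def compare_qualnames() -> None:
    coro = foo()  # creating the coroutine object is enough to read its __qualname__
    try:
        assert coro.__qualname__ == "foo"
    finally:
        # close the never-awaited coroutine, otherwise Python emits
        # "RuntimeWarning: coroutine 'foo' was never awaited" at GC time
        coro.close()


compare_qualnames()
```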
Unlike a full bitcoin node, we rarely (if ever) validate the signatures of arbitrary transactions,
so I don't think these DoS issues can really be used against us.
ref https://rubin.io/bitcoin/2025/03/11/core-vuln-taproot-dos/
ref https://github.com/bitcoin/bitcoin/pull/24105
Btw, what is not explained in either source link is that the lack of caching is much
more serious for taproot, as bip-342 lifted the 10 kbyte maximum script size.
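For context, a hedged sketch of the general mitigation the linked sources describe: cache signature-check results keyed by the checked data, so repeated checks of the same triple cannot force repeated expensive work. `slow_verify` is a stand-in, not a real verifier.
```
import functools


def slow_verify(pubkey: bytes, sig: bytes, sighash: bytes) -> bool:
    # stand-in for an expensive Schnorr/ECDSA verification
    return len(pubkey) == 32 and len(sig) == 64 and len(sighash) == 32


@functools.lru_cache(maxsize=100_000)
def verify_cached(pubkey: bytes, sig: bytes, sighash: bytes) -> bool:
    # repeated checks of the same (pubkey, sig, sighash) triple hit the cache
    return slow_verify(pubkey, sig, sighash)
```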
force QEDaemon singleton, and refer to QEDaemon.instance where possible
In cases where we would run into circular dependencies, pass the instance instead.
Also refer to the singleton QEConfig instead of passing an instance in qeapp.py.
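A hedged sketch of the singleton pattern this enforces (illustrative, not the actual QEDaemon code):
```
class QEDaemonLike:
    instance = None  # class-level reference, usable as QEDaemonLike.instance

    def __init__(self) -> None:
        # force the singleton: creating a second instance is a programming error
        assert QEDaemonLike.instance is None, "QEDaemon must only be created once"
        QEDaemonLike.instance = self
```
Code that would otherwise import the daemon module and create a circular dependency can instead receive the already-created instance as a constructor argument.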
QEInvoiceParser creates a zero-amount output invoice when pasting an address, which would return the
wrong status when calling wallet.get_invoice_status() (there is an address heuristic in
wallet._is_onchain_invoice_paid that associates it with a previous payment).
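A hedged, much-simplified illustration of why a zero-amount invoice for a plain address can be misreported as paid; this is not the actual _is_onchain_invoice_paid logic:
```
def looks_paid(requested_sat: int, received_sat: int) -> bool:
    # with requested_sat == 0, any earlier payment to the same address
    # already satisfies the "received enough" check
    return received_sat >= requested_sat and received_sat > 0


assert looks_paid(requested_sat=0, received_sat=1)  # wrongly considered paid
```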
We added some code in 0b3a283586
to explicitly hold strong refs to all tasks/futures. At the time I was uncertain whether that also solves
GC issues with asyncio.run_coroutine_threadsafe.
ref https://github.com/spesmilo/electrum/pull/9608#issuecomment-2703681663
Looks like it does. run_coroutine_threadsafe *is* going through the custom task factory.
See the unit test.
The somewhat confusing thing is that we need a few event loop iterations for the task factory to run,
due to how run_coroutine_threadsafe is implemented. Also, the task that we hold as a strong ref
in the global set is not the concurrent.futures.Future that run_coroutine_threadsafe returns.
So this commit simply "fixes" the unit test so that it showcases this, and removes related, older plumbing
from util.py that we now know is no longer needed because of this.
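A hedged, self-contained sketch of the mechanism described above (simplified; the names are illustrative, not the actual util.py plumbing): a custom task factory keeps strong references to every task it creates, and run_coroutine_threadsafe ends up going through that factory because it schedules ensure_future() onto the target loop a moment later.
```
import asyncio
import threading

_running_tasks = set()  # global strong refs, so tasks cannot be GC'ed mid-flight


def _task_factory(loop, coro, **kwargs):
    task = asyncio.Task(coro, loop=loop, **kwargs)
    _running_tasks.add(task)
    task.add_done_callback(_running_tasks.discard)
    return task


async def main() -> None:
    loop = asyncio.get_running_loop()
    loop.set_task_factory(_task_factory)

    async def work() -> int:
        await asyncio.sleep(0.1)
        return 42

    def from_other_thread() -> None:
        # returns a concurrent.futures.Future; the Task created by the factory
        # (and held in _running_tasks) is a different object, created only
        # after a few iterations of the event loop
        fut = asyncio.run_coroutine_threadsafe(work(), loop)
        print("result:", fut.result())

    thread = threading.Thread(target=from_other_thread)
    thread.start()
    await asyncio.sleep(0.5)  # let the loop run so the task factory fires
    thread.join()


asyncio.run(main())
```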