Unlike a full bitcoin node, we rarely (if ever) validate the signatures of arbitrary transactions,
so I don't think these DoS issues can really be used against us.
ref https://rubin.io/bitcoin/2025/03/11/core-vuln-taproot-dos/
ref https://github.com/bitcoin/bitcoin/pull/24105
btw, what is not explained in either source link is that the lack of caching is much
more serious for taproot, as bip-342 lifted the 10 kbyte max script size for tapscripts.
Force QEDaemon to be a singleton, and refer to QEDaemon.instance where possible.
In cases where we would run into circular dependencies, pass the instance instead.
Also refer to the QEConfig singleton instead of passing an instance in qeapp.py.
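Roughly, the pattern is the following (a minimal sketch only; the real QEDaemon has constructor arguments and Qt plumbing not shown here):
```
from typing import Optional

class QEDaemon:
    # class-level strong ref to the one instance; set on first construction
    instance: Optional['QEDaemon'] = None

    def __init__(self) -> None:
        # enforce the singleton: constructing a second QEDaemon is a bug
        assert QEDaemon.instance is None, "QEDaemon is a singleton"
        QEDaemon.instance = self

# callers can then do this, without threading the instance through every constructor:
def some_qml_helper() -> None:
    daemon = QEDaemon.instance
    assert daemon is not None
    # ... use daemon ...
```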
QEInvoiceParser creates an invoice with a zero-amount output when pasting an address, which would return the
wrong status when calling wallet.get_invoice_status() (there is an address heuristic in
wallet._is_onchain_invoice_paid which associates it with a previous payment)
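A toy illustration of that failure mode (hypothetical code, not Electrum's actual heuristic):
```
from typing import List, Tuple

def toy_is_onchain_invoice_paid(
        invoice_outputs: List[Tuple[str, int]],   # (address, amount_sat)
        prev_payments: List[Tuple[str, int]],     # past (address, amount_sat) payments
) -> bool:
    # address heuristic: look at past payments to each requested address
    # and check whether they cover the requested amount
    for addr, amount_sat in invoice_outputs:
        total = sum(amt for a, amt in prev_payments if a == addr)
        if total < amount_sat:
            return False
    return True

# pasting an address yields an invoice with one zero-amount output; any
# earlier payment to that address then marks the fresh invoice as "paid":
prev = [("addr1", 50_000)]                                 # unrelated old payment
print(toy_is_onchain_invoice_paid([("addr1", 0)], prev))   # True -- wrong
```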
We added some code in 0b3a283586
to explicitly hold strong refs to all tasks/futures. At the time, I was uncertain whether that also solved
GC issues with asyncio.run_coroutine_threadsafe.
ref https://github.com/spesmilo/electrum/pull/9608#issuecomment-2703681663
Looks like it does. run_coroutine_threadsafe *is* going through the custom task factory.
See the unit test.
The somewhat confusing part is that we need a few event loop iterations for the task factory to run,
due to how run_coroutine_threadsafe is implemented. Also, the task that we hold a strong ref to
in the global set is not the concurrent.futures.Future that run_coroutine_threadsafe returns.
So this commit simply "fixes" the unit test so that it showcases this, and removes related older plumbing
from util.py that we now know is no longer needed.
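A self-contained sketch of the mechanism (hypothetical names; the real plumbing lives in util.py):
```
import asyncio

# global set holding strong refs, so tasks are not GC'd while running
# (the event loop itself only keeps weak refs to its tasks)
_tasks = set()

def _task_factory(loop, coro, **kwargs):
    task = asyncio.Task(coro, loop=loop, **kwargs)
    _tasks.add(task)
    task.add_done_callback(_tasks.discard)
    return task

async def main():
    loop = asyncio.get_running_loop()
    loop.set_task_factory(_task_factory)
    ev = asyncio.Event()

    async def coro():
        await ev.wait()
        return 42

    # run_coroutine_threadsafe schedules a callback on the loop, and that
    # callback calls ensure_future(coro), which goes through the task factory
    cf_fut = asyncio.run_coroutine_threadsafe(coro(), loop)
    for _ in range(3):
        await asyncio.sleep(0)  # give the loop a few iterations to run the factory
    assert cf_fut not in _tasks  # the concurrent.futures.Future is not what we hold...
    assert any(isinstance(t, asyncio.Task) for t in _tasks)  # ...the underlying Task is
    ev.set()
    assert await asyncio.wrap_future(cf_fut) == 42

asyncio.run(main())
```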
channel_db.load_data() is slow, which slows down startup (when trampoline is disabled).
util.list_enabled_bits(), called by validate_features(), is one of the main contributors to the slowness.
One could argue that we could even simply *not* call validate_features for gossip messages as part of load_data,
as they have already been validated before being stored in the db. However, re-validating them there is a good
clean-up/sanity check IMO. Note that what is considered "valid" can change over time: just because validate_features
passed when we originally received and stored a gossip message, the message might no longer be valid a year later if the BOLTs change.
This caching decreases the time needed for load_data on two different machines / gossip dbs as below:
47 sec -> 10 sec
18 sec -> 6 sec
If, instead of caching, I simply remove the validate_features() calls, the benchmarks are almost identical (within noise).
That is, the cache looks really effective.
(the rest of the slowness is mostly due to lnmsg.decode_msg)
```
>>> lnutil.validate_features.cache_info()
CacheInfo(hits=172674, misses=287, maxsize=1000, currsize=277)
```
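The caching itself is along these lines (a sketch, not the actual code; the maxsize matches the cache_info() output above, and the validation body is elided):
```
from functools import lru_cache

def list_enabled_bits(x: int) -> tuple:
    # toy stand-in for util.list_enabled_bits: 0b1010 -> (1, 3)
    return tuple(i for i in range(x.bit_length()) if (x >> i) & 1)

@lru_cache(maxsize=1000)
def validate_features(features: int) -> None:
    # `features` is a plain int bitvector, hence hashable and directly
    # usable as the cache key; gossip messages reuse few distinct feature
    # vectors (currsize=277 above), which is why the hit rate is so high
    for bit in list_enabled_bits(features):
        pass  # check known bits, required-feature dependencies, etc. (elided)
```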
-----
We could alternatively cache util.list_enabled_bits directly (instead of validate_features).
That would be a bit slower and, I think, might end up using a lot more memory in some cases, but it would maybe be conceptually cleaner.
Also note that if validate_features() raises an exception, the result is not cached.
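(That exceptions bypass the cache is standard functools.lru_cache behavior; a quick self-contained check:)
```
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def f(x: int) -> int:
    global calls
    calls += 1
    if x < 0:
        raise ValueError("bad features")
    return x

f(1); f(1)             # second call is a cache hit; the body runs once
for _ in range(2):
    try:
        f(-1)          # raising calls are never cached: the body runs both times
    except ValueError:
        pass
assert calls == 3      # 1 (for x=1) + 2 (for x=-1)
```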