Low-level cluster linearization code #30126
base: master
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage: For detailed information about the code coverage, see the test coverage report.

Reviews: See the guideline for information on the review process.

Conflicts: Reviewers, this pull request conflicts with the following ones:
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation. Possibly this is due to a silent merge conflict (the changes in this pull request being semantically incompatible with the current code in the target branch). Leave a comment here, if you need help tracking down a confusing failure.
Force-pushed from 079f02d to 07f68f9 (compare)
Force-pushed from 079f02d to b4bb178 (compare)
Benchmarks on my Ryzen 5950X system:
This means that for a 64-transaction cluster, it should be possible to linearize (28.59 µs) with 100 candidate search iterations (3.32 µs) plus postlinearize (5.83 µs), within a total of 37.74 µs, on my system.

I've dropped the dependency on #29625, and switched to using FastRandomContext instead; there is a measurable slowdown from using the (ChaCha20-based) FastRandomContext over the (xoroshiro128++-based) InsecureRandomContext introduced there, but it's no more than 1-2%. I can switch back to that approach if #29625 were to make it in.
Glanced at this in the context of checking which parts of VecDeque/BitSet are actually required.
// Invoke bounded search to update best, with up to half of our remaining iterations as
// limit.
uint64_t iterations = (iteration_limit + 1) / 2;
iteration_limit -= iterations;
In/out args for iterations is kinda spooky and made these few lines really hard to read; could FindCandidateSet just return a std::pair or similar?
S descendants;

friend bool operator==(const Entry&, const Entry&) noexcept = default;
friend auto operator<=>(const Entry&, const Entry&) noexcept = default;
For this operator, how are S ancestors and S descendants ordered? Is it just compared via m_val for BitSets? Is ordering by these elements meaningful?
**/
explicit DepGraph(ClusterIndex ntx) noexcept
{
    entries.resize(ntx);
It crashes once it hits S::Singleton if this is violated, but maybe an earlier crash is more immediately informative:

Suggested change:
-    entries.resize(ntx);
+    Assume(ntx <= S::Size());
+    entries.resize(ntx);
 */
DepGraph(const DepGraph<S>& depgraph, Span<const ClusterIndex> mapping) noexcept : entries(depgraph.TxCount())
{
    // Fill in fee, size, ancestors.
To avoid an OOB access regression:

Suggested change:
-    // Fill in fee, size, ancestors.
+    Assert(mapping.size() == depgraph.TxCount());
+    // Fill in fee, size, ancestors.
/** Data structure that holds a transaction graph's preprocessed data (fee, size, ancestors,
 * descendants). */
template<typename S>
Not super important, but can we use a more descriptive typename since our use-case is pretty narrow? I'm mentally substituting S everywhere with bitset, which is a fair mental leap, and it's relatively difficult to grep for being so short. S_bitset? I don't know why the conventions are the way they are 😅
}

// Actually construct new work item on the queue.
Assume(queue.size() < queue.capacity());
is this for ensuring no allocations are going to take place? How do we know this?
This adds a bitset module that implements a BitSet<N> class, a variant of std::bitset with a few additional features that cannot be implemented in a wrapper without performance loss (specifically, finding first and last bit set, or iterating over all set bits).
…ypes This primarily adds the DepGraph class, which encapsulates precomputed ancestor/descendant information for a given transaction cluster, with a number of utility features (inspectors for set feerates, computing reduced parents/children, adding transactions, adding dependencies), which will become needed in future commits.
This introduces a bespoke fuzzing-focused serialization format for DepGraphs, then tests that this format can represent any graph and roundtrips, and then uses that to test the correctness of DepGraph itself. This forms the basis for future fuzz tests that need to work with interesting graphs.
This is a class that encapsulates precomputed ancestor set feerates, and presents an interface for getting the best remaining ancestor set.
Similar to AncestorCandidateFinder, this encapsulates the state needed for finding good candidate sets using a search algorithm.
This adds a first version of the overall linearization interface, which given a DepGraph constructs a good linearization, by incrementally including good candidate sets (found using AncestorCandidateFinder and SearchCandidateFinder).
Add benchmarks for known bad graphs for the purpose of search (as an upper bound on work per search iterations) and ancestor sorting (as an upper bound on linearization work with no search iterations).
Add a correctness test for the overall linearization algorithm.
Add utility functions to DepGraph for finding connected components.
Before this commit, the worst case for linearization involves clusters which break apart in several smaller components after the first candidate is included in the output linearization. Address this by never considering work items that span multiple components of what remains of the cluster.
This is an STL-like container that interface-wise looks like std::deque, but is backed by a (fixed size, with vector-like capacity/reserve) circular buffer.
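A minimal sketch of such a structure (hypothetical and heavily reduced; the PR's container has a full std::deque-like interface, this only shows the circular indexing):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of a fixed-capacity circular-buffer deque (not the
// PR's container). The caller must not push beyond the capacity given at
// construction; in exchange, no reallocation happens after that.
template<typename T>
class RingDeque {
    std::vector<T> buf;  // fixed backing storage
    size_t first = 0;    // index of the front element
    size_t count = 0;    // number of stored elements
public:
    explicit RingDeque(size_t capacity) : buf(capacity) {}
    size_t size() const { return count; }
    void push_back(T v) { buf[(first + count++) % buf.size()] = v; }
    T pop_front() {
        T v = buf[first];
        first = (first + 1) % buf.size();
        --count;
        return v;
    }
};
```

Both ends wrap around the fixed array, so push/pop at either end are O(1) and the storage never moves, unlike std::deque's block map.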
Switch to BFS exploration of the search tree in SearchCandidateFinder instead of DFS exploration. This appears to behave better for real world clusters. As BFS has the downside of needing far larger search queues, switch back to DFS temporarily when the queue grows too large.
To make search non-deterministic, change the BFS logic from always picking the first queue item to randomly picking the first or second queue item.
This implements the LIMO algorithm for linearizing by improving an existing linearization. See https://delvingbitcoin.org/t/limo-combining-the-best-parts-of-linearization-search-and-merging for details.
This is a requirement for a future commit, which will rely on quickly iterating over transaction sets in decreasing individual feerate order.
…ion) In each work item, keep track of a conservative overestimate of the best possible feerate that can be reached from it, and then use these to avoid exploring hopeless work items.
Keep track of which transactions in the graph have an individual feerate that is better than the best included set so far. Others do not need to be added to the pot set, as they cannot possibly help beating best.
Automatically add topologically-valid subsets of the potential set pot to inc. It can be proven that these must be part of the best reachable topologically-valid set from that work item.
Empirically, this approach seems to be more efficient in common real-life clusters, and does not change the worst case.
…ion) Cache the potential set inside work items, and use it to skip part of the computation of split-off work items from it.
Depends on #30160 and #30161. Eventually #28676 will end up being based on this.
This introduces low-level optimized cluster linearization code, including tests and some benchmarks. It is currently not hooked up to anything.
Roughly the commits are organized into 3 groups:
Ultimately, what this PR adds is functions Linearize, PostLinearize, and MergeLinearizations, which operate on instances of DepGraph (instances of which represent pre-processed transaction clusters) to produce and/or improve linearizations for that cluster.

Along the way two new data structures are introduced (util/bitset.h and util/ringbuffer.h), which could be useful more broadly. They have their own commits, which include tests.

To provide assurance, the code heavily relies on fuzz tests. A novel approach is used here, where the fuzz input is parsed using the serialization.h framework rather than FuzzedDataProvider, with a custom serializer/deserializer for DepGraph objects. By including serialization, it's possible to ascertain that the format can represent every relevant cluster, as well as potentially permitting the construction of ad-hoc fuzz inputs from clusters (not included in this PR, but used during development).

The Linearize(depgraph, iteration_limit, rng_seed, old_linearization) function is an implementation of the (single) LIMO algorithm, with the subset S in every iteration found as the best out of (a) the best remaining ancestor set and (b) randomized computationally-bounded search. It incrementally builds up a linearization by finding good topologically-valid subsets to move to the front, in such a way that the resulting linearization has a diagram that is at least as good as the old_linearization passed in (if any). iteration_limit and rng_seed only control the (b) randomized search. Even with 0 iterations, the result will be as good as the old linearization, and the included sets at every point will have a feerate at least as high as the best remaining ancestor set at that point.

The search algorithm used in the (b) step above largely follows Section 2 of How to linearize your cluster, though with a few changes:

- The best value is initialized with the best topologically valid set known to the LIMO algorithm before search starts: the better one out of the highest-feerate remaining ancestor set, and the highest-feerate prefix of remaining transactions in old_linearization.

At a high level, the only missing optimization from that post is bottleneck analysis; my thinking is that it only really helps with clusters that are already relatively cheap to linearize (doing so would need to be done at a higher level, not inside the search algorithm).
The PostLinearize(depgraph, linearization) function performs an in-place improvement of linearization, using two iterations of the linearization post-processing algorithm: the first running from back to front, the second from front to back.

The MergeLinearizations(depgraph, linearization1, linearization2) function computes a new linearization for the provided cluster, given two existing linearizations for that cluster, which is at least as good as both inputs. The algorithm is described at a high level in Merging incomparable linearizations.