Process unconfirmed_transactions by priority_fee and oldest timestamp #2977

Open · wants to merge 13 commits into base: testnet3
Conversation

@vicsn (Contributor) commented on Jan 8, 2024:

Motivation

Following EIP-1559 and common decency, this PR ensures we process transactions in descending order of priority_fee. If the priority_fee is the same, we order by the oldest timestamp (in nanoseconds), and any remaining ties by transaction_id, which is effectively random.

It may be narrowly individually rational for validators to sort only by priority_fee and ignore the timestamp/ID, but per EIP-1559, "It is recommended that transactions with the same priority fee be sorted by time the transaction was received" (https://eips.ethereum.org/EIPS/eip-1559). Honoring this improves the performance and security of the whole network, which also matters for validators.

Test Plan

Separated the queue insertion and popping functions so they can be tested.

Resolved (outdated) review threads on: node/consensus/Cargo.toml, node/consensus/src/lib.rs
```diff
@@ -65,7 +71,7 @@ pub struct Consensus<N: Network> {
     /// The unconfirmed solutions queue.
     solutions_queue: Arc<Mutex<IndexMap<PuzzleCommitment<N>, ProverSolution<N>>>>,
     /// The unconfirmed transactions queue.
-    transactions_queue: Arc<Mutex<IndexMap<N::TransactionID, Transaction<N>>>>,
+    transactions_queue: Arc<Mutex<BTreeMap<TransactionKey<N>, Transaction<N>>>>,
```
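Since a BTreeMap iterates its keys in ascending order, a composite key can encode the desired pop order directly. Below is a minimal, self-contained sketch of such a key; it is an assumption for illustration, not the PR's actual `TransactionKey<N>` definition. The fee is wrapped in `Reverse` so the highest priority_fee sorts first, ties fall through to the oldest timestamp, and then to the transaction ID:

```rust
use std::cmp::Reverse;
use std::collections::BTreeMap;

/// Hypothetical ordering key; derived `Ord` compares fields in declaration order.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct TransactionKey {
    priority_fee: Reverse<u64>, // highest fee sorts first in an ascending BTreeMap
    timestamp_nanos: i64,       // among equal fees, oldest first
    transaction_id: [u8; 32],   // final tie-breaker (effectively random)
}

fn main() {
    let mut queue = BTreeMap::new();
    queue.insert(TransactionKey { priority_fee: Reverse(10), timestamp_nanos: 2, transaction_id: [0; 32] }, "late, low fee");
    queue.insert(TransactionKey { priority_fee: Reverse(10), timestamp_nanos: 1, transaction_id: [1; 32] }, "early, low fee");
    queue.insert(TransactionKey { priority_fee: Reverse(50), timestamp_nanos: 3, transaction_id: [2; 32] }, "high fee");

    // Pop order: highest fee first, then oldest timestamp among equal fees.
    assert_eq!(queue.pop_first().unwrap().1, "high fee");
    assert_eq!(queue.pop_first().unwrap().1, "early, low fee");
    assert_eq!(queue.pop_first().unwrap().1, "late, low fee");
}
```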
A reviewer (Contributor) commented:
This may not catch duplicate transactions that come in at different times. We may now have multiple instances of the same transaction in the transactions_queue.

@vicsn (Contributor, Author) replied on Jan 12, 2024:
I see, good catch.

Just for shared context: before adding to the queue, we already check whether the tx was seen recently in an LRU cache: `self.seen_transactions.lock().put(transaction_id, ()).is_some()`. But seen_transactions can evict txs before they are popped from the queue, so duplicate transactions can still be added to the queue and passed to workers to verify, wasting compute.

Best I could come up with for now is to add yet another IndexSet which tracks the transaction_ids currently in the queue: ac3a95b. Independently, the LRU cache is still useful as a fast check.
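A minimal sketch of this dedup path, under the same assumptions as the `TransactionKey` sketch above (a std `BTreeSet` stands in for the IndexSet so the example compiles without external crates; this is illustrative, not the PR's actual code):

```rust
use std::collections::{BTreeMap, BTreeSet};
use std::sync::Mutex;

struct TransactionQueue {
    /// IDs currently in the queue; the authoritative duplicate check, cleared
    /// only when a transaction is popped (unlike an LRU, which may evict early).
    queued_ids: Mutex<BTreeSet<[u8; 32]>>,
    /// The fee/timestamp-ordered queue, keyed by the `TransactionKey` above.
    transactions_queue: Mutex<BTreeMap<TransactionKey, Vec<u8>>>,
}

impl TransactionQueue {
    /// Returns `false` if the transaction is already queued.
    fn insert(&self, key: TransactionKey, tx: Vec<u8>) -> bool {
        if !self.queued_ids.lock().unwrap().insert(key.transaction_id) {
            return false; // duplicate: the same ID is still in the queue
        }
        self.transactions_queue.lock().unwrap().insert(key, tx);
        true
    }

    /// Pops the highest-priority transaction and releases its ID for re-insertion.
    fn pop(&self) -> Option<(TransactionKey, Vec<u8>)> {
        let entry = self.transactions_queue.lock().unwrap().pop_first();
        if let Some((key, _)) = &entry {
            self.queued_ids.lock().unwrap().remove(&key.transaction_id);
        }
        entry
    }
}
```

Per the reply above, the fast LRU check (seen_transactions) still runs first; the membership set catches the duplicates the LRU has already evicted.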

Somewhat related:

  • We didn't have any limit on the size of the transactions_queue. Someone could spam e.g. 64,000 unique 1 MB deployments and make the validators go OOM. Solutions don't have this issue because they are constant-size. I added a limit of ~1 GB: 1704d35 (see the sketch after this list).
  • I think we also need to evict from the cache to let retransmission succeed: ef61a9d
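A minimal sketch of such a byte cap; the names and exact constant are assumptions, and only the ~1 GB figure comes from the comment above:

```rust
/// Illustrative cap on the total serialized size of queued transactions (~1 GB).
const MAX_QUEUE_BYTES: usize = 1 << 30;

/// Rejects an insert that would push the queue past the cap, so spammed
/// deployments cannot grow a validator's memory without bound.
fn try_reserve(queue_bytes: &mut usize, tx_size: usize) -> bool {
    if queue_bytes.saturating_add(tx_size) > MAX_QUEUE_BYTES {
        return false; // drop the transaction instead of growing the queue
    }
    *queue_bytes += tx_size;
    true
}
```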

Co-authored-by: Raymond Chu <14917648+raychu86@users.noreply.github.com>
Signed-off-by: vicsn <victor.s.nicolaas@protonmail.com>
@howardwu added the post-launch label ("This issue or pull request will be handled after mainnet launch") on Feb 11, 2024