
How long does it take to restore the latest dump on an external node? #1553

Open
breezytm opened this issue Apr 2, 2024 · 4 comments


breezytm commented Apr 2, 2024

Hi,

I am trying to deploy an external node. The restore seems to be taking quite some time: it has been 9 days since I started, but only 5,305,150 / 30,496,435 blocks have been processed. If my math is correct, it is importing about 24,500 blocks/hr, so at this rate it should take well over a month and a half. If I am wrong, what am I missing?
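For reference, the rate and ETA arithmetic can be checked with a quick sketch (the figures are the ones reported above):

```python
# Back-of-the-envelope check of the reported sync rate and remaining time.
processed = 5_305_150      # blocks imported after 9 days
total = 30_496_435         # total blocks to restore
elapsed_hours = 9 * 24

rate = processed / elapsed_hours                # blocks per hour
remaining_hours = (total - processed) / rate
print(f"{rate:,.0f} blocks/hr, ~{remaining_hours / 24:.0f} days left")
# → 24,561 blocks/hr, ~43 days left
```

So the "month and a half" estimate checks out: roughly 43 more days at the observed rate.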

EmilLuta (Contributor) commented:

cc: @slowli, @tomg10

slowli (Contributor) commented Apr 25, 2024

Hey, @breezytm! You're not wrong at all; that's within the expected rate of block processing on consumer-grade hardware. For this reason, we're currently developing a snapshot recovery feature that will allow recovering the node state much faster (preliminary results are ~4–5 hours for the mainnet on consumer-grade hardware).

breezytm (Author) commented Apr 28, 2024

Rather than backing up and restoring a dump, why not back up the data folder in its entirety as a .tar file and make it available for download? That would be much quicker to download, extract, and start the node from. This is literally the approach every other blockchain project takes for snapshots.

slowli (Contributor) commented Apr 29, 2024

First, there's no single data folder; the data is stored partly in Postgres and partly in RocksDB. Second, for either of these databases, data consistency and portability concerns prevent "just" taking the data off the disk at an arbitrary point in time. Third, the amount of data in Postgres and RocksDB (several TB) makes this approach hardly desirable for non-archival nodes even if the other concerns were sorted out.

3 participants