
Backup to S3 #125

Open
sash2222 opened this issue Sep 22, 2021 · 3 comments
Assignees
Labels
question Further information is requested

Comments

@sash2222

Hi, I haven't noticed anything about backups in the documentation. Somewhere I saw that dumping to disk isn't planned, and I agree with that.
But the product would cover more use cases if there were a backup feature, say to S3, along with the ability to restore the latest backup from storage when the cluster starts.

@buraksezer
Owner

Hi,

On-disk persistence (similar to Redis AOF) is planned, but a few other things need to be implemented first. After on-disk persistence is in place, backing up to any other data store would be overkill.

@buraksezer buraksezer self-assigned this Sep 22, 2021
@buraksezer buraksezer added the question Further information is requested label Sep 22, 2021
@sash2222
Author

Thanks for your reply, but I have to disagree that backups would be redundant.
Consider our case: we embedded it in an authorization microservice, where it caches the relation tuples the service reads from MySQL. The microservice itself scales from 1 to 10, 20, or 100 pods and is stateless. If we dump to disk, every pod needs a volume attached, which effectively makes the deployment stateful. If the cluster state were periodically flushed to a network service instead of disk, the microservice instances could stay stateless.
Maybe the answer is some kind of provider (driver) that receives the dump and decides for itself where it goes, whether disk or S3?

@buraksezer
Owner

buraksezer commented Nov 21, 2021

Hi @sash2222

Hazelcast has MapLoader and MapStore SPIs to load/store distributed map entries.

MapLoader is an SPI. When you provide a MapLoader implementation and request an entry (using IMap.get()) that does not exist in memory, MapLoader’s load method loads that entry from the data store. This loaded entry is placed into the map and will stay there until it is removed or evicted.

MapStore is also an SPI. When a MapStore implementation is provided, an entry is also put into a user-defined data store. MapStore extends MapLoader. Later in this document, by MapStore we mean both MapStore and MapLoader since they compose a full-featured MapStore CRUD SPI.

So if you provide a driver implementation, Olric could push all distributed map entries asynchronously to S3 (or any other data store) and fetch an entry from that data source whenever it doesn't exist in the cluster.

What about this?
