This repository has been archived by the owner on Mar 9, 2019. It is now read-only.

high iops because of MADV_RANDOM when doing sequential access #691

Open
vrecan opened this issue May 31, 2017 · 2 comments

Comments

@vrecan
Contributor

vrecan commented May 31, 2017

I noticed that on high-latency filesystems (Elastic File System, GlusterFS) we were seeing really terrible performance with boltdb. I dug in a little more and noticed that we were advising the kernel that we are doing random I/O. In my case this is not true; almost all of our I/O is sequential.

Original code:

	if err := madvise(b, syscall.MADV_RANDOM); err != nil {
		return fmt.Errorf("madvise: %s", err)
	}

Changing it to this:

	if err := madvise(b, syscall.MADV_NORMAL); err != nil {
		return fmt.Errorf("madvise: %s", err)
	}

We got more than a 60x performance increase. We were seeing reads of around 1500 KB/s (~500 IOPS) before, and that changed to 60000 KB/s (~100 IOPS). We also saw a massive decrease in the number of IOPS required to read from the db. I was thinking about putting up a PR that lets you choose which MADV flag boltdb uses. Does this sound like something you would accept?
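
A rough sketch of what that PR could look like (the MadviseFlag field is hypothetical, only to illustrate the idea; it is not an existing boltdb option, and leaving it zero would keep today's MADV_RANDOM behaviour):

	// bolt.go (sketch): expose the advice as an option.
	type Options struct {
		// ... existing fields ...

		// MadviseFlag is passed to madvise(2) when the data file is mmap'd.
		// Zero keeps the current default, syscall.MADV_RANDOM.
		MadviseFlag int
	}

	// bolt_unix.go, inside mmap() (sketch): use the option instead of the hard-coded flag.
	advice := syscall.MADV_RANDOM // current behaviour
	if db.MadviseFlag != 0 {
		advice = db.MadviseFlag // e.g. syscall.MADV_NORMAL for mostly sequential scans
	}
	if err := madvise(b, advice); err != nil {
		return fmt.Errorf("madvise: %s", err)
	}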

@vrecan
Contributor Author

vrecan commented May 31, 2017

I also saw improved I/O of the same order of magnitude on my dev machine with local disks. My local disks frequently have a latency under 1 ms, while my remote disks take 10-15 ms for a single read. This caused a lot more stalling on the remote disks, while the local one was fast enough to not usually be the bottleneck.
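
For a rough sense of why the latency difference matters so much (back-of-the-envelope numbers for illustration, not measurements from this issue): on Linux, MADV_RANDOM disables readahead for the mapping, so a sequential scan ends up paying roughly one storage round trip per 4 KiB page fault, while MADV_NORMAL lets the kernel prefetch many pages per trip.

	1 GiB scanned / 4 KiB per fault  ≈ 262,144 page faults
	262,144 faults × 10 ms (remote)  ≈ 2,600 s  (~44 minutes)
	262,144 faults × 1 ms  (local)   ≈   260 s  (~4.4 minutes)

With readahead enabled, the number of round trips drops by however many pages the readahead window covers, which is why the high-latency filesystems benefit far more than the local disk.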

@funny-falcon

But it was changed to RANDOM because it is faster for other usage patterns.

Looks like it should be a configuration option.
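
If such an option existed (continuing the hypothetical MadviseFlag sketch above, and assuming the usual bolt, log, and syscall imports), a mostly sequential workload could opt in at open time while everyone else keeps the RANDOM default:

	// Hypothetical usage; MadviseFlag is not an existing bolt.Options field.
	db, err := bolt.Open("data.db", 0600, &bolt.Options{
		MadviseFlag: syscall.MADV_NORMAL, // mostly sequential reads
	})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()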
