Describe the bug
We have a test cluster with 3 SSD servers (master + volume on SSD disks, IPs 201/202/203) and 3 HDD servers (volume only, IPs 211/212/213 in the snapshot below).
The first machine has a filer too, with S3 enabled, and the filers use LevelDB.
We created an S3 bucket / collection named test1 and set up fs.configure to store everything on SSD, via
fs.configure -collection test1 -locationPrefix '/' -disk ssd -apply
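For completeness, this rule can be applied and verified from a `weed shell` session; running `fs.configure` with no arguments prints the currently active path-specific rules. This is only a sketch (the exact output format varies by SeaweedFS version; check `help fs.configure` on your build):

```shell
# inside "weed shell" connected to the masters:
# pin everything under / in collection test1 to SSD volumes
fs.configure -locationPrefix '/' -collection test1 -disk ssd -apply

# with no flags, fs.configure prints the active path-specific configuration,
# which lets you confirm the rule actually took effect
fs.configure
```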
Then we uploaded a bunch of video files from a PeerTube instance using s3cmd:
s3cmd --verbose put --recursive fdbfb556-5345-4e01-91aa-e0c5c5d45049 s3://test1/
We found our video files in newly created volumes on the SSD volume servers, but a small file was also created in HDD volumes for this collection. I'm not sure that's normal :/
Please find below our master status after the upload (screenshot):
and the HDD volume server status (screenshot):
System Setup
master:
weed master -mdir=/data/master -peers=10.10.4.201:9333,10.10.4.202:9333,10.10.4.203:9333 -defaultReplication=001 -resumeState -volumePreallocate -volumeSizeLimitMB=1000
on 201/202/203:
weed volume -dir=/data/volume -disk=ssd -max=0 -index=memory -minFreeSpace=10GiB -mserver=10.10.4.201:9333,10.10.4.202:9333,10.10.4.203:9333
on 211/212/213:
weed volume -dir=/data/volume -disk=hdd -max=0 -index=memory -minFreeSpace=10GiB -mserver=10.10.4.201:9333,10.10.4.202:9333,10.10.4.203:9333
on the main filer:
weed filer -master=10.10.4.201:9333,10.10.4.202:9333,10.10.4.203:9333 -maxMB=20 -s3 -s3.allowDeleteBucketNotEmpty=false -s3.config=/etc/seaweedfs/s3.json -s3.domainName=test1.octos3.fr
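To see which disk type each volume of a collection actually landed on, one can fetch the master's topology (e.g. from `http://10.10.4.201:9333/dir/status`) and group that collection's volumes by disk type. The sketch below assumes a simplified topology shape (DataCenters → Racks → DataNodes, each node carrying a volume list); the real JSON layout can differ between SeaweedFS versions, so treat the field names here as assumptions:

```python
import json

def volumes_by_disk(topology: dict, collection: str) -> dict:
    """Group the volume ids of `collection` by the disk type they live on.

    `topology` is assumed to resemble the master's /dir/status output.
    The field names (DataCenters, Racks, DataNodes, Volumes, Collection,
    DiskType, Id) are assumptions, not a verified schema.
    """
    result: dict = {}
    for dc in topology.get("DataCenters", []):
        for rack in dc.get("Racks", []):
            for node in rack.get("DataNodes", []):
                for vol in node.get("Volumes", []):
                    if vol.get("Collection") != collection:
                        continue
                    # SeaweedFS reports an empty disk type for plain hdd
                    disk = vol.get("DiskType") or "hdd"
                    result.setdefault(disk, []).append(vol["Id"])
    return result

# Hypothetical sample mirroring this report: video volumes on ssd,
# plus one small stray volume on hdd.
sample = {
    "DataCenters": [{
        "Racks": [{
            "DataNodes": [
                {"Volumes": [{"Id": 7, "Collection": "test1", "DiskType": "ssd"}]},
                {"Volumes": [{"Id": 12, "Collection": "test1", "DiskType": "hdd"}]},
            ]
        }]
    }]
}

print(json.dumps(volumes_by_disk(sample, "test1")))
```

With the sample above, a correctly placed collection would show only an "ssd" group; any "hdd" entry reproduces the stray volume described in this report.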
Expected behavior
Since fs.configure tells SeaweedFS to store everything in collection "test1" on SSD drives, I expect it not to create any volumes or files on the HDD volume servers.
Additional context
The created file is very small compared to our video files, and it seems to contain metadata from the S3 protocol.
I launched this to see what they are:
The files are accessible here for a while: https://benjamin.sonntag.fr/download/seaweed/
If I run an fsck, I find that they are marked as orphans:
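The orphan check can be reproduced from `weed shell` with the `volume.fsck` command, which cross-checks filer entries against volume server contents. Flag names vary between SeaweedFS versions, so this is only a sketch (check `help volume.fsck` on your build):

```shell
# inside "weed shell": compare filer metadata against volume contents and
# report chunks that no filer entry references (i.e. orphans)
volume.fsck -v
```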