Describe the bug
Running two volume servers on two different machines, both assigned disk=ssd, and trying to create a file with replication 001 fails with "No matching data node found".
System Setup
On machine 1 (Ubuntu 22.04 x86), start a master server with a volume server
$ weed server -dataCenter=LHR -dir=/mnt/ssd/volumes -ip=192.168.1.201 -master -master.dir=/mnt/ssd/seaweedfs/master/ -volume -volume.disk=ssd -volume.max=4
$ weed version
version 30GB 3.64 b74e8082bac408138be99e128b8c28fd19eca7a6 linux amd64
On machine 2 (Ubuntu 20.04 arm64) start a volume server
$ weed volume -dataCenter=LHR -dir=/mnt/ssd/volumes -disk=ssd -ip=192.168.1.200 -max=3 -mserver=192.168.1.201:9333 -port=8080
$ weed version
version 30GB 3.64 b74e8082bac408138be99e128b8c28fd19eca7a6 linux arm64
Now, if on machine 1, I run
$ curl 'http://localhost:9333/dir/assign?dataCenter=LHR&replication=001&disk=ssd'
{"error":"failed to find writable volumes for collection: replication:001 ttl: error: No more writable volumes!"}
Expected behavior
I would expect file creation to succeed, since two volume servers were available and a replication level of 001 means that two copies will be stored on two different volume servers.
Additional context
Logs in the master on machine 1
Apr 10 00:21:11 n100 seaweedfs-master[696]: I0410 00:21:11.987097 volume_growth.go:99 create 6 volume, created 0: No matching data node found!
Apr 10 00:21:11 n100 seaweedfs-master[696]: LHR:Only has 0 racks with more than 2 free data nodes, not enough for 1.
Apr 10 00:21:12 n100 seaweedfs-master[696]: I0410 00:21:12.187155 master_server_handlers.go:133 dirAssign volume growth {"replication":{"node":1},"ttl":{"Count":0,"Unit":0},"disk":"ssd","dataCenter":"LHR"} from 127.0.0.1:53428
Apr 10 00:21:12 n100 seaweedfs-master[696]: I0410 00:21:12.187266 volume_growth.go:99 create 6 volume, created 0: No matching data node found!
Apr 10 00:21:12 n100 seaweedfs-master[696]: LHR:Only has 0 racks with more than 2 free data nodes, not enough for 1.
Apr 10 00:21:12 n100 seaweedfs-master[696]: I0410 00:21:12.387634 common.go:77 response method:GET URL:/dir/assign?dataCenter=LHR&replication=001&disk=ssd with httpStatus:406 and JSON:{"error":"failed to find writable volumes for collection: replication:001 ttl: error: No more writable volumes!"}
All of these disks are empty and have 128GB of space available.
It basically says that by default SeaweedFS allocates 8 volumes when it starts allocating, and that the default volume size is 30GB; the FAQ advises you to either decrease the volume size or have more volumes available on your volume servers.
In your case, since you have little space (~128GB per volume server), I'd add something like -volumeSizeLimitMB=4096 to your command line, so that each volume has a size of 4GB, and -max=30 to get 30 volumes per server.
I also have a bunch of HDDs used to store media files; can I have a different volumeSizeLimit for different volume servers? IIUC, running weed volume -h does not list a -volumeSizeLimitMB flag.
A volume size limit of 4GB would be pretty overkill for a volume server backed by a 4 TB HDD.
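For reference, -volumeSizeLimitMB appears to be a master-side setting rather than a per-volume-server one, which would explain why weed volume -h does not list it. A sketch of the adjusted machine 1 command, assuming weed server exposes the master flag under the master. prefix:

```shell
# Assumption: the volume size limit is configured on the master, so every
# volume server that joins this master shares the same 4GB per-volume limit.
weed server -dataCenter=LHR -dir=/mnt/ssd/volumes -ip=192.168.1.201 \
  -master -master.dir=/mnt/ssd/seaweedfs/master/ \
  -master.volumeSizeLimitMB=4096 \
  -volume -volume.disk=ssd -volume.max=30
```

If the limit is indeed global to the master, a per-disk-type limit (small volumes on SSD, large on HDD) would need separate masters or a different layout, not a per-volume-server flag.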