Under heavy load or on slow networks the backend cannot respond before the client times out. The client log shows `Timeout was reached`, and the server log records a 499 (client closed request), indicating the server was still processing the request when the client closed the connection. Ultimately this results in a read error.
Is there a way to configure the drive client’s timeout so it can be tailored to the deployment environment?
After testing SeaDrive for a few months on Mac and Linux, I’m finding it very unreliable due to this issue.
When the drive is placed under load, such as an rsync copy of many large files, the drive client times out due to network and server latency.
In some cases the server simply takes a while to respond with the data; it is still working, but the drive client times out the operation. In other cases the drive seems to accept the local requests and queue them, but while the first ones are still transferring data over the network, the later queued reads fail with a timeout simply because the earlier transfers have not yet completed (data takes a while to move over a network). This results in various failures depending on which client app issued the read.
This appears to be entirely preventable: if SeaDrive followed normal I/O semantics and blocked until the operation completed, instead of imposing a timeout the application never asked for, these errors would not surface at all.
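To illustrate the distinction (this is a plain-Python sketch with made-up names, not SeaDrive code): the same pending operation can be observed with a fixed client-side timeout, which fails whenever the backend is slower than the deadline, or with a plain blocking wait, which simply takes longer and succeeds.

```python
import queue
import threading
import time

def slow_backend(q: queue.Queue, delay: float) -> None:
    # Simulates a server that does eventually respond, just slowly.
    time.sleep(delay)
    q.put("data")

q: queue.Queue = queue.Queue()
threading.Thread(target=slow_backend, args=(q, 0.2), daemon=True).start()

# A fixed timeout shorter than the backend latency fails,
# even though the data is already on its way:
try:
    q.get(timeout=0.05)
    timed_result = "ok"
except queue.Empty:
    timed_result = "timeout"

# A plain blocking wait (normal I/O semantics) just waits longer and succeeds:
blocking_result = q.get()

print(timed_result, blocking_result)  # → timeout data
```

The point is that the timed variant turns latency into an error the application has to handle, while the blocking variant turns the same latency into nothing worse than a slow read.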
Could we please at least have a way to control the timeout from the client?