I want to upload a large trove of files and directories, contained in just one directory, to a fresh install of Seafile, and it’s not working.
All I’m trying to do is take files that are already on the server’s hard drive and upload them locally into the Seafile system. The web browser gets bogged down, and even small files, strangely, take a long time to upload. Then, after about an hour, I get a “Network error” for each file, and it continues indefinitely through the file list, with one core at 100% the entire time, never managing to upload anything. I then have to cancel the upload from a badly slowed-down browser.
- Ubuntu 18.04
- 3 GHz quad-core desktop
- 2 solid-state drives: the 1st holds Ubuntu, Seafile, and the files to be uploaded; the 2nd holds the Seafile data.
- 8 GB of RAM.
- Seafile is running on Apache and MySQL.
I have shut down everything else that could be running, so only Seafile should be using resources. I would therefore expect the upload to go quickly, but I found out the hard way that it’s extremely slow and then crashes after about an hour.
I can upload directories with relatively small numbers of files just fine. I can even upload a single large file at a time; I tried this with a Linux distro image of about 800 MB, and it uploads very quickly.
But when there’s a directory of between 1 and 10 GB, made up of many subdirectories and files of various sizes, and I’m using the web interface in Firefox, wow does it take a long time!
For a few minutes, one of the cores goes all the way to 100%, and the RAM starts filling up until around 4.5 GB is used. After 5–10 minutes it actually starts to upload files, but wow is the transfer slow! It starts with files that are 10–100 KB in size and hangs for a bit on each one. Firefox keeps popping up a message saying that a script is hanging and asking whether I want to kill it or wait. I can’t believe that in the 21st century, files under 1 MB take 10 seconds or more just to process and upload. A larger file of 1 MB or more seems to have about the same lag before it actually uploads, which at least improves the average data/time rate.
How does a large number of files cause so much lag? One of the cores is constantly at 100%, never ceasing, until it crashes after about an hour.
I have read that there’s a default one-hour time limit for a transfer, but I thought that applied to a single file, not a whole directory of files. I guess it applies to a multi-file transfer too? I have also read that increasing the data chunk size Seafile uses from 1 MB to 2 MB or more can help. Would that speed up a transfer of many small files?
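For reference, here is roughly where I believe those settings live, based on what I’ve read in the Seafile manual and Apache docs. I haven’t verified these against my own install yet, so treat the directive names and values as a sketch to double-check, not a confirmed fix:

```
# Apache side (the Seafile vhost config):
# raise the reverse-proxy timeout for the file server path,
# since large transfers can exceed Apache's default timeout.
ProxyPass /seafhttp http://127.0.0.1:8082 timeout=3600

# Seafile side (seafile.conf):
[fileserver]
# Lifetime of the web upload/download token, in seconds.
# The default of 3600 (one hour) may explain the "Network error"
# after about an hour.
web_token_expire_time = 14400
# Block size for web uploads, in MB -- possibly the "chunk" setting
# mentioned above; I'm not certain this is the exact option name.
fixed_block_size = 2
```

If anyone can confirm whether these are the right knobs for the one-hour cutoff and the chunk size, that would help.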
I want to use Seafile because people say it has very fast sync, while Nextcloud’s sync is known to be very slow for large numbers of files. But so far I can’t even get Seafile to do what I think should be a pretty straightforward upload of files.