Slow upload through web browser

Although I'm now getting faster uploads, it should go even faster; it seems it can only handle around 28 MB/s.
It's the same issue with downloads through the web browser.
I have tried both ethernet and wifi, and it's still the same.
I have also tried different web browsers, and it's still the same issue.

I have also tried multiple parallel uploads, but then the speed drops on the second one, so it seems there is some kind of limit at 28 MB/s.
How can I remove that limit?

I don't have that issue with the Seafile Drive client; there I get uploads of 90 MB/s.

Can someone help me? I can't buy the Pro version until I get everything to work =/ My deadline is next Friday and I really want to present Seafile. =/

@DerDanilo @shoeper


    server {
        listen       80;
        server_name xxx;
        rewrite ^ https://$http_host$request_uri? permanent;    # force redirect http to https
        server_tokens off;
    }
    server {
        listen 443;
        ssl on;
        ssl_certificate /etc/nginx/ssl/seafile.crt;        # path to your cacert.pem
        ssl_certificate_key /etc/nginx/ssl/seafile.key;    # path to your privkey.pem
        server_name xxx;
        proxy_set_header X-Forwarded-For $remote_addr;

        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
        server_tokens off;

        location / {
            fastcgi_pass    127.0.0.1:8000;
            fastcgi_param   SCRIPT_FILENAME     $document_root$fastcgi_script_name;
            fastcgi_param   PATH_INFO           $fastcgi_script_name;

            fastcgi_param   SERVER_PROTOCOL        $server_protocol;
            fastcgi_param   QUERY_STRING        $query_string;
            fastcgi_param   REQUEST_METHOD      $request_method;
            fastcgi_param   CONTENT_TYPE        $content_type;
            fastcgi_param   CONTENT_LENGTH      $content_length;
            fastcgi_param   SERVER_ADDR         $server_addr;
            fastcgi_param   SERVER_PORT         $server_port;
            fastcgi_param   SERVER_NAME         $server_name;
            fastcgi_param   REMOTE_ADDR         $remote_addr;
            fastcgi_param   HTTPS               on;
            fastcgi_param   HTTP_SCHEME         https;

            access_log      /var/log/nginx/seahub.access.log;
            error_log       /var/log/nginx/seahub.error.log;
            fastcgi_read_timeout 36000;
            client_max_body_size 0;
        }
        location /seafhttp {
            rewrite ^/seafhttp(.*)$ $1 break;
            proxy_pass http://127.0.0.1:8082;
            client_max_body_size 0;
            proxy_connect_timeout  36000s;
            proxy_read_timeout  36000s;
            proxy_send_timeout  36000s;
            send_timeout  36000s;
            proxy_request_buffering off;
        }
        location /seafdav {
            fastcgi_pass    127.0.0.1:8080;
            fastcgi_param   SCRIPT_FILENAME     $document_root$fastcgi_script_name;
            fastcgi_param   PATH_INFO           $fastcgi_script_name;

            fastcgi_param   SERVER_PROTOCOL     $server_protocol;
            fastcgi_param   QUERY_STRING        $query_string;
            fastcgi_param   REQUEST_METHOD      $request_method;
            fastcgi_param   CONTENT_TYPE        $content_type;
            fastcgi_param   CONTENT_LENGTH      $content_length;
            fastcgi_param   SERVER_ADDR         $server_addr;
            fastcgi_param   SERVER_PORT         $server_port;
            fastcgi_param   SERVER_NAME         $server_name;
            fastcgi_param   HTTPS               on;
            fastcgi_param   HTTP_SCHEME         https;

            client_max_body_size 0;
            proxy_connect_timeout  36000s;
            proxy_read_timeout  36000s;
            proxy_send_timeout  36000s;
            send_timeout  36000s;

            # This option is only available for Nginx >= 1.8.0. See more details below.
            proxy_request_buffering off;

            access_log      /var/log/nginx/seafdav.access.log;
            error_log       /var/log/nginx/seafdav.error.log;
        }
        location /media {
            root xxxx;
        }
    }

Ok, so after I deleted "http2" from the NGINX settings I'm getting 28 MB/s upload, which is kind of good, but it could be faster.
I thought HTTP/2 was supposed to be faster than plain HTTP/1.1.
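
For anyone else hitting the same thing: http2 is just a parameter on the listen directive, so the change amounts to removing that one word. Hypothetical before/after; my config above already uses the older "listen 443;" + "ssl on;" form with http2 removed:

    # before (hypothetical): HTTP/2 enabled on the TLS listener
    # listen 443 ssl http2;

    # after: plain HTTPS over TLS
    listen 443 ssl;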

It looks like there is a bottleneck somewhere. What's your network topology, and could you have a look at the server load while downloading? The disk load in particular would be interesting.

Seafile Drive works differently. It does not download whole files but blocks; these blocks are downloaded in parallel and then assembled into the file.

Ok, I'll take a look later and get back to you with the info regarding disk activity while I'm uploading/downloading.

But my server is:
Xeon E3-1265L v2
16GB DDR3 RAM (8GB dedicated to Seafile)
The Seafile storage is on its own WD Red 3TB
The Seafile OS (Ubuntu) is on an SSD.

See my comment above as well; disk E is the WD Red that holds the VHDX with the Seafile storage on it.
If this is not good enough, just tell me how to check it better and I'll do that.
The only thing happening on that disk is my upload of a 6GB file through the web browser.

It looks like the disk just isn’t faster.

But the same disk can handle 90 MB/s through the client, so why is that? Can I somehow configure things so browser uploads are as fast as uploads from the client?

@shoeper

@Jonathan is the indexing of uploaded data via Seahub started while the upload is in progress or after it has been fully uploaded?

How many nginx workers did you set? How many worker connections are allowed?

It uploads the file and then splits it into chunks, which takes additional disk I/O. I don't know exactly whether this happens only in the /tmp folder or somewhere else. Maybe @shoeper can help here.

    worker_processes  8;

    events {
      worker_connections 8096;
      multi_accept on;
      use epoll;
    }

On our servers we don't have such issues, as we run a HW RAID 10 with 2GB cache and a BBU on SAS drives, which are pretty fast.

Have you enabled resumable uploads?

And what is the latency (from client to server)?

What about the load on the client side? I'm just performing a large upload, it is at 33 MiB/s, and my Chrome uses quite some CPU. It's a desktop i5 with quite some power, so the client side can also be a limiting factor. In the Chrome dev tools I can see that the file is being split into 1 MiB parts and uploaded that way. So from my point of view it is very likely that the disk doesn't allow higher speeds, as normal disks reach at best around 200 IOPS.

@shoeper

I have not enabled resumable uploads; should I do that?
If I understand it correctly, that is so you can resume an upload if it gets disconnected?

My client can handle it perfectly; it's an i7 (recent model) with 24GB of RAM and an NVMe PCIe Samsung 960 EVO.

I'm just curious how the disk can handle 90 MB/s over an SMB file transfer and also over the client transfer,
but not over the web browser.

Worker processes in nginx? I have no idea what that means.
Where do I set that? I'm guessing in the nginx config, right?

What are the recommended settings for that config?

Same for downloads, around 40 to 70 MB/s for me here (max). SMB is a lot faster than nginx.
Only the aio module can maybe speed it up.

worker_processes auto; is just fine by default, as it detects your specs (by default 8 active workers on an i7).
My upload speed is also around 28 MB/s on a gbit connection.
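
If it helps, a minimal sketch of where that lives, assuming the usual layout where the main config is /etc/nginx/nginx.conf (the path and numbers are just examples, adjust them to your distro):

    # /etc/nginx/nginx.conf (top level, outside the http{} block)
    worker_processes auto;        # spawn one worker per detected CPU core

    events {
        worker_connections 1024;  # example value; raise it if many clients connect in parallel
        multi_accept on;
    }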

You can try compiling nginx from source for your OS with the --with-file-aio flag and testing that.

For Ubuntu 16.04: "Install the Latest Nginx From Source on Ubuntu 16.04" – GeoffStratton.com; add the flag there as well.
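
Roughly, once nginx is built with that flag, aio is then turned on in the config. This is a sketch of the syntax only; the sizes are just examples, and whether it actually helps the proxied /seafhttp traffic (as opposed to files nginx serves from disk itself) is not something I can promise:

    # inside a server{} or location{} block; requires an nginx built with --with-file-aio
    aio on;                 # asynchronous file I/O for reads
    directio 512k;          # on Linux, kernel AIO is only used together with directio
    output_buffers 1 512k;  # example buffer size for the async reads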

@TomvB

Hi, so you also get around 30 MB/s as the maximum download/upload through the web browser?

Where can I find the worker_processes auto setting? In which config file?

Does anyone know if this is also the case over WebDAV?

Because, as I have written, the client, SMB, internal transfers etc. are all faster, around 111 MB/s.

Here are my settings:

What kind of downloads? Nginx is almost as fast as SMB here (the difference is less than 5% at 100 MiB/s+), but it heavily depends on the workload. One cannot compare random access to small files through an API with transferring big files via SMB.

@shoeper

My downloads are around 38 MB/s through the browser. It does seem like maybe some bug or something.
I can't really believe it's the server, since the issue appears on both upload and download and only in the web browser.

Did you also try different web browsers? Firefox, Chrome?
NGINX is not as fast as SMB… That's why I've been looking for solutions; file-aio makes 1 gbit possible.

(SMB in my case > 105 MB/s)
(NGINX in my case > 80 MB/s, with unstable drops to 40 MB/s because of the large data)

Hi,
Yes, I have used different browsers and it's the same.

It does seem that we are the only ones with this issue, as the others in this thread don't seem to be affected =/