HTTP/2 drastically slows down upload and download

While testing performance we noticed that uploads and downloads are drastically slower with HTTP/2 enabled. We tested this on multiple systems in multiple geographic locations, with different hardware and OS versions. HTTP/2 turns out to be a throughput bottleneck, whereas it is supposed to improve throughput.

Some example numbers to show the huge difference we are talking about (MB/s, not Mbit/s):

  • Upload with http/2 enabled: 9 MB/s
  • Upload with http/2 disabled: 52 MB/s
  • Download with http/2 enabled: 18 MB/s
  • Download with http/2 disabled: 120 MB/s

Tested with: CE and PRO 7 + 8 (different versions)
Nginx runs on all instances as the public-facing proxy.

Important fun fact: when yet another proxy, e.g. haproxy, is placed in front of nginx, the problem doesn’t seem to exist!

[ "Seafile/Seahub" --> "Nginx" with private CA signed cert ] -->
[ haproxy --> public ssl offloading ] --> Internet --> Webbrowser/Client

I found another http/2 related thread that might give some clues.

@daniel.pan @Jonathan Can you please check what might cause the issue here?
@Community: Maybe someone else can run some tests and report their findings.

Thanks in advance!


What protocol do you use between Nginx and Seafile?

Seafile + Seahub + SeafDav run locally and nginx talks to them via localhost; no host or network in between.

Thanks for checking.

Here is the nginx config from one of the reference systems:

server {
  listen          80;
  server_name     my.domain.com;
  server_tokens   off;

  location '/.well-known/acme-challenge' {
        default_type "text/plain";
        root /opt/seafile/certbot-webroot;
  }

  # a server-level "return" runs before location matching and would
  # shadow the ACME challenge location, so scope the redirect instead
  location / {
        return 301 https://$http_host$request_uri;
  }

}

server {
  listen          443 ssl;
  #listen         443 ssl http2;
  #listen         [::]:443 ssl http2;
  server_name     my.domain.com;
  server_tokens   off;

  ssl_certificate /etc/nginx/ssl/lefullchain.pem;
  ssl_certificate_key /etc/nginx/ssl/lekey.pem;

  proxy_set_header X-Forwarded-For $remote_addr;
  add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";

  root /usr/share/nginx/html/;

  location '/.well-known/acme-challenge' {
        default_type "text/plain";
        root /opt/seafile/certbot-webroot;
  }

  location / {
         proxy_pass         http://127.0.0.1:8000;
         proxy_set_header   Host $host;
         proxy_set_header   X-Real-IP $remote_addr;
         proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header   X-Forwarded-Host $server_name;
         proxy_set_header   X-Forwarded-Proto https;
         proxy_http_version 1.1;
         proxy_connect_timeout  36000s;
         proxy_read_timeout  36000s;
         proxy_send_timeout  36000s;
         send_timeout  36000s;

         # used for view/edit office file via Office Online Server
         client_max_body_size 0;

         access_log      /var/log/nginx/seahub.access.log;
         error_log       /var/log/nginx/seahub.error.log;
  }

  location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_connect_timeout  36000s;
        proxy_read_timeout  36000s;
        proxy_send_timeout  36000s;
        send_timeout  36000s;
        proxy_request_buffering off;
        proxy_http_version 1.1;
  }

  location /media {
        root /opt/seafile/haiwen/seafile-server-latest/seahub;
  }

  location /seafdav {
        set $destination $http_destination;
        if ($destination ~* ^https?://[^/]+(/seafdav/.+)$) {
                set $destination $1;
        }
        proxy_set_header Destination $destination;

        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host $host;
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Host $server_name;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_connect_timeout  36000s;
        proxy_read_timeout  36000s;
        proxy_send_timeout  36000s;
        send_timeout  36000s;

        client_max_body_size 0;
        proxy_request_buffering off;
        proxy_buffering off;

        # kill cache
        add_header Last-Modified $date_gmt;
        add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
        if_modified_since off;
        expires off;
        etag off;

        access_log      /var/log/nginx/seafdav.access.log;
        error_log       /var/log/nginx/seafdav.error.log;
  }

}

Just an idea: maybe the translation from HTTP/2 on the public side to HTTP/1.1 on the internal proxy connection is the problem?

Also thought about that. Some time ago this was the recommended way to configure the nginx vhost for Seafile to make it work properly. Maybe it’s time to remove it now.
@Jonathan Can you please confirm whether proxy_http_version 1.1; is still required?

Thanks

The default is 1.0, so removing it would be a step backwards.
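For context, a sketch of why the directive matters (the upstream block and its name are illustrative, not from the thread): HTTP/1.1 to the backend enables keepalive and chunked transfer encoding, while the HTTP/1.0 default would open a new backend connection per request.

```nginx
# Hypothetical sketch: reusable backend connections require HTTP/1.1.
upstream seafhttp_backend {
    server 127.0.0.1:8082;
    keepalive 32;                        # pool of idle backend connections
}

server {
    # ... listen/ssl directives as in the config above ...

    location /seafhttp {
        proxy_pass http://seafhttp_backend;
        proxy_http_version 1.1;          # keepalive needs HTTP/1.1
        proxy_set_header Connection "";  # clear "close" so keepalive works
    }
}
```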

Which protocol is Seafile Client actually using in its request (according to nginx logs)?

Also, it looks like there was at least one HTTP/2-related bug in Qt, which I suspect the client uses (Qt Bug Tracker).

Are connections reused (https://serverfault.com/a/1018581/602509)?

We tested with a few combinations:

  1. Nginx with proxy_request_buffering set to ‘on’: Nginx buffers the entire file before sending it to the upstream (seaf-server). In this setting the upload speed is decided entirely by Nginx. Result: the upload speed is normal.
  2. Nginx with proxy_request_buffering set to ‘off’: Nginx forwards the data to seaf-server as it arrives. In this setting the upload speed is decided by both Nginx and seaf-server, and Nginx converts from HTTP/2 to HTTP/1.1. Result: the upload speed is only half of setting 1.
  3. Nginx with proxy_request_buffering set to ‘off’, with seaf-server replaced by the new fileserver written in Go. Result: the same as setting 2.
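The two modes above correspond to a single toggle in the /seafhttp location; a minimal sketch of the difference being tested:

```nginx
location /seafhttp {
    proxy_pass http://127.0.0.1:8082;

    # Setting 1: nginx spools the whole upload first, then replays it to
    # seaf-server at local speed; the client-facing speed is decided by
    # nginx alone (at the cost of temp space and delayed backend delivery).
    proxy_request_buffering on;

    # Settings 2/3: stream the body to the backend as it arrives; client
    # speed is now coupled to the backend and to nginx's HTTP/2-to-HTTP/1.1
    # conversion.
    #proxy_request_buffering off;
}
```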

All tests use Chrome as the client.

We can assume the HTTP handling in the Go standard library is quite efficient and standards-conforming. Given that there is no speed difference between settings 2 and 3, we think the bottleneck is the HTTP/2 to HTTP/1.1 conversion in Nginx.
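One possibly related knob, offered as a hypothesis rather than a confirmed fix: with request buffering off, nginx's per-stream HTTP/2 flow-control window can cap upload throughput, since the client may only have the preread buffer size (64k by default) in flight per stream. The directive below exists in nginx's ngx_http_v2_module since 1.11.0; the value shown is an illustrative guess, not something tested in this thread.

```nginx
server {
    listen 443 ssl http2;

    # Enlarge the HTTP/2 body preread buffer, which also sets the initial
    # per-stream flow-control window for request bodies (default 64k).
    http2_body_preread_size 512k;
}
```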
