The database is best suited for well-structured data; in the case of SeaFile this means, for example, the information about the accounts, the libraries, the shares, and so on.
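To make the distinction concrete, here is a minimal sketch of what such well-structured metadata looks like. The table and column names are my own simplified illustration, not SeaFile's actual schema:

```python
import sqlite3

# Hypothetical, heavily simplified metadata schema for illustration only --
# SeaFile's real tables differ, but the *kind* of data is the same.
con = sqlite3.connect("metadata.db")
con.executescript("""
    CREATE TABLE account (email TEXT PRIMARY KEY, created_at TEXT);
    CREATE TABLE library (repo_id TEXT PRIMARY KEY, name TEXT,
                          owner TEXT REFERENCES account(email));
    CREATE TABLE share   (repo_id TEXT REFERENCES library(repo_id),
                          to_email TEXT REFERENCES account(email),
                          permission TEXT CHECK (permission IN ('r', 'rw')));
""")
con.execute("INSERT INTO account VALUES ('alice@example.org', '2024-01-01')")
con.execute("INSERT INTO library VALUES ('a1b2', 'Documents', 'alice@example.org')")
con.commit()

# Questions like "which libraries does alice own?" are what databases excel at.
for row in con.execute("SELECT repo_id, name FROM library WHERE owner = ?",
                       ("alice@example.org",)):
    print(row)
```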
While it is technically possible to store binary data inside a database as well, it is a sound design decision not to use this mechanism for storing the contents of the library files on the server side. Reading and writing megabytes or gigabytes of data blocks directly from/to the filesystem is simpler and more performant than pumping the data through the additional functional layers of SQL into the database.
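For contrast, here is a minimal sketch of the filesystem side, assuming a simple content-addressed layout where blocks are named by their hash and fanned out into subdirectories. This mirrors the general idea, not SeaFile's exact on-disk format:

```python
import hashlib
from pathlib import Path

STORE = Path("blocks")

def write_block(data: bytes) -> str:
    """Store a block under a name derived from its content hash."""
    digest = hashlib.sha1(data).hexdigest()
    path = STORE / digest[:2] / digest[2:]   # two-level fan-out keeps directories small
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)                   # plain file I/O, no SQL layer in between
    return digest

def read_block(digest: str) -> bytes:
    return (STORE / digest[:2] / digest[2:]).read_bytes()

block_id = write_block(b"some chunk of a synced file")
assert read_block(block_id) == b"some chunk of a synced file"
```

The whole write path is a hash, a mkdir, and a file write; the operating system's page cache and I/O scheduler do the heavy lifting for free.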
My installation, for example, hosts a total of ~1.3 TB from 9 users, which I consider a moderately sized instance. This is an easy amount of data for a filesystem, and even a full backup copy of the server is no big issue. A 2 TB SSD is enough to hold it, and even with a low-power processor like an Atom C3558 I can still saturate the full gigabit bandwidth when a sync takes place over the LAN. No fine-tuning was required; Ubuntu and Docker on that box run on their defaults, simply because the filesystem is the easiest and best-suited option for this task.
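A quick back-of-envelope check (my own numbers, not a benchmark) shows what "gigabit is enough" means at this scale:

```python
# A saturated gigabit link moves at most ~125 MB/s, so even a full
# initial sync of the entire 1.3 TB instance finishes in roughly
# three hours -- ignoring protocol overhead.
total_bytes = 1.3e12        # ~1.3 TB of library data
link_speed = 1e9 / 8        # 1 Gbit/s = ~125 MB/s
hours = total_bytes / link_speed / 3600
print(f"full sync at line rate: ~{hours:.1f} h")   # ~2.9 h
```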