I have a web-based system that consumes a Seafile shared library through davfs and serves certain files from it to my clients. There are about 40k files in total, and currently, when a new file is dropped in through a client, we iterate through the whole list every 2 hours and update the web portal. I'd like the process to run every 5 minutes, but the problem is that it can take up to 90 minutes just to list all files through davfs, and after that we still have to iterate through the listing, so it's not really efficient.
I looked at the API and the history call, which seemed promising, but what I'm struggling with is: how do I get from an item in the history result to the object's (i.e. file's) name and path? I just can't seem to figure out a way.
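For reference, this is roughly how I'm calling the history endpoint (the server URL, library id and token below are placeholders, and I'm assuming the v2.1 repo-history route from the web API docs):

```python
import json
from urllib import request, parse

def history_url(server, repo_id):
    # v2.1 repo-history endpoint (assumed from the Seafile web API docs)
    return f"{server}/api/v2.1/repos/{repo_id}/history/"

def fetch_history(server, repo_id, token, page=1, per_page=25):
    """Return one page of the library's commit history as parsed JSON."""
    url = history_url(server, repo_id) + "?" + parse.urlencode(
        {"page": page, "per_page": per_page})
    req = request.Request(url, headers={"Authorization": f"Token {token}"})
    with request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

As far as I can tell, each item that comes back is a commit with a human-readable description rather than explicit file paths, which is exactly where I get stuck.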
Perhaps there are other ways as well?
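For example, one fallback I've been considering is to skip davfs entirely: poll a recursive file listing through the web API every 5 minutes and diff it against the previous snapshot. A rough sketch, assuming the v2.1 `dir/` endpoint honors `recursive=1` and returns `parent_dir`/`name`/`type` fields (all assumptions on my part):

```python
import json
from urllib import request, parse

def list_repo_files(server, repo_id, token):
    """Fetch a recursive file listing via the web API instead of davfs.
    Endpoint and response fields are assumptions based on the v2.1 docs."""
    url = (f"{server}/api/v2.1/repos/{repo_id}/dir/?"
           + parse.urlencode({"p": "/", "recursive": "1"}))
    req = request.Request(url, headers={"Authorization": f"Token {token}"})
    with request.urlopen(req, timeout=60) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    # join parent_dir + name into full paths, keeping plain files only
    return {e["parent_dir"].rstrip("/") + "/" + e["name"]
            for e in data.get("dirent_list", [])
            if e.get("type") == "file"}

def diff_listings(previous, current):
    """Pure helper: (new paths, removed paths) since the last poll."""
    return sorted(current - previous), sorted(previous - current)
```

Even with 40k entries, a single API listing should be far faster than walking the davfs mount, and diffing two path sets is cheap. Still, if the history call can be made to yield names and paths directly, that would be cleaner.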