Data loss after Seafile server reboot

Something strange happened:

Yesterday I had my Seafile client (Linux) running and it reported “file updated” messages on the screen for each file I modified in my home dir.
So the CLIENT seemed to work fine!

The web interface did NOT work. It said something like “Page unavailable due to a server hiccup”.

After several reboots, almost two months’ worth of my files are gone :frowning:

It seems the Seafile server somehow forgot about them, and the client may have treated them as deleted.

Does anyone have any idea what to check? Is there at least a small chance that my data is still somewhere on the server?

I already ran a seaf-fsck on that repo, but it did not report anything. Just “seaf-fsck run done”.
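
For reference, the check was roughly this (assuming the usual layout with a seafile-server-latest directory under /haiwen; the repo ID is a placeholder):

    # check a single library for consistency; without -r / --repair nothing is modified
    cd /haiwen/seafile-server-latest
    ./seaf-fsck.sh <repo-id>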

Any help is really appreciated, as I do not have any backup :-o

Versions:

seafile-server-6.0.7
Ubuntu 14.04.5 LTS

thanks, Thomas

The repositories may still be there. You can try going to a terminal window and checking the amount of space that the Seafile data folder is using, to see if it’s close to what you think it should be. The ideal way would be to get the webui working again and then check the libraries to make certain they are intact.

Thanks a lot for your fast reply!

The web UI and the client GUI are working. The GUI said the repo is in sync. Browsing the repo in the Seafile web interface shows the data is missing there as well!

Now, about your advice regarding the file system:

My local folder is using 18G of space.
The path on the server:
/haiwen/seafile-data/storage/blocks$ du -sh 52e7434c-a4bc-49eb-9541-ffb20d4fec08
uses 22G of space.

At least it is more, but I am not missing 4G of data. Just documents, maybe 100MB or so.

What should I check next? Do an export via seaf-fsck?
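
If I read the manual right, the export would look something like this (the export directory is just an example):

    # dump the library contents as plain files into a directory
    cd /haiwen/seafile-server-latest
    ./seaf-fsck.sh --export /tmp/seafile-export <repo-id>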

thanks, Thomas

Ah shit, the library is encrypted, so an export does not work.
A check-only run did work, but it did not give any results.

Hm, I don’t think it looks good:

I checked the file system folders:

…/storage/blocks/{repo}/*
…/storage/commits/{repo}/*
…/storage/fs/{repo}/*

None of them has any file newer than the date from which I am missing data.
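
This is roughly how I checked (GNU find; the date is the last one I still have data for):

    REPO=52e7434c-a4bc-49eb-9541-ffb20d4fec08   # the affected library
    cd /haiwen/seafile-data/storage
    # anything written after the last known-good date?
    find blocks/$REPO commits/$REPO fs/$REPO -type f -newermt "2018-03-23"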

How can that happen? Almost two months of data must have left some traces somewhere?! Are there any transaction logs?

thanks,

Thomas

So, just to get it right, because you didn’t explicitly mention it:

This means you already checked your library history and trash, right?

No, I did not check that, thanks!
Now I did it via the web GUI:
  • in the trash there is no missing file
  • in the history there is a gap of exactly the missing time range:

[…] newer entries
Added …2018-05-17
Added …2018-03-23
[…] older entries

So I DO know it was working yesterday before I rebooted.
I DO remember that today the GUI said “repo damaged” and I rebooted again. After the reboot it synced successfully and the data was gone.

And finally, something I just noticed: the timestamp on my Seafile server says:
Sat Jul 7 07:18:41 CEST 2018

I don’t know why that happened (OK, there is no ntpd running).
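
(For the record, fixing that on 14.04 is basically just this; the pool server is an example:)

    # one-off correction of the clock, then keep it in sync
    sudo ntpdate pool.ntp.org     # ntpdate may need installing first
    sudo apt-get install ntp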

Can you imagine some strange side effects because of the wrong date?

It is possible for the wrong date to be an issue with some programs, but I’m not certain about Seafile. Most likely, it would affect histories, if anything. But with a gap of two months and the server date off by two months, it sounds like your server traveled through time. :joy:

All joking aside, though, versioning and history would be the first places I would look, provided I didn’t find what I was looking for just by browsing the libraries.

Of course you’re right! I rarely use the client. I just have it running, and once in a while I need it, so I am not aware of a lot of basic features.

Anyways, about the time stamp:

I last booted the box on 26.03.2018.
17.05.2018 is today = 52 days difference.
07.07.2018 is the future date of my time-travelled server = another 51 days difference.
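
(Quick sanity check of the day counts:)

    # days between the dates
    echo $(( ( $(date -d 2018-05-17 +%s) - $(date -d 2018-03-26 +%s) ) / 86400 ))   # 52
    echo $(( ( $(date -d 2018-07-07 +%s) - $(date -d 2018-05-17 +%s) ) / 86400 ))   # 51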

So that probably somehow caused the problem.

Any ideas where else to look for the missing data?

The error you got about the repo being damaged is most concerning. Yet, after you rebooted, it corrected itself and started syncing? Do the server logs around that time frame report anything?

What kind of hardware is being used? What’s in the logs? My first thought when reading this was a hardware failure.
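
The usual places to look, assuming the standard layout under /haiwen:

    # Seafile server logs
    less /haiwen/logs/seafile.log
    less /haiwen/logs/ccnet.log
    less /haiwen/logs/seahub.log
    # kernel/system log for disk or memory errors
    dmesg | less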

@shoeper

That is also the direction I’m leaning. Either memory or the hard drive.

In the meantime I focused on rescuing the deleted files on my client system using an ext4-capable recovery tool. I got most of them back.

The server acts weird. Maybe you’re right regarding a hardware issue. It is a Cubietruck and ran very stably for two years.
But even if it has a problem, there is not a single file dated within the missing time window. It looks as if the machine was not powered on. Maybe it got hacked?!

OK, I will keep digging into it, and once I get back to the Seafile issues, I will give an update.
Thanks, so far,

Thomas

Do you use a hard disk or the internal storage?

I also had a Cubietruck before, and its internal storage was dead within just a few months.

I’ve also read that SSDs can have the issue that they work until power is lost and then lose data that could be read before (the internal storage is also flash storage).

For a reliable system I strongly recommend building/buying a server with an x86 CPU, ~2 GB of RAM, an (optional) SSD for the host system, and at least 2 disks in RAID 1 (using e.g. ZFS and configuring a monthly scrub).
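
A minimal sketch of what I mean (run as root; pool and device names are just examples):

    # two whole disks as a mirrored (RAID 1) ZFS pool
    zpool create tank mirror /dev/sdb /dev/sdc
    # scrub it once a month, e.g. on the 1st at 03:00
    echo '0 3 1 * * root /sbin/zpool scrub tank' > /etc/cron.d/zpool-scrub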

I use a microSD card for the OS and application files and an external USB HDD for the Seafile data.

So the theories are getting clearer now:

After some uptime, my root file system changes to read-only. I found one entry in the logs reporting an inode failure that caused that change. So I probably have a media issue with my SD card.
The strange thing is that the system acted normally: I could upload files into a repo (one I do not sync, just store) but after the reboot I could not see the data in my Seafile client.
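
(For reference, this is roughly how I found the entry; the exact messages will differ:)

    # ext4 errors and read-only remounts end up in the kernel/system log
    dmesg | grep -iE 'ext4|read-only|remount'
    grep -iE 'ext4.*(error|remount)' /var/log/syslog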

Anyways, my action plan:

  1. Reimage a new SD card with the dd image I took right before the problems began (see the sketch after this list)!
  2. Delete my sensitive-data repo from Seafile
  3. Resync it from scratch from the client into a new repo
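
For step 1, the reimaging is basically just dd in the other direction (image name and device are from my setup; double-check the device with lsblk first):

    # write the backup image back onto the SD card
    sudo dd if=sdcard-backup.img of=/dev/mmcblk0 bs=4M
    sync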

One question I do have:

There are other repos that hold several hundred gigs of data that I just use as cloud webspace. I “successfully” uploaded a lot of data into them in the last days before I noticed the trouble. So on my external HDD there are physically more files than are shown in my Seafile GUI.
So after reimaging I have a running Seafile server from two months ago and a data disk that contains additional files. Can I somehow get the Seafile server to repair its state? Is that what seaf-fsck.sh does?

thanks, Thomas

Yes, you can do this with seaf-fsck.

No, I don’t think seaf-fsck is helpful in this case.

Your issue is that the database is on the SD card, which has been written to death. Because of this, the pointer to the most recent version can get lost / reset at any time.

Make sure not to have anything write to your SD card regularly. Best would be to only boot from the SD card (or, if possible, directly from the disk) and then run the system from the external disk.
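
One way to do that (just a sketch; the mount point of the external disk and the paths are examples) is to bind-mount the write-heavy directories onto the external disk via /etc/fstab:

    # /etc/fstab -- keep logs and the Seafile install (including its databases) off the SD card
    /media/usbdisk/var-log   /var/log   none   bind   0   0
    /media/usbdisk/haiwen    /haiwen    none   bind   0   0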

For a reliable system I still recommend a setup like the one described above. I have run one since 2014 and have had no major issues or any data loss since. I used (developer) boards like the Cubietruck and BeagleBone before and had issues with lost data within just a few months.

Hi,
thanks, I got your point about switching as much as possible to the hard disk. I will probably go for that.

For now, you’re right, seaf-fsck did not help. I just ran it with the “-r” option and it did not change anything.

The exact status is:

  • reimaging of the Seafile SD card successfully done
    -> it now holds the Seafile “SD card” state from two months ago
  • the external HDD probably contains data that I uploaded during the “broken” Seafile server phase.

Do you think the files are still there, or did Seafile delete them since the new SD card has no reference to them?
If the files are still there, is there any other chance to make them appear in the Seafile GUI?

thanks a lot,

Thomas

The files are still there. Here is another post on the forum about how to find the latest state and update the database to reflect it.

See Database recovered from backup - data intact