I would always recommend a secure erase of an SSD if you want a "fresh
start". That marks all the NAND cells as free again, which benefits the
longevity of the device and its wear levelling.
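
For anyone who hasn't done one before, here's a rough sketch of the usual
hdparm route, wrapped in a bit of Python. The device path and password are
placeholders, and it assumes the drive is not security-frozen and not
mounted - run it from a live USB, not the installed system:

    #!/usr/bin/env python3
    # Sketch only: trigger the drive firmware's ATA secure erase via hdparm.
    # /dev/sdX is a placeholder - triple-check it before running!
    import subprocess

    DEV = "/dev/sdX"   # placeholder: the SSD to wipe
    PASS = "erase"     # temporary ATA password; cleared by the erase itself

    def run(*args):
        print("+", " ".join(args))
        subprocess.run(args, check=True)

    run("hdparm", "-I", DEV)  # check the security section says "not frozen"
    run("hdparm", "--user-master", "u", "--security-set-pass", PASS, DEV)
    run("hdparm", "--user-master", "u", "--security-erase", PASS, DEV)

(blkdiscard on the whole device does much the same job via TRIM, if you'd
rather not touch the ATA security feature set.)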

I've been messing about with native exfat over the past few months and
found it a pretty decent file system for a partition shared with MS
Windows. Read performance will saturate a 3 Gbit/s SATA link, but write
performance is only on the order of 100 MB/s.
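
If anyone wants to try it, here's a bare-bones sketch of setting one up -
again just the usual tools wrapped in Python, with the partition and mount
point as placeholders, and assuming mkfs.exfat plus an exfat driver are
installed:

    #!/usr/bin/env python3
    # Sketch only: format a partition as exFAT and mount it for shared use.
    # /dev/sdX1 and /mnt/shared are placeholders - formatting destroys data!
    import subprocess

    PART = "/dev/sdX1"    # placeholder: the shared-data partition
    MNT = "/mnt/shared"   # placeholder: where to mount it

    subprocess.run(["mkfs.exfat", PART], check=True)
    subprocess.run(["mkdir", "-p", MNT], check=True)
    subprocess.run(["mount", "-t", "exfat", PART, MNT], check=True)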

Personally, having been burned by btrfs, I would not try one of these
"experimental" file systems again... That was the same sort of pattern as
your experience. I carefully followed the Arch Wiki (large partition size
because of COW issues, etc.) and was using it as root / on my home-brew
NAS running OpenSUSE. One day it just "blew up" and recovery was a real
mess (I did manage to get the few small bits of data I needed with some
Googling), as none of the btrfs recovery tools actually work! It's back to
ext4 for root / - now running Arch on that box... Ironically, the native
ZFS port has always been stable on that box (with a very large storage
array)!

Just my $0.02!!

On 24 February 2015 at 00:46, Peter Humphrey <pe...@prh.myzen.co.uk> wrote:

> Some list members might be interested in how I've got on with f2fs
> (flash-friendly file system).
>
> According to genlop I first installed f2fs on my Atom mini-server box on
> 1/11/14 (that's November, for the benefit of transpondians), but I'm
> pretty sure it must have been several months before that. I installed a
> SanDisk SDSSDP-064G-G25 in late February last year and my admittedly
> fallible memory says I changed to f2fs not many months after that, as
> soon as I discovered it.
>
> Until two or three weeks ago I had no problems at all. Then while doing
> a routine backup tar started complaining about files having been moved
> before it could copy them. It seems I had a copy of an /etc directory
> from somewhere (perhaps a previous installation) under /root, and some
> files, when listed, showed question marks in all fields except their
> names. I couldn't delete them, so I re-created the root partition and
> restored from a backup.
>
> So far so good, but then I started getting strange errors last week. For
> instance, dovecot started throwing symbol-not-found errors. Finally,
> after remerging whatever packages failed for a few days,
> /var/log/messages suddenly appeared as a binary file again, and I'm
> pretty sure that bug's been fixed.
>
> Time to ditch f2fs, I thought, so I created all partitions as ext4 and
> restored the oldest backup I still had, then ran emerge -e world and
> resumed normal operations. I didn't zero out the partitions with dd;
> perhaps I should have.
>
> I'll watch what happens, but unless the SSD has failed after only a year
> I shouldn't have any problems.
>
> An interesting experience. Why should f2fs work faultlessly for several
> months, then suffer repeated failures with no clear pattern?
>
> --
> Rgds
> Peter.
>
>


-- 

All the best,
Robert
