On 02/21/2017 12:54 AM, Oleg Artemiev wrote:
> On Mon, Feb 20, 2017 at 11:09 PM, Chris Laprise <tas...@openmailbox.org> wrote:
>> On 02/20/2017 09:16 AM, Oleg Artemiev wrote:
>> I mean apart from what the installer can support, in your case (I've read
>> some of your other partitioning messages) it seems unnecessary.
> Yes, but the last time I installed openSUSE I saw an elegant, powerful
> interface - fewer things to do by hand.

Agreed, they have good admin UIs.


>> The idea that you have to treat SSDs as fragile has not withstood the
>> test of time. In fact, SSDs are widely regarded as /more/ durable than
>> HDDs now.
> I bought my HDD 3-4 years ago (don't remember exactly). Newer SSDs
> may be better.
> As for mine... I just want to spend one day on installation and then keep
> it for years. So why not think twice and make a setup that is simply
> better at resource utilization?
>> IIRC, just about any SSD post-2011 should be quite durable... so a Samsung
>> 830 or similar vintage should have no particular worries about longevity.
> Nice to hear, but anyway - how long do those "no particular worries" last?
> I mean that an SSD is, by design, less tolerant of writes than an HDD.
> Why not then partition in such a way that the SSD gets fewer writes?
> Making a custom partitioning scheme isn't really hard (except that you
> can't do it via the installer).

Manufacturer SSD durability estimates have increased greatly, along with the length of available warranties.

http://techreport.com/review/27436/the-ssd-endurance-experiment-two-freaking-petabytes/2

http://www.networkworld.com/article/2873551/data-center/debunking-ssd-myths.html

   "Exhaustive studies have shown that SSDs have an annual failure rate
   of tenths of one percent, while the AFRs for HDDs can run as high as
   4 to 6 percent."


For comparison, a Samsung 850 SSD is rated at 1.5 million hrs (EVO) and 2 million hrs (PRO) MTBF... that's at least 50 percent more than a premium HDD, which was typically rated around 1 million hours (although HDD mfgs stopped using MTBF ratings several years ago, probably because the SSDs were making them look terrible). The last time I recall reports of a surge in SSD failures (about 6 years ago) it wasn't even the flash that was at fault... the controllers were overheating, which was a common problem with many brands that used SandForce controllers circa 2011.

The components and algorithms that determine the reliability of consumer SSDs have changed a lot for the better. I don't think there is currently any reason to treat them as being in any way more fragile or wear-prone than HDDs.
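
As for partitioning so the SSD sees fewer writes: if you do go that route, the first step is knowing which device the kernel considers non-rotational. Here is a rough Python sketch (nothing Qubes-specific, it just reads lsblk's ROTA column); where the write-heavy volumes end up is of course your call:

#!/usr/bin/env python3
# Rough sketch: list whole disks and whether the kernel reports them as
# rotational (HDD) or non-rotational (SSD), using lsblk's ROTA column.
# Useful before deciding by hand which volumes go on which disk.
import subprocess

def list_disks():
    """Yield (device_name, is_ssd) for each whole disk on the system."""
    out = subprocess.check_output(
        ["lsblk", "-d", "-n", "-o", "NAME,ROTA"],  # -d: disks only, -n: no header
        universal_newlines=True,
    )
    for line in out.splitlines():
        name, rota = line.split()
        yield name, rota == "0"   # ROTA 0 = non-rotational, i.e. SSD

if __name__ == "__main__":
    for name, is_ssd in list_disks():
        print("/dev/{}: {}".format(name, "SSD" if is_ssd else "HDD"))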

>> BTW, besides not supporting RAID 5/6 (no big deal for me), the other
>> downside of using Btrfs is still free-space reporting. It still isn't done
>> in a realistic manner, IMO... you may need to keep 30-60GB of free space
>> at all times to avoid the fs going into read-only mode.
> omg... Thank you - now it looks like I don't want Btrfs anymore. %) I
> haven't used Btrfs yet - it's just that what I've read sounds promising.
> But paying 60 GB for that doesn't seem to be what I want.
> Better I'll go with custom LVM + some patching to get a read-only root. :)

I certainly understand that. So far I've been tolerating this aspect of using Btrfs. They actually did improve the reporting somewhat with the "filesystem usage" command; I'm hoping it gets even better.
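
If it helps, here is a rough sketch of how one could keep an eye on it from a script: it just parses "btrfs filesystem usage" and complains when unallocated space drops below some margin. The mount point and the 30 GiB threshold are placeholders, not recommendations.

#!/usr/bin/env python3
# Rough sketch: warn when a Btrfs filesystem's unallocated space gets low,
# which is where the "suddenly read-only" trouble tends to begin.
# MOUNTPOINT and THRESHOLD_BYTES are placeholders, not recommendations.
import re
import subprocess
import sys

MOUNTPOINT = "/"
THRESHOLD_BYTES = 30 * 1024 ** 3   # 30 GiB, arbitrary safety margin

def unallocated_bytes(mountpoint):
    """Parse 'Device unallocated' from `btrfs filesystem usage -b <path>`."""
    out = subprocess.check_output(
        ["btrfs", "filesystem", "usage", "-b", mountpoint],
        universal_newlines=True,
    )
    match = re.search(r"Device unallocated:\s+(\d+)", out)
    if match is None:
        sys.exit("could not parse btrfs output")
    return int(match.group(1))

if __name__ == "__main__":
    free = unallocated_bytes(MOUNTPOINT)
    print("unallocated: {:.1f} GiB".format(free / 1024.0 ** 3))
    if free < THRESHOLD_BYTES:
        print("warning: low unallocated space - consider freeing space "
              "or running a balance", file=sys.stderr)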

Just be aware that thin-provisioned LVM (it is /not/ traditional LVM) is also relatively new and has issues.
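
On that note, if you do go the thin-LVM route, the main failure mode to watch for is a thin pool filling up. A rough sketch along these lines (the 80% threshold is arbitrary) can report pool usage via the "data_percent" and "metadata_percent" reporting fields of lvs:

#!/usr/bin/env python3
# Rough sketch: report LVM thin-pool data/metadata usage and flag pools that
# are getting full (a full thin pool is the classic thin-provisioning failure).
# The 80% threshold is arbitrary.
import subprocess

THRESHOLD = 80.0   # percent

def thin_pools():
    """Yield (vg, lv, data_pct, meta_pct) for every thin pool on the system."""
    out = subprocess.check_output(
        ["lvs", "--noheadings", "--separator", "|",
         "-o", "vg_name,lv_name,lv_attr,data_percent,metadata_percent"],
        universal_newlines=True,
    )
    for line in out.splitlines():
        vg, lv, attr, data, meta = [field.strip() for field in line.split("|")]
        if attr.startswith("t"):   # lv_attr starting with 't' marks a thin pool
            yield vg, lv, float(data), float(meta)

if __name__ == "__main__":
    for vg, lv, data, meta in thin_pools():
        note = "  <-- getting full" if max(data, meta) > THRESHOLD else ""
        print("{}/{}: data {:.1f}%  metadata {:.1f}%{}".format(
            vg, lv, data, meta, note))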

Chris

