TL;DR: Thanks for the feedback so far - there's been a wide range of opinions about whether it's worth it. Out of curiosity, I'm going to continue to investigate and experiment.

Boring junk follows:

As to the perspective that persistent storage, generally, is a "consumable", I would extend that view to many more components: upgrades are the norm (computers become unsupported long before storage fails), and clocks may be pushed, some think too far, to the point where burnout is likely beyond the "support lifetime" of many a processor (further compounded by the fact that the lifetime-management techniques may be proprietary). Many users find it worthwhile to overclock, but I find it risky. I'm more interested in reducing the rate of consumption. Signs above the printers where I used to work reminded us, every time we went to pick up a printout, that reducing consumption both saves money and reduces waste.

I completely respect the fact that applying the "consumable" label may be like an epiphany for some, but, while I may be the odd one out, it's a very different sort of experience for me.

I do appreciate that a number of folks have mentioned that higher-quality components have longer lifetimes, and I'll keep an eye out when I get a replacement.


It doesn't suit everyone's style of communication but, for better or (often) worse, it's my nature to consider a larger domain when pondering a question. I admit I've failed to develop the good habit of being explicit about "adjacent" interests; I'll keep working on it.

Reducing the kind of activity that causes wear on solid-state storage interests me for a number of reasons:

Raspberry Pi users often burn out SD cards, and even the cheap ones cost almost as much as some of the boards. As these systems may be embedded, "zero maintenance" is a windmill that may be tilted at. Other SBCs may have storage on the SoC die or may not be designed for it to be replaced. Phones and tablets don't seem to have easily replaced solid-state storage either.

I also have a system with a traditional HDD that I run as a server but use intermittently. I've explored wake-on-LAN/WAN and the hardware doesn't seem to support it, so I'm curious about getting the hard drive to stay in a power-saving state; that's a wider problem, but I think there is some overlap.
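In case it's useful context, the approach I'm planning to try is hdparm's standby timer - untested on my hardware, the timeout is a guess, and /dev/sda is a placeholder for the actual drive:

```shell
# Ask the drive to spin down after ~10 minutes of inactivity.
# -S takes units of 5 seconds, so 120 * 5 s = 600 s.
sudo hdparm -S 120 /dev/sda

# Force the drive into standby right now.
sudo hdparm -y /dev/sda

# Report the current power state without waking the drive.
sudo hdparm -C /dev/sda
```

Not every drive honors -S, and some distros layer their own power management on top, so treat this as something to experiment with rather than a known-good recipe.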

Growing the circle a bit more, reducing accesses to higher levels of the memory hierarchy is a general optimization technique: keeping data in registers, then in on-chip local memory, minimizing cache misses to avoid VRAM accesses, then bus accesses to system RAM, each step more expensive in time and power consumption. It's something I don't know much about, especially when it comes to disks (or CPUs for that matter), so I'm curious.


I'm just a tinkerer, and my risk/reward is probably based on a different formula than a sysadmin's: breaking things, not too badly, is good! It creates a learning opportunity.

Somebody brought up the mount options atime, relatime and noatime. Reading up on this, I prefer noatime, and I've configured my systems to use it on all storage devices. From what I've read, some software may break, but probably not hardware. As I learn what software depends on atime and why, I'll either remove that software or, because most of it is open source, modify it if I decide I really want to use it. I'm generally interested in reducing my software footprint, so finding stuff to get rid of is a bit of a background process.
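Concretely, the noatime change is just one extra option per line in /etc/fstab (the UUIDs here are placeholders, not my real partitions):

```shell
# /etc/fstab excerpt - placeholder UUIDs.
# noatime stops the kernel from recording a timestamp on every read,
# which turns what would be a metadata write into no write at all.
UUID=aaaa-bbbb  /      ext4  defaults,noatime  0  1
UUID=cccc-dddd  /home  ext4  defaults,noatime  0  2
```

After editing, `sudo mount -o remount /` applies it without a reboot, and `findmnt -no OPTIONS /` shows which atime policy is actually in effect.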

I can refrain from asking tinkerer-y questions if BLU is a sysadmins' group or they are otherwise not appropriate for this list.

Thanks,

Dan


On 2022-12-01 16:59, Jerry Feldman wrote:
I think we need to look at different uses. Certainly, the use of SSDs in
servers is quite different from consumer use. Rich pointed out very
correctly that they need to be viewed as consumables. I think it is very
important for home systems to make backups, whereas on a server you might want
to set up RAID or some auto-replication system or file system.

On Thu, Dec 1, 2022 at 4:19 PM Shirley Márquez Dúlcey <[email protected]>
wrote:

When SSDs first became available, they were a poor fit for Unix-like file
systems unless you made changes, because maintaining atime (the time each
file was accessed) caused very rapid wear of an SSD. Current distros
mitigate that by automatically switching to a modified version of atime
(relatime) for file systems located on an SSD; it only guarantees to show
whether the access time is more recent than the most recent change.

The very rapid changes to log files can still be an issue in some use
cases; again, write-behind caching lowers the impact of that, as the log
might be updated multiple times before being written to disk. Systems with
extreme workloads might benefit from using a battery-backed RAMdisk for the
log files.

On Thu, Dec 1, 2022 at 11:16 AM <[email protected]> wrote:

This is a space where "price" or "quality" make a difference.

A "good" SSD has a lot of extra sectors to map in when it detects a write
error, all done internally to the drive. Better drives do a lot of things
to reduce wear: some do dedup; some don't store blocks that are all zeros
or all ones.

It's kind of hard to adjust your usage; suffice to say, it is all based on
the amount of change. Individual SSD cells can handle from 3,000 to
100,000 writes depending on the technology. It is possible to pay twice
as much for a drive that will have 30 times more usable write longevity.

If your data is largely unchanging, it doesn't matter. If you have a
highly dynamic write environment, go for single level cell NAND flash,
that will last the longest. Find a good enterprise drive that has extra
capacity to remap as cells fail.


Hi all,

The discussion about filesystems got me thinking about whether or not
it's worth trying to reduce SSD wear on my first system (a laptop) to
have one. It occurred to me that file cloning seems like it could save a
few writes...

I've heard that some SSDs wear out pretty quickly, but I'm not sure if
that's real or just rumor and innuendo.

Anyone have thoughts on whether it's worth trying to reduce wear on the
drive? If so, what kind of changes could I make to my system?

I've installed Ubuntu, which I've been happy with as I'm not much of a
sysadmin; I know it's resource heavy, but I seem to be fine with 16 GB
of RAM.

It's dual boot, but I haven't used Windows except when I first got it,
to test; I'll wipe Windows if I ever run low on space.


Thanks,

Dan

_______________________________________________
Discuss mailing list
[email protected]
http://lists.blu.org/mailman/listinfo/discuss






