I have a number of Samsung EVO SSDs running in production equipment.
They were either original or replacements for rotating drives.
Most of these PCs run 24x7, and I have yet to have an SSD failure.
I think I started using Samsung SSDs right after Samsung introduced
them, though I believe they were called something else before they
got the EVO name.
Most of the computers are running Linux. A few are running Windows
10. Two (kept off the internet) are running Windows XP with the
software modified so they don't randomly write to the drives.
The good thing: They have never failed. The bad thing: The
customers don't call me every few years to replace drives!
For a while I was having motherboard failures due to the bad capacitors
that were installed on millions of motherboards; then hard drives would
fail about every 2-3 years, and then power supplies would die.
Now, with good motherboards free of the lousy capacitors, power supplies
are the number one issue.
I think I also have 6-8 EVO drives installed in laptops running Linux,
Windows 7, and Windows 10 - both laptops that I use and ones my family uses.
I am also sold on Samsung monitors. I own a number of them, have
installed a number more, and none of them have died.
The Samsung SSDs are crazy reliable.
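For anyone wondering why these drives never seem to wear out, a quick
back-of-envelope estimate makes it plausible. The numbers below are
assumptions for illustration only (150 TBW is a typical endurance rating
for a 250 GB consumer EVO-class drive, but check your model's spec sheet):

```python
# Back-of-envelope SSD endurance estimate (illustrative numbers only).
# TBW (terabytes written) ratings vary by model; 150 TBW is assumed
# here as a representative figure for a small consumer drive.

def years_of_life(tbw_rating_tb, writes_per_day_gb):
    """Estimate drive lifetime in years from its endurance rating
    and an average daily write volume."""
    total_writes_gb = tbw_rating_tb * 1000  # TB -> GB (decimal units)
    days = total_writes_gb / writes_per_day_gb
    return days / 365

# A light-duty production PC writing ~20 GB/day would take roughly
# two decades to exhaust an assumed 150 TBW rating.
print(round(years_of_life(150, 20), 1))
```

For a machine that mostly reads (like a controller PC), daily writes are
often far below 20 GB, which pushes the wear-out horizon well past the
useful life of the rest of the hardware.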
Dave
On 7/3/2020 2:39 PM, Ted wrote:
I'm quite partial to traditional 2.5" SATA SSDs; I have about 30
servers with SAS/SATA slots running either Kingston or SanDisk 3 Gb/s
SSDs in 480 GB capacities. My home SAN runs 20 x 1 TB SSDs (also
Kingston), and if I rummage through my gig bag, I'll probably find half
a dozen 1 TB M.2 SSDs in SATA/USB 3 cases. After 6+ years of 24x7
operation (yes, servers do get rebooted and wiped/rebuilt of
course) in effectively webserver / asset server / db server setups -
meaning lots of writes - I have yet to have a 2.5" 3 Gb/s SATA SSD fail.
Conversely, those "ultra-awesome" Crucial/Micron M.2 SSD modules have
failed on me on 4 separate occasions - all of them within "warranty" -
and Crucial was not able/willing to RMA any of them. Completely lousy
customer service, which tempted me to just "buy and replace" through
Amazon (no, I didn't; morally incorrect, but tempting). I also have
some of the hybrids (both early Hitachi units and whatever Apple was
using in the early Mac Pro tubes); many of those have failed, so I avoid
hybrids like the plague, even if that new FireCuda series from Seagate
is touted as the next best thing. For full transparency, I do have
another SAN shelf with 24 x 1 TB 2.5" traditional spindles (because it's
a SAS-only shelf without interposers) that has been a solid performer
for a long time - probably up 5+ years now; the only downtime was
moving the server racks and power failures. It's a NetApp shelf, so
it's somewhat surprising that it has held up so well (nothing to do with
the drives, however).
Which just goes to show that mileage may vary wildly. I could have a
dozen drives go out within 5 minutes of hitting send, or not. But for
power savings and speed*, and not having to worry about what happens
if a server is mounted directly on top of the UPS stack, or how the
drives get transported, SSD media is a benefit in my book.
(* - my server installs have consistently run faster than the slow
72 GB SAS drives my servers ship with. Some folks have shown that
SSDs can be slower than fast HDs in specific tests, and that spinning
platters can consume less continuous power than idle SSDs during
initial writes. My power bill tells a different story.)
Cheers,
Ted.
_______________________________________________
Emc-users mailing list
Emc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/emc-users