Re: [Emc-users] SSD reliability

2020-07-04 Thread Dave Cole
I have a number of Samsung EVO SSDs running in production equipment.  
They were either original equipment or replacements for rotating drives.

Most of these PCs are on 24x7 and I have yet to have an SSD failure.
I think I started using Samsung SSDs right after Samsung introduced 
them, though I believe they were called something else before they got 
the EVO name.
Most of the computers are running Linux.  A few are running Windows 10 
(kept off the internet).  Two are running Windows XP with the software 
modified so it doesn't randomly write to the drives.


The good thing:   They have never failed.   The bad thing:  The 
customers don't call me every few years to replace drives!


For a while I was having motherboard failures due to the bad capacitors 
that were installed on millions of motherboards; then hard drives would 
fail about every 2-3 years, and then power supplies would die.
Now, with good motherboards free of the lousy capacitors, power supplies 
are the number one issue.


I think I also have 6-8 EVO drives installed in laptops running Linux 
and Windows 7 and 10, both in laptops I use and in ones my family uses.
I am also sold on Samsung monitors.  I own and have installed a number 
of them, and none have died.


The Samsung SSDs are crazy reliable.

Dave





Re: [Emc-users] SSD reliability --> wear out, faster boot time

2020-07-04 Thread N
> On Friday, July 3, 2020, 11:34:30 AM MDT, Jon Elson wrote:
>
> One thing to do, especially on older systems is to set the 
> file system to noatime, and maybe a few other things.  This 
> prevents a directory write EVERY TIME you open a file.  

I'm not sure whether SSDs use the same flash as microcontrollers, but 
flash in microcontrollers is usually rated for a limited number of 
write/erase cycles.  I think NOR flash is cheaper.  Flash is usually 
erased in blocks, to all zeros or all ones, and data can then only be 
written the other way, so a special algorithm is needed when data is 
stored to minimize block erases.  In an SSD this is probably handled 
internally by the controller, but I would still expect noatime plus a 
few other tweaks to be good.

I have a few computers with SSDs and boot time is really fast.  That's 
nice, especially if a computer is only used now and then; you don't 
want to wait for the machine to boot.




Re: [Emc-users] SSD reliability

2020-07-03 Thread Thaddeus Waldner
If you want to replace an Apple SSD with a cheaper, better NVMe one, you can 
buy an adapter to do just that. I just did that and, strangely enough, even 
though the original was a SATA device, the NVMe device works fine. 

https://www.amazon.com/Convert-Adapter-MacBook-Retina-Upgraded/dp/B07VVNKRYR/ref=sr_1_18?dchild=1&keywords=imac+2014+ssd+nvme+adapter&qid=1593782305&sr=8-18

> On Jul 3, 2020, at 6:27 PM, Gregg Eshelman via Emc-users 
>  wrote:
> 
> Apple has used several different and incompatible slim SSD types in recent 
> years. Now that they have finally adopted NVMe in the cheesegrater that costs 
> as much as a car, they're still locking the buyer in. The computer comes with 
> two modules installed but their serial numbers are programmed into the 
> firmware so they can only be replaced for failure or upgraded by Apple, and 
> the computer won't boot if they're removed.





Re: [Emc-users] SSD reliability

2020-07-03 Thread Gregg Eshelman via Emc-users
There's a bit of a glitch with the Windows 10 2004 update when it comes to SSDs. 
https://www.youtube.com/watch?v=ffHIY6pOJUk
It continually insists an SSD has to be "optimized," but there's a way to fix it.




Re: [Emc-users] SSD reliability

2020-07-03 Thread Gregg Eshelman via Emc-users
Apple has used several different and incompatible slim SSD types in recent 
years. Now that they have finally adopted NVMe in the cheesegrater that costs 
as much as a car, they're still locking the buyer in. The computer comes with 
two modules installed but their serial numbers are programmed into the firmware 
so they can only be replaced for failure or upgraded by Apple, and the computer 
won't boot if they're removed.

On Friday, July 3, 2020, 7:24:57 AM MDT, Thaddeus Waldner wrote:
 Be aware that M.2 is a socket spec that includes both SATA and NVMe type 
devices.
 
https://www.atpinc.com/blog/what-is-m.2-M-B-BM-key-socket-3

Added to that, Apple began using PCIe drives before the NVMe standard was 
established, so there’s another socket to be confused about if you run Mac.


Re: [Emc-users] SSD reliability

2020-07-03 Thread Curtis Dutton
I use SSDs in everything. I have had one fail; it was an A-DATA brand.



I have an Intel drive somewhere that is OK, and a majority of Samsung
drives, both M.2 format and SATA. I have deployed quite a few of them
for customers in desktops and servers, probably a total of 30 or so.
No failures yet (fingers crossed!)


-Curt

On Fri, Jul 3, 2020 at 1:38 PM Jon Elson  wrote:

> My desktop SSD reports 57388 power-on hours.
>
> Jon



Re: [Emc-users] SSD reliability

2020-07-03 Thread Ted
I'm quite partial to traditional 2.5" SATA SSDs; I have about 30 
servers with SAS/SATA slots running either Kingston or SanDisk 3Gb/s 
SSDs in 480GB capacities. My home SAN runs 20 x 1TB SSDs (also 
Kingston), and if I rummage through my gig bag, I'll probably find half a 
dozen 1TB M.2 SSDs in SATA/USB3 cases. After about 6+ years in 24-7 run 
capacity (yes, servers do get rebooted and wiped/rebuilt of course) in 
effectively webserver / asset server / db server setups, meaning lots of 
writes, I have yet to have a 2.5" 3Gb/s SATA SSD fail.


Conversely, those "ultra-awesome" Crucial Micron M.2 SSD modules I have 
had fail on 4 separate occasions, all of them within "warranty," and 
Crucial was not able/willing to RMA any of them. Completely lousy 
customer service, which tempted me to just "buy and replace" through 
Amazon (no I didn't, morally incorrect, but tempting). I also have some 
of the hybrids (both early Hitachi and whatever Apple was using in the 
early Mac Pro tubes); many of those have failed, so I avoid hybrids 
like the plague, even if that new Fire series from Seagate is touted as 
the next best thing. For full transparency, I do have another SAN 
shelf with 24 1TB 2.5" traditional spindles (because it's an SAS-only 
shelf without interposers) that has been a solid performer for a long 
time, probably up 5+ years now; the only time off was moving the server 
racks and power failures. It's a NetApp shelf, so it's somewhat surprising 
that it has held up so well (nothing to do with the drives, however).


Which just goes to show that mileage may vary wildly. I could have a 
dozen drives go out within 5 minutes of hitting send, or not. But for 
power savings and speed*, and not having to worry about what happens if 
a server is mounted directly on top of the UPS stack, or how the drives 
get transported, SSD media is a benefit in my book.


(* - my server installs run noticeably faster than against the default 
slow 72GB SAS drives my servers come with. Some folks have shown that 
SSDs can be slower than fast HDs in specific testing, and that stable 
platters consume less continuous power than idle SSDs during initial 
writing. My power bill tells a different story.)


Cheers,

Ted.






Re: [Emc-users] SSD reliability

2020-07-03 Thread Jon Elson

On 07/03/2020 11:01 AM, Sam Sokolik wrote:

> I can't remember ever having an issue with any ssd I have used.  My laptop
> which currently has a Samsung SSD 860 EVO M.2 1TB
>
> Power_On_Hours = 11845



My desktop SSD reports 57388 power-on hours.

Jon




Re: [Emc-users] SSD reliability

2020-07-03 Thread Jon Elson

On 07/03/2020 12:03 AM, linden wrote:

> Hello All,
>
> Any one here have real world experience with 
> reliability of Solid State Drives.


I have been using SSDs in several systems.  I have a travel 
laptop that has a small one, Ubuntu 14.04, I think.  It gets 
relatively light use.


My main desktop has a 120 GB SSD that was initialized in 
Dec. 2016 and has gotten quite a bit of use,
web browsing, email, electronic design, tax programs, and on 
and on.  I have a spinning hard drive there for backup every 
couple days.


One thing to do, especially on older systems, is to mount the 
file system with noatime, and maybe a few other options.  This 
prevents a directory write EVERY TIME you open a file.  
Older systems didn't know to do this automatically when they 
detected an SSD, and it would chew up the disk lifetime with 
writes.
Newer systems, I think, do know to set this for SSDs.  The 
SSD in my desktop is a Micron brand unit.
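
For anyone wanting to check, a minimal sketch of what that looks like 
in /etc/fstab; the device and filesystem below are only placeholders:

  # mount root with noatime so plain reads stop triggering
  # metadata writes (noatime also implies nodiratime)
  /dev/sda1  /  ext4  defaults,noatime  0  1

Then "mount | grep noatime" confirms the running system picked it up.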

I think you really DO want to stay with well-known brands.

Jon




Re: [Emc-users] SSD reliability

2020-07-03 Thread linden
Thanks guys for the help and insight. I will run with this Samsung SSD 
and see how far it gets me.


I won't give up on SSDs yet ;-)

linden






Re: [Emc-users] SSD reliability

2020-07-03 Thread Gene Heskett
On Friday 03 July 2020 12:01:01 Sam Sokolik wrote:

> I can't remember ever having an issue with any ssd I have used.  My
> laptop which currently has a Samsung SSD 860 EVO M.2 1TB
>
> Power_On_Hours = 11845
>
Sam, I didn't think to ask my oldest ssd, but:
 9 Power_On_Hours   23372
However:
SMART Error Log not supported
SMART Self-test Log not supported
Device does not support Selective Self Tests/Logging

So I guess it's not going to enlighten me about much else.

OTOH it boots and runs lcnc about 2 or 3x faster than it ever did with 
spinning rust on the end of that cable, prolonging the life of that old, 
old Dell Dimension.

> I think all of our linuxcnc installed are on ssd's also.
>
> sam


Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 




Re: [Emc-users] SSD reliability

2020-07-03 Thread Sam Sokolik
I can't remember ever having an issue with any ssd I have used.  My laptop
currently has a Samsung SSD 860 EVO M.2 1TB.

Power_On_Hours = 11845
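
(For anyone wanting to read that attribute themselves, a minimal sketch
using smartmontools; /dev/sda is a placeholder, adjust to your drive:

  # dump the SMART attribute table and pick out the hours counter
  sudo smartctl -A /dev/sda | grep -i power_on

The same smartctl -A works on NVMe drives too, where the figure shows
up in the health log rather than an attribute table.)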

I think all of our linuxcnc installs are on SSDs also.

sam




Re: [Emc-users] SSD reliability

2020-07-03 Thread Thaddeus Waldner
Be aware that M.2 is a socket spec that includes both SATA and NVMe type 
devices.
 
https://www.atpinc.com/blog/what-is-m.2-M-B-BM-key-socket-3

Added to that, Apple began using PCIe drives before the NVMe standard was 
established, so there’s another socket to be confused about if you run Mac.
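
(A quick way on Linux to see which bus an M.2 drive actually landed on;
just a sketch, device names differ per machine:

  # the TRAN column shows the transport: sata, nvme, usb, ...
  lsblk -d -o NAME,TRAN,MODEL,SIZE

An M.2 SATA drive still reports sata here, even though it sits in an
M.2 socket.)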

> 
> Fast forward to the year 2020...  Today we don't use SSDs that are made to
> look like HDD and are put inside of a box with a SATA interface.  That hack
> was a transitional technique for retrofitting SSD into older computers.
> The box is mostly filled with air and the SATA interface is dead-dog-slow
> compared to PCIe.  A modern SSD comes on an M2 size card and plugs directly
> into the PCIe bus and does not even try to pretend it is a SATA Hard Drive.
>  If you are buying new storage you want a PCIe interface M.2 forms factor
> SSD.
> https://en.wikipedia.org/wiki/M.2
> 
> 





Re: [Emc-users] SSD reliability

2020-07-03 Thread Sync
Don't buy cheap or used SSDs; my experience is that those fail too 
early. I only run Intel or Samsung and have not had a hard failure in 
around 10 years.


Sync




Re: [Emc-users] SSD reliability

2020-07-03 Thread Gregg Eshelman via Emc-users
Yup, you definitely want to disable all the logging Linux does. That's what's 
been bricking early Tesla Model S cars. They left logging on. The car computer 
runs Linux, and it and the car software are installed on non-volatile storage 
soldered onto the computer board. As Tesla released updates and additions to 
the car software, the free space in storage got smaller and smaller. Since the 
OS and car software were mostly static, the wear leveling had less and less 
space to spread the wear around. Too many bad blocks and running out of 
spare blocks = non-working car.
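
(For anyone wanting to do the same on a desktop, a minimal sketch for a
systemd-based distro; this is an illustration, not what Tesla ships:

  # /etc/systemd/journald.conf -- keep the journal in RAM only so log
  # writes never touch flash; the trade-off is logs vanish at reboot
  [Journal]
  Storage=volatile
  RuntimeMaxUse=64M

Then restart systemd-journald for it to take effect.)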
 
The car software logged data to a removable standard SD card, so no problem if 
that died: take the computer out, open it up, pop in a new card. To fix the 
soldered-in storage there are two choices. Pay $ to Tesla for a 
replacement computer, or send it to a 3rd party shop that can replace just the 
storage chip and then load it up with the Tesla build of Linux, with logging 
disabled, and the latest car software. Version 2 of the Model S computer has 
several changes, including larger storage for the OS and car software; dunno if 
they disabled the OS logging.



Re: [Emc-users] SSD reliability

2020-07-03 Thread Gregg Eshelman via Emc-users
Two things for SSDs.
Never defrag them: they don't need it, and a defrag doesn't actually 
defrag files anyway, because the wear leveling system never allows files 
to be written to sequential blocks. Defragging them just wears them out faster. 
Same for multiple-pass secure erasing: do not do that. A quick 
reformat followed by forcing a TRIM operation should completely eliminate data 
beyond recovery.
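
(On Linux the TRIM can be forced by hand; a minimal sketch, assuming
the filesystem and controller both support discard:

  # tell the SSD that every unused block on / is free to be erased
  sudo fstrim -v /

The -v flag reports how many bytes were trimmed.)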
 
Don't put any swap or virtual memory file, or scratch space on an SSD. Install 
a lot of RAM to minimize the need for swapping, and use a 500 GB Western 
Digital Blue SATA 3.0 hard drive for swap if you're pinching pennies.
Of course for most laptops you're stuck with just one drive so you have to put 
the swap file on the boot SSD. Some business laptops have two internal 2.5" 
bays and can take another in a caddy in place of the optical drive. I saw a 
video the other day on a laptop that had two internal 2.5" bays plus two NVMe 
slots and IIRC could also take one 2.5" in place of the optical drive.
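
(For the two-drive case, a minimal sketch of the /etc/fstab entry;
/dev/sdb1 stands in for whatever partition you set aside on the hard
drive:

  # keep swap off the SSD: point it at the spinning drive instead
  /dev/sdb1  none  swap  sw  0  0

Then "swapon --show" confirms which device is actually being used.)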

Or you could skip a SATA-connected SSD and hard drive and go with a PCIe to 
NVMe card (make sure it supports booting from it) and a big NVMe drive. The 
memory used on those seems to be something more durable than what's used in 
SATA SSDs.
Something to look for with any SSD is what it does when it wears out, which 
*should* take a rather long time. Many consumer and "prosumer" models will 
brick themselves when too many blocks show errors. The worst part is they shut 
off writing AND reading.
Intel's enterprise level SSDs remain readable when their error counter reaches 
its limit, but they slow writes to a crawl. That allows for backup of the data 
without resorting to an expensive data recovery that may not be possible with a 
self bricking SSD.
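
(Wear is worth watching before a drive gets to that point; a sketch
with smartmontools, keeping in mind attribute names vary by vendor:

  # SATA: look for Wear_Leveling_Count or Media_Wearout_Indicator
  sudo smartctl -A /dev/sda

  # NVMe: the health log reports a "Percentage Used" figure directly
  sudo smartctl -A /dev/nvme0

Anything creeping toward its threshold is a drive to start migrating.)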

On Thursday, July 2, 2020, 11:49:23 PM MDT, linden  wrote: 
 
Hello All,

Any one here have real world experience with reliability of Solid 
State Drives?

I have not had much luck with them myself and am wondering: is this 
normal, or am I the exception to the rule? If you believe the 
advertising, they should last almost forever.

First experience: around 2011 I bought 2 OCZ SSDs in Austria from 2 
different retailers and ran them in 2 different laptops used for office 
work, travel, a little software development, and running industrial 
automation service software.  Both of these failed within 6 months with 
no prior warning; just one day not recognized on boot and that was it. 
This was using Ubuntu 8.04, I think.

Last year I tried again and bought an ADATA SU650 Ultimate in Canada. 
This I got a little over a year ago and it failed yesterday. I had some 
warning: it would boot, work for about 5 minutes, then turn read-only and 
my operating system would lock up. I got about 10 restarts like this 
before it failed to the point where it is detected by the BIOS but is 
not mountable or readable. This was using Linux Mint 19 and 20.

For comparison, an old Western Digital or Toshiba mechanical drive 
usually lasts 4-plus years as long as it is not subjected to excessive 
shock, and for the most part makes noise before failing completely, 
giving you some warning.

I am running a used Samsung SSD now as a replacement in my current 
laptop. There are obvious performance advantages, but with these 
reliability issues I still don't want to put them in production linuxcnc 
machines or anything critical.

Any one else have similar experience or recommendations for a reliable 
solid state drive?

thanks Linden


Re: [Emc-users] SSD reliability

2020-07-03 Thread linden




Thanks Gene.

With the laptop, I will see how this Samsung drive holds up with Linux 
Mint 20.


Hopefully it was just my reverse Midas touch turning things to crap, and 
this drive lasts a little longer with this modern version of Linux.


linden





Re: [Emc-users] SSD reliability

2020-07-03 Thread linden

Thanks Chris for the insight into what may be going on.

The PCIe interface sounds like a possible solution for machines that 
have PCIe slots; unfortunately with the laptop I am stuck with this SATA 
interface. We will see how this Samsung drive holds up with Linux Mint 20.










Re: [Emc-users] SSD reliability

2020-07-03 Thread Gene Heskett
On Friday 03 July 2020 01:03:39 linden wrote:

> Hello All,
>
>      Any one here have real world experience with reliability of Solid
> State Drives.
>
I've had better luck with the drives than I've had with the USB-to-SATA 
adaptors.

In fact I have 3 in daily use: one as the boot drive for a milling 
machine, and 2 as development drives on a pi (was an rpi3b, but it's now 
an rpi4b). While the usb2 interface for those speedy drives had a high 
failure rate, replacing the adapter with a different brand has revived 
one such adata drive 3 times, and it serves as the compile scratchpad 
for both a 4.19-preempt-rt kernel and a fresh copy of linuxcnc's master 
branch. That takes the wear and tear off the u-sd the pi boots from.  
In short, since I put the pi's swap on one of those drives, I have had 
zero drive or u-sd trouble in 2 years.  There's a 120GB kingston in the 
mill, and no spinning rust in either the mill or the Sheldon lathe.
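
(For anyone wanting to do the same, a minimal sketch assuming Raspberry
Pi OS's dphys-swapfile; the mount point is only a placeholder:

  # /etc/dphys-swapfile -- move the swap file off the u-sd
  CONF_SWAPFILE=/mnt/ssd/swap
  CONF_SWAPSIZE=1024

Then "sudo systemctl restart dphys-swapfile" picks it up.)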

Cheers, Gene Heskett
-- 
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 




Re: [Emc-users] SSD reliability

2020-07-03 Thread Chris Albertson
Your results are atypical.  It could, however, be the fault of the OS.  Each
bit in an SSD has a certain number of write/erase cycles before it might
fail; some millions of cycles.  Back in the "old days" some OSes would
write continuously to the same place on the drive.  For example, you'd
delete a file, and when a new file was created it would use the recently
freed space.  Today, modern OSes on modern computers are designed to
spread the usage evenly all over the drive.

With a hard drive you WANT to bunch all your data so that it is physically
close together, to minimize head movement, but on an SSD you want the data
dispersed.

Linux has a habit of creating and deleting tiny files, like log files and
files in /tmp, and that would trash an SSD that was not wear leveled.

I think those days are over.  New SSD have built-in wear leveling.
https://en.wikipedia.org/wiki/Wear_leveling

The question is about older or even antique Linux systems: do they know how
to handle wear leveling on SSDs?  I don't know when this was introduced to
Linux.

Fast forward to the year 2020...  Today we don't use SSDs that are made to
look like HDDs and put inside a box with a SATA interface.  That hack
was a transitional technique for retrofitting SSDs into older computers.
The box is mostly filled with air, and the SATA interface is dead-dog-slow
compared to PCIe.  A modern SSD comes on an M.2-size card and plugs directly
into the PCIe bus and does not even try to pretend it is a SATA hard drive.
If you are buying new storage you want a PCIe-interface, M.2 form factor
SSD.
https://en.wikipedia.org/wiki/M.2
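
(One thing you can check from Linux is whether the kernel knows the
drive can discard unused blocks, which the drive's internal wear
leveling relies on to keep free space to shuffle; a sketch:

  # nonzero DISC-GRAN / DISC-MAX means TRIM/discard is supported
  lsblk --discard

On systems too old to report that, noatime and a periodic fstrim are
about the best you can do.)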



On Thu, Jul 2, 2020 at 10:49 PM linden  wrote:

> Hello All,
>
>  Any one here have real world experience with reliability of Solid
> State Drives.
>


-- 

Chris Albertson
Redondo Beach, California
