Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread eric

On 4/19/23 21:23, Dale wrote:

Mark Knecht wrote:


> I wonder.  Is there a way to find out the smallest file in a 
directory or sub directory, the largest files, and maybe an average file 
size???  I thought about du but given the number of files I have here, 
it would be a really HUGE list of files. Could take hours or more 
too.  This is what KDE properties shows.


I'm sure there are more accurate ways but

sudo ls -R / | wc

gives you the number of lines returned from the ls command. It's not 
perfect as there are blank lines in the ls output, but it's a start.


My desktop machine has about 2.2M files.

Again, there are going to be folks who can tell you how to remove 
blank lines and other cruft but it's a start.


Only takes a minute to run on my Ryzen 9 5950X. YMMV.



I did a right click on the directory in Dolphin and selected 
properties.  It told me there are a little over 55,000 files and some 
1,100 directories; not sure if directories use inodes or not. Basically, 
there are a little over 56,000 somethings on that file system.  I was 
curious what the smallest file is and the largest. No idea how to find 
that really.  Even du groups by directory rather than showing individual 
files, at least the way I use it.


If I ever have to move things around again, I'll likely start a thread 
just for figuring out the setting for inodes.  I'll likely know more 
about the number of files too.


Dale

:-)  :-)


If you do not mind using graphical solutions, Filelight can help you 
easily visualize where your largest directories and files are residing.


https://packages.gentoo.org/packages/kde-apps/filelight

Visualise disk usage with interactive map of concentric, segmented rings 


Eric



Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Dale
Mark Knecht wrote:
>
> > I wonder.  Is there a way to find out the smallest file in a
> directory or sub directory, the largest files, and maybe an average file
> size???  I thought about du but given the number of files I have here,
> it would be a really HUGE list of files.  Could take hours or more
> too.  This is what KDE properties shows.
>
> I'm sure there are more accurate ways but 
>
> sudo ls -R / | wc
>
> gives you the number of lines returned from the ls command. It's not
> perfect as there are blank lines in the ls output, but it's a start.
>
> My desktop machine has about 2.2M files.
>
> Again, there are going to be folks who can tell you how to remove
> blank lines and other cruft but it's a start.
>
> Only takes a minute to run on my Ryzen 9 5950X. YMMV.
>

I did a right click on the directory in Dolphin and selected
properties.  It told me there are a little over 55,000 files and some
1,100 directories; not sure if directories use inodes or not.
Basically, there are a little over 56,000 somethings on that file
system.  I was curious what the smallest file is and the largest.  No
idea how to find that really.  Even du groups by directory rather than
showing individual files, at least the way I use it.

If I ever have to move things around again, I'll likely start a thread
just for figuring out the setting for inodes.  I'll likely know more
about the number of files too. 

Dale

:-)  :-) 


Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Mark Knecht
> I wonder.  Is there a way to find out the smallest file in a
directory or sub directory, the largest files, and maybe an average file
size???  I thought about du but given the number of files I have here, it
would be a really HUGE list of files.  Could take hours or more too.  This
is what KDE properties shows.

I'm sure there are more accurate ways but

sudo ls -R / | wc

gives you the number of lines returned from the ls command. It's not perfect
as there are blank lines in the ls output, but it's a start.

My desktop machine has about 2.2M files.

Again, there are going to be folks who can tell you how to remove blank
lines and other cruft but it's a start.

Only takes a minute to run on my Ryzen 9 5950X. YMMV.
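
For a count without the blank lines and directory headers, find works
too. A minimal sketch, assuming GNU findutils; -xdev stops it from
crossing into other mounted filesystems:

sudo find / -xdev -type f | wc -l    # regular files only
sudo find / -xdev | wc -l            # everything: files, dirs, links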


Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Dale
Frank Steinmetzger wrote:
> <<>>
>
> When formatting file systems, I usually lower the number of inodes from the 
> default value to gain storage space. The default is one inode per 16 kB of 
> FS size, which gives you 60 million inodes per TB. In practice, even one 
> million per TB would be overkill in a use case like Dale’s media storage.¹ 
> Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not 
> counting extra control metadata and ext4 redundancies.
>
> The defaults are set in /etc/mke2fs.conf. It also contains some alternative 
> values of bytes-per-inode for certain usage types. The type largefile 
> allocates one inode per 1 MB, giving you 1 million inodes per TB of space. 
> Since ext4 is much more efficient with inodes than ext3, it is even content 
> with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
>
> For root partitions, I tend to allocate 1 million inodes, maybe some more 
> for a full Gentoo-based desktop due to the portage tree’s sheer number of 
> small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses 
> 500 k right now.
>
>
> ¹ Assuming one inode equals one directory or unfragmented file on ext4.
> I’m not sure what the allocation size limit for one inode is, but it is 
> *very* large. Ext3 had a rather low limit, which is why it was so slow with 
> big files. But that was one of the big improvements in ext4’s extended 
> inodes, at the cost of double inode size to house the required metadata.
>


This is interesting.  I have been buying 16TB drives recently.  After
all, with this fiber connection and me using torrents, I can fill up a
drive pretty fast, tho I am slowing down now that I no longer need to
find more stuff to download.  Even 10GB per TB can add up.  For a 16TB
drive, that's 160GBs at least.  That's quite a few videos.  I didn't
realize it added up that fast.  Percentage wise it isn't a lot, but given
the size of the drives, it does add up quick.  If I ever rearrange my
drives again and can change the file system, I may reduce the inodes, at
least on the ones that only hold large files.  Still tho, given I use
LVM and all, maybe that isn't a great idea.  As I add drives with LVM, I
assume it increases the inodes as well.  If so, then reducing inodes
should be OK.  If not, I could eventually add so many drives and files
that the file system runs out of inodes even tho the files are large.  I
suspect it does add inodes when I expand the file system tho, so I can
adjust without worrying about it later.  I just have to set it when I
first create the file system, I guess.
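
For what it's worth, the current inode situation is easy to check before
deciding. A sketch with a hypothetical mount point and LV path; df -i
shows totals and usage, tune2fs shows what the filesystem was created
with:

df -i /crypt                                   # inodes: total, used, free
tune2fs -l /dev/crypt/videos | grep -i inode   # inode count and size

As far as I know, resize2fs does grow the inode tables proportionally
when an ext4 filesystem is extended, keeping the bytes-per-inode ratio
chosen at mkfs time, so the ratio set at creation carries over as PVs
are added.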

This is my current drive setup. 


root@fireball / # pvs -O vg_name
  PV VG Fmt  Attr PSize    PFree
  /dev/sda7  OS lvm2 a--  <124.46g 21.39g
  /dev/sdf1  backup lvm2 a--   698.63g 0
  /dev/sde1  crypt  lvm2 a--    14.55t 0
  /dev/sdb1  crypt  lvm2 a--    14.55t 0
  /dev/sdh1  datavg lvm2 a--    12.73t 0
  /dev/sdc1  datavg lvm2 a--    <9.10t 0
  /dev/sdi1  home   lvm2 a--    <7.28t 0
root@fireball / #


The one marked crypt is the one that is mostly large video files.  The
one marked datavg is where I store torrents.  Let's not delve too deep
into that tho.  ;-)  As you can see, crypt has two 16TB drives now and
I'm about 90% full.  I plan to expand next month if possible.  It'll be
another 16TB drive when I do.  So, that will be three 16TB drives. 
About 43TBs.  Little math, 430GB of space for inodes.  That added up
quick. 

I wonder.  Is there a way to find out the smallest file in a
directory or sub directory, the largest files, and maybe an average file
size???  I thought about du but given the number of files I have here,
it would be a really HUGE list of files.  Could take hours or more too. 
This is what KDE properties shows.

26.1 TiB (28,700,020,905,777 bytes)

55,619 files, 1,145 sub-folders

Little math: the average file size works out to about 516 MB
(28,700,020,905,777 bytes / 55,619 files). So, I wonder what all could be
changed and not risk anything??? I wonder if that is accurate enough???
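
For the record, find can answer the smallest/largest/average question in
one pass, without a per-directory du listing. A sketch assuming GNU find
and coreutils, with a placeholder path:

find /path/to/media -type f -printf '%s\n' | sort -n | sed -n '1p;$p'
# first line = smallest file in bytes, last line = largest

find /path/to/media -type f -printf '%s\t%p\n' | sort -n | tail -n 1
# largest file along with its name

find /path/to/media -type f -printf '%s\n' | \
    awk '{s+=$1} END {print s/NR, "bytes average over", NR, "files"}'

Each file is only statted once, so ~55,000 files should take seconds,
not hours.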

Interesting info.

Dale

:-) :-)



Re: [gentoo-user] How to install Ruby bindings in an ebuild

2023-04-19 Thread Michael Orlitzky
On 2023-04-19 01:08:23, Ralph Seichter wrote:
> I need to install Ruby bindings (something.so) during an ebuild,
> specifically into the /usr/lib64/ruby/vendor_ruby/3.0.0/x86_64-linux
> directory.

Hey Ralph. I'm not an expert on the ruby eclasses, but they work more
or less like the python ones, if that helps at all.

In this case, it looks like you have a package that builds a binary
ruby extension. That extension should be married to a specific version
of ruby, namely the one it was built against. I think the best way to
support that in a package is to declare which ruby versions are
supported with USE_RUBY and ruby-ng.eclass. USE_RUBY will be cross-
referenced with the user's RUBY_TARGETS to determine which ruby
versions are ultimately supported. Then, parts of the ebuild will be
repeated for each ruby version that the ebuild supports and that the
user wants.

You can fine-tune what happens in each phase with the eclass functions
each_ruby_* (each_ruby_configure, each_ruby_compile, and so on). So for
example, if this is a C package, you might only want to run
./configure && make extension && make install-extension in the separate
version-specific phases, so as to avoid rebuilding the entire package
for each version of ruby.

As always, the devil is in the details, but ruby-ng.eclass is a good
starting point.
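
To make that concrete, here is a rough sketch of the shape such an
ebuild takes. The configure flag and make targets are placeholders for
whatever the real build system expects; only USE_RUBY, ruby-ng, and the
each_ruby_* phase names come from the eclass:

EAPI=8
USE_RUBY="ruby30 ruby31"
inherit ruby-ng

# Each phase below runs once per ruby implementation that is both
# in USE_RUBY and enabled in the user's RUBY_TARGETS.
each_ruby_configure() {
    econf --with-ruby="${RUBY}"
}

each_ruby_compile() {
    emake extension
}

each_ruby_install() {
    emake DESTDIR="${D}" install-extension
}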



Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Frank Steinmetzger
On Wed, Apr 19, 2023 at 01:00:33PM -0700, Mark Knecht wrote:


> I think technically they default to the physical block size internally
> and the earlier ones, attempting to be more compatible with HDDs,
> had 4K blocks. Some of the newer chips now have 16K blocks but
> still support 512B Logical Block Addressing.
> 
> All of these devices are essentially small computers. They have internal
> controllers, DRAM caches usually in the 1-2GB sort of range but getting
> larger.

Actually, cheap(er) SSDs don’t have their own DRAM, but rely on the host for 
this. There is an ongoing debate in tech forums whether that is a bad thing 
or not. A RAM cache can help optimise writes by caching many small writes 
and aggregating them into larger blocks.

> The bus speeds they quote are because data is moving for the most
> part in and out of cache in the drive.

Are you talking about the pseudo SLC cache? Because AFAIK the DRAM cache has 
no influence on read performance.

> What I'm not sure about is how inodes factor into this.
> 
> For instance:
> 
> mark@science2:~$ ls -i
> 35790149  000_NOT_BACKED_UP
> 33320794  All_Files.txt
> 7840  All_Sizes_2.txt
> 7952  All_Sizes.txt
> 33329818  All_Sorted.txt
> 33306743  ardour_deps_install.sh
> 33309917  ardour_deps_remove.sh
> 33557560  Arena_Chess
> 33423859  Astro_Data
> 33560973  Astronomy
> 33423886  Astro_science
> 33307443 'Backup codes - Login.gov.pdf'
> 33329080  basic-install.sh
> 33558634  bin
> 33561132  biosim4_functions.txt
> 33316157  Boot_Config.txt
> 33560975  Builder
> 8822  CFL_88_F_Bright_Syn.xsc
> 
> If the inodes are on the disk then how are they
> stored? Does a single inode occupy a physical
> block? A 512 byte LBA? Something else?

man mkfs.ext4 says:
[…] the default inode size is 256 bytes for most file systems, except for 
small file systems where the inode size will be 128 bytes. […]

And if a file is small enough, it can actually fit inside the inode itself, 
saving the expense of another FS sector.
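
You can see the inode size an existing filesystem was created with via
tune2fs (device name is an example):

tune2fs -l /dev/sda1 | grep -i 'inode size'

Note that storing file *data* in the inode is the ext4 inline_data
feature, which has to be enabled at mkfs time (-O inline_data); tiny
symlink targets, on the other hand, are stored inline by default.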


When formatting file systems, I usually lower the number of inodes from the 
default value to gain storage space. The default is one inode per 16 kB of 
FS size, which gives you 60 million inodes per TB. In practice, even one 
million per TB would be overkill in a use case like Dale’s media storage.¹ 
Removing 59 million inodes × 256 bytes ≈ 15 GB of net space for each TB, not 
counting extra control metadata and ext4 redundancies.

The defaults are set in /etc/mke2fs.conf. It also contains some alternative 
values of bytes-per-inode for certain usage types. The type largefile 
allocates one inode per 1 MB, giving you 1 million inodes per TB of space. 
Since ext4 is much more efficient with inodes than ext3, it is even content 
with 4 MB per inode (type largefile4), giving you 250 k inodes per TB.
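
In command form, a sketch (the LV path is a placeholder):

mkfs.ext4 -T largefile4 /dev/vg/lv    # one inode per 4 MB, per mke2fs.conf
mkfs.ext4 -i 1048576 /dev/vg/lv       # explicit ratio: one inode per 1 MB
tune2fs -l /dev/vg/lv | grep -i 'inode count'    # verify afterwards

Keep in mind the ratio is fixed at mkfs time; it cannot be changed later
without recreating the filesystem.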

For root partitions, I tend to allocate 1 million inodes, maybe some more 
for a full Gentoo-based desktop due to the portage tree’s sheer number of 
small files. My Surface Go’s root (Arch linux, KDE and some texlive) uses 
500 k right now.


¹ Assuming one inode equals one directory or unfragmented file on ext4.
I’m not sure what the allocation size limit for one inode is, but it is 
*very* large. Ext3 had a rather low limit, which is why it was so slow with 
big files. But that was one of the big improvements in ext4’s extended 
inodes, at the cost of double inode size to house the required metadata.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

FINE: Tax for doing wrong.  TAX: Fine for doing fine.




Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Mark Knecht
On Wed, Apr 19, 2023 at 12:39 PM Nikos Chantziaras wrote:
>
> On 19/04/2023 22:26, Dale wrote:
> > So for future reference, let it format with the default?  I'm also
> > curious if when it creates the file system it will notice this and
> > adjust automatically. It might.  Maybe?
>
> AFAIK, SSDs will internally convert to 4096 in their firmware even if
> they report a physical sector size of 512 through SMART. Just a
> compatibility thing. So formatting with 4096 is fine and gets rid of the
> internal conversion.

I suspect this is right, or has been mostly right in the past.

I think technically they default to the physical block size internally
and the earlier ones, attempting to be more compatible with HDDs,
had 4K blocks. Some of the newer chips now have 16K blocks but
still support 512B Logical Block Addressing.

All of these devices are essentially small computers. They have internal
controllers, DRAM caches usually in the 1-2GB sort of range but getting
larger. The bus speeds they quote are because data is moving for the most
part in and out of cache in the drive.

In Dale's case, if he has a 4K file system block size then it's going to
send 4K to the drive and the drive will do eight 512-byte writes to put
it in flash.

If I have the same 4K file system block size, I send 4K to the drive but
my physical block size is 4K, so it's a single write cycle to get it
into flash.

What I *think* is true is that any time your file system block size is
smaller than the physical block size on the storage element then
simplistically you have the risk of write amplification.
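
The sizes involved are easy to compare directly; a sketch assuming
util-linux and e2fsprogs, with example device names:

blockdev --getss --getpbsz /dev/sda         # logical, then physical sector size
tune2fs -l /dev/sda1 | grep 'Block size'    # filesystem block size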

What I'm not sure about is how inodes factor into this.

For instance:

mark@science2:~$ ls -i
35790149  000_NOT_BACKED_UP
33320794  All_Files.txt
7840  All_Sizes_2.txt
7952  All_Sizes.txt
33329818  All_Sorted.txt
33306743  ardour_deps_install.sh
33309917  ardour_deps_remove.sh
33557560  Arena_Chess
33423859  Astro_Data
33560973  Astronomy
33423886  Astro_science
33307443 'Backup codes - Login.gov.pdf'
33329080  basic-install.sh
33558634  bin
33561132  biosim4_functions.txt
33316157  Boot_Config.txt
33560975  Builder
8822  CFL_88_F_Bright_Syn.xsc

If the inodes are on the disk then how are they
stored? Does a single inode occupy a physical
block? A 512 byte LBA? Something else?

I have no clue.

>
> I believe Windows always uses 4096 by default and thus it's reasonable
> to assume that most SSDs are aware of that.
>


[gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Nikos Chantziaras

On 19/04/2023 22:26, Dale wrote:
So for future reference, let it format with the default?  I'm also 
curious if when it creates the file system it will notice this and 
adjust automatically. It might.  Maybe?


AFAIK, SSDs will internally convert to 4096 in their firmware even if 
they report a physical sector size of 512 through SMART. Just a 
compatibility thing. So formatting with 4096 is fine and gets rid of the 
internal conversion.
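
mke2fs already defaults to 4096-byte blocks for all but very small 
filesystems (the blocksize setting in /etc/mke2fs.conf), but it can be 
forced explicitly if you want to be sure; device name is an example:

mkfs.ext4 -b 4096 /dev/sdd1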


I believe Windows always uses 4096 by default and thus it's reasonable 
to assume that most SSDs are aware of that.





Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Dale
Mark Knecht wrote:
>
>
> On Wed, Apr 19, 2023 at 10:59 AM Dale wrote:
> >
> > Peter Humphrey wrote:
> > > On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
> > >
> > >> With my HDD:
> > >>
> > >>    # smartctl -x /dev/sda | grep -i 'sector size'
> > >>    Sector Sizes:     512 bytes logical, 4096 bytes physical
> > > Or, with an NVMe drive:
> > >
> > > # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> > > Supported LBA Sizes (NSID 0x1)
> > > Id Fmt  Data  Metadt  Rel_Perf
> > >  0 +     512       0         0
> > >
> > > :)
> > >
> >
> > When I run that command, sdd is my SSD drive, ironic I know.  Anyway, it
> > doesn't show block sizes.  It returns nothing.
> >
> > root@fireball / # smartctl -x /dev/sdd  | grep -A2 'Supported LBA Sizes'
> > root@fireball / #
>
> Note that all of these technologies, HDD, SSD, M.2, report different
> things
> and don't always report them the same way. This is an SSD in my 
> Plex backup server:
>
> mark@science:~$ sudo smartctl -x /dev/sdb
> [sudo] password for mark:  
> smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-69-generic] (local
> build)
> Copyright (C) 2002-20, Bruce Allen, Christian Franke,
> www.smartmontools.org 
>
> === START OF INFORMATION SECTION ===
> Model Family:     Crucial/Micron Client SSDs
> Device Model:     CT250MX500SSD1
> Serial Number:    1905E1E79C72
> LU WWN Device Id: 5 00a075 1e1e79c72
> Firmware Version: M3CR023
> User Capacity:    250,059,350,016 bytes [250 GB]
> Sector Sizes:     512 bytes logical, 4096 bytes physical
>
> In my case the physical block is 4096 bytes but 
> addressable in 512 byte blocks. It appears that
> yours is 512 byte physical blocks.
>
> [QUOTE]
> === START OF INFORMATION SECTION ===
> Model Family: Samsung based SSDs
> Device Model: Samsung SSD 870 EVO 500GB
> Serial Number:    S6PWNXXX
> LU WWN Device Id: 5 002538 XX
> Firmware Version: SVT01B6Q
> User Capacity:    500,107,862,016 bytes [500 GB]
> Sector Size:  512 bytes logical/physical
> [QUOTE]


So for future reference, let it format with the default?  I'm also
curious if when it creates the file system it will notice this and
adjust automatically. It might.  Maybe?

Dale

:-)  :-) 

P. S. Dang squirrels got in my greenhouse and dug up my seedlings. 
Squirrel hunting is next on my agenda.  :-@


Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Mark Knecht
On Wed, Apr 19, 2023 at 10:59 AM Dale wrote:
>
> Peter Humphrey wrote:
> > On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
> >
> >> With my HDD:
> >>
> >># smartctl -x /dev/sda | grep -i 'sector size'
> >>Sector Sizes: 512 bytes logical, 4096 bytes physical
> > Or, with an NVMe drive:
> >
> > # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> > Supported LBA Sizes (NSID 0x1)
> > Id Fmt  Data  Metadt  Rel_Perf
> >  0 + 512   0 0
> >
> > :)
> >
>
> When I run that command, sdd is my SSD drive, ironic I know.  Anyway, it
> doesn't show block sizes.  It returns nothing.
>
> root@fireball / # smartctl -x /dev/sdd  | grep -A2 'Supported LBA Sizes'
> root@fireball / #

Note that all of these technologies, HDD, SSD, M.2, report different things
and don't always report them the same way. This is an SSD in my
Plex backup server:

mark@science:~$ sudo smartctl -x /dev/sdb
[sudo] password for mark:
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-69-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Crucial/Micron Client SSDs
Device Model: CT250MX500SSD1
Serial Number:1905E1E79C72
LU WWN Device Id: 5 00a075 1e1e79c72
Firmware Version: M3CR023
User Capacity:250,059,350,016 bytes [250 GB]
Sector Sizes: 512 bytes logical, 4096 bytes physical

In my case the physical block is 4096 bytes but
addressable in 512 byte blocks. It appears that
yours is 512 byte physical blocks.

[QUOTE]
=== START OF INFORMATION SECTION ===
Model Family: Samsung based SSDs
Device Model: Samsung SSD 870 EVO 500GB
Serial Number:S6PWNXXX
LU WWN Device Id: 5 002538 XX
Firmware Version: SVT01B6Q
User Capacity:500,107,862,016 bytes [500 GB]
Sector Size:  512 bytes logical/physical
[QUOTE]


Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Dale
Peter Humphrey wrote:
> On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
>
>> With my HDD:
>>
>># smartctl -x /dev/sda | grep -i 'sector size'
>>Sector Sizes: 512 bytes logical, 4096 bytes physical
> Or, with an NVMe drive:
>
> # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> Supported LBA Sizes (NSID 0x1)
> Id Fmt  Data  Metadt  Rel_Perf
>  0 + 512   0 0
>
> :)
>

When I run that command, sdd is my SSD drive, ironic I know.  Anyway, it
doesn't show block sizes.  It returns nothing.

root@fireball / # smartctl -x /dev/sdd  | grep -A2 'Supported LBA Sizes'
root@fireball / #

This is the FULL output, in case it is hidden somewhere that grep and I
couldn't find.  Keep in mind, this is a blank drive with no partitions or anything. 

root@fireball / # smartctl -x /dev/sdd
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.14.15-gentoo] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Samsung based SSDs
Device Model: Samsung SSD 870 EVO 500GB
Serial Number:    S6PWNXXX
LU WWN Device Id: 5 002538 XX
Firmware Version: SVT01B6Q
User Capacity:    500,107,862,016 bytes [500 GB]
Sector Size:  512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:  2.5 inches
TRIM Command: Available, deterministic, zeroed
Device is:    In smartctl database 7.3/5440
ATA Version is:   ACS-4 T13/BSR INCITS 529 revision 5
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Apr 19 12:57:03 2023 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Unavailable
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, frozen [SEC2]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x80) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                 (    0) seconds.
Offline data collection
capabilities:                    (0x53) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (  85) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME  FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
  5 Reallocated_Sector_Ct   PO--CK   100   100   010    -    0
  9 Power_On_Hours  -O--CK   099   099   000    -    75
 12 Power_Cycle_Count   -O--CK   099   099   000    -    3
177 Wear_Leveling_Count PO--C-   100   100   000    -    0
179 Used_Rsvd_Blk_Cnt_Tot   PO--C-   100   100   010    -    0
181 Program_Fail_Cnt_Total  -O--CK   100   100   010    -    0
182 Erase_Fail_Count_Total  -O--CK   100   100   010    -    0
183 Runtime_Bad_Block   PO--C-   100   100   010    -    0
187 Uncorrectable_Error_Cnt -O--CK   100   100   000    -    0
190 Airflow_Temperature_Cel -O--CK   077   069   000    -    23
195 ECC_Error_Rate  -O-RC-   200   200   000    -    0
199 CRC_Error_Count -OSRCK   100   100   000    -    0
235 POR_Recovery_Count  -O--C-   099   099   000    -    1
241 Total_LBAs_Written  -O--CK   100   100   000    -    0

Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Mark Knecht
On Wed, Apr 19, 2023 at 3:35 AM Peter Humphrey wrote:
>
> On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:
>
> > With my HDD:
> >
> ># smartctl -x /dev/sda | grep -i 'sector size'
> >Sector Sizes: 512 bytes logical, 4096 bytes physical
>
> Or, with an NVMe drive:
>
> # smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
> Supported LBA Sizes (NSID 0x1)
> Id Fmt  Data  Metadt  Rel_Perf
>  0 + 512   0 0
>

That command, on my system anyway, does pick up all the
LBA sizes:

1) Windows - 1TB Sabrent:

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 + 512   0 2
1 -4096   0 1

Data Units Read:8,907,599 [4.56 TB]
Data Units Written: 4,132,726 [2.11 TB]
Host Read Commands: 78,849,158
Host Write Commands:55,570,509

Error Information (NVMe Log 0x01, 16 of 63 entries)
Num   ErrCount  SQId   CmdId  Status  PELoc  LBA  NSID  VS
 0   1406 0  0x600b  0x4004  0x0280 0 -

2) Kubuntu - 1TB Crucial

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 + 512   0 1
1 -4096   0 0

Data Units Read:28,823,498 [14.7 TB]
Data Units Written: 28,560,888 [14.6 TB]
Host Read Commands: 137,865,594
Host Write Commands:209,406,594

Error Information (NVMe Log 0x01, 16 of 16 entries)
Num   ErrCount  SQId   CmdId  Status  PELoc  LBA  NSID  VS
 0   1735 0  0x100c  0x4005  0x0280 0 -

3) Scratch pad - 128GB SSSTC (No name) M.2 chip mounted on Joylifeboard
PCIe card

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
0 + 512   0 0

Data Units Read:363,470 [186 GB]
Data Units Written: 454,447 [232 GB]
Host Read Commands: 2,832,367
Host Write Commands:2,833,717

Error Information (NVMe Log 0x01, 16 of 64 entries)
No Errors Logged

NOTE: When I first got interested in M.2 I bought a PCI Express
card and an M.2 chip just to use for a while with Astrophotography
files which tend to be 24MB coming out of my camera but grow
to possibly 1GB as processing occurs. Total cost was about
$30 and might be a possible solution for Gentoo users who
want a faster scratch pad for system updates. Even this
second-rate hardware has been reliable and is pretty fast:

https://www.amazon.com/gp/product/B09K4YXN33
https://www.amazon.com/gp/product/B08ZB6YVPW

mark@science2:~$ sudo hdparm -tT /dev/nvme2n1
/dev/nvme2n1:
Timing cached reads:   48164 MB in  1.99 seconds = 24144.06 MB/sec
Timing buffered disk reads: 1210 MB in  3.00 seconds = 403.08 MB/sec
mark@science2:~$

Although not as fast as M.2 on the MB, where the Sabrent M.2 blows
away the Crucial M.2:

mark@science2:~$ sudo hdparm -tT /dev/nvme0n1

/dev/nvme0n1:
Timing cached reads:   47660 MB in  1.99 seconds = 23890.55 MB/sec
Timing buffered disk reads: 5452 MB in  3.00 seconds = 1817.10 MB/sec
mark@science2:~$ sudo hdparm -tT /dev/nvme1n1

/dev/nvme1n1:
Timing cached reads:   47310 MB in  1.99 seconds = 23714.77 MB/sec
Timing buffered disk reads: 1932 MB in  3.00 seconds = 643.49 MB/sec
mark@science2:~$
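
As an aside, for the two drives above that list a second 4096-byte
format (Fmt 1), nvme-cli should be able to switch the namespace over to
it. A sketch only, and destructive: a format wipes the namespace, so
back up first:

sudo nvme id-ns -H /dev/nvme0n1 | grep 'LBA Format'   # the format 'in use' is marked
sudo nvme format /dev/nvme0n1 --lbaf=1                # switch to the 4096-byte format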


Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Peter Humphrey
On Wednesday, 19 April 2023 09:00:33 BST Nikos Chantziaras wrote:

> With my HDD:
> 
># smartctl -x /dev/sda | grep -i 'sector size'
>Sector Sizes: 512 bytes logical, 4096 bytes physical

Or, with an NVMe drive:

# smartctl -x /dev/nvme1n1 | grep -A2 'Supported LBA Sizes'
Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 + 512   0 0

:)

-- 
Regards,
Peter.






Re: [gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Dale
Nikos Chantziaras wrote:
> On 19/04/2023 04:45, Dale wrote:
>> Filesystem created:   Sun Apr 15 03:24:56 2012
>> Lifetime writes:  993 GB
>>
>> That's for the main / partition.  I have /usr on its own partition tho.
>>
>> Filesystem created:   Sun Apr 15 03:25:48 2012
>> Lifetime writes:  1063 GB
>>
>> I'd think that / and /usr would be the most changed parts of the OS.
>> After all, /bin and /sbin are on / too as is /lib*.  If that is even
>> remotely correct, both would only be around 2TBs.  That dang thing may
>> outlive me even if I don't try to minimize writes.  ROFLMBO
>
> I believe this only shows the lifetime writes to that particular
> filesystem since it's been created?
>
> You can use smartctl here too. At least on my HDD, the HDD's firmware
> keeps track of the lifetime logical sectors written. Logical sectors
> are 512 bytes (physical are 4096). The logical sector size is also
> shown by smartctl.
>
> With my HDD:
>
>   # smartctl -x /dev/sda | grep -i 'sector size'
>   Sector Sizes: 512 bytes logical, 4096 bytes physical
>
> Then to get the total logical sectors written:
>
>   # smartctl -x /dev/sda | grep -i 'sectors written'
>   0x01  0x018  6 37989289142  ---  Logical Sectors Written
>
> Converting that to terabytes written with "bc -l":
>
>   37988855446 * 512 / 1024^4
>   17.68993933033198118209
>
> Almost 18TB.
>
>
>


I'm sure it is since the file system was created.  Look at the year
tho.  It's about 11 years ago when I first built this rig.  If I've only
written that amount of data to my current drive over the last 11 years,
the SSD drive should last for many, MANY, years, decades even.  At this
point, I should worry more about something besides it running out of
write cycles.  LOL  I'd think technology changes will bring it to its
end of life rather than write cycles. 

Eventually, I'll have time to put it to use.  Too much going on right now
tho. 

Dale

:-)  :-) 



[gentoo-user] Re: Finally got a SSD drive to put my OS on

2023-04-19 Thread Nikos Chantziaras

On 19/04/2023 04:45, Dale wrote:

Filesystem created:   Sun Apr 15 03:24:56 2012
Lifetime writes:  993 GB

That's for the main / partition.  I have /usr on its own partition tho.

Filesystem created:   Sun Apr 15 03:25:48 2012
Lifetime writes:  1063 GB

I'd think that / and /usr would be the most changed parts of the OS.
After all, /bin and /sbin are on / too as is /lib*.  If that is even
remotely correct, both would only be around 2TBs.  That dang thing may
outlive me even if I don't try to minimize writes.  ROFLMBO


I believe this only shows the lifetime writes to that particular 
filesystem since it's been created?


You can use smartctl here too. At least on my HDD, the HDD's firmware 
keeps track of the lifetime logical sectors written. Logical sectors 
are 512 bytes (physical are 4096). The logical sector size is also shown 
by smartctl.


With my HDD:

  # smartctl -x /dev/sda | grep -i 'sector size'
  Sector Sizes: 512 bytes logical, 4096 bytes physical

Then to get the total logical sectors written:

  # smartctl -x /dev/sda | grep -i 'sectors written'
  0x01  0x018  6 37989289142  ---  Logical Sectors Written

Converting that to terabytes written with "bc -l":

  37988855446 * 512 / 1024^4
  17.68993933033198118209

Almost 18TB.
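
The same arithmetic should work for SSDs that expose attribute 241
Total_LBAs_Written, like the 870 EVO discussed earlier, assuming the
drive counts 512-byte units. Pulling the raw value (field 8 in the
smartctl -x attribute table shown earlier in the thread):

smartctl -x /dev/sdd | awk '/Total_LBAs_Written/ {printf "%.2f TiB\n", $8*512/1024^4}'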