Re: Amanda (was: sata driver compatibility Q)

2023-09-19 Thread Glenn
My all-time favorite for file-level backups is BackupPC. It de-duplicates files 
in the compressed pool.
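De-duplication of that sort can be sketched in a few lines of shell. This is a toy illustration of the pooling idea, not BackupPC's actual mechanism; every path here is invented for the demo:

```shell
# Pool-style de-duplication sketch: identical files collapse onto one
# hard-linked copy, keyed by a content hash.
set -e
top=$(mktemp -d)
mkdir "$top/data" "$top/pool"
printf 'same bytes\n'  > "$top/data/a.txt"
printf 'same bytes\n'  > "$top/data/b.txt"
printf 'other bytes\n' > "$top/data/c.txt"
for f in "$top"/data/*; do
  h=$(sha256sum "$f" | cut -d' ' -f1)
  if [ -e "$top/pool/$h" ]; then
    ln -f "$top/pool/$h" "$f"   # duplicate content: reuse the pooled inode
  else
    ln "$f" "$top/pool/$h"      # first sighting: add this file to the pool
  fi
done
# a.txt and b.txt now share an inode; c.txt keeps its own.
stat -c %i "$top/data/a.txt" "$top/data/b.txt" "$top/data/c.txt"
```

Because duplicates are hard links into the pool, a second full backup of unchanged data costs almost no additional space.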

Glenn 

On September 19, 2023 12:18:36 p.m. ADT, gene heskett  
wrote:
>[...]

Re: Amanda (was: sata driver compatibility Q)

2023-09-19 Thread gene heskett

On 9/19/23 08:59, Stefan Monnier wrote:

Compared to the setup required for amanda, that sounds very inviting. Amanda
has a very steep learning curve just because it is so versatile. I'm still
waiting on stuff, so no more actual progress.


I used Amanda many years ago and was quite pleased with it, but I must
say I'm having a hard time imagining it in my current world where tapes
don't make much sense for backups.

That's where vtapes come in. A vtape is nothing more than a directory on 
the backup medium, which for me was a BIG hard drive with, in my case, 60 
subdirs used as tapes. Each contained individual files identified as to 
backup level, which was a way to differentiate a full copy, or what had 
been changed since the last full, or what had been changed since the last 
level 1, wash rinse repeat for ever deeper levels. And with or w/o 
compression. Executables generally aren't worth the time to compress. 
Ditto for a dir full of pictures or pdf's; they are not very compressible.

In the days of tapes, a buffer drive was used to build up each entry as a 
big file that was then copied to the tape w/o any shoe-shining of the 
tape drive, saving the huge wear and tear on the tape if it had to stop 
and wait for data from the compressor, then back up a few feet and go 
forward again to begin a fresh write at the end of the previous track. 
But since spinning rust is random access, and so is the vtape, I don't 
think the anti-shoeshine trick has much if any advantage when using 
vtapes. With some filesystems it might reduce fragmentation, but that was 
never a problem with ext4.

I ran with that buffer drive for about 17 years, starting out with a 
4-tape Seagate DDS4 tape drive, but it was by far the least dependable 
thing in that whole chain. I was then backing up 3 CNC machines and this 
one's predecessor, but the drive needed a month's vacation in Oklahoma 
City about 2x a year for a new head drum that Seagate would not sell me, 
a CET with extensive experience replacing even smaller, more precise, and 
damn sure more expensive ($3500 a copy) DVC-Pro broadcast VCR heads.  So 
I tried vtapes, first on a 220G drive, but soon opted for a bigger one as 
they became available, and had just graduated to a pair of Seagate's 
first 2T's, both of which just disappeared off the SATA bus in the middle 
of the night: the main drive for this machine and the amanda drive. They 
were about 2 weeks old. So I rebuilt this machine using a 500G Samsung 
SSD. I was out of the amanda business and lost everything with those 2 
failures, which upset me so much I never tried to warranty them. I was 
done with spinning rust.
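The level scheme described above can be sketched with plain tools. This is only a toy illustration of the idea, with timestamp files standing in for Amanda's database of run times, not how Amanda itself is driven:

```shell
# A level-N run saves whatever changed since the newest run at any lower
# level. All paths are invented for the demo.
set -e
work=$(mktemp -d)
mkdir "$work/tree"
echo one > "$work/tree/a"
touch "$work/level0.stamp"      # pretend the full (level 0) ran here
sleep 1                         # ensure a strictly newer mtime next
echo two > "$work/tree/b"       # created after the full
# A level 1 then only needs what is newer than the level-0 stamp:
find "$work/tree" -type f -newer "$work/level0.stamp"
```

Only the file created after the "full" shows up, which is exactly why the deeper levels stay small.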


Some of the loss was the only pix of my first wife, who had a stroke and 
died in '68 at 34. Left me with 3 children to raise, but the big C and a 
bottle of scotch have since eliminated them. And my personal email 
archive that went back to '98, when I built my first Linux machine using 
a 400 MHz K6 CPU. Put Red Hat 5.0 on it.  And I was in hog heaven; I 
never owned a Windows machine until I needed one for the road after I 
retired in 2002 and became a consultant, going around to other TV 
stations putting out engineering fires created by wannabe engineers. The 
Windows XP on it lasted about the 2 weeks it took me to find out XP had 
no drivers for the radio in it, but Mandrake did.


Amanda keeps a database, so if something gets erased that you need later, 
it can be recovered as long as the vtape has not yet been reused, which 
in my case meant within 60 days.


One of the things my wrapper did was append that database to the end of 
that vtape when amanda finished its nightly run, thereby making it 
possible to do a bare-metal recovery to the state that existed during 
the run. Without that, you lost the most recent run, because the 
database you backed up was yesterday's.
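The wrapper itself is lost, but the idea can be sketched. A toy illustration with every path invented for the demo: after the nightly run, tar an uncompressed copy of the database into the vtape that was just written, so a bare-metal restore can come back to tonight's state instead of yesterday's.

```shell
# Append a plain (uncompressed) tar of the index/database to the
# just-written vtape directory. Paths are hypothetical.
set -e
root=$(mktemp -d)
mkdir -p "$root/amanda-db" "$root/vtapes/slot42"
echo 'index data' > "$root/amanda-db/curinfo"
tar -C "$root" -cf "$root/vtapes/slot42/zz-amanda-db.tar" amanda-db
# the vtape now carries tonight's database alongside the dumps
tar -tf "$root/vtapes/slot42/zz-amanda-db.tar"
```

Untarring that file first on bare metal hands the restore tool a database that already knows about tonight's run.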


So AFAIAC, amanda was the king. Then amanda was handed over to Zmanda, 
who eventually went bust and sold it to Betsol, who has done zip for it 
in several years.  Community support from other users is all that's left.
Not the end of it, of course, but somebody who actually cares needs to 
fork it and become its new leader. 95% of the work on amanda over the 
last decade+ has been driven by changes in tar.



What are the use cases where Amanda still beats the pants off
competitors like Borg or Bup?


I know nothing about either of those. This thread ought to have input 
from their users so people can make more informed decisions as to which 
is best for their situation.



 Stefan

Take care & stay well, Stefan, and other readers.




Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Amanda (was: sata driver compatibility Q)

2023-09-19 Thread Stefan Monnier
> Compared to the setup required for amanda, that sounds very inviting. Amanda
> has a very steep learning curve just because it is so versatile. I'm still
> waiting on stuff, so no more actual progress.

I used Amanda many years ago and was quite pleased with it, but I must
say I'm having a hard time imagining it in my current world where tapes
don't make much sense for backups.

What are the use cases where Amanda still beats the pants off
competitors like Borg or Bup?


Stefan



Re: sata driver compatibility Q

2023-09-18 Thread gene heskett

On 9/18/23 07:57, Dan Ritter wrote:

gene heskett wrote:

[...]

It looks like the motherboard shares some PCIe and/or SATA lanes between
SATA ports and M.2 ports, so you must be careful with your choices.  I
suggest installing an M.2 PCIe x4 SSD into slot M.2_1 and configuring it
for "PCIE mode", so that it works and all 6 SATA ports work.  You will
want to use UEFI mode and GPT when installing Debian.



Based on this, and a full-sized manual printout, I've ordered a 2T WD
Black, supposedly a 2280 device. $100.

Question: when I put this in, what happens to the 32GB of DIMMs? How does
this fit into the architecture?  I assume this isn't volatile but is quick
storage.


The DIMM slots are different from the M.2 slots. The M.2 slots
are small PCIe interfaces; the installation procedure is to
insert the 2280 (22mm x 80mm) card at a slight upward angle,
then press it down and screw it in. It may ship with a glued-on
heat spreader or tiny radiator; if so, use it, don't peel it
off.

Note that it should appear as /dev/nvme0n1 or similar, rather
than /dev/sda. Partitions will be /dev/nvme0n1p1, p2...

The NVM bit stands for non-volatile memory; it's an SSD with a
different interface.


I propose to put this in as suggested, which should leave all 6 sata-III's
available: install bookworm to it w/o the current extra controller and get it
going, then put 3 of the 2T Gigastones on sata1-2-3, use the BIOS to make a
raid5 of them and mount it as /home, prove it works with some throw-away
stuff, then plug in the existing raid10 controller & mount it as /moi, then
format the raid5 again with gparted,


You should use mdadm rather than a BIOS RAID system -- better
recovery to other systems, more understandable error messages,
better support for fixing things that might go wrong.


Thanks, I will.


If the new drives are sda through sdf, something like this is what
you want:

mdadm --create /dev/md/gsmoi5 -l raid5 -n 3 /dev/sda /dev/sdb /dev/sdc
mkfs.ext4 /dev/md/gsmoi5
mount /dev/md/gsmoi5 /home


and for your second set:

mdadm --create /dev/md/gsamanda5 -l raid5 -n 3 /dev/sdd /dev/sde /dev/sdf
mkfs.ext4 /dev/md/gsamanda5
mount /dev/md/gsamanda5 /amanda
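One step the sketch above leaves out: for the arrays to assemble and mount at boot, the usual Debian procedure is to record them in /etc/mdadm/mdadm.conf and /etc/fstab, roughly like this (the array names and mount points are the hypothetical ones from above):

```
# record the arrays so the initramfs can assemble them at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# /etc/fstab entries
/dev/md/gsmoi5     /home    ext4  defaults  0  2
/dev/md/gsamanda5  /amanda  ext4  defaults  0  2
```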



Suggestions re other, more recent solutions will be accepted and studied.
They definitely must support backing up other machines of varying architectures
on my local network. In addition to a 4-pack of Linux-running wintel stuff,
there's the potential for 5 or so ARMs too. G-code for 3D printers is all
unrolled loops and as bulky as can be.


The most flexible backup systems are the hardest to configure,
but nothing is much worse than amanda.

You might like borg. Borg is in Debian as 'borgbackup'.

In the other direction, using rsnapshot over ssh is relatively
simple and comes with the distinct advantage over both amanda
and borg that the backups are stored as normal files in a normal
filesystem, so recovery from an accidental deletion of a file or
directory is very straightforward.
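For reference, an rsnapshot setup is only a few lines. A hypothetical /etc/rsnapshot.conf fragment; hostnames and paths are invented, the fields must be separated by literal TAB characters, and the remote line assumes cmd_ssh is enabled:

```
snapshot_root	/backup/snapshots/
retain	daily	7
retain	weekly	4
# local and remote (over ssh) backup points
backup	/etc/	localhost/
backup	root@cncbox:/home/	cncbox/
```

Each run rotates the snapshot directories and hard-links files that haven't changed, so every snapshot browses like a full copy while costing little extra space.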

Compared to the setup required for amanda, that sounds very inviting. 
Amanda has a very steep learning curve just because it is so versatile. 
I'm still waiting on stuff, so no more actual progress.


Thanks Dan, take care & stay well.

-dsr-


Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: sata driver compatibility Q

2023-09-18 Thread David Christensen

On 9/17/23 18:17, gene heskett wrote:

On 9/17/23 17:52, David Christensen wrote:

On 9/17/23 03:26, gene heskett wrote:

On 9/16/23 19:46, David Christensen wrote:

On 9/15/23 19:37, gene heskett wrote:

On 9/15/23 20:12, David Christensen wrote:

On 9/15/23 15:04, gene heskett wrote:

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some 
gigastone 2T drives to make a raid big enough to run amanda. 
And maybe put a new card in front of my 2T /home raid10.



... Asus PRIME Z370-A II motherboard ...



... i5-9600K CPU @ 3.70GHz ...


A fresh install of  Debian stable or old-stable should solve the 
storage I/O stuttering problems you are experiencing.


It looks like the motherboard shares some PCIe and/or SATA lanes 
between SATA ports and M.2 ports, so you must be careful with your 
choices.  I suggest installing an M.2 PCIe x4 SSD into slot M.2_1 and 
configuring it for "PCIE mode", so that it works and all 6 SATA ports 
work.  You will want to use UEFI mode and GPT when installing Debian.


Based on this, and a full-sized manual printout, I've ordered a 2T WD 
Black, supposedly a 2280 device. $100.



This one?

https://www.amazon.com/dp/B09QV5KJHV


Question: when I put this in, what happens to the 32GB of DIMMs?  How 
does this fit into the architecture?  I assume this isn't volatile but 
is quick storage.



You will still have 32GB of memory.  The WD Black is a fast SSD.  The 
crux will be configuring your motherboard firmware Setup program so that 
d-i can see the WD Black during installation and so that the new Debian 
installation can boot and run.



I propose to put this in as suggested, which should leave all 6 
sata-III's available: install bookworm to it w/o the current extra 
controller and get it going, then put 3 of the 2T Gigastones on 
sata1-2-3, use the BIOS to make a raid5 of them and mount it as /home, 
prove it works with some throw-away stuff, then plug in the existing 
raid10 controller & mount it as /moi, then format the raid5 again with 
gparted, and turn mc loose copying /moi to /home to get my working data 
back. Then 3 more 2T Gigastones on the last 3 mobo SATA ports, make 
another raid5 out of those, mounted as amandatapes. Unforch, I had a 
wrapper around amanda that took me 5 years to fine-tune, but I've no 
idea if a backup copy exists anyplace in this midden heap. Amanda, as it 
exists, if you start a recovery to bare metal, can only restore to 
yesterday; my wrapper was a special deal that grabbed the database from 
the just-finished backup and put that into the vtape, uncompressed, 
which if untarred to the bare metal gave amanda the data for a recovery 
that would bring the system back to this morning's state. It also 
cleaned the database of any links that referenced vtapes that were 
recycled and re-used.



Most hardware- and hybrid hardware/ software RAID solutions expect 
Windows -- e.g. the manufacturer provides a Windows bundle with device 
driver, CLI, GUI, etc..  Looking at the Asus PRIME Z370-A II Driver & 
Tools page, I see various Windows packages related to storage, but 
nothing for Linux.  Unless you can find suitable Debian packages, I 
would advise against motherboard RAID.



Debian supports software RAID via md, LVM, and btrfs.  I suggest that 
you use one of those.



ZFS is another possibility, but the learning curve is non-trivial.


So while I'm familiar with amanda, it's been sold to an outfit that 
doesn't care, so it's getting long in the tooth with only user support.


Suggestions re other, more recent solutions will be accepted and 
studied.  They definitely must support backing up other machines of 
varying architectures on my local network. In addition to a 4-pack of 
Linux-running wintel stuff, there's the potential for 5 or so ARMs too. 
G-code for 3D printers is all unrolled loops and as bulky as can be.



I suggest starting with the WD Black and the new Debian installation.  A 
fresh install on a new device will simplify re-arranging the rest of 
your disks later.  The challenge will be deciding what data to put on it 
after Debian boot, swap, and root; and if and how to subdivide the space.




Thank you David, take care and stay well.


Likewise.  :-)


David



Re: sata driver compatibility Q

2023-09-17 Thread David Christensen

On 9/17/23 03:26, gene heskett wrote:

On 9/16/23 19:46, David Christensen wrote:

On 9/15/23 19:37, gene heskett wrote:

On 9/15/23 20:12, David Christensen wrote:

On 9/15/23 15:04, gene heskett wrote:

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some 
gigastone 2T drives to make a raid big enough to run amanda. And 
maybe put a new card in front of my 2T /home raid10.


Searching the mailing list archive, it appears that you have an Asus 
PRIME Z370-A II motherboard (?):


https://www.asus.com/motherboards-components/motherboards/prime/prime-z370-a-ii/

And, an Intel Core i5 processor (?).  Which model?


cpuinfo can't copy/paste, 6 core, i5-9600K CPU @ 3.70GHz ...



Okay.  I do not overclock, but the "K" suffix processor is usually the 
fastest OOTB in a given series.



How many GB of OS and apps do you have?  Home directory?  Bulk data? 
Amanda backups?  VM's?  Other?



Knowing how much and what kind of live, backup, and whatever data you 
have would help us make better suggestions for storage.  Similarly, your 
drives, HBA's, chassis, and chassis mods (notably drive bays).



While researching this thread, I came across an HBA that may interest 
both of us:


 https://www.amazon.com/dp/B09L3GLCL9

4x the bandwidth (3.94 GB/s), 8 more SATA ports (24 total), and $42.31
higher price ($117.30).  I would prefer this card for the bandwidth
alone, and I never know when I might need those extra ports to
temporarily connect a bunch of disks from other machines.


There is an Amazon review that states the IO CREST is PCIe 3.0 x2 
electrically.  If correct, the bandwidth is 1.97 GB/s, which should be 
sufficient to saturate about a dozen HDD's.
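Those figures check out on the back of an envelope (the per-drive throughput below is an assumed typical value, not taken from the review):

```shell
# PCIe 3.0 runs at 8 GT/s with 128b/130b encoding, roughly 985 MB/s per lane.
lane_mbps=985
lanes=2          # x2 electrical, per the review
hdd_mbps=160     # assumed sustained rate of a typical 3.5" HDD
echo "link: $(( lane_mbps * lanes )) MB/s"
echo "drives to saturate it: $(( lane_mbps * lanes / hdd_mbps ))"
```

That lands at about 1.97 GB/s and a dozen drives, matching the estimate above.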



The Amazon page for the IO CREST provides a part number:

SI-PEX40169


STFW for the part number leads to a Syba HBA:

https://www.sybausa.com/index.php?route=product/product_id=1095

Uses 5 JMB575 ( 8 SATA Ports Per Slim SAS) connected to a JMB582 via
Port Multiplier Mode


STFW for the chips:

* JMB582

https://www.jmicron.com/products/list/15

PCIe Gen3 x1 to Dual SATA 6Gb/s

* JMB575

https://www.jmicron.com/products/list/16

1 to 5 ports SATA 6Gb/s
Port Multiplier / Port Selector


STFW it is hard to find good technical information, infer the 
architecture, or reason about the performance of the HBA.



I have some Syba PCIe 1.0 x1 to 2 @ SATA II HBA's on the shelf.  They 
have worked with Windows, Debian, and/or FreeBSD for many years.



That's nice, but only an x4 for $117; how about an x16, which I have an 
empty slot for, a full-length x16 at $160: https://www.amazon.com/dp/B09K5GLJ8D



The Amazon page for the BEYIMEI HBA states:

Use chipset 6 * ASM1064 + ASM1812 * 1 main control chip


STFW for those chips:

* ASM1064:

https://www.asmedia.com.tw/product/A58yQC9Sp5qg6TrF/58dYQ8bxZ4UR9wG5

ASM1064, a SATA host controller(AHCI) with upstream PCIe Gen3 x1 and
downstream four SATA Gen3 ports. It’s a low latency, low cost and
low power AHCI controller. With four SATA ports and cascaded port
multipliers, ASM1064 can enable users to build up various high speed
IO systems, including server, high capacity system storage or
surveillance platforms.

* ASM1812:

https://www.asmedia.com.tw/product/1e2yQ48sx5HT2pUF/b7FyQBCxz2URbzg0

ASMedia PCIe product ASM1812, a low latency, low cost and low power
12 lane , maximum 6 downstream ports packet switch. With upstream
PCIe Gen2x4 bandwidth, ASM1812 can enable users to build up various
high speed IO systems, including server, system storage or
communication platforms.


Again, it is hard to find technical information, infer the architecture, 
or reason about performance of the HBA.




Or is the jmicron x4 better supported?



I do not know.  One option is to STFW for release notes, bug reports, 
reviews, etc..  After that, I expect it boils down to "buy it, try it; 
return if not satisfied".



Regardless of what you do with HBA's, I would connect the six SSD's to 
the six motherboard SATA III ports.


Which would tie them up. Is that the faster solution, moving the 
opticals to a different card? 



My optical drives are SATA I (1.5 Gbps).  I connect them to the slowest 
SATA ports in my computers.  I use the motherboard SATA III ports for my 
fastest SATA drives under the heaviest workloads -- SSD system drives 
and cache/ ZIL/ dedup vdev's.



The question then is can the bios see them 
to boot from them?



That depends upon the SATA port and the drive.  If you must, you can 
make a temporary connection while booting and running an optical disc.



I did grab what I think is a newer bios yesterday but haven't tried to 
install it yet.

  

Re: sata driver compatibility Q

2023-09-17 Thread Andrew M.A. Cater
On Sun, Sep 17, 2023 at 06:26:49AM -0400, gene heskett wrote:
> On 9/16/23 19:46, David Christensen wrote:
> > On 9/15/23 19:37, gene heskett wrote:
> > > On 9/15/23 20:12, David Christensen wrote:
> > > > On 9/15/23 15:04, gene heskett wrote:
> > > > > On 9/15/23 17:35, David Christensen wrote:
> > > > > > On 9/15/23 12:28, gene heskett wrote:
> > > > > > > I've just ordered some stuff to rebuild or expand my Raid setup.
> > > > > > > This 16 port sata-III pci-e card:
> > > > > > > 
> > > > > > > along with a bigger drive cage, cables and such and
> > > > > > > some gigastone 2T drives to make a raid big enough
> > > > > > > to run amanda. And maybe put a new card in front of
> > > > > > > my 2T /home raid10.
> > > > > > 
> > > > > > Is everything going into one chassis?  Have you
> > > > > > considered an external drive chassis?
> > > > > 
> > > 
> > > Call me cheap, my choice is diy assembly, SS stampings you put
> > > together, all drive brackets and a dozen cables and a 5 drive power
> > > splitter, $30.
> > > Less than 5% of the price of that nice looking box.
> > 
> > 
> > Fair enough.
> > 
> > 
> > Searching the mailing list archive, it appears that you have an Asus
> > PRIME Z370-A II motherboard (?):
> > 
> > 
> > https://www.asus.com/motherboards-components/motherboards/prime/prime-z370-a-ii/
> > 
> > 
> > And, an Intel Core i5 processor (?).  Which model?
> 
> cpuinfo can't copy/paste; 6 core, i5-9600K CPU @ 3.70GHz but apparently very
> pushable. I ran one cycle of memtester pointed at 16G's and saw it above
> 4300 MHz a couple times, and that core hit 51C, which is 20C hotter than it's
> ever run before. With my intermittent load it often loafs at 800 MHz & 29C, 33C
> when OpenSCAD is munching on something I've designed.
> 
> > How many GB of OS and apps do you have?  Home directory?  Bulk data?
> > Amanda backups?  VM's?  Other?
> > 
> > 
> > 
> > 4x the bandwidth (3.94 GB/s), 8 more SATA ports (24 total), and $42.31
> > higher price ($117.30).  I would prefer this card for the bandwidth
> > alone, and I never know when I might need those extra ports to
> > temporarily connect a bunch of disks from other machines.
> > 
> > 
> > If your goal is maximum capacity at minimum cost, HDD's have larger
> > capacity, lower bandwidth, and lower cost per TB than SSD's -- per bay,
> > per drive, and per port.  Port multiplication makes sense with HDD's.
> > 
> > 
> > For file server and backup server roles, I use ZFS with HDD primary
> > storage devices and full bandwidth HBA.  Read performance, write
> > performance, and capacity utilization can be improved with ZFS
> > compression, read performance can be improved with an SSD cache vdev
> > (virtual device; e.g. partition), and write performance can be improved
> > with SSD intent log vdev (mirror of partitions).  For the backup server
> > role, capacity utilization can be improved with ZFS deduplication and an
> > SSD dedup vdev (mirror of partitions).
> > 
> > 
> > Regardless of what you do with HBA's, I would connect the six SSD's to
> > the six motherboard SATA III ports.
> > 
> Which would tie them up. Is that the faster solution, moving the opticals to
> a different card? The question then is can the bios see them to boot from
> them?
> 
> I did grab what I think is a newer bios yesterday but haven't tried to
> install it yet.
>  mt86plus_6.20_64.grub.iso.zip
> > 
> > A fresh install of  Debian stable or old-stable should solve the storage
> > I/O stuttering problems you are experiencing.  (That motherboard has
> > dual M.2 ports.  Installing Debian onto an M.2 PCIE 3.0 x4 SSD would be
> > very nice.)
> > 

See below: maybe get a new machine to do this on ...

> I know zip about the new M.2 stuff; a link to good info is appreciated. I didn't
> use it originally because I couldn't find the M.2 sockets, but probably didn't
> look too hard, as I was more concerned with getting another system built
> after a USB socket on the previous occupant of that space caught fire and
> tried to burn the place down.
> 

Given what you have: M.2 is probably faster than anything else you've got.
Chuck a terabyte drive in and forget about running Debian anywhere else
for your OS.

Any add-in card is likely to be slower than your motherboard SATA ports, which
are relatively well connected; you will end up with contention between a
relatively fast card and the motherboard, with a bottleneck somewhere.

Unless you buy the very best server-grade cards - and even then - cheap
cards are doing everything in software, so you're just running an expensive
bunch of disks and doing software RAID anyway - at which point you might as
well be using mdadm. ZFS is a whole new learning curve for you, a bunch of
tuning you may or may not want to do - and another thing for you to blame
in due course, perhaps.

You have a fascination for buying drives and kludging things together -
you might honestly be better off getting yourself a 

Re: sata driver compataility Q

2023-09-17 Thread gene heskett

On 9/16/23 19:46, David Christensen wrote:

On 9/15/23 19:37, gene heskett wrote:

On 9/15/23 20:12, David Christensen wrote:

On 9/15/23 15:04, gene heskett wrote:

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 
2T drives to make a raid big enough to run amanda. And maybe put a 
new card in front of my 2T /home raid10.


Is everything going into one chassis?  Have you considered an 
external drive chassis?


Got one of those coming too. Small possibility it will fit in the 
bottom of this huge and old Tiger Direct tower.  Because the 
radiator for the 6-core i5 is too tall, it hasn't had a side cover 
on it in years. If not, there's a 3-bay 3.5" cage at the bottom of the 
stack I can stuff with 2 SSDs per bay slot. A hidey place; the front 
cover is solid, but cooling might be a problem. If worst comes to 
worst I could shoe-goo a 120x15 fan to the side of the cage.



This is what I meant:

 https://www.pc-pitstop.com/16-bay-25inch-sas-sata-jbod-tower


Call me cheap, my choice is diy assembly, SS stampings you put 
together, all drive brackets and a dozen cables and a 5 drive power 
splitter, $30.

Less than 5% of the price of that nice looking box.



Fair enough.


Searching the mailing list archive, it appears that you have an Asus 
PRIME Z370-A II motherboard (?):



https://www.asus.com/motherboards-components/motherboards/prime/prime-z370-a-ii/


And, an Intel Core i5 processor (?).  Which model?


cpuinfo can't copy/paste: 6-core i5-9600K CPU @ 3.70GHz, but apparently 
very pushable. I ran one cycle of memtester pointed at 16 GB and saw it 
above 4300 MHz a couple of times, and that core hit 51C, which is 20C 
hotter than it's ever run before. With my intermittent load it often loafs 
at 800 MHz & 29C, 33C when OpenSCAD is munching on something I've designed.


How many GB of OS and apps do you have?  Home directory?  Bulk data? 
Amanda backups?  VM's?  Other?



While researching this thread, I came across an HBA that may interest 
both of us:



     https://www.amazon.com/dp/B09L3GLCL9

That's nice, but only an x4 for $117; how about an x16? I have an empty 
slot for a full-length x16, at $160:



Or is the JMicron x4 better supported?


4x the bandwidth (3.94 GB/s), 8 more SATA ports (24 total), and $42.31 
higher price ($117.30).  I would prefer this card for the bandwidth 
alone, and I never know when I might need those extra ports to 
temporarily connect a bunch of disks from other machines.



If your goal is maximum capacity at minimum cost, HDD's have larger 
capacity, lower bandwidth, and lower cost per TB than SSD's -- per bay, 
per drive, and per port.  Port multiplication makes sense with HDD's.



For file server and backup server roles, I use ZFS with HDD primary 
storage devices and full bandwidth HBA.  Read performance, write 
performance, and capacity utilization can be improved with ZFS 
compression, read performance can be improved with an SSD cache vdev 
(virtual device; e.g. partition), and write performance can be improved 
with SSD intent log vdev (mirror of partitions).  For the backup server 
role, capacity utilization can be improved with ZFS deduplication and an 
SSD dedup vdev (mirror of partitions).
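
A hypothetical `zpool create` matching that description - HDD mirrors for 
primary storage plus SSD partitions serving as cache (L2ARC), intent log 
(SLOG), and dedup vdevs. All device names below are placeholders I made up, 
and sizing is omitted; treat it as a sketch of the shape, not a recipe:

```shell
# Pool of two HDD mirrors, with SSD partitions as auxiliary vdevs.
# Device names are placeholders -- substitute your real disks/partitions.
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    cache  /dev/nvme0n1p1 \
    log    mirror /dev/nvme0n1p2 /dev/nvme1n1p2 \
    dedup  mirror /dev/nvme0n1p3 /dev/nvme1n1p3

# Compression is per-dataset; lz4 is the usual low-cost default.
zfs set compression=lz4 tank
```

Note that the log and dedup vdevs are mirrored, as David suggests: losing an 
unmirrored intent log or dedup table can cost data, while the cache vdev is 
disposable and needs no redundancy.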



Regardless of what you do with HBA's, I would connect the six SSD's to 
the six motherboard SATA III ports.


Which would tie them up. Is the faster solution to move the opticals 
to a different card? The question then is whether the BIOS can see them 
to boot from them.


I did grab what I think is a newer BIOS yesterday but haven't tried to 
install it yet.

 mt86plus_6.20_64.grub.iso.zip


A fresh install of  Debian stable or old-stable should solve the storage 
I/O stuttering problems you are experiencing.  (That motherboard has 
dual M.2 ports.  Installing Debian onto an M.2 PCIE 3.0 x4 SSD would be 
very nice.)


I know zip about the new M.2 stuff; a link to good info is appreciated. I 
didn't use it originally because I couldn't find the M.2 sockets, but 
probably didn't look too hard, as I was more concerned with getting 
another system built after a USB socket on the previous occupant of that 
space caught fire and tried to burn the place down.


Can you find a link showing 

Re: sata driver compataility Q

2023-09-16 Thread David Christensen

On 9/15/23 19:37, gene heskett wrote:

On 9/15/23 20:12, David Christensen wrote:

On 9/15/23 15:04, gene heskett wrote:

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 
2T drives to make a raid big enough to run amanda. And maybe put a 
new card in front of my 2T /home raid10.


Is everything going into one chassis?  Have you considered an 
external drive chassis?


Got one of those coming too. Small possibility it will fit in the 
bottom of this huge and old Tiger Direct tower.  Because the radiator 
for the 6-core i5 is too tall, it hasn't had a side cover on it in 
years. If not, there's a 3-bay 3.5" cage at the bottom of the stack I can 
stuff with 2 SSDs per bay slot. A hidey place; the front cover is 
solid, but cooling might be a problem. If worst comes to worst I 
could shoe-goo a 120x15 fan to the side of the cage.



This is what I meant:

 https://www.pc-pitstop.com/16-bay-25inch-sas-sata-jbod-tower


Call me cheap, my choice is diy assembly, SS stampings you put together, 
all drive brackets and a dozen cables and a 5 drive power splitter, $30.

Less than 5% of the price of that nice looking box.



Fair enough.


Searching the mailing list archive, it appears that you have an Asus 
PRIME Z370-A II motherboard (?):



https://www.asus.com/motherboards-components/motherboards/prime/prime-z370-a-ii/


And, an Intel Core i5 processor (?).  Which model?


How many GB of OS and apps do you have?  Home directory?  Bulk data? 
Amanda backups?  VM's?  Other?



While researching this thread, I came across an HBA that may interest 
both of us:



https://www.amazon.com/dp/B09L3GLCL9


4x the bandwidth (3.94 GB/s), 8 more SATA ports (24 total), and $42.31 
higher price ($117.30).  I would prefer this card for the bandwidth 
alone, and I never know when I might need those extra ports to 
temporarily connect a bunch of disks from other machines.



If your goal is maximum capacity at minimum cost, HDD's have larger 
capacity, lower bandwidth, and lower cost per TB than SSD's -- per bay, 
per drive, and per port.  Port multiplication makes sense with HDD's.



For file server and backup server roles, I use ZFS with HDD primary 
storage devices and full bandwidth HBA.  Read performance, write 
performance, and capacity utilization can be improved with ZFS 
compression, read performance can be improved with an SSD cache vdev 
(virtual device; e.g. partition), and write performance can be improved 
with SSD intent log vdev (mirror of partitions).  For the backup server 
role, capacity utilization can be improved with ZFS deduplication and an 
SSD dedup vdev (mirror of partitions).



Regardless of what you do with HBA's, I would connect the six SSD's to 
the six motherboard SATA III ports.



A fresh install of  Debian stable or old-stable should solve the storage 
I/O stuttering problems you are experiencing.  (That motherboard has 
dual M.2 ports.  Installing Debian onto an M.2 PCIE 3.0 x4 SSD would be 
very nice.)



David



Re: sata driver compataility Q

2023-09-16 Thread songbird
gene heskett wrote:
> On 9/16/23 06:07, songbird wrote:
>> gene heskett wrote:
>> ...
>>> This setup worked instantly under buster and bullseye, but takes from 30
>>> secs to 5 minutes to open a write requestor window asking where to put
>>> the download I clicked on under bookworm.
>> 
>>trace the first part of the process and see what is
>> taking so long.
>
> I'd love to, but how do you trace a mouse click?  All the 
> "alphabet-trace" utils I know are cli only. Probably my fault, but...

  replace the /usr/bin/ by a script which traces 
the  in question...  or if it is X related there
are probably ways of tracing X.

  it might be a desktop, greeter or a window manager 
binary of some type but you should eventually be able 
to figure out what is going on.


  songbird
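
songbird's wrapper idea might look like the sketch below. The binary name 
is hypothetical (Gene never names the slow program), and it assumes strace 
is installed: move the real binary aside once, then drop in a script that 
traces it with timestamps so long stalls stand out in the log.

```shell
# Sketch: replace a binary with a tracing wrapper (all names hypothetical).
# First, as root, move the real binary aside -- once:
#   mv /usr/bin/slowapp /usr/bin/slowapp.real

# Then install this in its place as /usr/bin/slowapp:
cat > slowapp.wrapper <<'EOF'
#!/bin/sh
# -f follows children, -tt timestamps every syscall, -o logs per-invocation.
exec strace -f -tt -o /tmp/slowapp.trace.$$ /usr/bin/slowapp.real "$@"
EOF
chmod +x slowapp.wrapper
```

After reproducing the delay, look for a multi-second gap between timestamps 
in /tmp/slowapp.trace.* to see which call is stalling.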



Re: sata driver compataility Q

2023-09-16 Thread gene heskett

On 9/16/23 06:07, songbird wrote:

gene heskett wrote:
...

This setup worked instantly under buster and bullseye, but takes from 30
secs to 5 minutes to open a write requestor window asking where to put
the download I clicked on under bookworm.


   trace the first part of the process and see what is
taking so long.


I'd love to, but how do you trace a mouse click?  All the 
"alphabet-trace" utils I know are cli only. Probably my fault, but...



   songbird



Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: sata driver compataility Q

2023-09-16 Thread songbird
gene heskett wrote:
...
> This setup worked instantly under buster and bullseye, but takes from 30 
> secs to 5 minutes to open a write requestor window asking where to put 
> the download I clicked on under bookworm. 

  trace the first part of the process and see what is 
taking so long.


  songbird



Re: sata driver compataility Q

2023-09-15 Thread gene heskett

On 9/15/23 20:12, David Christensen wrote:

On 9/15/23 15:04, gene heskett wrote:

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 
2T drives to make a raid big enough to run amanda. And maybe put a 
new card in front of my 2T /home raid10.


Is everything going into one chassis?  Have you considered an 
external drive chassis?


Got one of those coming too. Small possibility it will fit in the 
bottom of this huge and old Tiger Direct tower.  Because the radiator 
for the 6-core i5 is too tall, it hasn't had a side cover on it in 
years. If not, there's a 3-bay 3.5" cage at the bottom of the stack I can 
stuff with 2 SSDs per bay slot. A hidey place; the front cover is solid, 
but cooling might be a problem. If worst comes to worst I could 
shoe-goo a 120x15 fan to the side of the cage.



This is what I meant:

     https://www.pc-pitstop.com/16-bay-25inch-sas-sata-jbod-tower


David
Call me cheap, my choice is diy assembly, SS stampings you put together, 
all drive brackets and a dozen cables and a 5 drive power splitter, $30.

Less than 5% of the price of that nice looking box.




Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: sata driver compataility Q

2023-09-15 Thread David Christensen

On 9/15/23 15:04, gene heskett wrote:

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T 
drives to make a raid big enough to run amanda. And maybe put a new 
card in front of my 2T /home raid10.


Is everything going into one chassis?  Have you considered an external 
drive chassis?


Got one of those coming too. Small possibility it will fit in the bottom 
of this huge and old Tiger Direct tower.  Because the radiator for the 
6-core i5 is too tall, it hasn't had a side cover on it in years. If not, 
there's a 3-bay 3.5" cage at the bottom of the stack I can stuff with 2 
SSDs per bay slot. A hidey place; the front cover is solid, but cooling 
might be a problem. If worst comes to worst I could shoe-goo a 120x15 
fan to the side of the cage.



This is what I meant:

https://www.pc-pitstop.com/16-bay-25inch-sas-sata-jbod-tower


David



Re: sata driver compataility Q

2023-09-15 Thread gene heskett

On 9/15/23 17:56, Andy Smith wrote:

Hello,

On Fri, Sep 15, 2023 at 05:35:40PM -0400, gene heskett wrote:

This setup worked instantly under buster and bullseye, but takes from 30
secs to 5 minutes to open a write requestor window asking where to put the
download I clicked on under bookworm.


I think you should work out why that happens before spending a lot
of money on new hardware. It doesn't seem at all likely to me that
your existing hardware is at fault. Buying new hardware risks
experiencing the same thing with still no idea why.

Thanks,
Andy

I won't argue on that point, Andy, but it has now been several months of 
asking the same question from different angles and not getting a single 
helpful reply. I know my reputation is bad because I often use the box I'm 
supposed to stay inside of as kindling to start the next campfire. I 
don't "stay inside that famous box" unless there is someone outside 
shooting at me, in which case I might shoot back. Computers can do 
anything you can write an interface to tickle the hardware for.


Now if someone can have me check something that might be agley, my 
fingers to do that check are at your disposal. It eventually works, 100% 
of the time. But why the 30-second to 5-minute delay before it allows me 
to do what I asked?


Frustrated is the PC word, but that is certainly not how I describe it 
in my loneliness. There is no one here to hear me rant except me, 
which is probably a "good thing".


Thanks, take care and stay well Andy.

Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: sata driver compataility Q

2023-09-15 Thread gene heskett

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T 
drives to make a raid big enough to run amanda. And maybe put a new 
card in front of my 2T /home raid10.


The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?

Thanks all.

Cheers, Gene Heskett.



Searching Amazon for "gigastone 2T", I see:


https://www.amazon.com/Gigastone-Internal-Compatible-Desktop-Laptop/dp/B0BN5978X1

     540 MB per second


PCIe 3.0 x1 is rated for 985 MB/s.

     https://en.wikipedia.org/wiki/Pcie


So, the PCIe 3.0 x1 connector is going to be a bottleneck when accessing 
more than one SSD.



I suggest that you pick an HBA with a wider PCIe connector -- PCIe 3.0 
x8 (7.88 GB/s) is a reasonable match for sixteen SSD's (8.64 GB/s). PCIe 
3.0 x16 would eliminate the PCIe bottleneck.



Is everything going into one chassis?  Have you considered an external 
drive chassis?
Got one of those coming too. Small possibility it will fit in the bottom 
of this huge and old Tiger Direct tower.  Because the radiator for the 
6-core i5 is too tall, it hasn't had a side cover on it in years. If not, 
there's a 3-bay 3.5" cage at the bottom of the stack I can stuff with 2 
SSDs per bay slot. A hidey place; the front cover is solid, but cooling 
might be a problem. If worst comes to worst I could shoe-goo a 120x15 
fan to the side of the cage.



Make sure your power supply(s) are adequate to the task.


400 watter, presently runs dead cold. 5V line rated at 16A, probably not 
using half that now.


David



Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: sata driver compataility Q

2023-09-15 Thread Andy Smith
Hello,

On Fri, Sep 15, 2023 at 05:35:40PM -0400, gene heskett wrote:
> This setup worked instantly under buster and bullseye, but takes from 30
> secs to 5 minutes to open a write requestor window asking where to put the
> download I clicked on under bookworm.

I think you should work out why that happens before spending a lot
of money on new hardware. It doesn't seem at all likely to me that
your existing hardware is at fault. Buying new hardware risks
experiencing the same thing with still no idea why.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: sata driver compataility Q

2023-09-15 Thread gene heskett

On 9/15/23 15:56, Dan Ritter wrote:

gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T drives
to make a raid big enough to run amanda. And maybe put a new card in front
of my 2T /home raid10.

The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?


What you actually have there is 1 SATA-3 controller which should
be able to support 4 disks, and 4 port multipliers to support
16.

And I forgot to ask: which would be faster, 4 drives on ports 1-4, or 4 
drives on ports 1-5-9-13?



Effectively, every group of four disks is competing against
themselves. So performance is going to be mediocre.

It should work, though, as a giant dumping ground.

But why are you buying 16 x 2TB disks, if not for performance?

I've never heard of Gigastone and can offer no assessment of
them.

-dsr-


Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 
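
Dan's point about port-multiplier contention, in rough numbers. The figures 
are assumptions, not card specs: a typical SATA-3 link moves roughly 600 MB/s 
of payload, and this sketch assumes ports are numbered consecutively per 
multiplier (so ports 1-4 share one multiplier, while 1, 5, 9, 13 land on 
four different ones):

```shell
# Four drives behind ONE port multiplier share that multiplier's single
# upstream SATA-3 link; spread across four multipliers, each gets its own.
link_mbps=600                          # one SATA-3 link, roughly
per_drive_same_pm=$((link_mbps / 4))   # four drives sharing one link

echo "4 drives on ports 1-4 (one multiplier):      ~${per_drive_same_pm} MB/s each"
echo "4 drives on ports 1,5,9,13 (one per multiplier): up to ${link_mbps} MB/s each"
echo "...until the card's PCIe 3.0 x1 uplink (~985 MB/s) caps the aggregate."
```

So spreading the drives one-per-multiplier answers Gene's question: it is the 
faster layout, but the x1 slot still limits total throughput to well under 
what four unshared SATA links could deliver.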



Re: sata driver compataility Q

2023-09-15 Thread gene heskett

On 9/15/23 15:56, Dan Ritter wrote:

gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T drives
to make a raid big enough to run amanda. And maybe put a new card in front
of my 2T /home raid10.

The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?


What you actually have there is 1 SATA-3 controller which should
be able to support 4 disks, and 4 port multipliers to support
16.

Effectively, every group of four disks is competing against
themselves. So performance is going to be mediocre.

Since the pci-e plug is the narrow one, I suspected as much; the data 
path is too narrow. This Asus mobo has 6 ports, but I'd assume the mobo 
ports would be wider, perhaps even x4, though that is also not stated. 
The existing 4-drive, 1T-per-drive raid10 is on its own x1-based 6-port 
controller. The other 2 ports are not used at present.


This setup worked instantly under buster and bullseye, but takes from 30 
secs to 5 minutes to open a write requestor window asking where to put 
the download I clicked on under bookworm.  And just as often as I've 
mentioned it, the reply, if any, changes the topic w/o changing the 
subject line, to ignore it.




It should work, though, as a giant dumping ground.

Such as an amanda backup raid; running it in the wee hours, I couldn't 
care less how long it takes as long as it is done by 05:30 or 06:00. If 
I can rearrange the usb breakouts, I can uncover another pci-e x1 socket 
for this card.



But why are you buying 16 x 2TB disks, if not for performance?


I'm not; I'll only have 6 of them when the rest get here.  One of them 
may go into one of my milling machines to replace the last spinning rust 
on the property.



I've never heard of Gigastone and can offer no assessment of
them.
Relatively new on Amazon. A month ago they were nearly the only 2T 
available; in a month the 2T list has had the decimal point shifted right 
at least one place. And the prices have risen 10 bucks: $89/copy today.


-dsr-

Take care and stay well, Dan.

Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: sata driver compataility Q

2023-09-15 Thread David Christensen

On 9/15/23 12:28, gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T 
drives to make a raid big enough to run amanda. And maybe put a new card 
in front of my 2T /home raid10.


The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?

Thanks all.

Cheers, Gene Heskett.



Searching Amazon for "gigastone 2T", I see:


https://www.amazon.com/Gigastone-Internal-Compatible-Desktop-Laptop/dp/B0BN5978X1

540 MB per second


PCIe 3.0 x1 is rated for 985 MB/s.

https://en.wikipedia.org/wiki/Pcie


So, the PCIe 3.0 x1 connector is going to be a bottleneck when accessing 
more than one SSD.



I suggest that you pick an HBA with a wider PCIe connector -- PCIe 3.0 
x8 (7.88 GB/s) is a reasonable match for sixteen SSD's (8.64 GB/s). 
PCIe 3.0 x16 would eliminate the PCIe bottleneck.



Is everything going into one chassis?  Have you considered an external 
drive chassis?



Make sure your power supply(s) are adequate to the task.


David
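
David's bandwidth arithmetic can be sanity-checked in a few lines. The 
540 MB/s and 985 MB/s figures come from the thread itself; the x8 and x16 
numbers are the usual PCIe 3.0 ratings:

```shell
# Aggregate demand of 16 SATA SSDs vs. what each PCIe 3.0 link width supplies.
ssd_mbps=540                      # per-SSD sequential rate (vendor figure)
n_ssd=16
aggregate=$((ssd_mbps * n_ssd))

echo "aggregate SSD demand: ${aggregate} MB/s"
echo "PCIe 3.0 x1:    985 MB/s  -- saturated by two SSDs"
echo "PCIe 3.0 x8:   7880 MB/s  -- close match for sixteen"
echo "PCIe 3.0 x16: 15760 MB/s  -- no PCIe bottleneck"
```

Sixteen drives at 540 MB/s want 8640 MB/s, which is why an x1 card chokes 
and an x8 or x16 HBA is the sensible minimum for an all-SSD array.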



sata driver compataility Q

2023-09-15 Thread gene heskett

Greetings all;

I've just ordered some stuff to rebuild or expand my RAID setup,
this 16-port SATA-III PCIe card:

along with a bigger drive cage, cables and such, and some Gigastone 2T 
drives to make a RAID big enough to run Amanda. And maybe put a new card 
in front of my 2T /home raid10.


The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?

Thanks all.

Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page