On Mon, Sep 18, 2023 at 07:16:17AM -0400, Rich Freeman wrote:

> On Mon, Sep 18, 2023 at 6:13 AM Frank Steinmetzger <war...@gmx.de> wrote:
> >
> > On Mon, Sep 18, 2023 at 12:17:20AM -0500, Dale wrote:
> > > […]
> > > The downside, only micro ATX and
> > > mini ITX mobo.  This is a serious down vote here.
> >
> > Why is that bad? µATX comes with up to four PCIe slots. Even for ten drives,
> > you only need one SATA expander (with four or six on-board). Perhaps a fast
> > network card if one is needed, that makes two slots.
> 
> Tend to agree.  The other factor here is that desktop-oriented CPUs
> tend to not have a large number of PCIe lanes free for expansion
> slots, especially if you want 1-2 NVMe slots.  (You also have to watch
> out as the lanes for those can be shared with some of the expansion
> slots so you can't use both.)
> 
> If you want to consider a 10GbE+ card I'd definitely get something
> with integrated graphics,

That is a good recommendation in any case. If you are a gamer, you have a 
fallback in case the GPU kicks the bucket. And if you are not, your power 
bill goes way down.

> because a NIC is going to need a 4-8x port
> most likely 

Really? PCIe 3.0 offers 1 GB/s per lane, i.e. about 8 Gbps/lane, so a single 
lane is almost enough for 10 GbE. OTOH, 10 GbE is a major power sink. 
Granted, 1 GbE is not much when you’re dealing with many terabytes. And then 
there is networking over Thunderbolt, which I only recently learned about. 
But that is probably very restricted in cable length, which will also be the 
case for 10 GbE, so probably no options for the outhouse. :D

> > Speaking of RAM; might I interest you in server-grade hardware? The reason
> > being that you can then use ECC memory, which is a nice perk for storage.
> 
> That and way more PCIe lanes.  That said, it seems super-expensive,
> both in terms of dollars, and power use.  Is there any entry point
> into server-grade hardware that is reasonably priced, and which can
> idle at something reasonable (certainly under 50W)?

I have a four-bay NAS with a server board (ASRock Rack E3C224D2I), actually my 
last surviving Gentoo system. ;-) With an IPMI chip (which alone draws several 
watts), 16 GiB of DDR3 ECC, an i3-4170 and 4×6 TB, it draws around 33-35 W 
at the wall at idle, that is, after I enabled all power-saving items in 
powertop. Without them, it is around 10 W more. It has two gigabit ports 
(plus the IPMI port) and a 300 W 80+ Gold PSU.

> > I was going to upgrade my 9-year-old Haswell system at some point to a new
> > Ryzen build. Have been looking around for parts and configs for perhaps two
> > years now but I can’t decide (perhaps some remember previous ramblings about
> > that).
> 
> The latest zen generation is VERY nice, but also pretty darn
> expensive.  Going back to zen3 might get you more for the money,
> depending on how big you're scaling up.

I’ve been looking at Zen 3 the whole time, namely the 5700G APU: five times 
the performance of my i5, for less power, and good graphics performance for 
the occasional game. I’m a bit paranoid about Zen 4’s inclusion of Microsoft 
Pluton (“chip-to-cloud security”), and Zen 4 in general has higher idle 
consumption. But now that Phoenix, the Zen 4 successor to the 5700G, is 
about to become available, I am again hesitant to pull the trigger, waiting 
to see the price tag.

> A big part of the cost of
> zen4 is the motherboard, so if you're building something very high end
> where the CPU+RAM dominates, then zen4 may be a better buy.

I’m fine with the middle class. In fact, I always considered i7s overpriced 
compared to i5s. The extra performance of top-tier parts is usually bought 
with disproportionately high power consumption (meaning heat and noise).

> If you just want a low-core system then you're paying a lot just to get
> started.

I want to get the best bang within my constraints, meaning the 5700G 
(8 cores). The 5600G (6 cores) is much cheaper, but I want to get the best 
graphics I can get in an APU. And I am always irked by having 6 cores (12 
threads), because it’s not a power of 2, so percentages in load graphs will 
look skewed. :D
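The load-graph quibble is just this division: with 12 threads, one fully 
busy thread shows up as a repeating decimal, while with 16 it is an exact 
percentage.

```python
# Per-thread granularity of a CPU-usage percentage display
for threads in (12, 16):
    step = 100 / threads
    print(f"{threads} threads -> {step:.4g} % per fully loaded thread")
```

12 threads give steps of 8.333… %, 16 threads a clean 6.25 %.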

> The advantage of
> distributed filesystems is that you can build them out of a bunch of
> cheap boxes […]
> When you start getting up to a dozen drives the cost of getting them
> to all work on a single host starts going up.  You need big cases,
> expansion cards, etc.  Then when something breaks you need to find a
> replacement quickly from a limited pool of options.  If I lose a node
> on my Rook cluster I can just go to newegg and look at $150 used SFF
> PCs, then install the OS and join the cluster and edit a few lines of
> YAML and the disks are getting formatted...

For simple media storage, I personally would find this too cumbersome to 
manage, especially if you stick to Gentoo and don’t have a homogeneous 
device pool (not to mention compile times). I’d choose organisational 
simplicity over hardware availability. (My NAS isn’t running most of the 
time, mostly due to the power bill, but also to keep the hardware alive 
longer.)

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

Talents find solutions, geniuses discover problems.
