On Wed, Apr 17, 2024 at 9:33 AM Dale <rdalek1...@gmail.com> wrote:
>
> Rich Freeman wrote:
>
> > All AM5 CPUs have GPUs, but in general motherboards with video outputs
> > do not require the CPU to have a GPU built in.  The ports just don't
> > do anything if this is lacking, and you would need a dedicated GPU.
> >
>
> OK.  I read that a few times.  If I want to use the onboard video I have
> to have a certain CPU that supports it?  Do those have something so I
> know which is which?  Or do I read that as all the CPUs support onboard
> video but if one plugs in a video card, that part of the CPU isn't
> used?  The last one makes more sense but asking to be sure.

To use onboard graphics you need both a motherboard and a CPU that
support it.  I believe that integrated graphics and an external GPU
card can both be used at the same time.  Note that integrated
graphics typically reserve a chunk of system RAM for themselves,
while an external GPU has its own dedicated memory (though it can
borrow system RAM on top of that).

The 7600X has a built-in RDNA2 GPU.  All the original Ryzen Zen 4
CPUs had integrated graphics, but it looks like they JUST announced a
new line of consumer Zen 4 CPUs that don't - they all end in an F
right now.

In any case, if you google the CPU you're looking at, it will tell
you whether it has integrated graphics.  Most of the better stores
and parts sites have filters for this feature as well (Newegg,
PCPartPicker, or whatever).

If you don't play games, then definitely get integrated graphics.
Even if the CPU costs a tiny bit more, it will give you a free, empty
x16 PCIe slot at whatever speed the CPU supports (PCIe 5.0 in this
case, which is as good as you can get right now).
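
If you want rough numbers on what that slot is worth, here's a quick
Python back-of-the-envelope (the per-lane figures are the usual
theoretical maxima after encoding overhead; real-world throughput
lands a bit lower):

# Theoretical x16 slot bandwidth by PCIe generation.
# Per-lane GB/s after encoding overhead; real hardware is a bit lower.
per_lane_gbs = {3: 0.985, 4: 1.969, 5: 3.938}

for gen, gbs in per_lane_gbs.items():
    print(f"PCIe {gen}.0 x16: ~{gbs * 16:.0f} GB/s each direction")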

> That could mean a slight price drop for the things I'm looking at then.
> One can hope.  Right???

Everything comes down in price eventually...

>
> I might add, simply right clicking on the desktop can take sometimes 20
> or 30 seconds for the menu to pop up.  Switching from one desktop to
> another can take several seconds, sometimes 8 or 10.  This rig is
> getting slower.  Actually, the software is just getting bigger.  You get
> my meaning tho.  I bet the old KDE3 would be blazingly fast compared to
> the rig I ran it on originally.

That sounds like RAM but I couldn't say for sure.  In any case a
modern system will definitely help.

> Given the new rig can have 128GBs, I assume it comes in 32GB sticks.

Consumer DDR5 sticks seem to come as large as 48GB each, though that
seems like an odd size.
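
For what it's worth, here's a quick tally of how the common stick
sizes add up on a typical four-slot board (assuming the 128GB board
limit you mentioned - just illustrative arithmetic):

# How common DDR5 stick sizes add up on a typical 4-slot board,
# assuming the 128GB board limit mentioned above (illustrative only).
board_limit_gb = 128
for stick_gb in (16, 24, 32, 48):
    totals = [stick_gb * n for n in (1, 2, 4) if stick_gb * n <= board_limit_gb]
    print(f"{stick_gb}GB sticks -> possible totals: {totals} GB")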

> I'd get 32GBs at first.  Maybe a month or so later get another 32GB.
> That'll get me 64Gbs.  Later on, a good sale maybe, buy another 32GB or
> a 64GB set and max it out.

You definitely want to match the timings, and you probably want to
match the sticks themselves.  You also generally need to be mindful
of how many channels you're occupying, though as I understand it each
DDR5 stick is essentially two sub-channels on its own.  If you just
stick one DDR4 module in a system, it will not perform as well as two
modules of half the size.  I forget the gory details, but I believe
it comes down to the timing of switching between two different
channels versus moving around within a single one.

DDR RAM timings get really confusing.  It comes down to the fact that
addresses are grouped in various ways (channels, ranks, banks, rows),
and randomly seeking from one address to another can take a different
amount of time depending on how the new address is related to the one
you last read.  The idea of "seeking" in RAM may seem odd, but modern
memory technologies are a bit like storage: they are accessed in a
semi-serial manner.  Essentially the latencies and transfer rates are
such that even dynamic RAM is too slow to be driven as one big
parallel array of cells.  I'm guessing it gets into a lot of gory
details with reactances and so on - just wiring up every memory cell
in parallel like in the old days would slow down all the voltage
transitions.
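
To put rough numbers on that, here's a quick Python sketch (the
timings are made up but DDR5-6000-ish; check your actual kit's specs)
showing why the "distance" between two addresses changes the access
time:

# Back-of-the-envelope DRAM latency.  Timings below are hypothetical
# DDR5-6000-style numbers, purely to illustrate row hits vs misses.
data_rate_mts = 6000             # mega-transfers per second
cycle_ns = 2000 / data_rate_mts  # one memory clock in ns (DDR: 2 transfers/clock)

tCL, tRCD, tRP = 30, 38, 38      # CAS latency, RAS-to-CAS delay, row precharge (clocks)

# Same row already open ("row hit"): only the CAS latency applies.
row_hit_ns = tCL * cycle_ns
# Different row in the same bank ("row miss"): close the old row,
# open the new one, then do the CAS - three delays back to back.
row_miss_ns = (tRP + tRCD + tCL) * cycle_ns

print(f"row hit:  ~{row_hit_ns:.1f} ns")
print(f"row miss: ~{row_miss_ns:.1f} ns")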

> I've looked at server type boards.  I'd like to have one.  I'd like one
> that has SAS ports.

So, I don't really spend much time looking at them, but I'm guessing
SAS is fairly rare on the motherboards themselves.  They probably
almost always have an HBA/RAID controller in a PCIe slot.  You can
put the same cards in any PC, but on a typical desktop board you're
going to struggle to keep a slot free.  You can always use a riser or
something to cram an HBA into a slot that is too small for it, but
then you're going to suffer reduced performance.  For just a few
spinning disks, though, it probably won't matter.

Really, though, I feel like the trend is towards NVMe, and that gets
into a whole different world.  U.2 carries either SAS or PCIe (NVMe)
over the same connector, and there are HBAs that will handle both.
Or if you only want NVMe, it looks like you can use bifurcation-based
adapters to break slots out more cheaply.
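
For a rough sense of the bifurcation math (assuming an x16 slot split
x4/x4/x4/x4 and the usual theoretical per-lane figures):

# Splitting one x16 slot into four x4 NVMe drives via bifurcation.
# Per-lane GB/s are theoretical maxima; real drives land below them.
per_lane_gbs = {3: 0.985, 4: 1.969, 5: 3.938}
slot_lanes, drives = 16, 4
lanes_per_drive = slot_lanes // drives   # x4 per drive

for gen, gbs in per_lane_gbs.items():
    print(f"PCIe {gen}.0: ~{gbs * lanes_per_drive:.1f} GB/s per drive")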

I'm kinda thinking about going that direction when I expand my Ceph
cluster.  There are very nice NVMe server designs that can get 24
drives into 2U or whatever, but they're very recent and seem to cost
a fortune even used.  So I'm thinking about maybe getting a used
workstation with enough free PCIe slots that support bifurcation, and
using one slot for a NIC and another for 4x U.2 drives.  If the used
workstation is cheap ($100-200), that is very low overhead per drive
compared to the server solutions.  (You can also do 4x M.2 instead.)
These days enterprise U.2 drives cost about the same as SATA/M.2
drives with the same feature set, and in U.2 you can get much larger
capacities.  It might be a while before the really big ones start
becoming cheap, though...

-- 
Rich
