On Tue, Sep 19, 2023 at 1:05 PM Frank Steinmetzger <war...@gmx.de> wrote:
>
> On Tue, Sep 19, 2023 at 11:01:48AM -0400, Rich Freeman wrote:
>
> No, the chipset downlink is always four lanes wide.

The diagram you linked has 8, but I can't vouch for its accuracy.
Haven't looked into it for AM4.

> > Again, that is AM4 which I haven't looked into as much.  AM5 increases
> > the v5 lanes and still has some v4 lanes.
>
> AFAIR, PCIe 5 is only guaranteed for the NVMe slot. The rest is optional or
> subject to the chipset.

Actually, PCIe v5 isn't guaranteed for the NVMe slot either, or even
the first 16x slot.  It is all subject to the motherboard design.
There are AM5 MBs that don't have any PCIe v5 slots.

> > I'm sure PCIe v5 switching is hard/expensive, but they definitely
> > could mix things up however they want.  The reality is that most IO
> > devices aren't going to be busy all the time, so you definitely could
> > split 8 lanes up 64 ways, especially if you drop a generation or two
> > along the way.
>
> Unfortunately you can’t put low-speed connectors on a marketing sheet, when
> competitors have teh shizz.

Well, you can, but they don't fit on a tweet.  Just my really long emails...

We're not their target demographic in any case.  Now, if Dale wanted
more RGB lights and transparent water hoses, and not more PCIe slots,
the market would be happy to supply...
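To put rough numbers on the lane-splitting point above: dropping two PCIe generations quarters the per-lane bandwidth, so a v5 x8 uplink split into 64 v3 x1 links is only about 2:1 oversubscribed. A minimal sketch (the per-lane GB/s figures are approximate published values, and the function name is just for illustration):

```python
# Approximate per-lane PCIe throughput in GB/s (after encoding overhead).
LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def oversubscription(uplink_lanes, uplink_gen, downlink_count, downlink_gen):
    """Ratio of total downstream bandwidth to the uplink feeding it."""
    uplink = uplink_lanes * LANE_GBPS[uplink_gen]
    downstream = downlink_count * LANE_GBPS[downlink_gen]
    return downstream / uplink

# Splitting a PCIe 5.0 x8 uplink into 64 PCIe 3.0 x1 links:
ratio = oversubscription(8, 5, 64, 3)
print(f"{ratio:.1f}:1 oversubscribed")  # 2.0:1
```

Since most downstream devices sit idle most of the time, that level of oversubscription is rarely noticeable in practice.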

>
> > Server hardware definitely avoids many of the limitations, but it just
> > tends to be super-expensive.
>
> Which is funny because with the global cloud trend, you would think that its
> supply increases and prices go down.

I think the problem is that the buyers are way less price-sensitive.

When a medium/large company is buying a server, odds are they're
spending at least tens of thousands of dollars on the
software/maintenance side of the project, if not hundreds of thousands
or more.  They also like to standardize on hardware, so they'll pick
the one-size-fits-all solution that can work in any situation, even if
it is pricey.  Paying $5k for a server isn't a big deal, especially if
it is reliable/etc so that it can be neglected for 5 years (since
touching it involves dragging in the project team again, which
involves spending $15k worth of time just getting the project
approved).

The place where they are price-sensitive is on really large-scale
operations, like cloud providers, Google, social media, and so on -
where they need tens of thousands of identical servers.  These
companies would create demand for very efficiently-priced hardware.
However, at their scale they can afford to custom develop their own
stuff, and they don't sell to the public, so while that cheap server
hardware exists, you can't obtain it.  Plus it will be very tailored
to their specific use case.  If Google needs a gazillion workers for
their search engine they might have tensor cores and lots of CPU, and
maybe almost no RAM/storage.  If they need local storage they might
have one M.2 slot and no PCIe slots at all, or some other lopsided
config.  Backblaze has their storage pods that are basically one giant
stack of HDD replicators and almost nothing else.  They probably don't
even have sideband management on their hardware, or if they do it is
something integrated with their own custom solutions.

Oh, the other big user is the US government, and they're happy to pay
for a million of those $5k servers as long as they're assembled in the
right congressional districts.  Reducing the spending probably reduces
the number of jobs, so that is an anti-feature...  :)

-- 
Rich
