A z16 has a maximum I/O bandwidth of 128 GBps. The limitation is not the number 
of channels, but the bandwidth to memory. I don't know whether the I/O bandwidth 
has any impact on processor access to memory, but my understanding is that it 
has little, if any.

The z16 implementation allows one processor chip to access the caches of other 
processor chips. This helps to ensure data integrity when one chip alters a 
memory location that another chip needs to access.

What happens in x86 architecture systems when one chip alters data held in its 
cache and another chip needs to access the same location in shared memory? What 
happens when a DMA I/O operation needs to access memory that is held in a 
processor's cache?
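For what it's worth, the cross-chip case is easy to provoke from user space. 
Below is a minimal sketch in C (assuming Linux, gcc -pthread, and glibc's 
pthread_setaffinity_np; the core numbers are placeholders you would pick so the 
two threads land on different sockets). Two threads update the same shared 
counter; the hardware coherence protocol is what keeps the result correct, at 
the cost of the cache line's ownership bouncing between the chips.

/* Sketch: two threads pinned to (ideally) different chips, both
 * incrementing one shared counter.  Correctness is guaranteed by the
 * cache coherence protocol; the core numbers are illustrative only. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long counter;              /* lives in one cache line */

static void *worker(void *arg)
{
    int core = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    for (long i = 0; i < 10 * 1000 * 1000; i++)
        atomic_fetch_add(&counter, 1);   /* cache-line ownership migrates
                                            between the two chips */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int core_a = 0, core_b = 64;         /* placeholder: choose cores on
                                            different sockets */
    pthread_create(&t1, NULL, worker, &core_a);
    pthread_create(&t2, NULL, worker, &core_b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", atomic_load(&counter));
    return 0;
}

The program prints 20000000 either way, but it runs noticeably slower when the 
two threads are on different chips than when they share one, which is the 
coherence traffic showing up as latency.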

The hard part of designing an x86 system to handle very large amounts of I/O is 
the memory design, allowing the I/O subsystem to access large amounts of memory 
without impacting the processors.

-- 
Tom Marchant

On Wed, 2 Aug 2023 09:24:59 -0500, Grant Taylor <gtay...@tnetconsulting.net> 
wrote:

>On 8/1/23 10:26 PM, David Crayford wrote:
>> When you consider that a standard commodity rack server such as an
>> AMD EPYC can support 128 PCIe lanes and up to 8 memory channels I
>> would suggest x86 can handle a lot of I/O if you have the right gear.
>
>I think it's important to note that all of these are distinct and germane:
>
>  - what the hardware can theoretically support
>  - what the OS can support
>  - what is asked of them
>  - what people are willing to pay for
>
>Having the right gear is very important.  Effectively utilizing it is
>also important.
>
>Grant. . . .
