Take, for example, an Emulex (Broadcom) HBA. The quad-port adapter can handle up to 10M IOPS with a throughput rate of 12,800 MB/s full duplex over 16-lane PCIe, which uses DMA. All of the I/O is offloaded: interrupts, multiplexing, etc. When you consider that a standard commodity rack server built on an AMD EPYC CPU can support 128 PCIe lanes and up to 8 memory channels, I would suggest x86 can handle a lot of I/O if you have the right gear.
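As a back-of-envelope sanity check (just a sketch, assuming quad 32GFC ports at a nominal 3,200 MB/s per port per direction and a PCIe Gen4 x16 host interface with 128b/130b encoding; see the product brief linked below for the exact figures):

# Can a PCIe Gen4 x16 slot feed a quad-port 32GFC HBA at line rate?
# Nominal link rates only, not measured throughput.

FC_PORTS = 4
FC_RATE_MBPS = 3_200            # assumed 32GFC nominal rate, MB/s per port, per direction

PCIE_GT_PER_LANE = 16           # PCIe Gen4: 16 GT/s per lane
PCIE_LANES = 16
PCIE_ENCODING = 128 / 130       # 128b/130b encoding overhead

fc_per_direction = FC_PORTS * FC_RATE_MBPS                                     # 12,800 MB/s
pcie_per_direction = PCIE_GT_PER_LANE * PCIE_LANES * PCIE_ENCODING * 1000 / 8  # ~31,500 MB/s

print(f"FC demand per direction:   {fc_per_direction:,.0f} MB/s")
print(f"PCIe supply per direction: {pcie_per_direction:,.0f} MB/s")
print(f"Headroom: {pcie_per_direction / fc_per_direction:.1f}x")

So even a single Gen4 x16 slot has roughly 2.5x headroom over four saturated 32GFC ports running full duplex, and a 128-lane EPYC box has several such slots' worth of lanes to spare.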
https://docs.broadcom.com/doc/LPe35000-LPe36000-PB

> On 2 Aug 2023, at 10:42 am, Grant Taylor <0000023065957af1-dmarc-requ...@listserv.ua.edu> wrote:
>
> On 8/1/23 7:20 PM, David Crayford wrote:
>> What's the difference between channelized I/O and a rack of x86 servers connected to a SAN using fibre channel driven by high speed HBAs?
>
> I don't know.
>
> My understanding is that Fibre Channel is an evolution of SCSI, which is supposedly a somewhat intelligent controller wherein the OS asks said controller to fetch / store some data for it. As I understand it, the OS & main CPU aren't involved in the transfer beyond asking the controller to do the transfer on its behalf.
>
> I'd have to reference documentation to see if / how much Direct Memory Access comes into play vs the CPU's involvement in the transfer to / from the controller.
>
> But between the controller and the back end drive, as I understand it, the CPU isn't involved.
>
> So I can't say that "a rack of x86 servers connected to a SAN using fibre channel" isn't using channelized I/O. I think in many ways they are.
>
> This is a place where minutia matters.
>
>
>
> Grant. . . .

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN