https://www.pro-tools-expert.com/production-expert-1/why-are-the-apple-m1-m1-pro-and-m1-max-chips-so-fast

On Tue, May 23, 2023 at 10:07 PM Steve Thompson <ste...@wkyr.net> wrote:
>
> This is the kind of thing I was looking for.
>
> I knew that the ARM CPUs were doing things differently.
>
> I also knew that Intel's design, so far, just doesn't match up
> with what z/Arch has been doing.
>
> And this means the older methods of comparing "machines" are
> now outdated.
>
> [I know about cache access, and OOO processing, and similar things.]
>
> But some of those factors let me show people, on a
> napkin/serviette, why the AIX/Wintel boxes could not do what the
> z/Arch-based z800 could do, based on the cache-line sizes and
> bus widths. Or the z13->z15.
>
> It looks like ARM is going to be a game changer.
>
> Steve Thompson
>
> On 5/23/2023 10:04 PM, David Crayford wrote:
> > On 24/5/2023 2:24 am, Seymour J Metz wrote:
> >> Clocking and bus width are certainly important factors in
> >> performance, but they are far from the only factors.
> >>
> > 100% agree. Things are a lot more complicated with modern
> > hardware.
> >
> > My MacBook Pro has a 256-bit memory bus and supports 200GB/s
> > memory bandwidth. The MacBook Pro with the M1 Max tops out at
> > 400GB/s. How does it do it?
> > https://www.theregister.com/2020/11/19/apple_m1_high_bandwidth_memory_performance/
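> >
> > Back-of-the-envelope, peak bandwidth is just bus width times
> > transfer rate. A quick C sketch (the LPDDR5-6400 rate is my
> > assumption for illustration, not from Apple's spec sheets):
> >
> > /* Rough peak-bandwidth arithmetic: width (bits) x transfers/s / 8.
> >  * LPDDR5-6400 (6400 MT/s) is assumed here for illustration only. */
> > #include <stdio.h>
> >
> > int main(void) {
> >     double mts = 6400e6;              /* assumed: 6400 MT/s */
> >     int widths[] = { 128, 256, 512 }; /* M1 / M1 Pro / M1 Max */
> >     for (int i = 0; i < 3; i++)
> >         printf("%3d-bit bus: %6.1f GB/s\n",
> >                widths[i], widths[i] / 8.0 * mts / 1e9);
> >     return 0;
> > }
> >
> > That prints 102.4, 204.8, and 409.6 GB/s, which lines up with the
> > 200GB/s (M1 Pro) and 400GB/s (M1 Max) figures Apple quotes.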
> >
> >
> >
> >> --
> >> Shmuel (Seymour J.) Metz
> >> http://mason.gmu.edu/~smetz3
> >>
> >> ________________________________________
> >> From: IBM Mainframe Discussion List [IBM-MAIN@LISTSERV.UA.EDU]
> >> on behalf of Steve Thompson [ste...@wkyr.net]
> >> Sent: Tuesday, May 23, 2023 1:50 PM
> >> To: IBM-MAIN@LISTSERV.UA.EDU
> >> Subject: Re: Architectural questions [WAS: Are Banks Breaking
> >> Up With Mainframes? | Forbes]
> >>
> >> I'm not asking about cache lines, necessarily.
> >>
> >> The G3 chip set, as I recall, had 256-byte cache lines, and
> >> CPU <-> C-store used a 256-byte bus while the IOPs <-> C-store
> >> used a 64-byte bus.
> >>
> >> What makes one system faster than another comes down to
> >> clocking and bus width. So if RAM (C-Store) is running at, say,
> >> 1/4 of the speed of the CPU, RAM is a bottleneck unless the
> >> cache is much faster and buffers C-Store fetches/writes.
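> >>
> >> You can see that buffering effect from user space. A minimal
> >> sketch; the working-set sizes are my guesses and will need
> >> tuning per machine:
> >>
> >> /* Effective bandwidth: cache-resident vs. RAM-resident sweep. */
> >> #include <stdio.h>
> >> #include <stdlib.h>
> >> #include <time.h>
> >>
> >> static double sweep(long *buf, size_t n, int reps) {
> >>     struct timespec t0, t1;
> >>     volatile long sum = 0;           /* keep the loop from being elided */
> >>     clock_gettime(CLOCK_MONOTONIC, &t0);
> >>     for (int r = 0; r < reps; r++)
> >>         for (size_t i = 0; i < n; i++)
> >>             sum += buf[i];
> >>     clock_gettime(CLOCK_MONOTONIC, &t1);
> >>     double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
> >>     return (double)n * sizeof(long) * reps / s / 1e9;   /* GB/s */
> >> }
> >>
> >> int main(void) {
> >>     size_t small = 16 * 1024 / sizeof(long);         /* ~16 KiB: L1 */
> >>     size_t big = 256UL * 1024 * 1024 / sizeof(long); /* 256 MiB: RAM */
> >>     long *a = calloc(big, sizeof(long));
> >>     printf("L1-resident:  %.1f GB/s\n", sweep(a, small, 100000));
> >>     printf("RAM-resident: %.1f GB/s\n", sweep(a, big, 4));
> >>     free(a);
> >>     return 0;
> >> }
> >>
> >> The gap between the two numbers is exactly the cache doing its
> >> buffering job.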
> >>
> >> So knowing this about the other (non-z) chip sets, one can do
> >> better modeling to see what the ratio is between the systems'
> >> "LPARs" or CECs.
> >>
> >> I've seen something like a 7 (AIX/Power) to 1 (z800) ratio,
> >> where the Power system could not keep up on I/O (bytes per
> >> second) even though its I/O hardware was physically faster (for
> >> 1-4 fiber-optic connections). The z800, with slower fiber-optic
> >> connections, could actually transfer more bytes/second because
> >> it was not bus-bound.
> >>
> >> Steve Thompson
> >>
> >> On 5/22/2023 8:48 PM, David Crayford wrote:
> >>> Good question. By bus size, I'm assuming you're referring to
> >>> cache lines? I wonder how much of a difference that makes with
> >>> OOO pipelines? What I can confirm is that my new Arm M2 MacBook
> >>> Pro, with its 32-byte cache lines, absolutely smashes my
> >>> AMD Ryzen 5 in Cinebench benchmarks.
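> >>>
> >>> One place cache-line size bites directly is false sharing. A
> >>> small sketch (pthreads; the 128-byte figure is an assumed
> >>> worst case, not any particular chip's spec):
> >>>
> >>> /* Two threads bump adjacent counters. If the counters share a
> >>>  * cache line the cores fight over it; padding each counter out
> >>>  * to the line size removes the contention. Build with -pthread. */
> >>> #include <pthread.h>
> >>> #include <stdio.h>
> >>>
> >>> #define LINE 128  /* assumed worst-case line size */
> >>> struct ctr { volatile long v; char pad[LINE - sizeof(long)]; } c[2];
> >>>
> >>> static void *bump(void *arg) {
> >>>     struct ctr *p = arg;
> >>>     for (long i = 0; i < 100000000L; i++)
> >>>         p->v++;
> >>>     return NULL;
> >>> }
> >>>
> >>> int main(void) {
> >>>     pthread_t t[2];
> >>>     for (int i = 0; i < 2; i++)
> >>>         pthread_create(&t[i], NULL, bump, &c[i]);
> >>>     for (int i = 0; i < 2; i++)
> >>>         pthread_join(t[i], NULL);
> >>>     printf("%ld %ld\n", (long)c[0].v, (long)c[1].v);
> >>>     return 0;
> >>> }
> >>>
> >>> Time it with and without the pad member; the difference is the
> >>> cache line at work, whatever its size on a given chip.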
> >>>
> >>> On 22/5/2023 8:26 pm, Steve Thompson wrote:
> >>>> I have a question about these systems, both z and not z.
> >>>>
> >>>> What is the current bus width supported?
> >>>>
> >>>> At the G3 level for "z" the CPU-RAM bus was 256 bytes wide, as
> >>>> I recall.
> >>>>
> >>>> For IOP to RAM it was 64 bytes wide.
> >>>>
> >>>> For the systems I run (off-the-shelf stuff for Linux and
> >>>> Windows) the bus is still at 64 bits (8 bytes). Yes, it has bus
> >>>> mastering, and pathing to allow certain adapter cards to use
> >>>> 8-bit "channels"....
> >>>>
> >>>> z/Arch POP has instructions for interrogating the bus sizes. I
> >>>> haven't had the time or opportunity to write any tests using
> >>>> them to find out what sizes would be reported.
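> >>>>
> >>>> On Linux (including Linux on Z), glibc will hand back the
> >>>> cache geometry without any assembler. A sketch; note it
> >>>> reports cache attributes, not literal bus widths:
> >>>>
> >>>> /* Query cache geometry from user space (glibc _SC_ constants). */
> >>>> #include <stdio.h>
> >>>> #include <unistd.h>
> >>>>
> >>>> int main(void) {
> >>>>     printf("L1d line: %ld bytes\n",
> >>>>            sysconf(_SC_LEVEL1_DCACHE_LINESIZE));
> >>>>     printf("L1d size: %ld bytes\n", sysconf(_SC_LEVEL1_DCACHE_SIZE));
> >>>>     printf("L2 size:  %ld bytes\n", sysconf(_SC_LEVEL2_CACHE_SIZE));
> >>>>     printf("L3 size:  %ld bytes\n", sysconf(_SC_LEVEL3_CACHE_SIZE));
> >>>>     return 0;
> >>>> }
> >>>>
> >>>> On z/Architecture itself the instruction is EXTRACT CPU
> >>>> ATTRIBUTE (ECAG), which reports the cache topology per level.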
> >>>>
> >>>> Steve Thompson
> >>>>
> >>>> On 5/22/2023 7:52 AM, David Crayford wrote:
> >>>>> On 22/5/2023 1:26 pm, Attila Fogarasi wrote:
> >>>>>> Good point about NUMA ... and it is still a differentiator
> >>>>>> and competitive
> >>>>>> advantage for IBM z.
> >>>>> How is NUMA a competitive advantage for z? Superdomes use
> >>>>> Intel UltraPath Interconnect (UPI) links that can do glueless
> >>>>> NUMA.
> >>>>>
> >>>>>> IBM bought Sequent 20+ years ago to get their
> >>>>>> excellent NUMA technology, and has since built some very
> >>>>>> clever cache
> >>>>>> topology and management algorithms.  AMD has historically
> >>>>>> been crippled in
> >>>>>> real-world performance due to cache inefficiencies.
> >>>>> What historical generation of AMD silicon was crippled? The
> >>>>> EPYC supports up to 384MB of L3 cache and the specs and
> >>>>> benchmarks suggest the chiplet architecture can easily handle
> >>>>> the I/O.
> >>>>>
> >>>>>> 10 years ago CICS was at 30 billion transactions per day, so
> >>>>>> volume has tripled in 10 years, during the massive growth of
> >>>>>> cloud.
> >>>>>> Healthy indeed.
> >>>>> I have a different perspective on what constitutes healthy.
> >>>>> Here in Australia, I've had the opportunity to meet
> >>>>> architects from various banks who are actively involved in or
> >>>>> have completed the process of migrating the read side of
> >>>>> their CICS transactions to distributed systems. They have
> >>>>> embraced technologies like CDC and streaming platforms such
> >>>>> as Apache Kafka and distributed data stores such as Cassandra
> >>>>> and MongoDB. This shift has been primarily driven by
> >>>>> disruptive technologies like mobile banking pushing up
> >>>>> mainframe software costs.
> >>>>>
> >>>>> This is a common architectural pattern.
> >>>>>
> >>>>> https://www.conferencecast.tv/talk-29844-nationwide-building-society-building-mobile-applications-and-a-speed-layer-with-mongodb#.talkPage-header
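> >>>>>
> >>>>> At its core the speed layer is just a CDC topic feeding a
> >>>>> consumer that maintains the read store. A minimal librdkafka
> >>>>> sketch; the topic and group names are made up, and error
> >>>>> handling is trimmed:
> >>>>>
> >>>>> /* Speed-layer consumer sketch (librdkafka). Names are
> >>>>>  * hypothetical; build with -lrdkafka. */
> >>>>> #include <librdkafka/rdkafka.h>
> >>>>> #include <stdio.h>
> >>>>>
> >>>>> int main(void) {
> >>>>>     char err[512];
> >>>>>     rd_kafka_conf_t *conf = rd_kafka_conf_new();
> >>>>>     rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
> >>>>>                       err, sizeof err);
> >>>>>     rd_kafka_conf_set(conf, "group.id", "speed-layer",
> >>>>>                       err, sizeof err);
> >>>>>     rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf,
> >>>>>                                   err, sizeof err);
> >>>>>     rd_kafka_poll_set_consumer(rk);
> >>>>>
> >>>>>     rd_kafka_topic_partition_list_t *t =
> >>>>>         rd_kafka_topic_partition_list_new(1);
> >>>>>     rd_kafka_topic_partition_list_add(t, "cics.account-changes",
> >>>>>                                       RD_KAFKA_PARTITION_UA);
> >>>>>     rd_kafka_subscribe(rk, t);
> >>>>>
> >>>>>     for (;;) {
> >>>>>         rd_kafka_message_t *m = rd_kafka_consumer_poll(rk, 1000);
> >>>>>         if (!m) continue;
> >>>>>         if (!m->err)  /* upsert into Mongo/Cassandra here */
> >>>>>             printf("event: %zu bytes\n", m->len);
> >>>>>         rd_kafka_message_destroy(m);
> >>>>>     }
> >>>>> }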
> >>>>>
> >>>>>
> >>>>>
> >>>>>> On Mon, May 22, 2023 at 2:56 PM David
> >>>>>> Crayford <dcrayf...@gmail.com> wrote:
> >>>>>>
> >>>>>>> Sent again in plain text. Apple Mail is too clever for its
> >>>>>>> own good!
> >>>>>>>
> >>>>>>>> On 22 May 2023, at 12:46 pm, David
> >>>>>>>> Crayford <dcrayf...@gmail.com> wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> On 21 May 2023, at 12:52 pm, Howard
> >>>>>>>>> Rifkind <howard.rifk...@gmail.com> wrote:
> >>>>>>>>> Hundreds of PC-type servers still can't handle the huge
> >>>>>>>>> amounts of data
> >>>>>>>>> that a mainframe can.
> >>>>>>> Of course, that's an absurd statement! By "PC type," I
> >>>>>>> assume you're
> >>>>>>> referring to x86? We can easily break this down. First
> >>>>>>> things first, let's
> >>>>>>> forget about the "hundreds" requirement. Thirty-two
> >>>>>>> single-socket systems are
> >>>>>>> enough to match up.
> >>>>>>>
> >>>>>>> AMD EPYC is the poster child for single-socket servers,
> >>>>>>> running its novel
> >>>>>>> chiplet technology on a 5nm process node. AMD's Infinity
> >>>>>>> Fabric interconnects are
> >>>>>>> capable of massive I/O bandwidth. You can learn more about
> >>>>>>> it here:
> >>>>>>> https://www.amd.com/en/technologies/infinity-architecture.
> >>>>>>> Each socket
> >>>>>>> can have a maximum of 96 cores, but even if we use a
> >>>>>>> conservative 64 cores
> >>>>>>> per socket, that's a scale-out of 2048 cores. AMD also has
> >>>>>>> accelerators for
> >>>>>>> offload encryption/compression, etc.
> >>>>>>>
> >>>>>>> Over in Intel land, the Ice Lake server platform is not
> >>>>>>> quite as
> >>>>>>> impressive, but the UPI (Ultra Path Interconnect) yet again
> >>>>>>> handles huge
> >>>>>>> bandwidth. Intel also has accelerators such as their QAT,
> >>>>>>> which can either
> >>>>>>> be on-die SoC or a PCIe card. It's not too dissimilar to
> >>>>>>> zEDC but with the
> >>>>>>> advantage that it supports all popular compression formats
> >>>>>>> and not just
> >>>>>>> DEFLATE. You can find more information here:
> >>>>>>> https://www.intel.com.au/content/www/au/en/architecture-and-technology/intel-quick-assist-technology-overview.html
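> >>>>>>>
> >>>>>>> From the application's point of view, both zEDC and QAT
> >>>>>>> typically sit behind the same DEFLATE APIs. A minimal zlib
> >>>>>>> sketch (pure software path, no accelerator assumed; build
> >>>>>>> with -lz):
> >>>>>>>
> >>>>>>> /* One-shot DEFLATE via zlib; accelerators can plug in
> >>>>>>>  * underneath the same format without the caller changing. */
> >>>>>>> #include <stdio.h>
> >>>>>>> #include <string.h>
> >>>>>>> #include <zlib.h>
> >>>>>>>
> >>>>>>> int main(void) {
> >>>>>>>     const char *src = "hello hello hello hello hello hello";
> >>>>>>>     unsigned char out[256];
> >>>>>>>     uLongf outlen = sizeof out;
> >>>>>>>     if (compress(out, &outlen, (const Bytef *)src,
> >>>>>>>                  strlen(src)) != Z_OK)
> >>>>>>>         return 1;
> >>>>>>>     printf("%zu bytes -> %lu bytes (zlib-wrapped DEFLATE)\n",
> >>>>>>>            strlen(src), (unsigned long)outlen);
> >>>>>>>     return 0;
> >>>>>>> }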
> >>>>>>>
> >>>>>>> A more apples-to-apples comparison would be the HP
> >>>>>>> Superdome Flex, which
> >>>>>>> is a large shared memory system lashed together with NUMA
> >>>>>>> interconnects,
> >>>>>>> with a whopping 32 sockets and a maximum core count of 896
> >>>>>>> on a single
> >>>>>>> vertically integrated system. HP Enterprise has technology
> >>>>>>> such as nPars,
> >>>>>>> which is similar to PR/SM. They claim 99.999% availability
> >>>>>>> on a single
> >>>>>>> system and even beyond when clustered.
> >>>>>>>
> >>>>>>> On the Arm side, it gets even more interesting as the
> >>>>>>> hyperscalers and
> >>>>>>> cloud builders are building their own kit. This
> >>>>>>> technology is
> >>>>>>> almost certainly the growth area of non-x86 workloads; you
> >>>>>>> can find more
> >>>>>>> details about it here:
> >>>>>>> https://www.nextplatform.com/2023/05/18/ampere-gets-out-in-front-of-x86-with-192-core-siryn-ampereone/
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >> --
> >> Regards,
> >> Steve Thompson
> >> VS Strategies LLC
> >> Westfield IN
> >> 972-983-9430 cell
> >>
> >
>
> --
> Regards,
> Steve Thompson
> VS Strategies LLC
> Westfield IN
> 972-983-9430 cell
>



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
