Hey Matt, 
Thanks for the useful info.

> into a smaller number of Ethernet frames, reducing both
> latency and overhead.  Ideally, each block will fit into
> one Ethernet frame,
We are using a 16K block size, so we cannot avoid datagram
fragmentation even with 9K frames - but it should certainly
be better than with 1500-byte frames.
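For a rough sense of the split counts, here is a quick
back-of-the-envelope sketch in Python (my own illustration,
not from this thread - it assumes the block ships as a single
UDP/IPv4 datagram with a 20-byte IP header and an 8-byte UDP
header):

    # Estimate the IPv4 fragment count for a UDP datagram
    # at a given MTU.
    import math

    def ip_fragments(udp_payload: int, mtu: int) -> int:
        ip_hdr, udp_hdr = 20, 8  # assumed header sizes
        # Fragment payloads must be multiples of 8 bytes
        # (except the last one), per IPv4 fragmentation rules.
        per_frag = ((mtu - ip_hdr) // 8) * 8
        return math.ceil((udp_payload + udp_hdr) / per_frag)

    for mtu in (1500, 9000):
        print(mtu, ip_fragments(16 * 1024, mtu))
    # 1500 -> 12 fragments, 9000 -> 2 fragments

So a 16K block would drop from about a dozen wire fragments
to two.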
> so unless you're running one of a couple of third-party
> gigabit cards, I think you're probably out of luck.
Will check with our SysAdmins whether the GigE cards we have
support jumbo frames. Are there any risks or disadvantages to
using 9K frames for the interconnect?
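On the benefit side, your point about per-frame overhead is
easy to quantify roughly. Another small sketch (again with my
own assumed numbers: 38 bytes of Ethernet framing overhead per
frame - preamble, header, FCS, and inter-frame gap):

    # Frames per second needed to saturate a 1 Gb/s link
    # at each MTU.
    OVERHEAD = 8 + 14 + 4 + 12  # preamble + header + FCS + gap

    for mtu in (1500, 9000):
        wire_bytes = mtu + OVERHEAD
        frames_per_sec = (10**9 / 8) / wire_bytes
        print(mtu, round(frames_per_sec))
    # 1500 -> ~81,274 frames/s; 9000 -> ~13,831 frames/s

That is roughly 6x fewer frames (and per-frame interrupts) for
the host to process at line rate, which matches the 6x
data-per-frame figure you mention below.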

Thanks,
-Ravi.

--- Matthew Zito <[EMAIL PROTECTED]> wrote:
> 
> Jumbo frames are the use of larger-than-normal MTU (Maximum
> Transmission Unit) settings on gigabit Ethernet links.  The
> traditional limit for Ethernet frames is 1500 bytes, which
> was fine for 10 and 100 megabit Ethernet links.  With
> gigabit, however, you lose a certain minimum amount of
> bandwidth to signaling overhead (preamble, postamble, header
> info, etc.), and the Ethernet card has to do a certain
> minimum amount of processing for each Ethernet frame it
> receives, so a huge amount of CPU overhead can be spent
> trying to fill a gigabit pipe.  The other problem is that if
> the hosts are sending/receiving data larger than 1500 bytes,
> the data packet has to be fragmented into multiple, smaller
> packets, which then have to be reassembled on the far side.
> Since this all has to be done on the host CPU rather than
> the Ethernet card, it increases both bus overhead and CPU
> time.
> 
> With jumbo frames, you use a >1500-byte MTU - the exact
> amount varies by implementation, but they're generally in
> the 9000-9200 byte range.  That's a 6x improvement in the
> amount of data per Ethernet frame, plus there's less
> reassembly.  Unfortunately, Sun never really embraced it as
> a technology, so unless you're running one of a couple of
> third-party gigabit cards, I think you're probably out of
> luck.
> 
> 
> The specific relevance to RAC, which I somehow managed to
> mention, is that data blocks being shuttled 'tween nodes
> (depending on the blocksize) can be placed into a smaller
> number of Ethernet frames, reducing both latency and
> overhead.  Ideally, each block will fit into one Ethernet
> frame, but as always, YMMV.
> 
> Thanks,
> Matt
> 
> --
> Matthew Zito
> GridApp Systems
> Email: [EMAIL PROTECTED]
> Cell: 646-220-3551
> Phone: 212-358-8211 x 359
> http://www.gridapp.com
> 
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> > On Behalf Of Ravi Kulkarni
> > Sent: Thursday, July 10, 2003 2:49 PM
> > To: Multiple recipients of list ORACLE-L
> > Subject: RE: RAC system Calls
> > 
> > 
> > Matt,
> > What are jumbo frames? Are these assigning private
> > network IPs to cluster_interconnects parameter?
> > -Ravi.
> > 
> > 
> > --- Matthew Zito <[EMAIL PROTECTED]> wrote:
> > > 
> > > And are you using jumbo frames on your interconnect?
> > > That can make a significant contribution to reducing
> > > overhead from a system standpoint.
> > > 
> > > Thanks,
> > > Matt
> > > 
> > > --
> > > Matthew Zito
> > > GridApp Systems
> > > Email: [EMAIL PROTECTED]
> > > Cell: 646-220-3551
> > > Phone: 212-358-8211 x 359
> > > http://www.gridapp.com
> > > 
> > > > -----Original Message-----
> > > > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> > > > On Behalf Of K Gopalakrishnan
> > > > Sent: Thursday, July 10, 2003 11:44 AM
> > > > To: Multiple recipients of list ORACLE-L
> > > > Subject: RE: RAC system Calls
> > > > 
> > > > 
> > > > Ravi:
> > > > 
> > > > Do you have a statspack report? I would like to see
> > > > it. In any case, 45% kernel time is just too much.
> > > > 
> > > > BTW, have you verified that the private interconnect
> > > > is being used for cache fusion transfers? Make sure
> > > > the cache fusion traffic is not going through the
> > > > public network.
> > > > 
> > > > 
> > > > 
> > > > Best Regards,
> > > > K Gopalakrishnan
> > > > 
> > > >  
> > > > 
> > > > 
> > > > -----Original Message-----
> > > > Ravi Kulkarni
> > > > Sent: Thursday, July 10, 2003 9:30 AM
> > > > To: Multiple recipients of list ORACLE-L
> > > > 
> > > > 
> > > > Hello List,
> > > > 
> > > > We are running benchmark tests on a Solaris 2-node
> > > > RAC and have consistently noticed the following:
> > > > - Very high kernel usage (averaging 45%) in top
> > > > - Statspack shows "IPC Send Completion sync" waits
> > > >   (70% of total elapsed time)
> > > > - On trussing the top process, we found Oracle issuing
> > > >   a huge number of "times" system calls in addition to
> > > >   the reads/writes (which I think are the
> > > >   selects/inserts)
> > > > Has anyone noticed this in your environment? I am
> > > > guessing these are inter-instance pings, but could not
> > > > get any hits in the docs or Metalink to confirm this.
> > > > The "times" call is clocking a lot of CPU. Is this
> > > > normal? Any pointers would be helpful. If this is out
> > > > of context, is there a separate list for RAC?
> > > > 
> > > > Thanks,
> > > > Ravi.
>
