One more thing I just noticed: "iostat -x" itself has a bug which can display a
huge %util.

      http://linuxcommand.org/man_pages/iostat1.html
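For reference, here is a minimal sketch of how the extended disk stats could be
captured for both versions so the numbers are comparable (assuming the sysstat
package provides iostat on the box; the output file names below are only
placeholders):

    # sample extended device stats once per second, 20 times, while the load test runs
    iostat -x 1 20 > iostat-james-3.0-M3.txt
    # repeat the same capture during the james 2.3.2 run
    iostat -x 1 20 > iostat-james-2.3.2.txt
    # compare r/s, w/s, await and %util between the two files,
    # keeping in mind that some iostat versions over-report %util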

On Mon, May 2, 2011 at 3:56 PM, Norman Maurer
<[email protected]> wrote:

> Hi there,
>
> james 3.0M3 tries to keep memory consumption as small as possible and
> probably uses the disk more frequently than 2.3.2. Could you post the
> output of "iostat -x 1 20" when submitting the same workload to james
> 2.3.2 and 3.0-M3 ?
>
> Thanks,
> Norman
>
>
> 2011/5/2 kushal soy <[email protected]>:
> > Norman,
> >
> > While load testing the SMTP server on james 3.0M3, I took the "iostat -x
> > 5" output as you mentioned, and the %util is always at 100% or more than
> > that, so it is clearly an IO bottleneck.
> >
> > But when I did the same test with james 2.3.2, there was no IO bottleneck.
> >
> > What I feel is there may be some configuration issue which I am missing.
> >
> >
> >
> > regards,
> > kushal
> >
> >
> >
> >
> >
> >
> > On Thu, Apr 28, 2011 at 6:45 PM, Norman Maurer
> > <[email protected]> wrote:
> >>
> >> hi there,
> >>
> >> no, the parameter should not be relevant. As I said, check out the spool
> >> threads in mailetcontainer.xml. It would also be good to see some stats
> >> like "iostat -x 1", to see if the disk is the bottleneck etc. I can spool
> >> way more messages per second here... (Even with the default config)
> >>
> >>
> >> Bye,
> >> Norman
> >>
> >> 2011/4/28 kushal soy <[email protected]>:
> >> >
> >> > is it related to activemq?
> >> > activemq creates 10 directories, [1,2,..10], for queuing in
> >> > var/store/activemq/blob-transfer, as per my understanding.
> >> > Then I changed the sessionCacheSize to 20 in james-server-context.xml,
> >> > but activemq always creates 10 directories.
> >> >
> >> >     <bean id="jmsConnectionFactory"
> >> >           class="org.springframework.jms.connection.CachingConnectionFactory">
> >> >        <property name="targetConnectionFactory" ref="amqConnectionFactory"/>
> >> >        <property name="sessionCacheSize" value="10"/>
> >> >        <property name="cacheConsumers" value="false"/>
> >> >        <property name="cacheProducers" value="true"/>
> >> >     </bean>
> >> > Can you please explain these parameters, if relevant?
> >> >
> >> >
> >> >
> >> >
> >> > On Thu, Apr 28, 2011 at 4:39 PM, Norman Maurer
> >> > <[email protected]> wrote:
> >> >>
> >> >> Ok.. It depends on many things. For example how many SMTP connections
> >> >> you use to feed in the email, how many spool threads, how many mailets,
> >> >> etc.
> >> >>
> >> >> First off I would try to raise the number of spool threads. See
> >> >> mailetcontainer.xml
> >> >>
> >> >> Bye,
> >> >> Norman
> >> >>
> >> >>
> >> >> 2011/4/28 kushal soy <[email protected]>:
> >> >> > Hi
> >> >> >
> >> >> > When I am depositing the message from my client,
> >> >> > what is that called; queuing?
> >> >> >
> >> >> > On Thu, Apr 28, 2011 at 4:34 PM, Norman Maurer
> >> >> > <[email protected]> wrote:
> >> >> >
> >> >> >> Hi there,
> >> >> >>
> >> >> >> spooling or queuing ?
> >> >> >>
> >> >> >> Bye,
> >> >> >> Norman
> >> >> >>
> >> >> >>
> >> >> >> 2011/4/28 kushal soy <[email protected]>:
> >> >> >> > Hi
> >> >> >> > I was trying one of the nightly builds of 3.0M3
> >> >> >> >
> >> >> >> >
> >> >> >> > [james-server-container-spring-3.0-M3-20110425.003735-254-bin.tar.gz]
> >> >> >> >
> >> >> >> > I am not able to achieve a spooling speed of more than *3 msg/second*.
> >> >> >> >
> >> >> >> > Is this an issue?
> >> >> >> >
> >> >> >> > I am using the default configuration as of now.
> >> >> >> >
> >> >> >> >
> >> >> >> >
> >> >> >> > thanks & regards,
> >> >> >> > kushal
> >> >> >> >
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >
> >> >>
> >> >
> >> >
> >
> >
>
