I could do that, Axton, but I wanted to measure the performance gain from
just the NextId blocking.  Unless there's something that says none of
these new features will benefit us unless they are all enabled, I would like
to evaluate them individually and in smaller sets before turning them all on.

Rick

On Wed, May 28, 2008 at 1:08 PM, Axton <[EMAIL PROTECTED]> wrote:

> What about performing the same test, but creating a series of entries on
> separate threads?  Then break down the results based on the thread
> count.
>
> Axton Grams
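
(A rough sketch of the kind of threaded test Axton is describing, in Python.
create_entry() below is a hypothetical placeholder for whatever submit call
you actually use -- AR Java API, web services, or the driver program -- and
the thread counts are only examples:)

    import threading
    import time
    from statistics import mean

    THREAD_COUNTS = [1, 2, 5, 10]      # example thread counts to compare
    ENTRIES_PER_THREAD = 100           # entries each thread submits

    def create_entry(form, values):
        """Hypothetical stand-in: replace with your real submit call
        (AR Java API, web service, driver program, etc.)."""
        pass

    def worker(results, index):
        # Each worker submits its share of entries and records elapsed time.
        start = time.time()
        for _ in range(ENTRIES_PER_THREAD):
            create_entry("MyTestForm", {"Short Description": "load test"})
        results[index] = time.time() - start

    for count in THREAD_COUNTS:
        results = [0.0] * count
        threads = [threading.Thread(target=worker, args=(results, i))
                   for i in range(count)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        total = count * ENTRIES_PER_THREAD
        print(f"{count} thread(s): {total} entries, "
              f"avg {mean(results):.1f}s per thread")

(Since the real submit call is network-bound, plain threads should be enough
to drive concurrent load for this comparison.)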
>
> On Wed, May 28, 2008 at 4:01 PM, Rick Cook <[EMAIL PROTECTED]> wrote:
> > I've been doing some testing to see how much this really helps
> > performance, and my preliminary numbers were surprising and
> > disappointing.
> > NOTE:  I don't think a single sample is enough from which to draw a
> > global conclusion.  HOWEVER...I am concerned enough to ask some
> > questions.
> >
> > I have two new servers: equal hardware, same OS (RHEL 5) and AR System
> > 7.1 p2, same code, same DB version, and similar (but separate)
> > databases.
> >
> > I ran an Escalation that submits hundreds of records into a relatively
> > small form (perhaps 25 fields) that previously contained no records.
> > There was no other load or user on either server.
> >
> > Server A is set up without the NextId blocking.
> > Server B is set up WITH the NextId blocking set for 100 at the server
> > level but NOT on the form itself, plus threaded escalations enabled and
> > the Status History update disabled for the form in question.
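
(For anyone trying to reproduce this: if memory of the 7.1 docs serves, the
server-level block size is the Next-ID-Block-Size line in ar.cfg/ar.conf,
something like the snippet below.  Treat the exact option name as an
assumption and verify it against the Configuration guide for your version.)

    # ar.cfg (Windows) / ar.conf (UNIX) -- assumed spelling, check the
    # 7.1 Configuration guide before relying on it
    Next-ID-Block-Size: 100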
> >
> > I went through the SQL logs and tracked the time difference between each
> > "UPDATE arschema SET nextId = nextId + <1/100> WHERE schemaId = 475"
> > entry.  The results?
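
(For anyone who wants to repeat the measurement, those gaps can be pulled out
of the SQL log with a short Python script along these lines.  The regex and
date format assume the usual "/* Wed May 28 2008 13:05:42.1230 */" timestamp
comment the server writes in front of each statement, so adjust both if your
log looks different:)

    import re
    from datetime import datetime
    from statistics import mean, median

    # Matches the timestamp comment written before each SQL statement,
    # e.g. /* Wed May 28 2008 13:05:42.1230 */  (adjust to your log format).
    TS_RE = re.compile(
        r'/\* \w{3} (\w{3} \d{2} \d{4} \d{2}:\d{2}:\d{2}\.\d{4}) \*/')

    times = []
    with open('arsql.log') as log:          # path to your SQL log file
        for line in log:
            if 'UPDATE arschema SET nextId' not in line:
                continue
            m = TS_RE.search(line)
            if m:
                times.append(
                    datetime.strptime(m.group(1), '%b %d %Y %H:%M:%S.%f'))

    # Gaps between successive nextId fetches, in seconds.
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    if gaps:
        print(f"{len(gaps)} intervals: mean {mean(gaps):.2f}s, "
              f"median {median(gaps):.2f}s, "
              f"min {min(gaps):.2f}s, max {max(gaps):.2f}s")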
> >
> > Server A: Each fetch of a single NextId was separated by an average of
> > 0.07 seconds, which is 7 seconds per hundred.
> >
> > Server B: Each fetch of 100 NextIds was separated by a mean value of
> > 12.4 seconds per fetch (i.e., per hundred).  A second run showed an
> > average of 12.8 seconds, so I'm fairly confident that's a good number.
> > The fastest was 5.3 seconds, the slowest almost 40 seconds.
> >
> > Then, just to eliminate the possibility that the environments were the
> > issue, I turned on NextId blocking on Server A with the same parameters
> > I had set for Server B.  Result?  An average of 8 seconds per hundred,
> > though if I throw out the first two gets (which were 11 sec. each), the
> > remaining runs average around 7.25 seconds per hundred.  Even in a
> > best-case scenario, it's still slightly slower than fetching them singly.
> >
> > The median across all three sets on the two servers was 8 seconds, and
> > the mean was 11 seconds.  Again, the time it took to "get" 100 NextIds
> > one at a time was 7 seconds per hundred.
> >
> > So the newer, "faster" feature actually appears no faster, and in some
> > cases slower, than the process it's supposed to have improved.
> >
> > Maybe it's not hitting the DB as often, but then why are we not seeing
> > the omission of 99 DB calls reflected in faster overall submit times at
> > the AR System level?  Am I doing something wrong?  Are my expectations
> > unreasonable?  Is there some data in a white paper or something that
> > shows empirically what improvements one should expect from deploying
> > this new functionality?
> >
> > Is anyone seeing improved performance because of this feature?  I don't
> > see it.
> >
> > Rick
