Re: Next ID Blocking = faster submits?

2008-05-28 Thread Axton
What about performing the same test creating a series of entries on
separate threads.  Then break down the results based on the thread
count.

Axton Grams
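
A minimal sketch of that multi-threaded test: time the same batch of submits at several thread counts and compare. The submit_entry() below is only a stand-in for a real AR System create-entry call; here it just simulates contention on a shared next-ID counter with a lock.

```python
import threading
import time

next_id_lock = threading.Lock()
next_id = [0]

def submit_entry():
    # Simulated per-record next-ID fetch; replace with a real submit call.
    with next_id_lock:
        next_id[0] += 1
    time.sleep(0.001)  # stand-in for the rest of the create-entry work

def timed_run(thread_count, records=200):
    """Submit `records` entries split across `thread_count` threads; return elapsed seconds."""
    per_thread = records // thread_count
    def worker():
        for _ in range(per_thread):
            submit_entry()
    threads = [threading.Thread(target=worker) for _ in range(thread_count)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Thread counts chosen to divide 200 evenly.
for n in (1, 5, 10, 20):
    print(f"{n:2d} threads: {timed_run(n):.2f}s for 200 records")
```

Breaking the results down by thread count, as suggested, would show whether the blocked-ID mode only pays off once several submitters contend for the counter.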

On Wed, May 28, 2008 at 4:01 PM, Rick Cook [EMAIL PROTECTED] wrote:
 ** I've been doing some testing to see how much this really helps
 performance, and my preliminary numbers were surprising and disappointing.
 NOTE:  I don't think a single sample is enough from which to draw a global
 conclusion.  HOWEVER...I am concerned enough to ask some questions.

 I have two new servers: equal hardware, same OS (RHEL 5), same AR System 7.1
 p2, same code, same DB version, and similar (but separate) databases.

 I ran an Escalation that submits hundreds of records into a relatively small
 form (perhaps 25 fields) that previously contained no records.  There was no
 other load or user on either server.

 Server A is set up without the NextId blocking.
 Server B is set up WITH the NextId blocking set for 100 at the server level
 but NOT on the form itself, threaded escalations, and the Status History
 update disabled for the form in question.

 I went through the SQL logs and tracked the time difference between each
 UPDATE arschema SET nextId = nextId + 1 (or + 100) WHERE schemaId = 475 entry.
 The results?

 Server A: Each fetch of a single NextId was separated by an average of 0.07
 seconds, which is 7 seconds per hundred.

 Server B: Each fetch of 100 NextIds was separated by a mean of 12.4
 seconds per fetch (i.e., per hundred IDs).  A second run averaged 12.8
 seconds, so I'm fairly confident that's a good number.  The fastest was 5.3
 seconds, the slowest almost 40 seconds.

 Then just to eliminate the possibility that the environments were the issue,
 I turned on the NextId blocking on Server A to the same parameters I had set
 for Server B.  Result?  Average of 8 seconds per hundred, though if I throw
 out the first two gets (which were 11 sec. ea), the remaining runs average
 around 7.25 seconds per hundred.  Even in a best-case scenario, it's still
 slightly slower than doing it singly.

 The median across all three sets of blocked-mode runs on the two servers was
 8 seconds; the mean was 11 seconds.  Again, fetching 100 NextIds one at a
 time took 7 seconds per hundred.

 So the newer, faster feature actually appears no faster, and in some cases
 slower, than the process it's supposed to have improved.

 Maybe it's not hitting the DB as often, but then why are we not seeing the
 omission of 99 DB calls reflected in faster overall submit times at the AR
 System level?  Am I doing something wrong?  Are my expectations
 unreasonable?  Is there some data in a white paper or something that shows
 empirically what improvements one should expect from deploying this new
 functionality?

 Is anyone seeing improved performance because of this feature?  I don't see
 it.

 Rick
 __Platinum Sponsor: www.rmsportal.com ARSlist: Where the Answers Are
 html___

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Platinum Sponsor: www.rmsportal.com ARSlist: Where the Answers Are
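Rick's gap measurement above can be reproduced from a SQL log with a short script. The timestamp layout below is an assumption — AR System SQL logs vary by version and settings — so adjust the pattern to match your log's actual format.

```python
import re
from datetime import datetime

# Hypothetical timestamp format, e.g. "05/28/2008 16:01:03.0700 UPDATE arschema ..."
TS = re.compile(r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}\.\d+)")

def nextid_gaps(lines):
    """Seconds between successive 'UPDATE arschema' entries in a SQL log."""
    times = []
    for line in lines:
        if "UPDATE arschema" not in line:
            continue
        m = TS.search(line)
        if m:
            times.append(datetime.strptime(m.group(1), "%m/%d/%Y %H:%M:%S.%f"))
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]
```

Usage would be something like `gaps = nextid_gaps(open("arsql.log"))`, then `statistics.mean(gaps)` and `statistics.median(gaps)` to get the figures Rick reports.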


Re: Next ID Blocking = faster submits?

2008-05-28 Thread Rick Cook
I could do that, Axton, but I wanted to test the increase in performance of
just the NextID blocking.  Unless there's something that says that none of
these new features will benefit us without enabling them all, I would like
to evaluate them individually and in smaller sets before I do them all.

Rick



Re: Next ID Blocking = faster submits?

2008-05-28 Thread Andrew Fremont
I believe the performance improvement is for many concurrent users
(100+) submitting entries to the same form.  This option may not help
much for a single user.

AF.



Re: Next ID Blocking = faster submits?

2008-05-28 Thread Rick Cook
Maybe you're right, Chad.  Presumably BMC did enough testing to know where
the break points are and to prove the feature's value before developing it.
I would like to see more of that data, so that I can figure out when/if I
might see the benefit on my own system.

Rick

On Wed, May 28, 2008 at 1:21 PM, Hall Chad - chahal [EMAIL PROTECTED]
wrote:

 **

 I don't think you'll see the true benefit until you test it with several
 threads submitting records at the same time. That's when the contention on
 the arschema table will become a bottleneck. And as fast as that SQL call
 is, you may need LOTS of threads very quickly submitting LOTS of records.



 We saw problems with contention on this back in 6.3. The real problem was
 actually a very slow and poorly configured SAN for our database system,
 along with very fragmented tables. But the end result was long waits and
 blocked processes at the database while arschema was being updated that led
 to slow response or even timeouts for our end users while they submitted
 records. A bigger, better, faster SAN configuration took all of our problems
 away, but I'm confident NextID blocks would have helped us.



 I just upgraded to 7.0.1 and used NextID blocks on a few major forms, but
 unfortunately I didn't benchmark anything before or after.



 *Chad Hall*
 (501) 342-2650
Re: Next ID Blocking = faster submits?

2008-05-28 Thread LJ Longwing
Rick,
As Chad mentioned.  I've heard this third-hand and never experienced it
myself, but my understanding is that the next-ID block feature was added
because, when you are submitting millions of records an hour (as sometimes
happens in the larger shops), handing out a single ID at a time isn't
enough to keep up.  So it's more a contention fix than a raw performance
enhancement: if you have 30 threads, each holding a block of 100 IDs, you
make only 30 calls to get new IDs instead of 3,000, and that reduction in
update calls removes the arschema table as a bottleneck.
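
That arithmetic can be demonstrated with a toy allocator: the "database" here is just a lock-guarded counter, standing in for the arschema row, and each thread draws IDs from a locally cached block.

```python
import threading

class IdAllocator:
    def __init__(self, block_size=1):
        self.block_size = block_size
        self.db_next_id = 1
        self.db_calls = 0                  # counts simulated UPDATE arschema round-trips
        self.lock = threading.Lock()
        self.local = threading.local()     # per-thread cache of unused IDs

    def _fetch_block(self):
        with self.lock:                    # the contended "arschema" update
            self.db_calls += 1
            start = self.db_next_id
            self.db_next_id += self.block_size
        self.local.ids = list(range(start, start + self.block_size))

    def next_id(self):
        if not getattr(self.local, "ids", None):
            self._fetch_block()
        return self.local.ids.pop(0)

def run(block_size, threads=30, records_each=100):
    alloc = IdAllocator(block_size)
    def worker():
        for _ in range(records_each):
            alloc.next_id()
    ts = [threading.Thread(target=worker) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return alloc.db_calls

print(run(block_size=1))    # 3000 round-trips: one per record
print(run(block_size=100))  # 30 round-trips: one per thread
```

The single-user case sees no win from this, which is consistent with Rick's test: the saving is in contention on the shared counter, not in the cost of one fetch.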



Re: Next ID Blocking = faster submits?

2008-05-28 Thread Rick Cook
OK, so it seems that I wasn't hitting the server hard enough to see a real
performance increase.  Why would I see a performance DECREASE?  Why wouldn't
the fact that I'm reducing the number of DB calls and locks show up as a
performance increase regardless of load, or at least be a push at lower load
levels?

Rick



Re: Next ID Blocking = faster submits?

2008-05-28 Thread Hall Chad - chahal
You really only have one sample on Server A without NextID blocks, so
perhaps a few more samples there would show it's really the same as when
you used NextID blocks on that server, which would indicate you weren't
hitting the bottleneck this feature alleviates. I suspect something is
different on Server B that caused it to be significantly slower, so you
might try several tests there without NextID blocks.

 

 

Chad Hall  
(501) 342-2650




Re: Next ID Blocking = faster submits?

2008-05-28 Thread LJ Longwing
If I had to guess (and this is purely a guess), it's possible that the
additional overhead of managing the list of 100 IDs made each retrieval
slightly slower, but that slowdown would be insignificant under the kind
of load that makes this table a bottleneck in the first place.

  _  

From: Action Request System discussion list(ARSList)
[mailto:[EMAIL PROTECTED] On Behalf Of Rick Cook
Sent: Wednesday, May 28, 2008 2:51 PM
To: arslist@ARSLIST.ORG
Subject: Re: Next ID Blocking = faster submits?


** OK, so it seems that I wasn't hitting the server hard enough to see a
real performance increase.  Why would I see a performance DECREASE?  Why
wouldn't the fact that I'm reducing the number of DB calls and locks show up
as a performance increase regardless of load, or at least be a push at lower
load levels?

Rick


On Wed, May 28, 2008 at 1:39 PM, LJ Longwing [EMAIL PROTECTED] wrote:


** 
Rick,
As mentioned by Chad.  I have heard this 3rd person...and never experienced
it myself, but it's my understanding that the reason that the next-id block
feature was implemented was because when you are submitting millions of
records an hour (as sometimes happens with the larger shops) a single id at
a time isn't enough to ensure you get all of the IDs handed out in time...so
it's more of a contention issue than it was a performance enhancement...if
you have 30 threads, each with 100 ids, you will only have 30 calls to get
new ids instead of 3000 calls to get ids, that reduction in update calls to
that one table removes it as a bottleneck.

  _  


From: Action Request System discussion list(ARSList)
[mailto:[EMAIL PROTECTED] On Behalf Of Rick Cook

Sent: Wednesday, May 28, 2008 2:02 PM
To: arslist@ARSLIST.ORG
Subject: Next ID Blocking = faster submits?


** I've been doing some testing to see how much this really helps
performance, and my preliminary numbers were surprising and disappointing.
NOTE:  I don't think a single sample is enough from which to draw a global
conclusion.  HOWEVER...I am concerned enough to ask some questions. 


I have two new servers, equal hardware, same OS (RHEL 5) and AR System 7.1
p2, same code, same DB version, same code and similar (but separate)
databases.

I ran an Escalation that submits hundreds of records into a relatively small
form (perhaps 25 fields) that previously contained no records.  There was no
other load or user on either server.

Server A is set up without the NextId blocking.
Server B is set up WITH the NextId blocking set for 100 at the server level
but NOT on the form itself, threaded escalations, and the Status History
update disabled for the form in question.

I went through the SQL logs and tracked the time difference between each
UPDATE arschema SET nextId = nextId + 1/100 WHERE schemaId = 475 entry.
The results?

Server A: Each fetch of a single NextId was separated by an average of .07
seconds, which is 7 seconds per hundred.

Server B: Each fetch of 100 NextIds was separated by a mean of 12.4
seconds (i.e., per hundred).  A second run showed an average of 12.8
seconds, so I'm fairly confident that's a good number.  The fastest gap was
5.3 seconds, the slowest almost 40 seconds.

Then just to eliminate the possibility that the environments were the issue,
I turned on the NextId blocking on Server A to the same parameters I had set
for Server B.  Result?  Average of 8 seconds per hundred, though if I throw
out the first two gets (which were 11 sec. ea), the remaining runs average
around 7.25 seconds per hundred.  Even in a best-case scenario, it's still
slightly slower than doing it singly.

The median across all three blocked-mode runs on the two servers was 8
seconds; the mean was 11 seconds.  Again, fetching 100 NextIds one at a time
took 7 seconds per hundred.
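For anyone who wants to repeat the measurement, the log analysis Rick describes can be sketched in a few lines of Python. The timestamp format and line layout below are assumptions, not the exact AR System SQL-log format; the point is just to pull the timestamp off every nextId UPDATE line and look at the gaps between consecutive fetches.

```python
import re
import statistics
from datetime import datetime

# Sample lines with an assumed timestamp layout (replace with real log lines).
LOG_LINES = [
    "<SQL> 2008/05/28 16:01:00.000 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
    "<SQL> 2008/05/28 16:01:12.400 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
    "<SQL> 2008/05/28 16:01:25.200 UPDATE arschema SET nextId = nextId + 100 WHERE schemaId = 475",
]

# Capture the timestamp on lines that fetch a NextId block.
STAMP = re.compile(
    r"(\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3}).*UPDATE arschema SET nextId"
)

def fetch_gaps(lines):
    """Return the seconds elapsed between consecutive NextId fetches."""
    times = [datetime.strptime(m.group(1), "%Y/%m/%d %H:%M:%S.%f")
             for line in lines if (m := STAMP.search(line))]
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

gaps = fetch_gaps(LOG_LINES)
print("mean: %.2f s, median: %.2f s"
      % (statistics.mean(gaps), statistics.median(gaps)))
```

With more than a handful of samples, the mean, median, and min/max this produces are the same statistics quoted above, which makes comparing two servers' logs a one-liner.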

So the newer, faster feature actually appears no faster, and in some cases
slower, than the process it's supposed to have improved.

Maybe it's not hitting the DB as often, but then why are we not seeing the
omission of 99 DB calls reflected in faster overall submit times at the AR
System level?  Am I doing something wrong?  Are my expectations
unreasonable?  Is there some data in a white paper or something that shows
empirically what improvements one should expect from deploying this new
functionality?

Is anyone seeing improved performance because of this feature?  I don't see
it.

Rick

__Platinum Sponsor: www.rmsportal.com ARSlist: Where the Answers Are
html___ 

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Platinum Sponsor: www.rmsportal.com ARSlist: Where the Answers Are


Re: Next ID Blocking = faster submits?

2008-05-28 Thread Joe D'Souza
Recently I used a block size of 500 when creating about 32 K records in the
CTM:People form and User form after using AIE to connect to a PS database.  I
didn't really see a significant difference in processing time compared with
not using Next ID blocking; in both cases the run took an average of about 35
minutes.

I then reverted to the default setting of 0, as I prefer a sequential
request ID consistent with the create date.

But like LJ said, at over a million records it might make a difference.

Joe
