Re: Degrading performance on large updates and inserts

2007-11-07 Thread Jarl Grøneng
Be sure you have turned off audit trail on the CTM:People form.
By default, audit trail is enabled.

--
Jarl

On Nov 7, 2007 4:06 PM, William Rentfrow <[EMAIL PROTECTED]> wrote:
>
> We are pushing about 800,000 records into the CTM:People form.  This is an
> initial data load - updates will be smaller.
>
> To do this we've imported the records into a staging form to format the data
> properly.  Then an escalation kicks off and marks every record's status to
> "Processed" from "New".  A filter fires on the change of status and pushes
> data to the CTM:People form.  This filter does a match on our staging form
> EID = CTM:People.Corporate ID which is an indexed field in the CTM:People
> form.
>
> With the base product I had exceedingly bad import times - about 1.5 seconds
> per record (and this is on a pretty decent Sun box with a separate Oracle
> server also on a Sun box).  So I added a filter on CTM:People that said if
> $USER$="AR_ESCALATOR" then Goto 1000.  Essentially I skipped all the OOB
> validation (which I'd already done in the staging form out of necessity).
> This got the record import speed up to about 15 records/second (which is
> STILL far too slow IMHO).
>
> As the 800,000 records imported, the speed decreased.  At the end we averaged
> 6.92 records/second pushed to CTM:People, and it took ~31.5 hours.
>
> Also, we initially left one piece of data out of the staging form - so we
> added it back in and ran an update push to CTM:People.  This was faster but
> had the same symptom - I started out getting 15/second updates and now
> (nearly 24 hours later) I'm averaging 9.5 updates/second.
>
> My questions:
>
> 1.) Why would performance degrade for an update?  I can see it for the
> insert because the tables/indexes/paging/etc are growing - but on an update
> that should be fairly static.
> 2.) Any ideas why this is SO slow?  These are big enterprise boxes.  I have
> appropriate ports/queues configured, etc.  The arserverd process is not
> chewing up all the clock time so obviously it's not taxed.  And barely
> anyone is on this box at all.
>
> Also
>
> 3.) If this is still running and I turn on server side SQL logging why don't
> I see any of the transactions?  Especially when server filter logging shows
> them still processing.
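[The throughput figures quoted above can be sanity-checked quickly; a minimal sketch, using only the numbers reported in the message:]

```python
# Sanity check on the reported bulk-load throughput.
records = 800_000
final_avg_rate = 6.92                      # records/second, as reported
elapsed_hours = records / final_avg_rate / 3600
print(f"{elapsed_hours:.1f} hours")        # prints "32.1 hours", close to the ~31.5 reported
```

The small gap between 32.1 and the reported ~31.5 hours simply reflects rounding in the quoted average rate.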
>
> William Rentfrow, Principal Consultant
> [EMAIL PROTECTED]
> C 701-306-6157
> O 952-432-0227

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Platinum Sponsor: www.rmsportal.com ARSlist: "Where the Answers Are"


Re: Degrading performance on large updates and inserts

2007-11-07 Thread Joe D'Souza
Disabling indexes may have a drawback, though, if you have any workflow set to
fire on MERGE that checks for existing data in the current form. If that
search is against an indexed field, you are better off leaving the index in
place when that bit of workflow is needed during the import to create
relationships.

If you are using one of the BMC utilities to do this mass import, I'm pretty
certain there is at least one filter that performs such a search.
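[For reference, a sketch of what disabling and rebuilding an index looks like on the Oracle side. The index name here is hypothetical; in practice the AR System index on Corporate ID lives on a T-numbered table. Note that Oracle still rejects DML against a table whose unique index is unusable, so this only helps for non-unique indexes:]

```sql
-- Make the index unusable before the bulk load (hypothetical name)
ALTER INDEX people_corp_id_idx UNUSABLE;

-- Tell the session to skip unusable indexes instead of raising ORA-01502
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- After the load, rebuild it; the rebuild of a unique index fails
-- if the loaded data introduced duplicates
ALTER INDEX people_corp_id_idx REBUILD;
```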

Joe D'Souza



Re: Degrading performance on large updates and inserts

2007-11-07 Thread Axton
Performance can degrade due to things on the database side.  If you are using
Oracle, your shared pool (SQL area) starts out empty, so things move fast at
first.  As you execute more and more dynamic SQL, the shared pool fills up,
forcing the database to purge entries as needed, which adds overhead.  You can
address this specific problem, to some degree, by changing cursor_sharing from
EXACT to SIMILAR or FORCE.

As for why it is so slow - it's Remedy.  The escalation engine is
single-threaded, and a lot of interaction happens for what is seemingly a
single transaction (creating a Remedy record) - e.g., checking permissions on
forms and fields, pulling metadata for forms and fields, and so on.
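[A sketch of checking and changing this setting, using Oracle 9i/10g-era syntax; run as a DBA, and note that SIMILAR was deprecated in later Oracle releases:]

```sql
-- Check the current setting (SQL*Plus)
SHOW PARAMETER cursor_sharing;

-- Allow statements differing only in literals to share cursors
-- (SCOPE = BOTH assumes the instance uses an spfile)
ALTER SYSTEM SET cursor_sharing = 'FORCE' SCOPE = BOTH;
```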

Axton Grams




Re: Degrading performance on large updates and inserts

2007-11-07 Thread Rick Cook
William, if you haven't already, investigate pre-caching of Entry IDs.  The
sweet spot seems to be around 100 at a time, and it really does help.  Large
data imports are also sped up by temporarily disabling the indexes on the
import form.  Make sure that your data doesn't violate any unique ones, or
you will have a nasty surprise when you add them back in.

Rick
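[A note on the Entry-ID pre-caching tip above: the server option behind it is, if memory serves, Next-ID-Block-Size in ar.cfg (ar.conf on UNIX) - verify the exact name against the AR System Configuration Guide for your version:]

```
# ar.cfg / ar.conf -- reserve Entry IDs in blocks of 100 instead of
# issuing one SELECT ... FOR UPDATE per created record
Next-ID-Block-Size: 100
```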
