Hi Maverick,
 
First of all, ttscommit should be called at the end of the complete update, because committing frequently costs much more than the runtime memory you are talking about. The index should be one that contains the selection criteria field, and if it is the clustered index, that is the best situation. If caching is properly enabled, you will get optimum speed.
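For what it is worth, below is a rough X++ sketch of committing in blocks (the "commit after every N records" approach that also comes up further down this thread), as a middle ground between a commit per record and a single commit at the very end. The table and field names (MyBigTable, ProcessedFlag, AmountField) are made up purely for illustration, and the block size of 1000 is only a placeholder to tune against your own data:

    static void BatchedUpdateSketch(Args _args)
    {
        MyBigTable  bigTable;   // hypothetical table with millions of records
        int         counter;

        ttsbegin;
        while select forupdate bigTable
            where bigTable.ProcessedFlag == NoYes::No   // selection field; should be covered by an index, ideally the clustered one
        {
            bigTable.AmountField += 1;                  // whatever conditional change the process needs
            bigTable.ProcessedFlag = NoYes::Yes;
            bigTable.update();
            counter++;

            // Commit in blocks: keeps each transaction (and the SQL log it generates)
            // small without paying the cost of a commit per record.
            if (counter mod 1000 == 0)
            {
                ttscommit;
                ttsbegin;
            }
        }
        ttscommit;
    }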
 
Feel free to ask any further queries.
 
bye

Maverick <[EMAIL PROTECTED]> wrote:
The process consists of a loop that conditionally updates every record in a table, and this is done several times. The table contains millions of records.
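(One extra thought not raised in the thread so far: if the per-record condition and the new value can both be expressed directly in the query, a set-based update_recordset lets SQL Server do the whole update in one statement instead of fetching every record to the client. A rough sketch, again with made-up table and field names:

    static void SetBasedUpdateSketch(Args _args)
    {
        MyBigTable bigTable;    // hypothetical table, as in the sketch above

        ttsbegin;
        update_recordset bigTable
            setting AmountField = bigTable.AmountField + 1
            where bigTable.ProcessedFlag == NoYes::No;
        ttscommit;
    }

This only helps when the logic is simple enough to push into the where and setting clauses; if each record needs individual X++ logic, the block-commit loop sketched above is the more realistic shape.)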
 
----- Original Message -----
Sent: Tuesday, January 03, 2006 4:35 AM
Subject: RE: [Axapta-Knowledge-Village] Re: Performance

 
Hello Maverick,
 
Without more details on what you are doing, and how you have implemented the task, it is very difficult to give you any extra information.
 
 
Barry.
 
 

From: Axapta[EMAIL PROTECTED] [mailto:Axapta[EMAIL PROTECTED]] On Behalf Of Maverick
Sent: Monday, 26 December 2005 9:10 PM
To: Axapta[EMAIL PROTECTED]
Subject: Re: [Axapta-Knowledge-Village] Re: Performance
 
Hi All,
I am also facing the same problem. I have some code that processes thousands of updates on a table. There is only one process doing all the updates, so there is no need to worry about several processes updating the same table.
 
What are the ways to optimize the process? Please tell me all the points to take care of in the code, the table, and in SQL.
 
Like:
When should TTSCOMMIT be done? If it is done after every update, it takes too much time; if it is only done after thousands of updates, it consumes runtime memory.
 
What about the index: should it be primary or clustered?
 
What SQL Server properties should be set for better update performance?
 
If you could shed light on any other point, it will be very helpful.
 
Thanks
Maverick
----- Original Message -----
Sent: Thursday, December 22, 2005 6:30 AM
Subject: RE: [Axapta-Knowledge-Village] Re: Performance
 
Another thing I want to add, in case it was not taken into consideration: you should put your OS, data file, and log file on separate volumes. It is recommended to put the OS on a mirrored volume (two hard drives in a RAID 1 configuration), the log file on another mirrored volume, and the data file on a RAID 0+1 volume (better than RAID 5 for write-intensive operations). This will enhance performance.

"Bayliss, Barry" <[EMAIL PROTECTED]> wrote:

Yes, I was referring to the SQL transaction log.


Can you post your code?


Barry.

-----Original Message-----
From: Axapta[EMAIL PROTECTED]
[mailto:Axapta[EMAIL PROTECTED]] On Behalf Of Stefan Capiau
Sent: Wednesday, 21 December 2005 6:19 PM
To: Axapta[EMAIL PROTECTED]
Subject: [Axapta-Knowledge-Village] Re: Performance

Hi Barry,

Are you talking about the SQL transaction log or the commitment control
in Axapta?

We have done tests both with placing the commit around the entire
process and with committing after every 500 records. No big change in
performance.

Regards,
Stefan



--- In Axapta[EMAIL PROTECTED], "Bayliss, Barry"
<[EMAIL PROTECTED]> wrote:
>
> Hello Stefan,
>
>
> I have previously come across a problem with a piece of code running
> slower like this. Turns out the problem was related to the size of
> the transaction log generated by the process being run.
>
>
> Have you placed the transaction log around the entire batch, or each
> block of the batch?
>
>
>
> Barry.
>
>
> -----Original Message-----
> From: Axapta[EMAIL PROTECTED]
> [mailto:Axapta[EMAIL PROTECTED]] On Behalf Of Stefan Capiau
> Sent: Wednesday, 21 December 2005 7:19 AM
> To: Axapta[EMAIL PROTECTED]
> Subject: [Axapta-Knowledge-Village] Re: Performance
>
> Thanks for the reply!
>
> We are running this process at night, and no other processes are
> active at that moment.
>
> The strange thing is that when we divide the process into batches of
> a couple of thousand records to process, it doesn't matter which
> batch we process first. The first one launched is always fast (just a
> couple of minutes); the last one takes up to hours.
> We also looked at the disk queue time, but saw no strange things.
> Somehow it looks like the transactions are kept in memory and are not
> released after the process, forcing the system to start swapping to
> disk.
>
> At this time we can manage these runs by carefully planning the
> execution time, but in the near future we are also planning on
> uploading the complete customer database (+3000.000 customers!). As
> it looks now, the system will not be able to process this number
> of transactions.
>
> During the next couple of days we will also try to closely monitor
> the SQL Server and see if changing settings will affect the process.
> We also did a huge job on code improvements for performance. I am
> convinced it can always be better, but I don't think we will gain much
> at this stage by changing the code.
>
> All extra hints are very much appreciated !
>
> Regards,
> Stefan
>
>
>
>
>
>
>
> --- In Axapta[EMAIL PROTECTED], Varden Morris
> <[EMAIL PROTECTED]> wrote:
> >
> >   Hi Stefan
> >   
> > There are countless ways in which performance may be improved, but
> > it depends on your specific situation.
> >   
> > If you are selecting records for update in a transaction, you will
> > have object (table, page, record) locks for the tables within the
> > transaction. It looks like you are running several clients that are
> > competing for the same tables and resources. The first client that
> > gets hold of the tables holds them for a while until the process is
> > finished on that client; then it releases them to the next client in
> > the queue, and so on until the entire process is finished. That
> > explains why the jobs are not executed in parallel.
> >   
> > In a case like this I should think that using one powerful server
> > (Batch Server) to process the entire job would give better
> > performance. When you use several clients this will lead to a lot of
> > network traffic that will reduce performance, plus clients are
> > waiting in line for each other. With a powerful server that is in the
> > area of the database server, with high-speed access to the database,
> > you can gain a lot of performance using a 2-tier client or 3-tier fat
> > client that will eliminate the latency between the client and the AOS
> > (if you are using an AOS).
> >   
> > You should also note that in this case processing will be as slow
> > as the slowest processor in the batch and database servers, plus the
> > process will also be impacted by the speed of the disks in both
> > computers. If the disk queue length for either computer is above 2,
> > you will have to upgrade the disk to a faster one to gain better
> > performance. If the memory in any server is not adequate, you will
> > also end up with a lot of paging, which is another downfall.
> >   
> > Tuning your code is also another possibility to gain some
> > performance. This may also lead to the possibility of you being able
> > to run more than one batch server in parallel with minimal object
> > (table, page, record) locks.
> >   
> > You should also ensure that your SQL Server is tuned for optimum
> > performance. If you are using more than 2 GB of memory you should
> > enable AWE for SQL; otherwise the server will only see and use 2 GB.
> >   
> >   Hope this helps in some way.
> >   
> >  
> > Stefan Capiau <[EMAIL PROTECTED]> wrote:
> >   Thanks James,
> >
> > We have tried fooling around with the caching, but with no effect.
> > The process does the following: reading all customers, looking at
> > the transactions (a new table, a mixture of financial and logistical
> > data), making a balance on the transactions, and generating an
> > opening transaction in a new period.
> >
> > During this process we do a lot of reading and updating/writing on
> > the same table. Can this be a problem?
> >
> > Regards,
> > Stefan
> >
> > --- In Axapta[EMAIL PROTECTED], "James Flavell"
> > <[EMAIL PROTECTED]> wrote:
> > >
> > > My first guess would be the SQL locking escalating to a full lock
> > > or something like that, but I am no expert... which tables are you
> > > updating/inserting?
> > >
> > > Also, no idea, but could it be that at the start data is read into
> > > RAM but later on, when RAM is exhausted, there is more disk IO and
> > > so things slow down? Maybe there is some caching setting you can
> > > tone down when running this job?
> > >
> > >
> > >
> > >
> > > -----Original Message-----
> > > From: Axapta[EMAIL PROTECTED]
> > > [mailto:Axapta[EMAIL PROTECTED]] On Behalf Of Stefan Capiau
> > > Sent: 20 December 2005 20:43
> > > To: Axapta[EMAIL PROTECTED]
> > > Subject: [Axapta-Knowledge-Village] Performance
> > >
> > >
> > > Hi all,
> > >
> > > Each month we are running a procedure which looks at all
> > > customers and their transactions. We do updates and inserts in the
> > > selected tables.
> > >
> > > The database contains at this moment +300.000 customers and over
> > > 2.000.000 transactions on the customers.
> > >
> > > The problem we are facing at this moment is that the job is taking
> > > more and more time (exponentially) as the customer database grows.
> > >
> > > Dividing the job into parts of e.g. 25.000 customers gives the
> > > following results:
> > >
> > > The first 25.000 customers are processed in less than 3-4 minutes.
> > > The last 25.000 customers are processed in over 2 hours!
> > >
> > > We are using an SQL2000 server.
> > >
> > > At this moment the complete process is taking over 12 hours to
> > > complete, whereas it should be done within 30-45 minutes (if we
> > > look at the first group of customers).
> > >
> > > We already figured out that this is client independent. If we run
> > > 20 batches of customers on 20 clients, the first client takes a
> > > couple of minutes whereas the last takes over 2 hours.
> > >
> > > Does this sound familiar to anybody?
> > >
> > >
> > > Stefan
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > Varden Morris
> > Senior Developer
> > WellPoint Systems Inc.
> >
> > (403) 444-5848 direct
> > (403) 444-3900 main
> > (403) 615-1604  cell
> > www.wellpoint.ca
> > [EMAIL PROTECTED]
> >
> >
> >
>
>
>
>
>
>
>












 
 
Varden Morris
Senior Developer
WellPoint Systems Inc.

(403) 444-5848 direct
(403) 444-3900 main
(403) 615-1604  cell
www.wellpoint.ca
[EMAIL PROTECTED]

