That's the thing: the table field itself isn't set to chunk. The chunking
is happening at the server and affects ALL tables the server needs to read.
The table in question has four columns, each of which needs a COLSUM done
on it at some point. Based on my reading, server-side table chunking is
designed to keep memory usage down by handling X records at a time. In
this case, though, that's not what happens: the server simply grows the
chunk size by that amount on each pass until it's reading the entire
table, and it does this automatically when you ask for a COLSUM. It knows
that all you want is a sum of the values, yet it does massively more work
than necessary to get that number and takes massively more time than
necessary. That's why I was looking for Doug's view on the subject, as he
always seems to have an eloquent response to such design questions.
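
For illustration, the pattern we see in the SQL log looks roughly like
this (the table and column names below are invented; the server builds the
real ones from the form's internal IDs, and the exact syntax depends on
the database):

    -- chunk size 1000, roughly 50K rows in the table
    SELECT FIRST 1000 C536870913 FROM T1234
    SELECT FIRST 2000 C536870913 FROM T1234
    SELECT FIRST 3000 C536870913 FROM T1234
    ...
    SELECT FIRST 51000 C536870913 FROM T1234

Each pass re-reads every row the previous pass already fetched, so the
total rows pulled back grow quadratically: on a 50K-row table that's on
the order of 1.3 million rows read just to produce one sum.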

-----Original Message-----
From: Action Request System discussion list(ARSList)
[mailto:arsl...@arslist.org] On Behalf Of Misi Mladoniczky
Sent: Monday, April 19, 2010 11:11 PM
To: arslist@ARSLIST.ORG
Subject: Re: COLSUM

Hi,

It seems to be doing what you asked it to do.

Both client-side and filter (FLTR) tables work by filling the table with
data in memory.

If you tell it to chunk, it apparently reads it that way.

Create a new table field without chunking, with a single column, and do
the COLSUM there instead.

        Best Regards - Misi, RRR AB, http://www.rrr.se

Products from RRR Scandinavia:
* RRR|License - Not enough Remedy licenses? Save money by optimizing.
* RRR|Log - Performance issues or elusive bugs? Analyze your Remedy logs.
Find these products, and many free tools and utilities, at http://rrr.se.

> Doug,
> Since you have been replying to list posts more frequently, I figured I
> would ask you a quick question.
>
> We have several pieces of workflow that use filter-side tables and COLSUM.
> In situations where there are more records in the table than the 'Server
> Table Field Chunk Size', I have noticed that the server does a 'select
> first x' on the table columns, and repeats it until x exceeds the number
> of records returned. In our case the setting was 1000 and the table held
> over 50K records, so the SQL log shows select first 1000, then 2000,
> 3000, 4000, and so on until it passes the row count. This of course
> kills our performance when all we are doing is summing a column, so I
> converted the workflow to direct SQL using SUM() and we aren't having
> problems anymore. Why was this chosen as the method to perform COLSUM?
> It doesn't seem efficient to me at all.
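>
> For reference, the direct SQL replacement is just a single aggregate
> query, something along these lines (table and column IDs changed, and
> the qualification is whatever the table field was using):
>
>     SELECT SUM(C536870920) FROM T1234 WHERE <qualification>
>
> One statement, one pass over the data, and the database does the math.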
