[SQL] selecting records X minutes apart
I have a table that, at a minimum, has ID and timestamp columns.  Records are
inserted with random IDs and timestamps.  Duplicate IDs are allowed.

I want to select records grouped by ID, ordered by timestamp, that are X
minutes apart.  In this case X is 5.

Note, the intervals are not X-minute wall-clock intervals; they are X-minute
intervals from the last accepted record, per ID.

For instance, here is some sample input data:

ID  TS (HH:MM)
--------------
0   20:00
1   20:03
1   20:04
0   20:05
1   20:05
0   20:08
1   20:09
0   20:10

I'd want the select to return:

ID  TS (HH:MM)
--------------
0   20:00
0   20:05
0   20:10
1   20:03
1   20:09

Does my question make sense?

Thanks in advance,
Wayne
Re: [SQL] selecting records X minutes apart
The TS column type is actually timestamp without time zone, and yes, I want
to take seconds into account, so both of your example entries would be
included in the result.

On Fri, Jun 03, 2011 at 06:01:53PM -0700, Kevin Crain wrote:
> Will you be using a full timestamp with that or are you only concerned
> about hours and minutes?  If you want a full timestamp, do you care
> about the seconds?  For example, do you want to be able to do this for
> '2011-06-01 23:59:04' and '2011-06-02 00:04:04'?
>
> On Fri, Jun 3, 2011 at 12:52 PM,  wrote:
> > I have a table that, at a minimum, has ID and timestamp columns.
> > Records are inserted with random IDs and timestamps.  Duplicate IDs
> > are allowed.
> > [...]
Re: [SQL] selecting records X minutes apart
On Sat, Jun 04, 2011 at 11:45:08AM +, Jasen Betts wrote:
> On 2011-06-03, lists-pg...@useunix.net wrote:
> >
> > ID  TS (HH:MM)
> > --------------
> > 0   20:00
> > 0   20:05
> > 0   20:10
> > 1   20:03
> > 1   20:09
> >
> > Does my question make sense?
>
> no, why is (1,20:04) excluded, but (0,20:05) included?
> both records are 5 minutes from the newest.

Jasen,

(1,20:04) is excluded because its timestamp is less than 5 minutes after the
previous record with the same ID, (1,20:03); (0,20:05) is included for the
opposite reason.

Let me restate my requirement with a little more detail: I want to select
records grouped by ID, ordered by timestamp in ascending order (so I'm
starting with the oldest), that are at least X minutes apart.

I hope that helps.

Thanks again,
Wayne
Re: [SQL] selecting records X minutes apart
Did you mean WHERE in place of your first AND?  If so, I already had
something like this, but it only returns one set: the oldest group of
entries for each ID.

On Sat, Jun 04, 2011 at 01:09:39PM -0700, Richard Broersma wrote:
> On Sat, Jun 4, 2011 at 12:15 PM,  wrote:
> > I want to select records grouped by ID, ordered by timestamp, in
> > ascending order so I'm starting with the oldest, that are at least X
> > minutes apart.
>
> Here's my guess:
>
> SELECT id, ts
>   FROM Yourtable AS A
>    AND NOT EXISTS ( SELECT *
>                       FROM Yourtable AS B
>                      WHERE B.id = A.id
>                        AND B.ts > A.ts - INTERVAL '5 MINUTES'
>                        AND B.ts < A.ts )
>  ORDER BY id, ts;
>
> --
> Regards,
> Richard Broersma Jr.
Re: [SQL] selecting records X minutes apart
Let's take a look at just the input set for ID 0:

0   20:00
0   20:05
0   20:08
0   20:10

I want records, starting from the oldest record (20:00), that are at least
5 minutes apart.  So 20:00, 20:05, and 20:10 are kept, but 20:08 - 20:05 is
only 3 minutes, so 20:08 is to be ignored.

I was hoping to do this with a single SQL query that renders good runtime
performance, but it may not be possible.  I'm by no means proficient in SQL.

On Sat, Jun 04, 2011 at 05:51:18PM -0700, Kevin Crain wrote:
> Why is (0,20:10) listed in your expected results when there is a (0,20:08)?
>
> On Fri, Jun 3, 2011 at 12:52 PM,  wrote:
> > I have a table that, at a minimum, has ID and timestamp columns.
> > Records are inserted with random IDs and timestamps.  Duplicate IDs
> > are allowed.
> > [...]
Re: [SQL] selecting records X minutes apart
Thank you all who replied!!  It looks like Sugawara's recursive solution
does the trick.  Unfortunately, performance is quite poor for the sample
dataset I'm working with, which is a table of about 5 records.  Indeed,
there are indexes applied to the table.  I believe the recursive select is
being executed a great number of times, causing the first part of the query
to take a long time.

The fastest solution I've come up with is a plpgsql procedure that loops
over a select whose result is ordered by (id, tstamp), examines the tstamp
values, and only returns rows that meet the interval criteria.  (A minimal
sketch of this approach appears after the quoted message below.)  This
technique takes roughly 2 seconds to filter records over my 5 record sample
set, which is acceptable but not nearly as elegant as a single SQL
statement.

Again, thank you for all the replies.

Wayne

On Sun, Jun 05, 2011 at 08:52:30PM +0900, Masaru Sugawara wrote:
> On Fri, 3 Jun 2011 15:52:53 -0400 lists-pg...@useunix.net wrote:
>
> I also think you might want to use a WITH RECURSIVE clause.
> This SQL searches for the case of an interval of 5 minutes or more,
> and sets a relationship between a parent and its child.
>
> CREATE TABLE tbl(id integer, ts time);
> INSERT INTO tbl VALUES
>   (0, '20:00'),
>   (0, '20:05'),
>   (0, '20:08'),
>   (0, '20:10'),
>   (0, '20:11'),
>   (1, '20:03'),
>   (1, '20:04'),
>   (1, '20:05'),
>   (1, '20:09'),
>   (1, '20:16');
>
> SELECT * FROM tbl;
>
> WITH RECURSIVE rec(id, ts_p, ts_c) AS (
>   SELECT a1.id, min(a1.ts), min(b1.ts)
>     FROM tbl AS a1, tbl AS b1
>    WHERE a1.id = b1.id AND a1.ts + interval '5 minute' <= b1.ts
>    GROUP BY a1.id
>   UNION ALL
>   SELECT t2.id, t2.ts_p, t2.ts_c
>     FROM rec AS t1 INNER JOIN
>          (SELECT a2.id, a2.ts AS ts_p, min(b2.ts) AS ts_c
>             FROM tbl AS a2, tbl AS b2
>            WHERE a2.id = b2.id AND a2.ts + interval '5 minute' <= b2.ts
>            GROUP BY a2.id, a2.ts
>           UNION ALL
>           SELECT a3.id, a3.ts, null
>             FROM tbl AS a3
>          ) AS t2 ON t1.id = t2.id AND t1.ts_c = t2.ts_p
> )
> SELECT DISTINCT id, ts_p AS ts FROM rec
> ORDER BY 1, 2;
>
> > I have a table that, at a minimum, has ID and timestamp columns.
> > Records are inserted with random IDs and timestamps.  Duplicate IDs
> > are allowed.
> > [...]
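For reference, a minimal sketch of the plpgsql approach described above,
written against the tbl(id, ts) table from Sugawara's example.  The function
name and the SETOF return style are illustrative assumptions, not Wayne's
actual code:

CREATE OR REPLACE FUNCTION filter_apart(x interval)
RETURNS SETOF tbl AS $$
DECLARE
    r       tbl%ROWTYPE;
    last_id integer;
    last_ts time;
BEGIN
    -- plpgsql FOR loops fetch rows through an implicit cursor
    FOR r IN SELECT id, ts FROM tbl ORDER BY id, ts LOOP
        -- keep the first row of each id, or any row at least x
        -- after the last *accepted* row of the same id
        IF last_id IS DISTINCT FROM r.id OR r.ts >= last_ts + x THEN
            last_id := r.id;
            last_ts := r.ts;
            RETURN NEXT r;
        END IF;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT * FROM filter_apart(interval '5 minutes');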
Re: [SQL] Usage of function returning record in query
Sorry, I don't have a useful answer, but I have a similar question.

Along these same lines, how does one access the discrete x,y components of
type 'point'?

On Mon, Jul 04, 2011 at 06:59:49AM -0700, gmb wrote:
>
> Harald Fuchs-10 wrote:
> > In article <1309762075448-4549140.p...@n5.nabble.com>, gmb writes:
> > SELECT itemid, (calcvalues(itemid)).* FROM itemlist
>
> Thanks for the feedback, Harald.
>
> How about specifying different aliases for the resulting values?
> This will be handy when I use the same function multiple times in the
> same query (the function will take other input parameters used in the
> calculations).
>
> E.g.:
> SELECT itemid, (calcvalues(itemid, '2011-06-06')).*,
>        (calcvalues(itemid, '2011-06-07')).* FROM itemlist;
>
>  itemid | calcval1 | calcval2 | calcval1 | calcval2
> --------+----------+----------+----------+----------
>       4 |     0.67 |    10.00 |     0.64 |    65.23
>       5 |     1.55 |    45.00 |     1.23 |    23.25
>       6 |     3.60 |    69.00 |     2.98 |    62.66
>
> How will I manage unique column names for this output?
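For what it's worth, one way to get unique column names in gmb's example is
to expand the record's fields explicitly and alias each one instead of using
.* — a sketch assuming the calcval1/calcval2 field names shown above (note
that each field reference may evaluate the function again, so this trades
extra calls for control over the names):

SELECT itemid,
       (calcvalues(itemid, '2011-06-06')).calcval1 AS calcval1_0606,
       (calcvalues(itemid, '2011-06-06')).calcval2 AS calcval2_0606,
       (calcvalues(itemid, '2011-06-07')).calcval1 AS calcval1_0607,
       (calcvalues(itemid, '2011-06-07')).calcval2 AS calcval2_0607
  FROM itemlist;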
Re: [SQL] Usage of function returning record in query
Never mind, it turns out you can do it with the array subscript operator.
I stumbled onto this by chance.  I don't know if this is in the
documentation somewhere and I perhaps missed it?

On Mon, Jul 04, 2011 at 02:31:52PM -0400, lists-pg...@useunix.net wrote:
> Sorry, I don't have a useful answer, but I have a similar question.
>
> Along these same lines, how does one access the discrete x,y components
> of type 'point'?
> [...]
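A quick illustration of the subscript access described above (the table and
column names here are made up):

CREATE TABLE places (name text, loc point);
INSERT INTO places VALUES ('origin', point '(0,0)'),
                          ('corner', point '(3,4)');

-- the x and y components of a point are addressable as [0] and [1]
SELECT name, loc[0] AS x, loc[1] AS y FROM places;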
Re: [SQL] overload
I have the same situation with large tables.  Take a look at using a cursor
to fetch several thousand rows at a time.  I presume what's happening is
that perl is attempting to create a massive list/array in memory.  If you
use a cursor, the list should only contain X rows, where X is the number
specified at each fetch execution.  You'll need to define the cursor inside
a transaction block:

- begin transaction
- define the cursor
- fetch rows from cursor
- while row count from previous step > 0, execute previous step
- terminate transaction

(A sketch of this loop appears after the quoted message below.)

Or you could use plpgsql instead of plperl; FOR loops over result sets in
plpgsql implicitly use cursors... it's just a little less code.

Hope that helps,
Wayne

On Tue, Jul 05, 2011 at 10:29:03PM +0200, Viktor Bojović wrote:
> Hi,
> while reading a 20GB table through a PL/PERL function, it constantly
> grows in RAM.  I wanted to ask you which is the best way to read a table
> inside that function without such memory consumption.
> Thanks in advance
>
> Code is here:
>
> CREATE FUNCTION pattern_counter("patLength" integer)
>   RETURNS varchar AS
> $BODY$
>   my $rv = spi_exec_query("select sequence from entry");
>   my $rowCount = $rv->{processed};
>   my $patLen = $_[0];
>   my $patt = '';
>   my %patterns = ();
>   foreach my $rn (0 .. $rowCount - 1) {
>     my $row = $rv->{rows}[$rn];
>     my $seq = $row->{sequence};
>     for (my $x = 1; $x <= length($seq) - $patLen; $x++) {
>       $patt = substr($seq, $x, $patLen);
>       if (!defined $patterns{$patt}) {
>         $patterns{$patt} = 1;
>       } else {
>         $patterns{$patt}++;
>       }
>     }
>   }
>   foreach $patt (keys %patterns) {
>     my $sql = "insert into patterns values('".$patt."',".$patterns{$patt}.")";
>     spi_exec_query($sql);
>   }
>   return '';
> $BODY$
>   LANGUAGE plperl VOLATILE
>   COST 100;
>
> --
> ---------------------------------------
> Viktor Bojović
> ---------------------------------------
> Wherever I go, Murphy goes with me
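As a concrete illustration of the cursor approach, here is a sketch of
Viktor's function rewritten around plperl's spi_query/spi_fetchrow pair,
which streams rows through a cursor instead of materializing the whole
result set (the %patterns hash itself still grows with the number of
distinct patterns; the function name is made up):

CREATE FUNCTION pattern_counter_cursor("patLength" integer)
  RETURNS varchar AS
$BODY$
  my $patLen = $_[0];
  my %patterns = ();

  # spi_query opens a cursor; spi_fetchrow pulls one row at a time
  my $cursor = spi_query("select sequence from entry");
  while (defined (my $row = spi_fetchrow($cursor))) {
    my $seq = $row->{sequence};
    for (my $x = 1; $x <= length($seq) - $patLen; $x++) {
      $patterns{substr($seq, $x, $patLen)}++;
    }
  }

  foreach my $patt (keys %patterns) {
    spi_exec_query("insert into patterns values ('$patt', $patterns{$patt})");
  }
  return '';
$BODY$
  LANGUAGE plperl VOLATILE;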
Re: [SQL] overload
Hi Viktor,

I'm not sure what your requirements are in terms of performance and
stability of your result set, so also see Pavel's response.

A cursor issues a single query and renders a single result set.  The result
set is static; the cursor just gives you finer control/performance when
retrieving rows from the set.  Using a transaction will also render better
performance when %patterns contains a large number of keys/values: insert
all of them in one transaction, the same one you opened for the cursor.

Your method issues many queries, and each successive query takes longer.
The number of queries will also increase as the table size increases.  It
could also return duplicate rows and/or miss rows, due to other
transactions completing between your select queries.  If you can tolerate
the above issues then so be it; if not, you really should look at cursors.

Also, there might be a bug in your code if you ever delete entries from
'entry': you're depending on $rowCountAll to remain static, which is not
the case if you delete entries.  You can fix this by skipping the "select
count(1)" step and just breaking your loop when fewer than $windowSize
entries are returned from the "select sequence ..." query.  (A sketch of
that fix appears after the quoted message below.)

Wayne

On Fri, Jul 08, 2011 at 08:55:36PM +0200, Viktor Bojović wrote:
> Thanx Wayne,
> at the end I did it that way and it works.
> The code is below.
>
> CREATE FUNCTION pattern_counter1("patLength" integer) RETURNS character varying
>     LANGUAGE plperl
>     AS $_X$
>   my $rvCnt = spi_exec_query("select count(1) as cnt from entry");
>   my $row = $rvCnt->{rows}[0];
>   my $rowCountAll = $row->{cnt};
>   my $windowSize = 50;
>   my %patterns = ();
>   for (my $p = 0; $p < $rowCountAll; $p += $windowSize) {
>     my $sql = "select sequence from entry limit $windowSize offset $p";
>     my $rv = spi_exec_query($sql);
>     my $rowCount = $rv->{processed};
>     my $patLen = $_[0];
>     my $patt = '';
>
>     foreach my $rn (0 .. $rowCount - 1) {
>       my $row = $rv->{rows}[$rn];
>       my $seq = $row->{sequence};
>       for (my $x = 1; $x <= length($seq) - $patLen; $x++) {
>         $patt = substr($seq, $x, $patLen);
>         if (!defined $patterns{$patt}) {
>           $patterns{$patt} = 1;
>         } else {
>           $patterns{$patt}++;
>         }
>       }
>     }
>   }
>
>   foreach $patt (keys %patterns) {
>     my $sql = "insert into patterns values('".$patt."',".$patterns{$patt}.")";
>     spi_exec_query($sql);
>   }
>   return '';
> $_X$;
>
> On Fri, Jul 8, 2011 at 8:50 PM,  wrote:
> > I have the same situation with large tables.  Take a look at using a
> > cursor to fetch several thousand rows at a time.
> > [...]
>
> --
> ---------------------------------------
> Viktor Bojović
> ---------------------------------------
> Wherever I go, Murphy goes with me
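For illustration, a sketch of the termination fix suggested above, replacing
the count(1) pre-query (variable names follow Viktor's function):

my $windowSize = 50;
my $p = 0;
while (1) {
  my $rv = spi_exec_query(
    "select sequence from entry limit $windowSize offset $p");
  my $rowCount = $rv->{processed};
  last if $rowCount == 0;
  # ... process $rv->{rows}[0 .. $rowCount - 1] as before ...
  last if $rowCount < $windowSize;  # a short page means no more rows
  $p += $windowSize;
}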
[SQL] INSERT waiting under heavy load
After digging through all the discussions of "INSERT waiting" problems, I am
still not clear on the consensus about solving it.

I am running a 6:1 SELECT:INSERT ratio (each insert fires an UPDATE trigger
that hits a column in a table holding keys used by SELECT).  I am looking at
doing about 2,000 INSERT/UPDATEs per second, with possible peaks at 10,000
INSERT/UPDATEs per second (i.e. 60,000 SELECTs).

(table 1) The table holding primary keys is expected to grow to around
10,000 rows.  This is the table that gets 50% of the SELECTs and 100% of the
UPDATEs.  It is the owner status table, optimized so that a single SELECT
against it returns all the information needed by real-time clients.

(table 2) The number of rows in the second table is expected to be around
100 times the number of rows in the 1st table.  Each entry in this table
uses the first table's key column as a foreign key, to avoid unlinked
entries.  It also has foreign key dependencies on some other tables that,
for the purposes of the application, are never updated.  This table gets the
other 50% of the SELECTs.

(table 3) Finally, the 3rd table (audit log) is expected to hold an
arbitrary number of entries (measured in millions).  It gets virtually no
SELECT activity in normal operation.  If the data from this table is needed,
a snapshot of it is pulled into a different table (most likely on a
different database) and processed there.  The table gets cleaned up at
specific intervals by the (non-real-time) management application using a
DROP TABLE/CREATE TABLE sequence.

The only thing that I do not particularly like is that every INSERT into
this table has to adjust a counter column in a corresponding row of
(table 1), via the (table 3 -> table 2 -> table 1) path.

The server is configured to accept about 200 connections from clients.

The problem is that after the first couple of hours of working normally, as
table 3 grows, the backend indicates that more and more INSERTs into table 3
are held up in the "INSERT waiting" state.  It happens even when table 1
contains only one row and table 2 contains 4 rows.

Is there anything that can be done to diagnose why the "INSERT waiting"
state becomes so prevalent?  Would pulling the counter out of table 1 into a
table (4) that contains only a reference to the appropriate table (1) row
and the counter value make it better?

Thanks,
Alex
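As a starting point for diagnosing the waits, the system views can show
which lock requests are blocked and what the blocked backends are running.
A sketch (pg_stat_activity column names as in releases before 9.2; the
pg_locks column set varies somewhat by version):

-- which lock requests are waiting, and on what
SELECT pid, locktype, mode, relation::regclass, granted
  FROM pg_locks
 WHERE NOT granted;

-- what the blocked backends are executing
SELECT procpid, current_query
  FROM pg_stat_activity
 WHERE procpid IN (SELECT pid FROM pg_locks WHERE NOT granted);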
Re: [SQL] INSERT waiting under heavy load
> > After digging through all the discussions of "INSERT waiting" problems,
> > I am still not clear on the consensus about solving it.
> > ...
> > The only thing that I do not particularly like is that every INSERT
> > into this table has to adjust a counter column in a corresponding row
> > of (table 1), via the (table 3 -> table 2 -> table 1) path.
>
> Well, if there are only a few rows in table1, then this design is
> inherently going to lose big.  Any two transactions trying to update the
> same table1 row are going to conflict, and one will have to wait for the
> other to complete.  Rethink the need for those counters.

I appreciate that it is most likely not the best design, though I expect a
reasonable distribution of UPDATE hits against the first table as the
number of rows increases.

What I do not understand is this: if the problem is caused by the acquire
lock -> modify column -> release lock cycle on table 1, then why does it
increase significantly as the number of entries in table 3 grows?  The
simulation maintains a pretty much constant rate of new requests coming
into table 3.

Alex
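One commonly suggested way around this kind of single-row counter contention
(a sketch, not something proposed in the thread) is to turn the counter
UPDATEs into INSERTs of deltas, so concurrent transactions never touch the
same row, and to aggregate periodically:

-- each transaction appends a delta instead of updating a shared row
CREATE TABLE counter_deltas (
    owner_id integer NOT NULL,  -- hypothetical reference to table 1's key
    delta    integer NOT NULL
);

-- readers, or a periodic rollup job, obtain the current value by summing
SELECT owner_id, sum(delta) AS counter
  FROM counter_deltas
 GROUP BY owner_id;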
[SQL] stored procedures for complex SELECTs
Are there performance advantages that can be achieved by wrapping a complex
SELECT into a stored procedure?

Alex
[SQL] executing external command
Is there a way to execute an external, i.e. system, command from inside a
pl/pgsql function?

Alex
[SQL] How to find entries missing in 2nd table?
Hi,

I realize I probably lost my marbles, but I've been having a god-awful time
with a single query:

control:  controller_id pk;
datapack: controller_id fk;

I need to get all entries from the table control that are not listed in
datapack.

Thanks,
Alex
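A common formulation for this kind of anti-join, assuming the column names
given above (the two forms below are equivalent):

-- rows in control with no matching row in datapack
SELECT c.*
  FROM control c
 WHERE NOT EXISTS (SELECT 1
                     FROM datapack d
                    WHERE d.controller_id = c.controller_id);

-- the same thing as a left join
SELECT c.*
  FROM control c
  LEFT JOIN datapack d ON d.controller_id = c.controller_id
 WHERE d.controller_id IS NULL;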