Genius. I knew there must be a way of tricking MySQL into applying a HAVING
clause and hence doing the whole query without a join.
I have found the variable features of MySQL most useful for queries of the
type "select the top x rows per group", i.e. a LIMIT per group. This is
perfect if you are doing any statistical calculation that involves
gathering the top 100 (or n) rows per id.
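
In case it helps anyone searching the archives, here is the rough shape of
that trick. It is an untested sketch: the table scores(id, score), the
temporary table names and the cut-off of 100 are only for illustration, and
since the MySQL versions of the day have no subselects, an extra temporary
table is used to get the rows into group order first:

set @grp := 0; set @rank := 0;

create temporary table ordered_scores
  select id, score from scores order by id, score desc;

create temporary table ranked
  select id, score,
         @rank := IF(id = @grp, @rank + 1, 1) as rank_in_group,
         @grp := id as grp
  from ordered_scores;

select id, score from ranked where rank_in_group <= 100;

The counter restarts at 1 each time the id changes, so the final select
keeps only the first 100 rows seen for each id, i.e. the 100 highest
scores per id.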

Richard


----- Original Message -----
From: "Kevin Fries" <[EMAIL PROTECTED]>
To: "'Richard Clarke'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Friday, May 31, 2002 6:07 PM
Subject: RE: Finding holes in autoinc sequences


> Richard,
>
> Say, that's really good!
> I'm still not used to taking advantage of those variables.  Clever.
> Also, I think you can remove all but the rows showing gaps by adding a
> group by and having clause...
>
> select id,@b:=IF(id=@a+1,id,concat(@a+1,"..",id-1)) range,@a:=id,
> @flag:=IF(@b=id,id,"MISSING")
> from seq
> group by id
> having id <> range;
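>
> (Re-running the set @a:=0; set @b:=0; initialization first, I'd expect
> that to return just the gap rows, i.e. 12, 22 and 24 in the sample below,
> though I haven't actually run it.)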
>
>
> > -----Original Message-----
> > From: Richard Clarke [mailto:[EMAIL PROTECTED]]
> > Sent: Thursday, May 30, 2002 7:48 PM
> > To: [EMAIL PROTECTED]
> > Subject: Re: Finding holes in autoinc sequences
> >
> >
> > Or another approach could be (using same example tables)
> >
> > mysql> set @a:=0; set @b:=0; set @c:=0;
> > mysql> select id,@b:=IF(id=@a+1,id,concat(@a+1,"..",id-1)),@a:=id,@flag:=IF(@b=id,id,"MISSING") from seq;
> > +----+-------------------------------------------+--------+-------------------------------+
> > | id | @b:=IF(id=@a+1,id,concat(@a+1,"..",id-1)) | @a:=id | @flag:=IF(@b=id,id,"MISSING") |
> > +----+-------------------------------------------+--------+-------------------------------+
> > |  1 | 1                                         |      1 | 1                             |
> > |  2 | 2                                         |      2 | 2                             |
> > |  3 | 3                                         |      3 | 3                             |
> > |  4 | 4                                         |      4 | 4                             |
> > |  5 | 5                                         |      5 | 5                             |
> > |  6 | 6                                         |      6 | 6                             |
> > |  7 | 7                                         |      7 | 7                             |
> > |  8 | 8                                         |      8 | 8                             |
> > |  9 | 9                                         |      9 | 9                             |
> > | 12 | 10..11                                    |     12 | MISSING                       |
> > | 13 | 13                                        |     13 | 13                            |
> > | 14 | 14                                        |     14 | 14                            |
> > | 15 | 15                                        |     15 | 15                            |
> > | 16 | 16                                        |     16 | 16                            |
> > | 17 | 17                                        |     17 | 17                            |
> > | 18 | 18                                        |     18 | 18                            |
> > | 19 | 19                                        |     19 | 19                            |
> > | 22 | 20..21                                    |     22 | MISSING                       |
> > | 24 | 23..23                                    |     24 | MISSING                       |
> > | 25 | 25                                        |     25 | 25                            |
> > | 26 | 26                                        |     26 | 26                            |
> > | 27 | 27                                        |     27 | 27                            |
> > | 28 | 28                                        |     28 | 28                            |
> > | 29 | 29                                        |     29 | 29                            |
> > +----+-------------------------------------------+--------+-------------------------------+
> > 24 rows in set (0.00 sec)
> >
> > To get just the MISSING entries you would need to create a temporary
> > table from that query and then select the rows where the @flag column
> > = "MISSING". Maybe this is more efficient than the join previously
> > suggested. If the MySQL team let HAVING be applied to these
> > variable-derived columns, you could add having flag_alias = "MISSING"
> > and project out just the missing rows directly (saving the need for a
> > temporary table).
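> >
> > A rough, untested sketch of that temporary-table step (the aliases gap,
> > prev_id and flag below are illustrative, not from the query above):
> >
> > set @a:=0; set @b:=0;
> >
> > create temporary table gaps
> >   select id,
> >          @b:=IF(id=@a+1,id,concat(@a+1,"..",id-1)) as gap,
> >          @a:=id as prev_id,
> >          IF(@b=id,id,"MISSING") as flag
> >   from seq;
> >
> > select id, gap from gaps where flag = "MISSING";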
> >
> > You could also plug the 10..11, 20..21, etc. ranges into a Perl foreach
> > loop (if you were using Perl) to automatically generate the missing
> > numbers in between.
> >
> > Ric
> >
> > P.S. Maybe there is a way of tricking it into using a HAVING clause to
> > project out the "MISSING" rows.
> >
> >
> > ----- Original Message -----
> > From: "Kevin Fries" <[EMAIL PROTECTED]>
> > To: "'mos'" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
> > Sent: Thursday, May 30, 2002 10:56 PM
> > Subject: RE: Finding holes in autoinc sequences
> >
> >
> > > A simple and fast way to find the gaps is to use a self LEFT JOIN,
> > > such as:
> > >
> > > select a.id from seq a left join seq b on a.id + 1 = b.id
> > > where b.id is null;
> > >
> > >
> > > The result set will show any row in seq where there was no record
> > > with ID one greater than that record's value.
> > > You'll get back the last row, and any rows which don't have a
> > > next-higher neighbor.
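> > >
> > > (If the trailing row is a nuisance, grab the current max first with
> > > select max(id) from seq, then add something like "and a.id <> 30" to
> > > the where clause to filter it out.)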
> > >
> > > To get my example above to work, use:
> > >
> > > create table seq ( id int auto_increment not null primary key);
> > >
> > > insert into seq values();  ** repeat to get rows 1..30. **
> > >
> > > delete from seq where id between 10 and 11;
> > > delete from seq where id between 20 and 21;
> > > delete from seq where id = 23;
> > >
> > > select a.id from seq a left join seq b on a.id + 1 = b.id
> > > where b.id is null;
> > >
> > > +----+------+
> > > | id | id   |
> > > +----+------+
> > > |  9 | NULL |
> > > | 19 | NULL |
> > > | 22 | NULL |
> > > | 30 | NULL |
> > > +----+------+
> > >
> > > Note that this doesn't show where the gaps end.  You can further
> > > enhance the query by looking backward as well...getting fancy output:
> > >
> > > select a.id,
> > >        case when c.id is null and b.id is null then '<>'
> > >             else (case when c.id is null then '<'
> > >                        else (case when b.id is null then '>'
> > >                                   else '' end) end) end gaps
> > > from seq a
> > > left join seq b on a.id + 1 = b.id
> > > left join seq c on a.id - 1 = c.id
> > > where b.id is null or c.id is null;
> > >
> > > +----+------+
> > > | id | gaps |
> > > +----+------+
> > > |  1 | <    |
> > > |  9 | >    |
> > > | 12 | <    |
> > > | 19 | >    |
> > > | 22 | <>   |
> > > | 24 | <    |
> > > | 30 | >    |
> > > +----+------+
> > >
> > > Note here that there's a gap between 9 and 12, between 19 and 22,
> > > and between 22 and 24.
> > > There's also a gap before 1, and one after 30, but this just tells us
> > > where the range ends.
> > >
> > > NULLed outer joins are very handy.
> > >
> > > Kevin Fries
> > >
> > >
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: mos [mailto:[EMAIL PROTECTED]]
> > > > Sent: Thursday, May 30, 2002 1:18 PM
> > > > To: [EMAIL PROTECTED]
> > > > Subject: Finding holes in autoinc sequences
> > > >
> > > >
> > > > I'm going to be tackling this problem in a few days and I wanted to
> > > > bounce it off of a few MySQL heads first to see if it generates any
> > > > ideas. (or sparks?<g>)
> > > >
> > > > Here is the problem. I have an auto-inc column, and rows will get
> > > > deleted from the table; of course that will create holes in the
> > > > sequence, which is fine. But I want to track which rows have been
> > > > deleted by finding the holes.  (I will probably keep track of the
> > > > deleted rows as they get deleted, but occasionally I will need to
> > > > verify this by scanning the table.)
> > > >
> > > > Example:
> > > >
> > > > Original sequence of Rcd_Id: 1,2,3,4,5,6,7,8,9,10
> > > >
> > > > After deleting rows 5 and 9 we get:
> > > > Rcd_Id: 1,2,3,4,6,7,8,10
> > > >
> > > > Now is there any SQL statement that I can use to quickly and
> > > > efficiently find the 2 missing rows in this example? The only thing
> > > > I've come up with is to write a PHP program to loop through the
> > > > Rcd_Id's in order and see which ones are missing (the query would
> > > > fetch 10k rows at a time so it doesn't consume too much memory). The
> > > > table could get rather large (>1m rows) and I need something that
> > > > doesn't consume a lot of memory or time.
> > > > Does anyone have any ideas? TIA
> > > >
> > > > Mike
> > > >
> > > >
> > >
> > >
> > >
> >
>
>

