[SQL] FIFO Queue Problems

2002-11-01 Thread Chris Gamache
I'm having a race condition with a FIFO queue program that I've created...

CREATE TABLE fifo (
  id serial,
  data varchar(100),
  status int4 DEFAULT 0
);

I have a program that claims a row for itself

  my $processid = $$;
  my $sql_update = <<EOS;
UPDATE fifo
   SET status=$processid
 WHERE id = (SELECT min(id) FROM fifo WHERE status=0);
EOS

The problem occurs when two of the processes grab the exact same row at
the exact same instant. It happens roughly 1 out of 1000 times. I'm not
sure if setting the transactions to serializable would fix the problem
since it occurs between different postmasters.

My experience with Views and Triggers is minimal. Would they even help?
Could someone suggest a better FIFO queue method?

[SQL] Different size in the DATA directory

2002-11-01 Thread Maurício Sessue Otta



Hi, in my production server a "du -hs" in the DATA directory returns 21GB.
In a newly installed server, after I restored the dumps from the
production server, "du -hs" gives me just 8GB.

Why is there such a difference?

What should I do (besides buying bigger and bigger HDs)?

[]'s Mauricio


Re: [SQL] Different size in the DATA directory

2002-11-01 Thread dima
Hi, in my production server a "du -hs" in the DATA directory
returns 21GB
in a newly installed server, after I restored the dumps from the
production server, the "du -hs" gives me just 8GB
 
why is there such a difference?
 
what should I do (besides buying bigger and bigger HDs) ???
VACUUM?



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [SQL] Different size in the DATA directory

2002-11-01 Thread Maurício Sessue Otta
I do daily vacuums on the production server


| > Hi, in my production server a "du -hs" in the DATA directory
| > returns 21GB
| > in a newly installed server, after I restored the dumps from the
| > production server, the "du -hs" gives me just 8GB
| >  
| > why is there such a difference?
| >  
| > what should I do (besides buying bigger and bigger HDs) ???
| VACUUM?
| 
| 





Re: [SQL] Different size in the DATA directory

2002-11-01 Thread Stephan Szabo
On Fri, 1 Nov 2002, [iso-8859-1] Maurício Sessue Otta wrote:

> I do daily vacuums on the production server

You haven't said what version you're running, but if you're
using 7.2 and non-FULL vacuums, you'll also want to make sure
that the free space map settings in postgresql.conf are large
enough.  Also, you may need to use REINDEX to lower the space
used by indexes.





Re: [SQL] Different size in the DATA directory

2002-11-01 Thread Tim Perdue
Maurício Sessue Otta wrote:


Hi, in my production server a "du -hs" in the DATA directory
returns 21GB
in a newly installed server, after I restored the dumps from the
production server, the "du -hs" gives me just 8GB

why is there such a difference?

what should I do (besides buying bigger and bigger HDs) ???

[]'s Mauricio



I had this same problem with 7.1.x, even doing full vacuums on 
SourceForge.net last year.

I assumed after a while that there was some hidden bug where file bloat 
occurred, despite the vacuum. After 3 months or so, you'd have to 
dump/restore the entire db and you'd be golden for a while again. After 
the dump/restore process things seemed much snappier too, and vacuum ran 
almost instantly.

I haven't verified if this problem still occurs in 7.2.x, using vacuum full.

Tim




Re: [SQL] FIFO Queue Problems

2002-11-01 Thread Richard Huxton
On Friday 01 Nov 2002 2:56 pm, Chris Gamache wrote:
> I'm having a race condition with a FIFO queue program that I've created...

> I have a program that claims a row for itself

[code sample]

> The problem occurs when two of the processes grab the exact same row at
> the exact same instant. It happens roughly 1 out of 1000 times. I'm not
> sure if setting the transactions to serializable would fix the problem
> since it occurs between different postmasters.
>
> My experience with Views and Triggers is minimal. Would they even help?
> Could someone suggest a better FIFO queue method?

Check out SELECT FOR UPDATE - this locks the row and should do what you want. 
This means that 1 time in 1000 your query will fail and need to be retried.

HTH
-- 
  Richard Huxton




Re: [SQL] Selecting * from the base table but getting the inheriteds

2002-11-01 Thread Ross J. Reedstrom
On Wed, Oct 30, 2002 at 10:25:17AM +0100, James Adams wrote:
> Yea it would be easier to have everything in one table filling unused with
> nulls, but I was trying to avoid that because of the wasted space.
 
> But I think I'll do it that way after all  :~]
 
> Thanks for your help

Don't fret too much about the wasted space: NULL fields set a bit in
a bitmask in the header of the on-disk tuple, so they take up _no_
storage at all.

Ross




Re: [SQL] FIFO Queue Problems

2002-11-01 Thread Bruno Wolff III
On Fri, Nov 01, 2002 at 06:56:30 -0800,
  Chris Gamache <[EMAIL PROTECTED]> wrote:
> 
> The problem occurs when two of the processes grab the exact same row at the
> exact same instant. It happens roughly 1 out of 1000 times. I'm not sure if
> setting the transactions to serializable would fix the problem since it
> occurs between different postmasters.

I played with this a bit, and it looks like you want to use transaction
isolation level serializable and have the application retry when necessary.

Doing a select for update won't work cleanly. What will happen is that
you will get 0 rows returned when there is a conflict, and you will need
to detect that, the same as you would by changing the isolation level.

In the select for update case, you first rewrite the select to use ORDER BY
and LIMIT to pick the minimum row, since you can't use FOR UPDATE when
returning an aggregate. Then, when the matching row is found, there can be
lock contention with another update. If that happens, then when the other
update commits, the locked row is rechecked, no longer matches, and is
removed from the list of rows returned (leaving no rows).




Re: [SQL] FIFO Queue Problems

2002-11-01 Thread Tom Lane
Chris Gamache <[EMAIL PROTECTED]> writes:
> I have a program that claims a row for itself

>   my $processid = $$;
>   my $sql_update = <<EOS;
> UPDATE fifo
>    SET status=$processid
>  WHERE id = (SELECT min(id) FROM fifo WHERE status=0);
> EOS

> The problem occurs when two of the processes grab the exact same row at the
> exact same instant.

Probably the best fix is to do it this way:

BEGIN;
LOCK TABLE fifo IN EXCLUSIVE MODE;
UPDATE ... as above ...
COMMIT;

The exclusive lock will ensure that only one process claims a row
at a time (while not preventing concurrent SELECTs from the table).
This way you don't need to worry about retrying.

regards, tom lane
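[The lock-then-claim pattern above is easy to prototype without a running
PostgreSQL. Below is a rough Python sketch using the standard library's
SQLite, where BEGIN IMMEDIATE stands in for the exclusive table lock; the
table follows the fifo schema from the original post, everything else is
illustrative, not the poster's actual code.]

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
db.executescript("""
CREATE TABLE fifo (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  data TEXT,
  status INTEGER DEFAULT 0
);
INSERT INTO fifo (data) VALUES ('a'), ('b'), ('c');
""")

def claim(processid):
    # Analogue of: BEGIN; LOCK TABLE fifo IN EXCLUSIVE MODE; UPDATE ...; COMMIT;
    db.execute("BEGIN IMMEDIATE")  # take the write lock before reading min(id)
    db.execute(
        "UPDATE fifo SET status = ? "
        "WHERE id = (SELECT min(id) FROM fifo WHERE status = 0)",
        (processid,),
    )
    row = db.execute(
        "SELECT id FROM fifo WHERE status = ?", (processid,)
    ).fetchone()
    db.execute("COMMIT")
    return row[0] if row else None

print(claim(101))  # claims id 1
print(claim(102))  # claims id 2
```

Because the lock is taken before the min(id) subselect runs, two claimers
can never read the same "next" row, which is exactly what the exclusive
table lock buys in the PostgreSQL version.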




Re: [SQL] Database Design tool

2002-11-01 Thread Reinoud van Leeuwen
On Wed, 30 Oct 2002 14:35:23 + (UTC), [EMAIL PROTECTED]
(Johannes Lochmann) wrote:

>> Can anybody give me a reference to a database design tool with PostgreSQL
>> support?

You can use PowerDesigner from Sybase. Windows only. A fully
functional trial version (45 days) is downloadable at www.sybase.com.
It has:
- support for all major databases (PostgreSQL, Sybase, Oracle, MS SQL,
MySQL)
- support to generate triggers automatically
- nice reporting features
- the ability to reverse engineer existing databases (through ODBC or from
a file)
- central repository (database) storage, or storage of your model in
an XML file (which I use to auto-generate perl classes from my saved
model)
- UML modeling
- lots of other buzzword-compliant features ;-)

It is an expensive tool, but if you do serious design work, you'll
need a tool like this.

-- 
__
"Nothing is as subjective as reality"
Reinoud van Leeuwen   [EMAIL PROTECTED]
http://www.xs4all.nl/~reinoud




Re: [SQL] [GENERAL] Database Design tool

2002-11-01 Thread Diogo Biazus
Viacheslav N Tararin wrote:


Hi.

Can anybody give me a reference to a database design tool with PostgreSQL
support?

thanks.


There's another very good one called Data Architect:

http://www.thekompany.com/products/dataarchitect/

--
Diogo de Oliveira Biazus
[EMAIL PROTECTED]
Ikono Sistemas e Automação
http://www.ikono.com.br






[SQL] HA PostgreSQL

2002-11-01 Thread Charles H. Woloszynski
I am trying to develop a plan for a high-availability (HA)
implementation of a database using PostgreSQL.  One wrinkle: the data we
receive is performance data, and occasional loss of some measurements
is OK for us.  [I know, this is not in the main stream of database users
:-)].

I have looked at rserv and pg-replicator, and they seem to be targeted
at replication without specific HA support.  Replication is great for
lots of things, but I need HA more than ACID replication.

I have seen a proposed solution that uses *rsync* on the database files 
between machines and linux-ha to roll over the network access to the 
available machine.  My question is pretty simple; can something as 
*simple* as rsync make a full copy of the database consistently between 
the machines?  That seems just too easy.  

If I replace the process with something that uses pg_dump, rsyncs that
file (OK, now that seems more likely to generate a consistent database
image), and restores it into the slave, does this work?  Obviously,
this approach is kind of a hammer; the poor active server will be
dumping till the cows come home.

Any and all feedback and comments are greatly appreciated.  And, as 
always, thanks in advance,

Charlie

--


Charles H. Woloszynski

ClearMetrix, Inc.
115 Research Drive
Bethlehem, PA 18015

tel: 610-419-2210 x400
fax: 240-371-3256
web: www.clearmetrix.com







Re: [SQL] FIFO Queue Problems

2002-11-01 Thread Wei Weng
Do you need to unlock the table fifo when you are done?

On Fri, 2002-11-01 at 11:51, Tom Lane wrote:
> Chris Gamache <[EMAIL PROTECTED]> writes:
> > I have a program that claims a row for itself
> 
> >   my $processid = $$;
> >   my $sql_update = <<EOS;
> > UPDATE fifo
> >    SET status=$processid
> >  WHERE id = (SELECT min(id) FROM fifo WHERE status=0);
> > EOS
> 
> > The problem occurs when two of the processes grab the exact same row at the
> > exact same instant.
> 
> Probably the best fix is to do it this way:
> 
>   BEGIN;
>   LOCK TABLE fifo IN EXCLUSIVE MODE;
>   UPDATE ... as above ...
>   COMMIT;
> 
> The exclusive lock will ensure that only one process claims a row
> at a time (while not preventing concurrent SELECTs from the table).
> This way you don't need to worry about retrying.
> 
>   regards, tom lane
> 
-- 
Wei Weng <[EMAIL PROTECTED]>





Re: [SQL] FIFO Queue Problems

2002-11-01 Thread Bruno Wolff III
On Fri, Nov 01, 2002 at 14:18:37 -0500,
  Wei Weng <[EMAIL PROTECTED]> wrote:
> Do you need to unlock the table fifo when you are done?

Locks only apply for the duration of a transaction. When you commit or
roll back, the lock is released.




Re: [SQL] FIFO Queue Problems

2002-11-01 Thread Tom Lane
Wei Weng <[EMAIL PROTECTED]> writes:
> Do you need to unlock the table fifo when you are done?

That happens at COMMIT.

regards, tom lane




[SQL] making queries more efficient

2002-11-01 Thread Peter T. Brown
Hi. I have this query that I have been trying to reduce to a single
statement, but haven't figured out how. Am I missing something?

CREATE TEMP TABLE temp20561149207391 AS SELECT DISTINCT ON ("VisitorID")
"VisitorID","Type" FROM "ProgramEvent" WHERE "ProgramID" = 10 ORDER BY
"VisitorID","Created" DESC;INSERT INTO "VisitorPointer839" ("VisitorID")
SELECT temp20561149207391."VisitorID" FROM temp20561149207391  WHERE
temp20561149207391."Type" = 0


Thanks





Re: [SQL] unnecessary updates

2002-11-01 Thread Ross J. Reedstrom
Another way to approach this would be to add a trigger to your table
in the database, that rejects updates that don't change any values.
You'd basically have to hard code that same logic (new.a != old.a or
new.b != old.b ...) and it'd fire on every update, so you're talking
about trading computational cycles for savings in disk space (and vacuum
time). Like all things, it's a trade-off. Only way to tell for your case is
to try it, I'd guess. It'd be kind of interesting to know if this would
be useful, but you'd need to write a tool to analyze your tables before
vacuum, to determine if the dead tuples differ from the current values
(or from each other).

Ross
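[Ross's reject-no-op-updates trigger can be prototyped in a few lines.
The sketch below uses Python's built-in SQLite rather than PL/pgSQL —
RAISE(IGNORE) plays the role of a PostgreSQL BEFORE trigger returning
NULL, and the table, column, and trigger names are made up for the
example.]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE t (pk INTEGER PRIMARY KEY, a TEXT, b TEXT);
CREATE TABLE audit (real_updates INTEGER);
INSERT INTO audit VALUES (0);

-- Silently skip updates that change nothing (IS compares NULLs safely).
CREATE TRIGGER skip_noop BEFORE UPDATE ON t
WHEN new.a IS old.a AND new.b IS old.b
BEGIN
  SELECT RAISE(IGNORE);
END;

-- Count only the updates that actually went through.
CREATE TRIGGER count_updates AFTER UPDATE ON t
BEGIN
  UPDATE audit SET real_updates = real_updates + 1;
END;

INSERT INTO t VALUES (1, 'x', 'y');
""")

db.execute("UPDATE t SET a = 'x', b = 'y' WHERE pk = 1")  # no-op: skipped
db.execute("UPDATE t SET a = 'z', b = 'y' WHERE pk = 1")  # real change
print(db.execute("SELECT real_updates FROM audit").fetchone()[0])  # 1
```

The WHEN clause is the hard-coded new-versus-old comparison Ross
describes; it fires on every update, which is the computational cost
being traded against dead-tuple bloat.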

On Wed, Oct 30, 2002 at 01:02:26PM -0500, Andrew Perrin wrote:
> One strategy is to use some sort of middleware that takes care of this. On
> a project I did a few years ago, I used a perl module that read the record
> from Postgres and made it into a perl object. The object contained a
> variable, "changed", that reflected whether anything had actually changed
> in the object. Finally, there was an object method put() that took care of
> updating the database. put() checked the changed property and simply
> silently finished unless changed was true.
> 
> ap
> 
> --
> Andrew J Perrin - http://www.unc.edu/~aperrin
> Assistant Professor of Sociology, U of North Carolina, Chapel Hill
> [EMAIL PROTECTED] * andrew_perrin (at) unc.edu
> 
> 
> On Wed, 30 Oct 2002, chester c young wrote:
> 
> > When doing database work over the web, especially when many records are
> > on one page, *many* updates get posted to pg that do not change the
> > record.  Eg, the page may contain 50 records, the user changes 1, and
> > submits.
> > 
> > I assume that a no-change update takes the same resources as a "real"
> > update, ie, a new block is allocated to write the record, the record
> > written, indices are rerouted to the new block, and the old block
> > needs to be vacuumed later.  Is this true?
> > 
> > In SQL, the only way I know to prevent this thrashing is to write the
> > update with an elaborate where clause, eg, "update ... where pk=1 and
> > (c1!='v1' or c2!='v2' or ... )".  This adds cost both to the app server
> > and to pg - is the cost justified?
> > 
> > Finally, is there any way to flag pg to ignore no-change updates?  This
> > seems to me the most efficient way of handling the needless work.
> > 
> > thanks
> > chester
> > 
> > __
> > Do you Yahoo!?
> > HotJobs - Search new jobs daily now
> > http://hotjobs.yahoo.com/
> > 




[SQL] select syntax question

2002-11-01 Thread Wei Weng
This is what is on postgresql's manual page:

http://www.postgresql.org/idocs/index.php?sql-select.html

SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
* | expression [ AS output_name ] [, ...]
[ FROM from_item [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition [, ...] ]
[ { UNION | INTERSECT | EXCEPT } [ ALL ] select ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
[ FOR UPDATE [ OF tablename [, ...] ] ]
[ LIMIT { count | ALL } ]
[ OFFSET start ]

According to this syntax, SELECT DISTINCT COUNT(ID) FROM test
should be valid, while SELECT COUNT(DISTINCT ID) FROM test should not be.

In fact, both are valid.

Is there any difference between these two queries?

Thanks!


-- 
Wei Weng
Network Software Engineer
KenCast Inc.






Re: [SQL] select syntax question

2002-11-01 Thread Bruno Wolff III
On Fri, Nov 01, 2002 at 16:33:32 -0500,
  Wei Weng <[EMAIL PROTECTED]> wrote:
> This is what is on postgresql's manual page:
> 
> According to this syntax, SELECT DISTINCT COUNT(ID) FROM test
> should be valid, while SELECT COUNT(DISTINCT ID) FROM test should not be.
> 
> In fact, both are valid.
> 
> Is there any difference between these two queries?

They mean different things.

The first will return the number of rows with non-null values for id.
Since only one row is returned, it is guaranteed to be distinct, so the
distinct keyword has no effect.

The second form will return the number of distinct, non-null values for id
in the table.
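[Bruno's distinction is easy to check against any SQL engine; here is a
small sketch using Python's built-in SQLite — the table name comes from
the thread, the sample data is made up.]

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE test (id INTEGER)")
db.executemany("INSERT INTO test VALUES (?)", [(1,), (1,), (2,), (None,)])

# COUNT(id): number of rows with a non-null id
print(db.execute("SELECT COUNT(id) FROM test").fetchone()[0])           # 3
# COUNT(DISTINCT id): number of distinct non-null values
print(db.execute("SELECT COUNT(DISTINCT id) FROM test").fetchone()[0])  # 2
# DISTINCT COUNT(id): DISTINCT over the single aggregate row -- no effect
print(db.execute("SELECT DISTINCT COUNT(id) FROM test").fetchone()[0])  # 3
```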




Re: [SQL] making queries more efficient

2002-11-01 Thread Chad Thompson
This should be all you need.

insert into VisitorPointer839 ("VisitorID")
select VisitorID
from ProgramEvent
Where ProgramID = 10
and Type = 0
group by VisitorID

You don't need ORDER BY because the order rows go into the database doesn't
matter, only the order they come out.
I have found that GROUP BY works faster than DISTINCT in some cases.  You
may have to test it for your scenario.

Thanks
Chad

- Original Message -
From: "Peter T. Brown" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, November 01, 2002 3:24 PM
Subject: [SQL] making queries more efficient


> Hi. I have this query that I have been trying to reduce to a single
> statement, but haven't figured out how. Am I missing something?
>
> CREATE TEMP TABLE temp20561149207391 AS SELECT DISTINCT ON ("VisitorID")
> "VisitorID","Type" FROM "ProgramEvent" WHERE "ProgramID" = 10 ORDER BY
> "VisitorID","Created" DESC;INSERT INTO "VisitorPointer839" ("VisitorID")
> SELECT temp20561149207391."VisitorID" FROM temp20561149207391  WHERE
> temp20561149207391."Type" = 0
>
>
> Thanks
>
>





[SQL] Subtracting time fields

2002-11-01 Thread Kevin Old
Hello all,

I have two fields in my database, access_time and release_time.  I
would like to calculate the "duration" between these two values, but
can't figure out what to do.

I've tried something like this but it doesn't work:

select access_time, release_time, time(access_time) - time(release_time)
as duration from mastertbl limit 10

Any suggestions?

Thanks,
Kevin
-- 
Kevin Old <[EMAIL PROTECTED]>





Re: [SQL] making queries more efficient

2002-11-01 Thread Josh Berkus
Peter,

For the rest of our sakes, in the future, please format your SQL before 
posting it to request help. 

> CREATE TEMP TABLE temp20561149207391 AS SELECT DISTINCT ON ("VisitorID")
> "VisitorID","Type" FROM "ProgramEvent" WHERE "ProgramID" = 10 ORDER BY
> "VisitorID","Created" DESC;INSERT INTO "VisitorPointer839" ("VisitorID")
> SELECT temp20561149207391."VisitorID" FROM temp20561149207391  WHERE
> temp20561149207391."Type" = 0

Can be re-written as:

INSERT INTO "VisitorPointer839" ("VisitorID")
SELECT "VisitorID" FROM (
SELECT DISTINCT ON ("VisitorID") "VisitorID","Type" 
FROM "ProgramEvent" WHERE "ProgramID" = 10 
ORDER BY "VisitorID","Created" DESC ) v_first
WHERE v_first."Type" = 0;

Please also keep in mind that the SELECT DISTINCT ON syntax is a 
PostgreSQL-specific extension and is not portable.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco





Re: [SQL] HA PostgreSQL

2002-11-01 Thread Robert Treat
On Fri, 2002-11-01 at 13:26, Charles H. Woloszynski wrote:
> I am trying to develop a plan for a high-availability (HA) 
> implementation of a database using PostgreSQL.  One wrinkle; the data we 
> receive is performance data, and occassional loss of some measurements 
> is Ok for us.  [I know, this is not in the main stream of database users 
> :-)].
> 
> I have looked ar rserv and pg-replicator, and they seem to be targeted 
> at replication without specific HA support.  Replication is great for 
> lots of things; but I need HA more than ACID replication.  
> 
> I have seen a proposed solution that uses *rsync* on the database files 
> between machines and linux-ha to roll over the network access to the 
> available machine.  My question is pretty simple; can something as 
> *simple* as rsync make a full copy of the database consistently between 
> the machines?  That seems just too easy.  
> 

Well, some folks have argued that it is likely to cause extreme data
corruption, my understanding being that if even one file is off, you're
going to end up in trouble.  Granted, I don't know that this has ever
been proven one way or the other, but I buy the argument enough not to
rely on it.

> If I replace the process with something that uses pg_dump and rsync that 
> file (ok, now that seems more likely to generate a consistent database 
> image) that and restore that into the slave, does this work?  Obviously, 
> this approach is kinda a hammer approach; the poor active server will be 
> dumping till the cows come home.

It does seem more likely to give you a consistent db image, but you're
going to have some delay between snapshots, depending on how long it takes
to dump the database. On a small database this might be acceptable; on a
large enough database it will be unworkable.  I suppose you could run
concurrent pg_dumps, but I'd hate to think what that might do to the poor
db server.  Don't forget you'll have an issue actually loading the data
onto the secondary server as well. Personally, I think you'd be better off
setting up some type of master/slave replication and devising some
networking fail-over scripts to switch hosts if need be.
> 
> Any and all feedback and comments are greatly appreciated.  And, as 
> always, thanks in advance,
> 

If you haven't looked at dbbalancer yet, you might want to. Someone
mentioned it just a few days ago in a very similar thread on the general
list (IIRC).

Robert Treat





Re: [SQL] Subtracting time fields

2002-11-01 Thread Bruno Wolff III
On Fri, Nov 01, 2002 at 17:05:26 -0500,
  Kevin Old <[EMAIL PROTECTED]> wrote:
> Hello all,
> 
> I have two fields in my database access_time and release_time.I
> would like to calculate the "duration" between these two values, but
> can't figure out what to do.  
> 
> I've tried something like this but it doesn't work:
> 
> select access_time, release_time, time(access_time) - time(release_time)
> as duration from mastertbl limit 10

You should be able to just subtract release_time from access_time without
trying to call the time function. The type of the difference will be
an interval.
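[The subtract-and-get-a-duration behavior Bruno describes has a direct
analogue in Python's datetime, where subtracting two timestamps yields a
timedelta, much as PostgreSQL yields an interval; the sample times below
are made up.]

```python
from datetime import datetime

access_time = datetime(2002, 11, 1, 9, 0, 0)
release_time = datetime(2002, 11, 1, 9, 45, 30)

# Like PostgreSQL's timestamp - timestamp, this yields a duration type
# directly -- no conversion function needed.
duration = release_time - access_time
print(duration)                  # 0:45:30
print(duration.total_seconds())  # 2730.0
```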




Re: [SQL] making queries more efficient

2002-11-01 Thread Josh Berkus

Chad,

> Im not sure I can get my head around the difference between doing your
> subselect
> 
> INSERT INTO "VisitorPointer839" ("VisitorID")
> SELECT "VisitorID" FROM (
> SELECT DISTINCT ON ("VisitorID") "VisitorID","Type"
> FROM "ProgramEvent" WHERE "ProgramID" = 10
> ORDER BY "VisitorID","Created" DESC ) v_first
> WHERE v_first."Type" = 0;

This gives him a list of all VisitorIDs where the most recent instance of
that VisitorID is of Type = 0.  It's the equivalent of this, in SQL-92:

 INSERT INTO "VisitorPointer839" ("VisitorID")
SELECT "VisitorID" FROM
"ProgramEvent" pe1 WHERE EXISTS(
SELECT "VisitorID", MAX("Created")
FROM "ProgramEvent" pe2 WHERE "ProgramID" = 10
GROUP BY "VisitorID"
HAVING pe2."VisitorID" =  pe1."VisitorID"
AND MAX(pe2."Created") = pe1."Created")
AND "Type" = 0;

... which is what I would use, as I dislike database-specific extensions of 
ANSI SQL.

> And Just writing it out straight.
> 
> insert into VisitorPointer839 ("VisitorID")
> select VisitorID
> from ProgramEvent
> Where ProgramID = 10
> and Type = 0
> group by VisitorID

This just gives him a list of all VisitorIDs with a Type = 0, most recent or 
not.

> Thanks
> Chad "I wanna be Josh when i grow up" Thompson

 Thanks.   But heck, I'm only 32!  I'm not grown up yet!

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco





Re: [SQL] How do I get rid of these messages?

2002-11-01 Thread Bruce Momjian

7.3 beta has client_min_messages, which can be set to show WARNINGs
but not NOTICEs.

---

Bhuvan A wrote:
> > How do I get rid of the messages like "NOTICE:  CREATE TABLE / PRIMARY
> > KEY will create implicit index 'test_pkey' for table 'test'" coming out
> > from stderr when I run psql with my create table script?
> > 
> 
> AFAIK, by default it cannot be done even though syslog is enabled (somebody
> correct me if I'm wrong), since such messages are sent to the frontend in all
> cases. If you are ready to re-compile, I have the patch.  You can contact
> me.
> 
> regards,
> bhuvaneswaran
> 
> 
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [SQL] Different size in the DATA directory

2002-11-01 Thread Bruce Momjian
Tim Perdue wrote:
> I had this same problem with 7.1.x, even doing full vacuums on 
> SourceForge.net last year.
> 
> I assumed after a while that there was some hidden bug where file bloat 
> occurred, despite the vacuum. After 3 months or so, you'd have to 
> dump/restore the entire db and you'd be golden for a while again. After 
> the dump/restore process things seemed much snappier too, and vacuum ran 
> almost instantly.
> 
> I haven't verified if this problem still occurs in 7.2.x, using vacuum full.

Tim, I guess your problem is dead index pages that can't be reclaimed,
and it isn't fixed in 7.3.  Only REINDEX fixes it, and we have a
/contrib/reindexdb script in 7.3.

--
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [SQL] select syntax question

2002-11-01 Thread Bruce Momjian

Yes, a big difference.  The first returns the distinct COUNT values, but
there is only one, of course.  The second returns the number of distinct
values in the column.

---

Wei Weng wrote:
> This is what is on postgresql's manual page:
> 
> http://www.postgresql.org/idocs/index.php?sql-select.html
> 
> SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
> * | expression [ AS output_name ] [, ...]
> [ FROM from_item [, ...] ]
> [ WHERE condition ]
> [ GROUP BY expression [, ...] ]
> [ HAVING condition [, ...] ]
> [ { UNION | INTERSECT | EXCEPT } [ ALL ] select ]
> [ ORDER BY expression [ ASC | DESC | USING operator ] [, ...] ]
> [ FOR UPDATE [ OF tablename [, ...] ] ]
> [ LIMIT { count | ALL } ]
> [ OFFSET start ]
> 
> According to this syntax, SELECT DISTINCT COUNT(ID) FROM test
> should be valid while SELECT COUNT(DISTINCT ID) FROM test otherwise.
> 
> while in fact, both are valid.
> 
> Is there any difference between this two queries?
> 
> Thanks!
> 
> 
> -- 
> Wei Weng
> Network Software Engineer
> KenCast Inc.
> 
> 
> 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




[SQL] Timezone issue with date_part

2002-11-01 Thread Ken Kennedy
In trying to debug some code, I've come across this SQL issue that's
causing my problem.

I've got two epoch time values that I have to compare. Time #1 seems
to be working straightforwardly enough, but a tricky timezone-related
error has surfaced with Time #2.

Looking at the straight timestamp:

kenzoid=# select max(posted_date) from pinds_blog_entries
kenzoid-#  where  package_id = '2969'
kenzoid-#  and    draft_p = 'f'
kenzoid-#  and    deleted_p = 'f';
            max
----------------------------
 2002-11-01 09:56:41.474084

That's correct, for my timezone. (EST5EDT)

The query that's in the script now to return that as an epoch time is:

kenzoid=# select coalesce(date_part('epoch', max(posted_date)), 0) as last_update
kenzoid-# from   pinds_blog_entries
kenzoid-# where  package_id = '2969'
kenzoid-# and    draft_p = 'f'
kenzoid-# and    deleted_p = 'f';
   last_update
------------------
 1036144601.47408

I finally realized something was amiss, and reconstituted that epoch
value:

kenzoid=# select timestamp 'epoch' + interval '1036144601.47408 seconds';
          ?column?
------------------------------
 2002-11-01 04:56:41.47408-05

I'm five hours off...my timezone value, I imagine.
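[That five-hour gap is exactly what you get when a wall-clock time in
EST is decoded as if it were UTC. A sketch with Python's datetime,
using the timestamp from the session above:]

```python
from datetime import datetime, timezone, timedelta

EST = timezone(timedelta(hours=-5))

# The wall-clock value from the query, interpreted correctly as EST:
correct = datetime(2002, 11, 1, 9, 56, 41, tzinfo=EST).timestamp()
# The same wall-clock value treated as UTC -- this is what the epoch
# value returned by date_part decodes to:
wrong = datetime(2002, 11, 1, 9, 56, 41, tzinfo=timezone.utc).timestamp()

print(int(wrong))                # 1036144601, the value from the query
print((correct - wrong) / 3600)  # 5.0 -- the five-hour offset
```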

I tried putting the TIMESTAMP into the date_part, but no joy:

kenzoid=# select coalesce(date_part('epoch',
kenzoid-# TIMESTAMP max(posted_date)), 0) as last_update
kenzoid-# from   pinds_blog_entries
kenzoid-# where  package_id = '2969'
kenzoid-# and    draft_p = 'f'
kenzoid-# and    deleted_p = 'f';
ERROR:  parser: parse error at or near "max"

I kinda figured that. 

So I'm stuck, without making two calls. If I call to the db and get
max(posted_date), and then turn around and call the date_part
with that value, things work. But I'm trying to avoid the two db
calls. Any ideas? Thanks!!

-- 

Ken Kennedy | http://www.kenzoid.com| [EMAIL PROTECTED]
