[SQL] slow query - only uses indices

2007-12-24 Thread Marc
Hey Folks,

This query is running really slowly.  Sometimes it is much slower than others.  I
have a feeling that there may be contention on one of the indices it is
using.  In the explain plan, it estimates 2 rows but actually
finds 228 rows.  Is that really bad?


Query and explain plan are below.  It seems to spend most of its time in:
Index Scan using i_tablea_atextfield on tablea ru
(cost=0.00..2265.28 rows=2 width=12) (actual time=
0.624..881.313 rows=228 loops=1)

Any suggestions?

SELECT z.atextfield,
   z.btextfield,
   z.abigintfield,
   p.achar255field, p.ptextfield,
   z.achar1field, u.aboolfield,
   z.textfield1,
   z.achar8field,
   z.achar16field
FROM tablea ru
INNER JOIN tableb u ON ( u.atextfield = ru.anothertextfield )
INNER JOIN tablec z ON u.atextfield = z.atextfield
INNER JOIN tabled p ON p.id = z.pid
LEFT JOIN tablee m ON u.atextfield = m.atextfield  AND m.boolcol5
WHERE ru.atextfield = 'thelookupval'
  AND u.boolcol1 IS TRUE
  AND u.boolcol2 IS FALSE
  AND ru.achar1field <> 'N'
  AND ru.boolcol3 IS FALSE
  AND NOT EXISTS ( SELECT 'x' FROM tablea fru WHERE fru.atextfield =
ru.anothertextfield AND fru.boolcol3 IS TRUE )
  AND EXISTS ( SELECT 'x' FROM tablef s WHERE s.atextfield = ru.atextfield AND
s.boolcol4 IS TRUE )
  ORDER by ru.anothertextfield asc



Sort  (cost=2341.96..2341.97 rows=2 width=146) (actual time=
1118.810..1119.098 rows=228 loops=1)
  Sort Key: ru.anothertextfield
  ->  Nested Loop Left Join  (cost=0.00..2341.95 rows=2 width=146) (actual
time= 0.930..1117.258 rows=228 loops=1)
->  Nested Loop  (cost=0.00..2313.36 rows=2 width=131) (actual time=
0.842..914.554 rows=228 loops=1)
  ->  Nested Loop  (cost=0.00..2296.65 rows=2 width=93) (actual
time= 0.765..901.916 rows=228 loops=1)
->  Nested Loop  (cost=0.00..2281.98 rows=2 width=72)
(actual time=0.690..893.648 rows=228 loops=1)
                  ->  Index Scan using i_tablea_atextfield on tablea
ru  (cost=0.00..2265.28 rows=2 width=12) (actual
time=0.624..881.313 rows=228 loops=1)
Index Cond: (atextfield =
'thelookupval'::text)
Filter: ((achar1field <> 'N'::bpchar) AND
(boolcol3 IS FALSE) AND (NOT (subplan)) AND (subplan))
SubPlan
  ->  Index Scan using tablef_pkey on tablef
s  (cost=0.00..8.34 rows=1 width=0) (actual time=0.016..0.016 rows=1
loops=228)
Index Cond: (atextfield = $1)
Filter: (boolcol4 IS TRUE)
  ->  Bitmap Heap Scan on tablea fru  (cost=
4.61..90.41 rows=1 width=0) (actual time=3.590..3.590 rows=0 loops=243)
Recheck Cond: (atextfield = $0)
Filter: (boolcol3 IS TRUE)
->  Bitmap Index Scan on
i_tablea_atextfield  (cost= 0.00..4.61 rows=22 width=0) (actual time=
0.044..0.044 rows=17 loops=243)
  Index Cond: (atextfield = $0)
  ->  Index Scan using tablec_pkey on tablec z
(cost= 0.00..8.34 rows=1 width=60) (actual time=0.047..0.049 rows=1
loops=228)
Index Cond: (z.atextfield =
ru.anothertextfield)
->  Index Scan using tabled_pkey on tabled p  (cost=
0.00..7.32 rows=1 width=29) (actual time=0.030..0.031 rows=1 loops=228)
  Index Cond: (p.id = z.pid)
  ->  Index Scan using tableb_pkey on tableb u  (cost=
0.00..8.34 rows=1 width=38) (actual time=0.049..0.051 rows=1 loops=228)
Index Cond: (u.atextfield = ru.anothertextfield)
                Filter: ((boolcol1 IS TRUE) AND (boolcol2 IS FALSE))
->  Index Scan using tablee_atextfield_idx on tablee m  (cost=
0.00..14.28 rows=1 width=39) (actual time=0.883..0.883 rows=0 loops=228)
  Index Cond: (u.atextfield = m.atextfield)
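Editorial note: the rows=2 vs. rows=228 misestimate on tablea.atextfield usually points at stale or too-coarse column statistics. A hedged sketch of one common remedy, reusing the table and column names from the query above (the target value 100 is illustrative):

```sql
-- Collect a finer-grained histogram for the misestimated column,
-- then refresh the statistics and re-check the plan.
ALTER TABLE tablea ALTER COLUMN atextfield SET STATISTICS 100;
ANALYZE tablea;

EXPLAIN ANALYZE
SELECT 1 FROM tablea ru WHERE ru.atextfield = 'thelookupval';
```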


[SQL] datatype SET

2000-08-20 Thread Marc Roos



Does anyone have information on how to create a SET datatype in
Postgres?
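Editorial note: PostgreSQL has no MySQL-style SET column type. A hedged sketch of the classic relational emulation, using a lookup table plus a junction table; all names here are hypothetical:

```sql
-- Table of allowed set members.
CREATE TABLE color (name text PRIMARY KEY);
INSERT INTO color VALUES ('red');
INSERT INTO color VALUES ('green');
INSERT INTO color VALUES ('blue');

CREATE TABLE item (id serial PRIMARY KEY, label text);

-- Each item may carry any subset of the allowed colors.
CREATE TABLE item_color (
    item_id integer REFERENCES item,
    color   text    REFERENCES color,
    PRIMARY KEY (item_id, color)
);
```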








[SQL] Outer Joins

2000-10-31 Thread Marc Rohloff

I've been looking at the open-source databases for a project I am working on and while 
reading about Postgres I saw that they do not support outer joins yet. I was intrigued 
by their solution of using a union query.

Something Like:
select a.col1, b.col2 from a,b where a.col1 = b.col2
union
select a.col1, NULL from a where a.col1 not in (select b.col2 from b)

But I was wondering if the following would work (it does in some other databases)

select a.col1, b.col2 from a,b 
where a.col1 = b.col2
   or  b.col2 is null

or maybe even

select a.col1, b.col2 from a,b 
where a.col1 = b.col2
   or  a.col1 not in (select b.col2 from b)

These would seem to be far more efficient than a union query
(I would try this but I don't have a Unix box at the moment to install PostgreSQL on!)
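Editorial note for the archives: PostgreSQL gained explicit outer-join syntax in 7.1, making the UNION workaround unnecessary. Note also that the OR-based variants above do not emulate an outer join: a plain cross join produces no NULL-extended rows for `b.col2 is null` to match unless b itself stores NULLs. A sketch using the tables from the example:

```sql
-- Every row of a, with b.col2 where a match exists and NULL otherwise.
SELECT a.col1, b.col2
FROM a
LEFT JOIN b ON a.col1 = b.col2;
```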

Marc Rohloff







Re: [SQL] Outer Joins

2000-11-01 Thread Marc Rohloff

>> select a.col1, b.col2 from a,b
>> where a.col1 = b.col2
>>or  a.col1 not in (select b.col2 from b)

>This would work, but it would be *much* slower than a UNION query.  "Not
>In" queries are perhaps the slowest you can run; see the earlier thread
>"Query Problem" for a discussion.  UNION queries are, in fact, very fast
>... just awkward to code and manipulate.

Why should this be slower, since the UNION query still has an identical NOT IN clause?
This version is far easier (for me) to read.
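Editorial note (hedged; behavior varies by version): in old releases `NOT IN` re-executed its subquery per outer row, and in the UNION form the planner at least confined that cost to the anti-join branch. A `NOT EXISTS` rewrite usually plans better in either form:

```sql
-- Equivalent anti-join form; also behaves more predictably
-- than NOT IN when b.col2 can contain NULLs.
SELECT a.col1, NULL AS col2
FROM a
WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.col2 = a.col1);
```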

Marc





[SQL] Returning Recordsets from Stored-procs

2000-11-06 Thread Marc Rohloff

Is there any way to return a recordset from a stored procedure in Postgres, so that it
can be used as a kind of view or select?
I know that you can do this in Interbase or MS-SQL.

I have seen that you can return a complete record but that's not really the same thing.
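Editorial note: PostgreSQL's answer is set-returning functions. A hedged sketch (table and function names are hypothetical); calling such a function in the FROM clause requires PostgreSQL 7.3 or later:

```sql
-- A SQL function returning a set of rows of an existing table's type.
CREATE FUNCTION recent_orders(integer) RETURNS SETOF orders AS '
    SELECT * FROM orders WHERE customer_id = $1;
' LANGUAGE sql;

-- Used like a table or view:
SELECT * FROM recent_orders(42);
```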

Marc Rohloff




[SQL] postgres

2000-12-13 Thread Marc Daoust

Hi,

I am searching for a DB that would work with our product and have been told
to have a look at Postgres.  Would you be able to forward me any information on
your product and/or point me to where I might be able to find some?


Thank you in advance for your help!

Marc


Get your own FREE, personal Netscape WebMail account today at 
http://home.netscape.com/webmail



Re: [Re: [SQL] postgres]

2000-12-15 Thread Marc Daoust

Thank you very much, Roberto.

It appears that your co-workers are not interested in potential funding.
As for the rude ones... maybe people like myself were given this email
address; ever think of that?

A potential client that is having second thoughts.




Roberto Mello <[EMAIL PROTECTED]> wrote:
On Wed, Dec 13, 2000 at 04:44:55PM -0800, Josh Berkus wrote:
> Mr. Daoust,
> 
>   You have reached the PostgreSQL SQL developers mailing list.  We are
> not PostgreSQL sales people, and we have no marketing information to
> sell you.  Please have a clue.

Errr... forgive me, but maybe we could help the PostgreSQL team by
kindly directing people to their corporate website, so they can purchase
services that are going to fund the project.
As long as the project is happy I am happy.

Just some thoughts.

-Roberto

-- 
Computer Science                    Utah State University
Space Dynamics Laboratory           Web Developer
USU Free Software & GNU/Linux Club  http://fslc.usu.edu
My home page - http://www.brasileiro.net/roberto






[SQL] MD5 use in PL/Perl

2000-12-28 Thread Marc Rassbach



I'd like to be able to store the local database of usernames and passwords
only as MD5 hashes (in case the black hats come to visit... I'd
like to make life hard for them).  Using AuthPG, I should be able to create
a SQL call to Postgres... but there is no native MD5 hashing function.

In my ideal blue-sky world... the SQL call would look like this:

SELECT name FROM Sample_table WHERE ( (userid='12345') AND
(userhashed=md5out('abc')) )

With the sample table looking like this:
Sample_table:
 name | userid | userhashed
 fred | 12345  | 900150983cd24fb0d6963f7d28e17f72

I'd get the string 'fred' in name from Sample_table.


Idea 1) A call to a shell script.  A question was asked back in 1999 whether
there was a way to use a shell script in an SQL call... that person got
no public responses.  Moved on to...
Idea 2) Use PL/Perl to take in the text to be hashed and output the
hash.  Read the docs, looked on the list for more examples...


This perl code works as I'm expecting.
use MD5;
my $mdval = new MD5;
my $result ;
my $out;
$mdval->add('abc');
$result = $mdval->digest();
$out= unpack("H*" , $result );
print $out;

Attempting to translate this to PL/Perl:

settle=# create function md5out3(varchar) returns varchar(32) as '
settle'# use MD5;
settle'# my $mdval = new MD5;
settle'# my $result ;
settle'# my $out;
settle'# $mdval->add($_[0]);
settle'# $result = $mdval->digest();
settle'# $out= unpack("H*" , $result );
settle'# return $out;'
settle-#  LANGUAGE 'plperl';
CREATE
settle=# select md5out3('fred');
ERROR:  creation of function failed : require trapped by operation mask at
(eval 6) line 2.


So...

What did I do wrong WRT PL/Perl?  (Let me guess... having Perl load Perl
modules causes breakage.)  Should I be trying something different
to get to my desired end goal?
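Editorial note: the "require trapped by operation mask" error comes from trusted PL/Perl, which forbids `use`/`require` by design. Two hedged alternatives: the untrusted plperlu language (added in later releases) permits loading modules, and from PostgreSQL 7.4 on a built-in md5() function makes the custom function unnecessary:

```sql
-- With the built-in md5() (PostgreSQL 7.4+), no PL/Perl is needed.
-- md5('abc') = '900150983cd24fb0d6963f7d28e17f72', matching the sample row.
SELECT name
FROM Sample_table
WHERE userid = '12345'
  AND userhashed = md5('abc');
```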







[SQL] Extracting user db tabel info from system tables???

2001-01-05 Thread Marc Cromme
 prey | mac | {"=r","mac=rw"} | pred     | varchar | -1 | t | prey_pkey | t    | t
 prey | mac | {"=r","mac=rw"} | pred_age | int8    |  8 | t | prey_pkey | t    | t
 prey | mac | {"=r","mac=rw"} | prey     | varchar | -1 | t | prey_pkey | t    | t
 prey | mac | {"=r","mac=rw"} | prey_age | int8    |  8 | t | prey_pkey | t    | t
 prey | mac | {"=r","mac=rw"} | wstom    | float8  |  8 | f | NULL      | NULL | NULL
 prey | mac | {"=r","mac=rw"} | stomcon  | float8  |  8 | f | NULL      | NULL | NULL
(7 rows)


QUESTION 4: How do I also extract information on foreign keys from the
system tables,
and add two columns to the above table, like the following?

fkey   | ftable
---+-
pred_pkey  | pred
pred_pkey  | pred
pred_pkey  | pred
NULL   | NULL
NULL   | NULL
NULL   | NULL
NULL   | NULL
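Editorial note: in releases from 7.3 on, foreign keys live in the pg_constraint catalog, which makes this straightforward (earlier versions stored them only as pg_trigger entries). A hedged sketch, assuming the prey table from above:

```sql
-- List foreign-key constraints on "prey" and the tables they reference.
SELECT c.conname AS fkey,
       f.relname AS ftable
FROM pg_constraint c
JOIN pg_class r ON r.oid = c.conrelid
JOIN pg_class f ON f.oid = c.confrelid
WHERE c.contype = 'f'
  AND r.relname = 'prey';
```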



Thank you very much in advance for any hints on how to juggle around
with the PostgreSQL system tables.

Yours,

Marc Cromme
[EMAIL PROTECTED]



[SQL] Problem using IP functions

2001-05-12 Thread Marc Lamothe

Hi,

I'm having trouble using the host() and netmask() functions within a select
query.  For some reason, the following query returns 1 row:

ipdb=> select id, subnet_number from subnet where subnet_number =
'216.46.13.0';
 id | subnet_number
+---
 96 | 216.46.13.0
(1 row)

Yet, if I replace the ip string literal with host('216.46.13.0/24'), I get
no results.  ie:

ipdb=> select id, subnet_number from subnet where subnet_number =
host('216.46.13.0/24');

 id | subnet_number
+---
(0 rows)

Even though host('216.46.13.0/24') evaluates to 216.46.13.0

ipdb=> select host('216.46.13.0/24');
     host
-------------
 216.46.13.0
(1 row)

The subnet_number column is a varchar(16) which I assume you can compare
with a text data type, which is what host() returns.  Just to be sure, I
tried casting everything to type text, but that didn't do the trick.

ipdb=> select texteq(host('216.46.13.0/24')::text, '216.46.13.0'::text);
 texteq

 f
(1 row)

Any insight would be greatly appreciated.
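Editorial note, hedged: rather than chasing an invisible difference between the varchar column and the text that host() returns (trailing whitespace or padding is a common culprit), it is usually cleaner to compare in the native inet domain, or to store the column as inet/cidr in the first place:

```sql
-- Compare as inet values instead of strings; casts shown explicitly.
SELECT id, subnet_number
FROM subnet
WHERE subnet_number::inet = host('216.46.13.0/24')::inet;

-- Or check the stored strings for stray whitespace:
SELECT id FROM subnet WHERE subnet_number <> trim(subnet_number);
```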

Marc



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[SQL] Select most recent record?

2001-05-16 Thread Marc Sherman

Hi, I was hoping I could get some help with a select statement.

I have a log table with three columns: id int4, timestamp datetime,
value int4.

For any given ID, there will be a large number of rows, with
different timestamps and values.

I'd like to select the newest (max(timestamp)) row for each id,
before a given cutoff date; is this possible?

The best I've been able to come up with is the rather ugly (and
very slow):

select * from log as l1 where timestamp in
(select max(timestamp) from log where id=l1.id and
timestamp<'2001-01-01' group by id);

There must be a better way to do this; any tips?
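Editorial note: PostgreSQL's DISTINCT ON extension handles this "latest row per id" pattern directly. A sketch against the log table described above:

```sql
-- One row per id: the row with the greatest timestamp before the cutoff.
SELECT DISTINCT ON (id) id, timestamp, value
FROM log
WHERE timestamp < '2001-01-01'
ORDER BY id, timestamp DESC;
```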

Thanks,
- Marc





RE: [SQL] Select most recent record?

2001-05-16 Thread Marc Sherman

From: Tom Lane [mailto:[EMAIL PROTECTED]]
> 
> "Marc Sherman" <[EMAIL PROTECTED]> writes:
> > I'd like to select the newest (max(timestamp)) row for each id,
> > before a given cutoff date; is this possible?
> 
> select * from log order by timestamp desc limit 1;

Heh.  I obviously simplified my situation too much.

This is closer to what I've really got:

create table user (userid int4 primary key, groupid int4);
create table log (userid int4, timestamp datetime, value int4);

I need to select sum(value) for each group, where the values chosen
are the newest log entry for each group member that is before a cutoff 
date (April 1, 2001).

Here's what I'm currently using:

select user.groupid, sum(l1.value)
from log as l1, user
where user.userid=l1.userid
and l1.timestamp in (
select max(timestamp) from log
where log.timestamp<'2001-04-01'
and log.userid=l1.userid)
group by user.groupid;

When I first posted, this was _very_ slow.  I've since improved
it by adding an index on log(userid,timestamp) - now it's just
slow.  If anyone knows how to make it faster, I'd appreciate it.
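Editorial note, hedged and untested against this schema: a DISTINCT ON subquery avoids the correlated max() probe per row. Note that "user" is a reserved word and may need quoting:

```sql
SELECT u.groupid, sum(latest.value)
FROM "user" u,
     -- newest pre-cutoff log row per user
     (SELECT DISTINCT ON (userid) userid, value
      FROM log
      WHERE timestamp < '2001-04-01'
      ORDER BY userid, timestamp DESC) AS latest
WHERE u.userid = latest.userid
GROUP BY u.groupid;
```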

- Marc





RE: [SQL] Select most recent record?

2001-06-21 Thread Marc Sherman

From: Mark Hamby [mailto:[EMAIL PROTECTED]]
> 
> Marc,
> Did you ever get your problem solved to your satisfaction?
> We have a very similar problem with a historical database
> containing typically 5K id and updates to data every few
> seconds.  We tried unsuccessfully to optimize queries
> such as those already suggested to you.  We found the best
> means to quickly query the data valid at any given time was
> to:
[snip]

By adding an index, I was able to get the performance of the
query within a reasonable threshold; it's still slow, but
acceptable.  I'll bear your suggestion in mind for next time
I run into this kind of problem, though, if my sol'n can't
handle the data.  Thanks,

- Marc





Re: [SQL] Index of a table is not used (in any case)

2001-10-23 Thread Marc Spitzer

In article <[EMAIL PROTECTED]>, Josh Berkus wrote:
> Reinier,
> 
> For future notice, [SQL] is the correct list for this kind of inquiry.
> Please do not post it to [HACKERS].  And please don't cross-post ... it
> results in a lot of needless duplication of effort.
> 
>> I have defined a table and the necessary indices.
> 
>> Is the order of index creation relevant? I.e., should I create the
>> indices before inserting
>> entries or the other way around?
> 
> Ummm ... not to be obvious, or anything, but did you VACCUUM ANALYZE
> after populating your table?
> 
> There's also some special steps to take if you are regularly deleting
> large numbers of records.

Could you tell me what those steps are or where to find them?  I have
a db that I delete about 1 million records a day from in a batch job.
The only special thing I do is reindex the table involved every few
days, to reclaim the space burned by the indexes not reclaiming
space on deletion of rows.  What other good and useful things could I
do?
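Editorial note, for the archives: a hedged maintenance sketch for tables with heavy daily deletes. Version-dependent: on releases before 7.4, index pages freed by DELETE were not reused, so periodic REINDEX mattered more; the table name is a placeholder:

```sql
-- After the nightly batch delete:
VACUUM ANALYZE big_table;   -- mark dead rows reusable, refresh statistics

-- Periodically, to reclaim index space (most important pre-7.4):
REINDEX TABLE big_table;
```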

Thanks 

marc


> 
> -Josh
> 
> __AGLIO DATABASE SOLUTIONS___
>Josh Berkus
>   Complete information technology  [EMAIL PROTECTED]
>and data management solutions   (415) 565-7293
>   for law firms, small businessesfax 621-2533
> and non-profit organizations.  San Francisco
> 



Re: [SQL] Slow SELECT -> Growing Database

2002-06-27 Thread Marc Spitzer

On Thu, Jun 27, 2002 at 04:24:04PM +0100, Marcos Garcia wrote:
> On Mon, 2002-06-24 at 22:43, Stephan Szabo wrote:
> > On 24 Jun 2002, Marcos Garcia wrote:
> > 
> > Hmm, that should only happen if you're doing alot of updates or deletes
> > I would guess (updates would cause it to slow down as well since it's
> > similar to a delete and insert under MVCC).  And frequent normal vacuums
> > should do that as well unless the table has high turnover.
> > 
> > Well, if you haven't yet, you might try upping the sort_mem and
> > shared_buffers amounts, although I think the former would only
> > potentially cut down the difference between 32s and 59s and the
> > latter would probably only help on a later use of the call if the
> > buffer is big enough to hold a significant portion of the pages.
> > 
> 
> 
> The problem isn't in the select.
> 
> I realize that my database is growing and growing.
> 
> I've two tables that have, lets say, 120.000 records each, and:
> - delete about 30.000 records a day from each table
> - insert about 30.000 records a day on each table
> - update each record at least 4 four times
>  

If you have a lot of change going on in your db, do not forget
to reindex your tables every so often.  Index space is not
reclaimed by vacuum, and the first time can take a long time.  Also,
in 7.2+ vacuum got a new keyword 'full': 'vacuum full' reclaims
disk space as in 7.1, while the default vacuum just marks rows that
were deleted as reusable.

Looking at your numbers, you have 60,000 definite index entries
created each day, per index.  And you have 120,000 x 4 (min) =
480,000 index entries created per index; if I remember correctly,
an update is handled by inserting a new row and deleting the old
row.  So it looks like 540,000 index entries changed per day.

good luck

marc


> I've two other ones, that were mentioned in my previous emails, that
> have 12.000 records each, and:
> - insert 48 records a day in each table
> - =~ 120.000 updates in the last inserted records.
> 
> Track the problem:
> 
> # df -h /var/lib/pgsql-> 7.8 GB (I create this database 4 month's ago)
> 
> # pg_dump dbnane > dbname.dump
> 
> # dropdb dbname
> 
> # createdb dbname
> 
> # psql dbaname < dbname.dump
> 
> # df -h /var/lib/pgsql-> 140 M
> 
> I don't understand why the database is growing
> And is still growing.
> I make a vacuum -z -d dbname everyday (I checked if it really runs).
> 
> The only reason, for the growing of the database space, that i can see
> for now, is described in the following lines.
> 
> I've some perl programs that are concurrent in the access to the
> database, so i've have to make "SELECT FOR UPDATE". The algorithm of the
> program is:
> 
> Autocommit = 0;
> eval {
>   select id from table where state=1 for update limit 10;
>   update table set locked = true where id in (?);
> };
> if (ERROR){
>   ROLLBACK;
> }else{
>   COMMIT;
> }
> Autocommit = 1;
> 
> 
> What are the major reasons for the growing of the database disk space?
> 
> Maybe the following information is important:
> 
> dbname> select relname, relpages,reltuples from pg_class order by relpages desc 
>limit 10;
> 
>  relname | relpages | reltuples 
> -+--+---
>  sms_recv|30911 | 46801
>  sms_send| 7026 | 49119
>  sms_recv_unique_idclimsgidclien | 4561 | 46801
>  sms_recv_pkey   | 3647 | 46801
>  sms_recv_msgidclient_idx| 3615 | 46801
>  recv_total  | 1864 |  8120
>  send_total  | 1378 | 12315
>  sms_send_pkey   |  991 | 49119
>  sms_send_idclient_idx   |  913 | 49119
>  recv_total_idclismsdate |  686 |  8120
> (10 rows)
> 
> 
> 
> I really appreciate your help,
> 
> thanks,
> 
> M.P.Garcia
> 
> 
> -- 
> M.P.Garcia
> PT Inovação, SA
> Serviços e Redes Móveis 
> Rua José Ferreira Pinto Basto - 3810 Aveiro
> Tel: 234 403 253  -  Fax: 234 424 160
> E-mail: [EMAIL PROTECTED]










Re: [SQL] XML to Postgres conversion

2002-07-12 Thread Marc Spitzer

On Thu, Jul 11, 2002 at 09:23:39AM -0500, [EMAIL PROTECTED] wrote:
> Look at contrib/xml at
> http://developer.postgresql.org/cvsweb.cgi/pgsql/contrib/xml/.  I never
> used this, but it might be useful.
> 
> George Essig
> 
> > Hello.
> >
> > I am trying to figure out how to import xml documents into a postgres
> > database.  I am running PostgreSql 7.1.3 on Red Hat 7.2 at the moment.
> > I have several dynamic xml documents that I want imported into the
> > database on a regular basis.  From my research so far, I know that
> > there is middleware available to perform this, but am having some
> > difficulty in finding the actual applications.  I am trying to stay
> > with open source applications, if possible.  Can anyone give me any
> > suggestions or resources to pull from?
> >
> > Thanks,
> >
> > N. Hill

tDOM and nstcl might be what you need, plus a little coding.

marc

> 
> 



Re: [SQL] Editor for pgsql

2002-07-22 Thread Marc Spitzer

On Mon, Jul 22, 2002 at 04:09:21PM -0400, James Orr wrote:
> On Monday 22 July 2002 12:27 pm, Josh Berkus wrote:
> 
> > Me, I use Kate, an MDI text editor from the KDE crew, and CVS for
> > version control.   Thanks, KDE guys!But, after 3 years of Postgres,
> > I'm pretty fluent in PL/pgSQL.  I even double-quote without thinking
> > about it.
> 
> How do you use CVS on your database?  I recently started doing this, and i'm 
> wondering how other people handle it.
> 
> Basically I create a sql folder with three sub-folders tables, views
> and functions. I have a file for each table in tables, each view in
> views and for each trigger and/or function in functions.
> 
> For the actual editing? I'm a vi fan myself :).  If i'm using the graphical 
> vim I can even do CVS operations with a custom menu.
> 
> - James

James,

That sounds very ugly, I will usually have 1-4 files per db.  Either
everything goes into 1 file, drops at the front then creates.  Or
2 files, 1 for ddl( create/drop table) and another for plpgsql procedures
and triggers.  Sometimes I will split each of those into a create and drop
file.  But that is about as complex as I want it to get.  

marc




[SQL] convert a bigint into a timestamp

2002-07-24 Thread marc sturm

Hello,

Does anyone know how to convert a bigint into a date
or timestamp in a SQL query.
Thanks a lot.
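Editorial note: assuming the bigint holds Unix epoch seconds (an assumption; the column's meaning is not stated), the classic conversion is to add it as an interval to the epoch. Later releases also offer to_timestamp(double precision):

```sql
-- 1027500000 is an arbitrary example value in epoch seconds.
SELECT TIMESTAMP WITH TIME ZONE 'epoch'
       + 1027500000 * INTERVAL '1 second' AS converted;
```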

Marc

__
Do You Yahoo!?
Yahoo! Health - Feel better, live better
http://health.yahoo.com




Re: [SQL] 1 milion data insertion

2002-07-26 Thread Marc Spitzer

On Fri, Jul 26, 2002 at 03:34:12PM -0300, Elielson Fontanezi wrote:
> Hi fellows!
> 
>   I have tried to insert 1.000.000 of record to the following table;
> 
> --
> zakal=# \d teste;
>  codigo | bigint | not null
>  nome   | character varying(100) |
> --
> 
>   and I got these errors:
> 
> --
> zakal$ psql -c "copy teste from 'teste.dat' using delimeters '|'"
> ERROR:  parser: parse error at or near "delimeters"
> ERROR:  parser: parse error at or near "delimeters"
> zakal$ psql -c "copy teste from 'teste.dat' using delimiters '|'"
> ERROR:  COPY command, running in backend with effective uid 504, could not
> open
> file 'teste.dat' for reading.  Errno = No such file or directory (2).
> ERROR:  COPY command, running in backend with effective uid 504, could not
> open
> file 'teste.dat' for reading.  Errno = No such file or directory (2).
> zakal$ pwd
> /home/zakal/tmp
> zakal$ psql -c "copy teste from '`pwd`/teste.dat' using delimiters '|'"
> DEBUG:  copy: line 27536, XLogWrite: new log file created - consider
> increasing
> WAL_FILES
> DEBUG:  copy: line 93146, XLogWrite: new log file created - consider
> increasing
> WAL_FILES
> DEBUG:  recycled transaction log file 
> 
> 
> ERROR:  copy: line 164723, Bad int8 external representation "16722"
> ERROR:  copy: line 164723, Bad int8 external representation "16722"
> zakal$
> zakal$
> zakal$ DEBUG:  recycled transaction log file 0001
> --
> 
> the log has overflowed.
>   
>   Ok, this was a test. I'd like to know what would be happen.
>   But, from you, great PostGres DBA's, what is the best way to
> insert a large number of data?
>   Is there a way to turn off the log?
>   Is there a way to commit each 100 records?
> 
> regards,

In relatively small chunks: do 100 transactions of 10,000 records each and
you should be fine.

marc

> 
> ..
> A Question...
> Since before your sun burned hot in space and before your race was born, I
> have awaited a question.
> 
> Elielson Fontanezi 
> DBA Technical Support - PRODAM
> +55 11 5080 9493
> 
> 
> 



[SQL] Triggers for inserting on VIEWS

2002-08-03 Thread Marc SCHAEFER

Hi,

at least with PostgreSQL 7.1 it was possible to create a trigger on a
view. 7.2 seems to fail with:

   psql:t:25: ERROR:  CreateTrigger: relation "egg_view" is not a table

is there any replacement so that inserting somewhere acts on multiple
tables?

Thank you.

Code reference: (stupid, real code is more complex and uses multiple
 tables)

DROP TRIGGER t_egg ON egg_view;
DROP FUNCTION f_egg_insert ();
DROP VIEW egg_view;
DROP TABLE egg;
DROP SEQUENCE egg_id_seq;

CREATE TABLE egg(id SERIAL,
 description TEXT,
 UNIQUE(id), PRIMARY KEY(id));

CREATE VIEW egg_view
   AS SELECT description FROM egg;

CREATE FUNCTION f_egg_insert ()
   RETURNS opaque
   AS 'BEGIN
  INSERT INTO egg (description) VALUES(NEW.description);
   END;'
   LANGUAGE 'plpgsql';

CREATE TRIGGER t_egg
   BEFORE INSERT
   ON egg_view
   FOR EACH ROW
   EXECUTE PROCEDURE f_egg_insert();








Re: [SQL] Triggers for inserting on VIEWS

2002-08-03 Thread Marc SCHAEFER

On Sat, 3 Aug 2002, Marc SCHAEFER wrote:

> is there any replacement so that inserting somewhere acts on multiple
> tables ?

Thanks for the suggestion to use RULES. 

My solution (comments welcome):

DROP RULE r_entree_rapide_ecriture_insert;
DROP FUNCTION f_entree_rapide_ecriture_insert(TEXT,
  DATE,
  TEXT,
  DATE,
  TEXT,
  NUMERIC(10, 2),
  INT4,
  INT4);
DROP VIEW entree_rapide_ecriture;
DROP TABLE ecriture;
DROP SEQUENCE ecriture_id_seq;
DROP SEQUENCE ecriture_lot_seq;
DROP TABLE piece;
DROP SEQUENCE piece_id_seq;
DROP TABLE compte;
DROP SEQUENCE compte_id_seq;

CREATE TABLE compte(id SERIAL NOT NULL,
libelle TEXT NOT NULL,
montant_initial NUMERIC(10, 2) DEFAULT 0.0 NOT NULL,
UNIQUE(libelle),
PRIMARY KEY(id), UNIQUE(id));

CREATE TABLE piece(id SERIAL NOT NULL,
   libelle TEXT NOT NULL,
   date DATE NOT NULL DEFAULT CURRENT_DATE,
   description TEXT,
   UNIQUE(libelle),
   PRIMARY KEY(id), UNIQUE(id));

CREATE SEQUENCE ecriture_lot_seq;
   
CREATE TABLE ecriture(id SERIAL NOT NULL,
  piece INT4 REFERENCES piece NOT NULL,
  date DATE NOT NULL DEFAULT CURRENT_DATE,
  compte INT4 REFERENCES compte NOT NULL,
  description TEXT,
  montant NUMERIC(10, 2) NOT NULL
 CHECK (montant > CAST(0.0 AS NUMERIC(10, 2))),
  type CHAR(1) NOT NULL CHECK (type IN ('D', 'A')),
  lot INT4 NOT NULL DEFAULT currval('ecriture_lot_seq'),
  PRIMARY KEY(id), UNIQUE(id));

CREATE VIEW entree_rapide_ecriture
   AS SELECT p.libelle AS piece_libelle,
 p.date AS piece_date,
 p.description AS piece_descr,
 e1.date AS ecriture_date,
 e1.description AS ecriture_descr,
 e1.montant AS ecriture_montant,
 e1.compte AS ecriture_de_compte,
 e2.compte AS ecriture_a_compte
  FROM piece p, ecriture e1, ecriture e2
  WHERE (e1.lot = e2.lot)
AND (e1.date = e2.date)
AND (e1.montant = e2.montant)
AND (e1.piece = e2.piece)
AND (e1.type != e2.type)
AND (e1.piece = p.id)
AND (e1.type = 'D');

CREATE FUNCTION f_entree_rapide_ecriture_insert(TEXT,
DATE,
TEXT,
DATE,
TEXT,
NUMERIC(10, 2),
INT4,
INT4)
   RETURNS INT4 -- void
   AS 'DECLARE
  piece_libelle ALIAS for $1;
  piece_date ALIAS for $2;
  piece_descr ALIAS for $3;
  ecriture_date ALIAS for $4;
  ecriture_descr ALIAS for $5;
  ecriture_montant ALIAS for $6;
  ecriture_de_compte ALIAS for $7;
  ecriture_a_compte ALIAS for $8;
  lot_id INT4;
  piece_id INT4;
   BEGIN
  SELECT nextval(\'ecriture_lot_seq\') INTO lot_id;

  SELECT nextval(\'piece_id_seq\') INTO piece_id;

  INSERT INTO piece (id, libelle, date, description)
 VALUES(piece_id,
piece_libelle,
piece_date,
piece_descr);

  INSERT INTO ecriture(piece,
   date,
   compte,
   description,
   montant,
   type,
   lot)
 VALUES(piece_id,
ecriture_date,
ecriture_de_compte,
ecriture_descr,
ecriture_montant,
\'D\',
lot_id);

  INSERT INTO ecriture(piece,
   date,
   compte,
   description,
   montant,
   type,
   lot)
 VALUES(piece_id,
ecriture_date,
ecriture_a_compte,
ecriture_descr,
ecriture_montant,
\'A\',
lot_id);

  RETURN 0; -- Ass

Re: [SQL] slowing down too fast - why ?

2002-08-11 Thread Marc Spitzer

On Sun, Aug 11, 2002 at 02:10:34PM -0400, [EMAIL PROTECTED] wrote:
> 
>  I've even launched the backend with "-F" and removed BEGIN/COMMIT and 
> LOCK TABLE and FOR UPDATE, but I still get slow response.
> 
>  only when count(*) from file is 16000, I get about 2-3 rows / second on 
> average. When count(*) from file was 100, I get about 20-30 rows / second.
> 
>   Help !
> 
>  Thanx,
> 
>  John
> 

Some questions, in no particular order:

Have you tried 7.2.1?
Have you looked at the disk I/O performance?
Have you considered reindexing every night?
How fast is a count on the table's primary key vs. count(*)?
You are using a foreign key in table file; that can make
things slow in 7.1.x.  I think it is fixed in 7.2+.

Another thing: in table am you are using 1-char fields
to represent boolean values.  If they are heavily used, you
might want to switch to pg's native boolean type.  It is
probably faster.

And please stop top-posting; it makes it harder to
figure out what is going on (reading the thread), so it is
less likely that you will get the help you want.

I looked at your Perl script; it is not a good test for
select speed, because it does other things too.

Try something like this:
 echo 'select now() ; select count(*) from yourtable; select now()'|psql dbname 

to try to localize the problem.

good luck 

marc



> On Sun, 11 Aug 2002 [EMAIL PROTECTED] wrote:
> 
> > 
> >  and I forgot to mention that my stats are available at:
> > http://John.Vicherek.com/slow/times.query.txt
> > 
> >   John
> > 
> > On Sun, 11 Aug 2002 [EMAIL PROTECTED] wrote:
> > 
> > > 
> > >  
> > >  Hi,
> > > 
> > >   I must be doing something silly. I have a 900MHz, 384MB RAM, and
> > > this thing is slow.  (Postgresql-7.1.2).
> > > 
> > >  And growing exponencially slower.
> > > 
> > > SQL: http://John.Vicherek.com/slow/schema.sql (save to /tmp/schema.sql)
> > > am.dat: http://John.Vicherek.com/slow/am.dat   (save to /tmp/am.dat )
> > > perl: http://John.Vicherek.com/slow/rpm2filerian.pl (save to 
>/tmp/rpm2filerian.pl)
> > > 
> > > when I do :
> > > 
> > > createdb filerian
> > > psql -d filerian -f /tmp/schema.sql
> > > echo 'create table times (the_moment datetime, the_number int4);' | psql -d 
>filerian
> > > cd /tmp/mdk/8.2/i586/Mandrake/RPMS # lots of RPMs here
> > > while sleep 10 ; do echo 'insert into times values( now(), count(file.id));' | 
>psql -d filerian ; done 2>&1 >/dev/null  &
> > > for i in *.rpm ; do echo $i ; perl /tmp/rpm2filerian.pl 0 $i ; done
> > > 
> > > 
> > >  Why are the times so bad ? Why is it slowing so fast ?
> > > 
> > > Am I missing any useful indeces ?
> > > 
> > >  This shows the slowage:
> > > select the_number,min(the_moment) from times group by the_number;
> > > 
> > > PS: if you look in the perl code for "exec", immediatelly above will you
> > > find the query it is doing.
> > > 
> > >Thanx,
> > > 
> > >   John
> > > 
> > > 
> > > 
> > > 
> > > 
> > 
> > 
> 
> -- 
> -- Gospel of Jesus is the saving power of God for all who believe --
>## To some, nothing is impossible. ##
>  http://Honza.Vicherek.com/
> 
> 
> 
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
> http://www.postgresql.org/users-lounge/docs/faq.html

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[SQL] retrieving specific info. from one column and locating it in another

2002-11-26 Thread MARC BEDOIS
I'm trying to retrieve some info from one column and
put it in another.  I have a column that has a bunch
of information in it called 'Route'.  I don't need to
show all of that information.  Instead I need to divide
that single column into two separate columns called
'Sender' and 'Receiver'.  How do I divide this
information up into these two columns?  I know of
methods called charindex and patindex.  I need to do
something like that, but instead of returning the
position of the info, to just return the selected
info.
Ex)  I have a column named Route with info in it
similar to 'UPS NS  Ground'
   How do I pull only the 'NS' out of the Route
column and put it into the 'Receiver' column?
   Similarly, how would I pull just the UPS part
out of Route and put it into 'Sender'?


thanks,
Marc
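A hedged sketch of one way to do this in PostgreSQL (not part of the original message; it assumes the Route values are space-separated and the table is named "shipments", which is made up here):

```sql
-- split_part(string, delimiter, n) returns the n-th field;
-- with 'UPS NS  Ground', field 1 is 'UPS' and field 2 is 'NS'
SELECT split_part(route, ' ', 1) AS sender,
       split_part(route, ' ', 2) AS receiver
FROM shipments;
```

If the Route format varies, a regular-expression function such as substring(route from '...') would be the more robust choice.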


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



[SQL] INSERT INTO VIEW - Replacement

2004-11-09 Thread marc ratun
Hi,
I've this data model:
CREATE SEQUENCE a_seq START 1;
CREATE SEQUENCE b_seq START 1;
CREATE TABLE a (
   aid integer NOT NULL PRIMARY KEY,
   aval character varying (255) NOT NULL
);
INSERT INTO a (select nextval('a_seq'),'a1');
INSERT INTO a (select nextval('a_seq'),'a2');
CREATE TABLE b (
   bid integer NOT NULL PRIMARY KEY,
   bval character varying (255) NOT NULL
);
INSERT INTO b (select nextval('b_seq'),'b1');
INSERT INTO b (select nextval('b_seq'),'b2');
CREATE TABLE c (
   cid integer NOT NULL,
   aid integer REFERENCES a (aid),
   bid integer REFERENCES b (bid),
   cval character varying (255) NOT NULL,
   PRIMARY KEY (cid)
);
CREATE VIEW myview AS SELECT cid,aval,bval,cval FROM c INNER JOIN a ON 
(a.aid=c.aid) INNER JOIN b ON (b.bid=c.bid);

Now I'd like to insert a row into "c" and, if necessary,
simultaneously create the referenced rows in
the referenced tables:
INSERT INTO myview VALUES (10,'a3','b1','c1');
(Here a row in "a" with aval 'a3' should be
created.)
INSERT INTO myview VALUES (20,'a1','b2','c2');
(Here a row in "a" with aval 'a1' should not
be created but referenced.)
(Same for "b".)
"Insert into view" wouldn't work; how could I do
my INSERTs efficiently in postgres without
needing an extra query each time which asks
whether there already is a row in "a" that
could be referenced?
Thanks!
Marc
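One possible approach (a sketch of mine, not from the thread, and ignoring concurrent-insert races): wrap the lookup-or-create logic for "a" (and analogously for "b") in a plpgsql helper, then insert into "c" through it. All names besides the tables above are illustrative.

```sql
-- Look up the row in "a" with the given aval; create it if missing.
-- (Era-appropriate plpgsql with an escaped single-quoted body.)
CREATE OR REPLACE FUNCTION get_or_create_a(p_aval varchar)
RETURNS integer AS '
DECLARE
    v_aid integer;
BEGIN
    SELECT INTO v_aid aid FROM a WHERE aval = p_aval;
    IF NOT FOUND THEN
        v_aid := nextval(''a_seq'');
        INSERT INTO a VALUES (v_aid, p_aval);
    END IF;
    RETURN v_aid;
END;' LANGUAGE 'plpgsql';

-- usage (get_or_create_b would be the analogous helper for "b"):
-- INSERT INTO c VALUES (10, get_or_create_a('a3'), get_or_create_b('b1'), 'c1');
```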

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


[SQL] Order of columns in a table important in a stored procedure?

2005-02-13 Thread Marc SCHAEFER
Hi,

I use a FOR one_row IN loop where one_row is of type saisies%ROWTYPE.
The loop does a SELECT on a table, bennes, changing a few values.

The idea is that the function should return some bennes rows, but
with additional information, which makes the returned rows
a saisies table-like row set.

I have however noticed that if the SELECT is *NOT* in the correct order
for the table saisies, funny errors happen (such as type constraint
violations -- obviously columns get mixed up).

What I do not understand is that I use AS to name the columns;
I would have thought PostgreSQL could pair the columns up by name.

This is annoying since it means that any change to the data structure,
such as adding columns, may break my functions.

This is however an old version of PostgreSQL (7.1 I think).

For reference:

CREATE OR REPLACE FUNCTION f_fa_montre(VOID)
   RETURNS SETOF saisies
   AS '
DECLARE
   one_row saisies%ROWTYPE;
BEGIN
   FOR one_row IN
  SELECT NULL as idsaisie,
 b.no_client AS num_client,
 b.lieu_entreposage_b5 AS chantier,
 DATE_TRUNC(\'month\', CURRENT_DATE) AS dates,
 \'0\' AS num_bon,
 NULL AS num_art
  FROM bennes b
  WHERE (type_fact_p = b.type_fact)
   LOOP
  -- here I do some changes to the one_row, BTW
  RETURN NEXT one_row;
   END LOOP;

   RETURN;
END;'
LANGUAGE 'plpgsql';

The issue: if I exchange num_bon and dates above the query fails.
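A side note (mine, not part of the original post): plpgsql assigns the SELECT output to a %ROWTYPE variable by position, not by name, so the AS aliases only label the output columns. A minimal comment-only illustration:

```sql
-- given:  CREATE TABLE t (a int, b text);
-- and, inside a function, a variable  r t%ROWTYPE
--
--   FOR r IN SELECT 1, 'x' ...            -- OK: positions match (int, text)
--   FOR r IN SELECT 'x' AS a, 1 AS b ...  -- breaks: 'x' lands in r.a
```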

Thank you for any idea.


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[SQL] Returning a Cross Tab record set from a function

2005-06-05 Thread Marc Wrubleski
I have read the great work that all the list members have done working
with cross tabs (pivot tables) in postgresql. The issue I have not seen
a solution for, but would really like to see, is the ability to return
the results of a dynamic (variable # of columns) cross tab function as a
recordset. 

The excellent code contributed by Christoph Haller in the "Generating a
cross tab II (pivot table)" thread was very useful, but it dumps the
results into a view. I need to query like "select * from
create_pivot_report('sales_report2','vendor','product','sales','sum','sales');"
and have the result back as a recordset. 

The reason I want to do this is that I have a hierarchical structure of
itemtypes where each itemtype contains an arbitrary number of items, AND
each itemtype has an arbitrary number of attributes. I want to perform
the crosstab on the items with attributes for a given itemtype. The
static code works perfectly fine for a query of an itemtype, BUT the
itemtypes and attributes may change often enough that creating views for
each itemtype will be insufficient.

It seems I can do this from any higher level language, but it drives me
crazy that I can't perform this operation as a function inside of
Postgres... 

Thanks for any thoughts you might have...

-- 
Marc Wrubleski 




Re: [SQL] Returning a Cross Tab record set from a function

2005-06-18 Thread Marc Wrubleski




Hi All, thanks for your responses.

I know higher level languages can perform the operation, but I think a function written in a higher level language could still not return a resulting (structure-undefined) table back as a result set. I hope I am wrong about this ;-) If not, read on...

My function caller cannot query like SELECT * FROM crosstab ('SELECT x,y,z FROM foo ...') AS ct(a int, b text, c text) because he does not know that x,y,or z are available to him, and there may also q,r, and s too. That's part of what he is hoping to get from the query! (as well as the data for these columns)

Instead I need to query like SELECT * FROM crosstab_undef ('SELECT * FROM foo' ...)

I am no programmer, so I need someone to tell me if it is possible to add this feature to Postgres, or does it go against every (type) rule written? The difference is that if it IS possible, I may be able to get someone to write some code to contribute. I would love to hear from a developer on this one. 

I have a system that has two interfaces to the database (More if you include reporting tools) and I would like similar functionality for all interfaces. This is why I would like to have a function defined at the database level.

Many Thanks,

Marc Wrubleski 

On Mon, 2005-06-06 at 12:20 +0200, KÖPFERL Robert wrote:



|-Original Message-
|From: Marc Wrubleski [mailto:[EMAIL PROTECTED]]
|Sent: Mittwoch, 01. Juni 2005 16:15
|To: pgsql-sql@postgresql.org
|Subject: [SQL] Returning a Cross Tab record set from a function
|
[...]
|
|It seems I can do this from any higher level language, but it drives me
|crazy that I can't perform this operation as a function inside of
|Postgres... 

Yes, it seems very much like this can't be done in just SQL but needs a higher
level language.
Actually Postgres provides such higher languages. Try including PL/Python,
PL/Perl or PL/Tcl. From there you are able to query all the metadata of the
wanted tables so that an adequate SQL string can be generated.

If building from source code, you find these languages in the contrib dir
|
|Thanks for any thoughts you might have...
|
|-- 
|Marc Wrubleski 
|
|
|---(end of 
|broadcast)---
|TIP 9: the planner will ignore your desire to choose an index 
|scan if your
|  joining column's datatypes do not match
|





-- 
Marc Wrubleski <[EMAIL PROTECTED]>







[SQL] Last access date for databases/users

2005-09-01 Thread Marc McIntyre
Is there anyway to determine when a database was last accessed, or when 
a user last connected to the database using the pg_* views? I'm trying 
to determine what databases in my system are currently active.




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [SQL] Ask To Optimize Looping

2009-08-19 Thread Marc Mamin
Hello,
 
I would try to replace the loop with a single  UPDATE FROM  Statement:
 
 
Update EP_ES06_N_TEMP2
Set  
FROM  (
select
kodedivisi,kodeseksi,kodewip,nobatch,ket1,ket2,ket3,ket4,NILAIPROP as
nilaiygdibagi
from EDP040_07_23
--order by
kodedivisi,kodeseksi,kodewip,nobatch,ket1,ket2,ket3,ket4
) i
 
WHERE ..
 
 
Here a simple example for this syntax:

create table test (i int);
insert into test select * from generate_series (1,20);
 
update test set i =0
from (select * from generate_series (1,10) s)i
where test.i=i.s;
--Query returned successfully: 10 rows affected
 
 
But beware of the limitation of UPDATE ... FROM:
 
"When a FROM clause is present, what essentially happens is that the
target table is joined to the tables mentioned in the fromlist, and each
output row of the join represents an update operation for the target
table. When using FROM you should ensure that the join produces at most
one output row for each row to be modified. In other words, a target row
shouldn't join to more than one row from the other table(s). If it does,
then only one of the join rows will be used to update the target row,
but which one will be used is not readily predictable. 

Because of this indeterminacy, referencing other tables only within
sub-selects is safer, though often harder to read and slower than using
a join. "

HTH,

 

Marc Mamin



[SQL] record to columns: syntax question and strange behaviour

2009-10-27 Thread Marc Mamin
Hello,

how should I retrieve the result from a function with some OUT
parameters?

(PG is  8.3.7)

here a short example to illustrate my question:

CREATE OR REPLACE FUNCTION test (In a int, OUT b int, OUT c int) AS
$BODY$
BEGIN
  b:=a+1;
  c:=a+2;
  raise notice 'done: %', a;
END

$BODY$
  LANGUAGE 'plpgsql' IMMUTABLE




select column1, test(column1) FROM (values(1),(2)) foo

1, (2,3)
2, (3,4)

NOTICE:  done: 1
NOTICE:  done: 2


What I want is just

1,2,3
2,3,4 


The following returns the expected result, but the function is called
once for each OUT parameter:

select column1, (test(column1)).* FROM  (values(1),(2)) foo

=> 

1,2,3
2,3,4 

NOTICE:  done: 1
NOTICE:  done: 1
NOTICE:  done: 2
NOTICE:  done: 2

Is there a way to avoid it ???
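One workaround on 8.3 (my suggestion, not from the thread): materialize the record once in a subquery and expand it outside; the OFFSET 0 keeps the planner from flattening the subquery, so test() runs only once per input row.

```sql
-- expands the composite r into its b and c fields after a single call
SELECT column1, (r).b, (r).c
FROM (SELECT column1, test(column1) AS r
      FROM (values(1),(2)) foo
      OFFSET 0) s;
```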


Thanks,

Marc Mamin


Re: [SQL] record to columns: syntax question and strange behaviour

2009-10-27 Thread Marc Mamin

Hello,

Your proposal unfortunately does not work if you try to query more than one 
value and want additional columns in the results, 
like in 

select column1,test(column1) FROM (values(1),(2)) foo

cheers,

Marc Mamin


>IMO easiest would be to include a   RETURNS SETOF record   in the
>function declaration and a   return next;   statement in the function
>body. E.g.
>
>
>CREATE OR REPLACE FUNCTION test (In a int, OUT b int, OUT c int)
>RETURNS SETOF record
>AS
>$BODY$
>BEGIN
>   b:=a+1;
>   c:=a+2;
>   return next;
>END
>$BODY$
>   LANGUAGE 'plpgsql'
>
>and then issue
>
>SELECT * FROM test(1);



[SQL] workaround for missing ROWNUM feature with the help of GUC variables

2010-08-04 Thread Marc Mamin
Hello,

 

here are my two pence on this recurring theme
(just a workaround).

regards,

Marc Mamin


The PG parameter must be set to allow defining own configuration
variables:

#------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------

custom_variable_classes = 'public'   # list of custom variable class names

usage example:

select my_rownum(),* from generate_series (10,15);

wrong usage:

select my_rownum() as n1,
       my_rownum() as n2,
       *
from generate_series (10,15);

solution:

select my_rownum('1') as n1,
       my_rownum('2') as n2,
       *
from generate_series (10,15);

Code:
=====

CREATE OR REPLACE FUNCTION public.my_rownum ()
returns int AS
$BODY$

/*
  equivalent to oracle rownum
  (The previous row value is attached to a GUC variable valid in the
  current transaction only)
  quite slow :-(
*/

DECLARE
  current_rownum int;
  config_id varchar = 'public.my_rownum';

BEGIN

  BEGIN
    current_rownum := cast (current_setting (config_id) as int);
  EXCEPTION when others then
    return cast( set_config(config_id, cast(1 as text), true) as int);
  END;

  RETURN cast( set_config(config_id, cast(current_rownum + 1 as text), true) as int);

END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;


/*
  ---------------------------------------------------------------
   For multiple usage:
  ---------------------------------------------------------------
*/

CREATE OR REPLACE FUNCTION public.my_rownum ( id varchar )
returns int AS
$BODY$

/*
  equivalent to oracle rownum
  quite slow :-(
  (The previous row value is attached to a GUC variable valid in the
  current transaction only)

  $1: when more than one my_rownum is used within a query, each call
  must have its own ID in order to get a different GUC variable.
*/

DECLARE
  current_rownum int;
  config_id varchar = 'public.my_rownum'||id;

BEGIN

  BEGIN
    current_rownum := cast (current_setting (config_id) as int);
  EXCEPTION when others then
    return cast( set_config(config_id, cast(1 as text), true) as int);
  END;

  RETURN cast( set_config(config_id, cast(current_rownum + 1 as text), true) as int);

END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;


Re: [SQL] workaround for missing ROWNUM feature with the help of GUC variables

2010-08-06 Thread Marc Mamin

> I don't understand what you mean by missing ROWNUM feature, PG got
> this with window functions in 8.4:
> http://www.postgresql.org/docs/8.4/interactive/functions-window.html

I'm unfortunately still using 8.3.

sorry not to have mentioned that.

Marc Mamin

 



[SQL] Controlling join order with parenthesis

2010-09-09 Thread Marc Mamin


Hello,

According to the documentation, I thought it was possible to force given
join orders with parenthesis.

But in this small example, this seems to have no effect; 
With the first query, I'd expected to see t3 in the  Merge Right Join
but both queries return the same query plan .

Am I missing a point?

(Postgres 8.3)

create temp table t1(i int);
create temp table t2(i int);
create temp table t3(i int);

select * from 
 ( 
   ( t1 LEFT OUTER JOIN  t2  on (t1.i=t2.i) )
   LEFT OUTER JOIN t3 on (t2.i=t3.i)
 )

select * from 
 ( 
   t1 LEFT OUTER JOIN  
   (t2 LEFT OUTER JOIN t3 on (t2.i=t3.i) )
   on (t1.i=t2.i) 
)

Merge Right Join  (cost=506.24..6206.24 rows=345600 width=12)
  Merge Cond: (t2.i = t1.i)
  ->  Merge Left Join  (cost=337.49..781.49 rows=28800 width=8)
Merge Cond: (t2.i = t3.i)
->  Sort  (cost=168.75..174.75 rows=2400 width=4)
  Sort Key: t2.i
  ->  Seq Scan on t2  (cost=0.00..34.00 rows=2400 width=4)
->  Sort  (cost=168.75..174.75 rows=2400 width=4)
  Sort Key: t3.i
  ->  Seq Scan on t3  (cost=0.00..34.00 rows=2400 width=4)
  ->  Sort  (cost=168.75..174.75 rows=2400 width=4)
Sort Key: t1.i
->  Seq Scan on t1  (cost=0.00..34.00 rows=2400 width=4)
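A hedged note (not part of the original post): the planner is free to re-order joins as long as the query references fewer relations than join_collapse_limit (default 8), so the parenthesization above is only a hint. Lowering the limit forces the textual join order:

```sql
-- with the limit at 1, explicit JOIN syntax is never re-collapsed,
-- so the ( t1 LEFT JOIN t2 ) LEFT JOIN t3 nesting is preserved
SET join_collapse_limit = 1;
```

This trades planning freedom for predictability, so it is usually set per session or per query rather than globally.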



best  regards,

Marc Mamin

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Overlapping Ranges- Query Alternative

2010-11-12 Thread Marc Mamin
or:

Select Groups, generate_series
FROM ranges JOIN generate_series(10,50,10) on ( ColumnA < generate_series)
ORDER by Groups, generate_series;

regards,

Marc Mamin


From: pgsql-sql-ow...@postgresql.org
[mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Andreas Gaab
Sent: Friday, 12 November 2010 09:23
To: 'Ozer, Pam'; pgsql-sql@postgresql.org
Subject: Re: [SQL] Overlapping Ranges- Query Alternative

 

Hi,

the following works:

Create temp table ranges (Groups int, ColumnA int);
Insert into ranges Values(2,45);
Insert into ranges Values(3,15);
Insert into ranges Values(4,25);
Insert into ranges Values(5,35);

Select Groups,
Case when ColumnA between 0 and 19 then 0
     when ColumnA >=20 AND ColumnA < 30 then generate_series(20,20,10)
     when ColumnA >=30 AND ColumnA < 40 then generate_series(20,30,10)
     when ColumnA >=40 AND ColumnA < 50 then generate_series(20,40,10)
     when ColumnA >=50 then generate_series(20,50,10) end MinRange
from ranges;

--or even only

Select Groups,
CASE WHEN ColumnA < 20 then 0 ELSE
generate_series(20, (floor(ColumnA / 10.0) * 10)::integer ,10) END MinRange
from ranges;

Best, Andreas

 

From: pgsql-sql-ow...@postgresql.org
[mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Ozer, Pam
Sent: Thursday, 11 November 2010 20:07
To: pgsql-sql@postgresql.org
Subject: [SQL] Overlapping Ranges- Query Alternative

 

I have the following problem:

Create temp table ranges (Groups int, ColumnA int);
Insert into ranges
Values(2,45);

Select Groups,
Case when ColumnA between 0 and 19 then 0
     when ColumnA >=20 then 20
     when ColumnA >=30 then 30
     when ColumnA>=40 then 40
     when ColumnA>=50 then 50 end MinRange
from ranges

Results:
Groups minrange
2;20

What I want is: one column can fall into multiple ranges.  For example
45 > 20, 30, and 40, so I want the following results:
2;20
2;30
2;40

I know I could do a union with each range but is there any way to bring
back all ranges in one query?  I need to bring back the values in one
column so having separate columns for each range is not an option.

Thank you in advance for any help

Pam Ozer

 



[SQL] First aggregate with null support

2010-12-06 Thread Marc Mamin
Hello,

I'm looking for a First aggregate which may return null.

From the example at
http://wiki.postgresql.org/index.php?title=First_%28aggregate%29,
I have just defined a non strict function that returns the first value:

CREATE OR REPLACE FUNCTION public.first_agg_wnull ( anyelement,
anyelement )
RETURNS anyelement AS $$
SELECT $1;
$$ LANGUAGE SQL IMMUTABLE
COST 1;

And an aggregate:

CREATE AGGREGATE public.first_wnull (
   sfunc= first_agg_wnull,
   basetype = anyelement,
   stype= anyelement
);


But this always returns null, which is the default init condition of the
aggregate :-(

I also have a working one using an array function (first_wnull_a, below)
, but I wonder if there is a simpler solution ...

best regards,

Marc Mamin



CREATE OR REPLACE FUNCTION first_element_state(anyarray, anyelement)
RETURNS anyarray AS
$$
SELECT CASE WHEN array_upper($1,1) IS NULL THEN array_append($1,$2) ELSE
$1 END; 
$$
LANGUAGE 'sql' IMMUTABLE
COST 2;


CREATE OR REPLACE FUNCTION first_element(anyarray)
RETURNS anyelement AS
$$
SELECT ($1)[1] ;
$$
LANGUAGE 'sql' IMMUTABLE
COST 2;

CREATE AGGREGATE first_wnull_a(anyelement) (
SFUNC=first_element_state,
STYPE=anyarray,
FINALFUNC=first_element
);

select first_wnull(s) from generate_series (1,10) s
=> 
NULL

select first_wnull_a(s) from generate_series (1,10) s
=> 
1


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] conditional aggregates

2010-12-08 Thread Marc Mamin
something like ?

Select min (case when X > 0 then X end)

HTH,

Marc Mamin
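Spelled out as a hedged, self-contained example (table and data are made up here to mirror the original question):

```sql
create temp table t (id int, value int);
insert into t values (1,45),(2,13),(3,0),(4,90);

-- rows failing the condition yield NULL, and aggregates ignore NULLs,
-- so other computations in the same SELECT are unaffected
select min(case when value > 0 then value end) as min_positive
from t;
-- returns 13
```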

 

From: pgsql-sql-ow...@postgresql.org
[mailto:pgsql-sql-ow...@postgresql.org] On Behalf Of Marcin Krawczyk
Sent: Mittwoch, 8. Dezember 2010 14:20
To: Pavel Stehule
Cc: pgsql-sql@postgresql.org
Subject: Re: [SQL] conditional aggregates

 

Yeah I know but I'm having problems creating sfunc fuction for the
aggregate.

 


regards
mk



2010/12/8 Pavel Stehule 

Hello

use a CASE statement

http://www.postgresql.org/docs/7.4/static/functions-conditional.html

Regards

Pavel Stehule

2010/12/8 Marcin Krawczyk :

> Hi list,
> Can anyone advise me on creating an aggregate that would take
additional
> parameter as a condition ? For example, say I have a table like this
> id;value
> 1;45
> 2;13
> 3;0
> 4;90
> I'd like to do something like this
> SELECT min_some_cond_aggregate(value,0) FROM table
> to get the minimal value from table greater than 0, in this case 13.
> I can't do SELECT min(value) FROM table WHERE value > 0 as this will
mess my
> other computations. My current solution involves a function operating
on the
> output of array_accum from the docs, but I'm looking for more elegant
> solution.
> Is this possible at all ? I'm running 8.1.
>
> regards
> mk
>

 



[SQL] checking for the existence of a current_setting ?

2011-05-03 Thread Marc Mamin
 

Hello,

(Postgres 8.3)

I'm misusing the current settings at some places to store session
variables.

The next function retrieves such a variable, or initializes it with a
default value.
It is working as expected, but performance is poor due to the exception
block.
Is there a way to make the check more smoothly, i.e. without relying on
the exception?
maybe some undocumented internal function?

many thanks,

Marc Mamin


CREATE OR REPLACE FUNCTION public.var_get_check(int,text)
RETURNS text AS
$BODY$

  BEGIN
     return current_setting('public.' || $2 || pg_backend_pid());
  EXCEPTION when undefined_object then
     perform set_config ('public.' || $2 || pg_backend_pid(), $1::text,
false);
     return $1::text;
  END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
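A follow-up beyond the original 8.3 context: from PostgreSQL 9.6 on, the two-argument form of current_setting avoids the exception entirely.

```sql
-- the second argument ("missing_ok") makes current_setting return NULL
-- instead of raising an error when the variable is not set
select coalesce(current_setting('public.myvar' || pg_backend_pid(), true),
                'default value');
```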



Re: [SQL] Prevent double entries ... no simple unique index

2012-07-11 Thread Marc Mamin
> 
> Or this one:
> 
> test=*# create unique index on log((case when state = 0 then 0 when
> state = 1 then 1 else null end));
> CREATE INDEX
> 
> 
> Now you can insert one '0' and one '1' - value - but no more.

Hi,

A partial index would do the same, but requires less space: 

create unique index on log(state) WHERE state IN (0,1);

best regards,

Marc Mamin





Re: [SQL] reduce many loosely related rows down to one

2013-05-25 Thread Marc Mamin

> 
> Von: pgsql-sql-ow...@postgresql.org [pgsql-sql-ow...@postgresql.org]" im 
> Auftrag von "Bill MacArthur [webmas...@dhs-club.com]
> Gesendet: Samstag, 25. Mai 2013 09:19
> An: pgsql-sql@postgresql.org
> Betreff: [SQL] reduce many loosely related rows down to one
> 
> Here is a boiled down example of a scenario which I am having a bit of 
> difficulty solving.
> This is a catchall table where all the rows are related to the "id" but are 
> entered by different unrelated processes that do not necessarily have access 
> to the other data bits.
> 


> -- raw data now looks like this:
> 
> select * from test;
> 
>   id | rspid | nspid | cid | iac | newp | oldp |   ppv   |  tppv
> +---+---+-+-+--+--+-+-
>1 | 2 | 3 |   4 | t   |  |  | |
>1 | 2 | 3 | | |  100 |  | |
>1 | 2 | 3 | | |  |  200 | |
>1 | 2 | 3 | | |  |  | | 4100.00
>1 | 2 | 3 | | |  |  | | 3100.00
>1 | 2 | 3 | | |  |  | -100.00 |
>1 | 2 | 3 | | |  |  |  250.00 |
>2 | 7 | 8 |   4 | |  |  | |
> (8 rows)
> 
> -- I want this result (where ppv and tppv are summed and the other distinct 
> values are boiled down into one row)
> -- I want to avoid writing explicit UNIONs that will break if, say the "cid" 
> was entered as a discreet row from the row containing "iac"
> -- in this example "rspid" and "nspid" are always the same for a given ID, 
> however they could possibly be absent for a given row as well
> 
>   id | rspid | nspid | cid | iac | newp | oldp |   ppv   |  tppv
> +---+---+-+-+--+--+-+-
>1 |2  | 3 |  4  | t   | 100  | 200  |  150.00  | 7200.00
>2 |7  | 8 |  4  | |  |  |0.00  |0.00
> 
> 
> I have experimented with doing the aggregates as a CTE and then joining that 
> to various incarnations of DISTINCT and DISTINCT ON, but those do not do what 
> I want. Trying to find the right combination of terms to get an answer from 
> Google has been unfruitful.


Hello,
If I understand you well, you want to perform a GROUP BY where null values
are coalesced to existing non-null values.
This seems logically not feasible.
What should the result look like if your "raw" data are as follows:

  id | rspid | nspid | cid | iac | newp | oldp |   ppv   |  tppv
+---+---+-+-+--+--+-+-
   1 | 2 | 3 |   4 | t   |  |  | |
   1 | 2 | 3 |   5 | t   |  |  | |
   1 | 2 | 3 | | |  100 |  | |

(to which cid should newp be summed to?)

regards,

Marc Mamin



Re: [SQL] Advice on re-writing a SELECT query.

2013-05-25 Thread Marc Mamin
I have a query like this:
>
>SELECT
>lpt_titulo AS tmt_titulo,
>tmd_nombre AS tmt_nombre,
>tmd_album AS tmt_album
>SUM(lpt_puntos) AS tmt_puntos,
>lpt_fuente AS tmt_fuente
>FROM listas_pre_titulos, temp_lista_titulos
>WHERE
>listas_pre_titulos.lpt_tipo = 3 AND
>listas_pre_titulos.lpt_titulo <> temp_lista_titulos.tmt_titulo AND
>listas_pre_titulos.tmd_album <> temp_lista_titulos.tmt_album AND
>listas_pre_titulos.lpt_fuente <> temp_lista_titulos.tmt_fuente
>GROUP BY
>lpt_fuente, lpt_titulo, tmd_album
>ORDER BY tmt_puntos ASC
>
>Is it valid to re-write the FROM and WHERE statements as follows?
>
>FROM listas_pre_titulos
>INNER JOIN temp_lista_titulos ON
>(listas_pre_titulos.lpt_titulo, listas_pre_titulos.tmd_album, 
>listas_pre_titulos.lpt_fuente)
>NOT IN
>(temp_lista_titulos.tmt_titulo, temp_lista_titulos.tmt_album, 
>temp_lista_titulos.tmt_fuente)
>WHERE listas_pre_titulos.lpt_tipo = 3

hello,
your second syntax is not valid SQL, but you can achieve the same thing as in this example:

create temp table a(a int,b int,c int,d int);
create temp table b(a int,b int,c int,d int);

select * from a join b ON ((a.a,a.b,a.c)<>(b.a,b.b,b.c))

but beware if null values are involved ( 1<>NULL => NULL ).
In this case you can use:
select * from a join b ON ((a.a,a.b,a.c) IS DISTINCT FROM (b.a,b.b,b.c))

regards,

Marc Mamin



Re: [SQL] reduce many loosely related rows down to one

2013-05-29 Thread Marc Mamin
> SELECT id,
> (array_agg(rspid))[1] AS rspid,-- (1)


for such cases, I have created a new aggregate function:

SELECT firstnotnull(rspid) AS rspid,

this avoids first collecting all rspid values only to then keep the first one...


  CREATE OR REPLACE FUNCTION public.first_agg_nn ( anyelement, anyelement )
  RETURNS anyelement AS $$
SELECT $1;
  $$ LANGUAGE SQL IMMUTABLE STRICT
  COST 1;

  CREATE AGGREGATE public.firstnotnull (
   sfunc= public.first_agg_nn,
   basetype = anyelement,
   stype= anyelement
  );


regards,

Marc Mamin

> -Original Message-
> From: pgsql-sql-ow...@postgresql.org [mailto:pgsql-sql-
> ow...@postgresql.org] On Behalf Of Torsten Grust
> Sent: Dienstag, 28. Mai 2013 17:08
> To: pgsql-sql@postgresql.org
> Subject: Re: [SQL] reduce many loosely related rows down to one
> 
> On 25 May 2013, at 9:19, Bill MacArthur wrote (with possible
> deletions):
> > [...]
> > select * from test;
> >
> > id | rspid | nspid | cid | iac | newp | oldp |   ppv   |  tppv
> > +---+---+-+-+--+--+-+-
> > 1 | 2 | 3 |   4 | t   |  |  | |
> > 1 | 2 | 3 | | |  100 |  | |
> > 1 | 2 | 3 | | |  |  200 | |
> > 1 | 2 | 3 | | |  |  | | 4100.00
> > 1 | 2 | 3 | | |  |  | | 3100.00
> > 1 | 2 | 3 | | |  |  | -100.00 |
> > 1 | 2 | 3 | | |  |  |  250.00 |
> > 2 | 7 | 8 |   4 | |  |  | |
> > (8 rows)
> >
> > -- I want this result (where ppv and tppv are summed and the other
> > distinct values are boiled down into one row)
> > -- I want to avoid writing explicit UNIONs that will break if, say
> the
> > "cid" was entered as a discreet row from the row containing "iac"
> > -- in this example "rspid" and "nspid" are always the same for a
> given
> > ID, however they could possibly be absent for a given row as well
> >
> > id | rspid | nspid | cid | iac | newp | oldp |   ppv   |  tppv
> > +---+---+-+-+--+--+-+-
> > 1 |2  | 3 |  4  | t   | 100  | 200  |  150.00  | 7200.00
> > 2 |7  | 8 |  4  | |  |  |0.00  |0.00
> 
> One possible option could be
> 
> SELECT id,
> (array_agg(rspid))[1] AS rspid,-- (1)
> (array_agg(nspid))[1] AS nspid,
> (array_agg(cid))[1]   AS cid,
> bool_or(iac)  AS iac,  -- (2)
> max(newp) AS newp, -- (3)
> min(oldp) AS oldp, -- (4)
> coalesce(sum(ppv), 0) AS ppv,
> coalesce(sum(tppv),0) AS tppv
> FROM test
> GROUP BY id;
> 
> 
> This query computes the desired output for your example input.
> 
> There's a caveat here: your description of the problem has been
> somewhat vague and it remains unclear how the query should respond if
> the functional dependency id -> rspid does not hold.  In this case, the
> array_agg(rspid)[1] in the line marked (1) will pick one among many
> different(!) rspid values.
> I don't know your scenario well enough to judge whether this would be
> an acceptable behavior.  Other possible behaviors have been implemented
> in the lines (2), (3), (4) where different aggregation functions are
> used to reduce sets to a single value (e.g., pick the largest/smallest
> of many values ...).
> 
> Cheers,
>--Torsten
> 
> 
> --
> | Torsten "Teggy" Grust
> | torsten.gr...@gmail.com
> 
> 
> --
> Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make
> changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-sql




Re: [SQL] delete where not in another table

2013-07-14 Thread Marc Mamin
> Subject: [SQL] delete where not in another table
> DELETE FROM T1 WHERE T1.user_id NOT IN (SELECT user_id FROM T2 WHERE 
> T2.user_id=T1.user_id);


The following query uses an anti-join and is much faster:

  delete from t1 where not exists (select user_id from t2 where t2.user_id = t1.user_id)
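A side note worth adding here (mine, not in the original reply): besides being slower, NOT IN also has surprising NULL semantics. A self-contained demonstration:

```sql
create temp table t1 (user_id int);
create temp table t2 (user_id int);
insert into t1 values (1),(2);
insert into t2 values (1),(NULL);

-- deletes nothing: 2 NOT IN (1, NULL) evaluates to NULL, never true
delete from t1 where user_id not in (select user_id from t2);

-- deletes the row with user_id = 2, as intended
delete from t1 where not exists
  (select 1 from t2 where t2.user_id = t1.user_id);
```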

regards,

Marc Mamin



Re: [SQL] value from max row in group by

2013-07-25 Thread Marc Mamin

>
>Von: pgsql-sql-ow...@postgresql.org [pgsql-sql-ow...@postgresql.org]" im 
>Auftrag von "Venky >Kandaswamy [ve...@adchemy.com]
>
>You can use Postgres WINDOW functions for this in several different ways. For 
>example, one way of doing it:
>
>select stts_id,   last_value(stts_offset) over (partition by stts_id order by 
>stts_offset desc)
>  + last_value(stts_duration) over (partition by stts_id order 
> by stts_offset desc)
>from table
>group by stts_id;

another simple solution with distinct on:

select distinct on (stts_id) stts_id, stts_offset+stts_duration
from table
order by stts_id, stts_offset desc

Marc Mamin


From: pgsql-sql-ow...@postgresql.org  on behalf 
of Gary Stainburn 
Sent: Thursday, July 25, 2013 10:57 AM
To: pgsql-sql@postgresql.org
Subject: Re: [SQL] value from max row in group by

As usual, once I've asked the question, I find the answer myself.

However, it *feels* like there should be a more efficient way. Can anyone
comment or suggest a better method?

timetable=> select stts_id, stts_offset+stts_duration as total_duration
timetable-> from standard_trip_sections
timetable-> where (stts_id, stts_offset) in
timetable-> (select stts_id, max(stts_offset) from standard_trip_sections
group by stts_id);
 stts_id | total_duration
---------+----------------
   1 | 01:35:00
   2 | 01:35:00
   3 | 01:08:00
   4 | 01:38:00
   5 | 01:03:00
   6 | 01:06:00
(6 rows)

timetable=>




Re: [SQL] Need to subtract values between previous and current row

2006-12-15 Thread Marc Mamin
I see 3 solutions.
 

A) self join
 
 
 
B) define a procedure that returns a set of records.
   This uses only a single scan of the ordered table.

   Not tested, just the general idea:

CREATE OR REPLACE FUNCTION foo()
  RETURNS SETOF record AS
$$

DECLARE
previous_time int8; -- (or whatever datatype you have)
rec  record;
rec2 record;

BEGIN

  FOR rec IN SELECT id, time FROM yourtable ORDER BY id LOOP

  SELECT INTO rec2 rec.id, rec.time - previous_time;
  previous_time := rec.time;
  RETURN NEXT rec2;
  END LOOP;

END;

$$ LANGUAGE plpgsql;
 
 
C) embedding R in Postgres
 
http://www.joeconway.com/oscon-pres-2003-1.pdf
http://www.omegahat.org/RSPostgres/
 

This may be a solution for implementing complex cross-row aggregations,
but I never got the time to test it.

I'd be interested to hear what concerns this solution raises
(stability, memory & CPU load, concurrent requests).
 
 
 
Cheers, 
 
Marc
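
On PostgreSQL 8.4 and later, window functions make option B unnecessary: lag() gives the previous row's value in one ordered scan. A sketch, assuming the same hypothetical yourtable(id, time):

```sql
-- lag(time) returns the previous row's time within the ordered window,
-- so the row-to-row difference is computed in a single pass.
SELECT id,
       time - lag(time) OVER (ORDER BY id) AS delta
FROM yourtable
ORDER BY id;
```

The first row's delta is NULL, since there is no previous row.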
 
 



[SQL] a way to generate functions dynamically ?

2007-01-11 Thread Marc Mamin
Hello, 
 
I need to generate some procedures that depend on the data models stored
in my DBs.
As I have different models in different databases, the stored procedures
will differ.
 
My idea is to generate the required stored procedures dynamically once a
model is defined.
I will probably do this within the application. But is there a way to
achieve this with plpgsql?
 
Here is a naive attempt to illustrate my idea:
 
CREATE OR REPLACE FUNCTION test(p1 int)
  RETURNS integer AS
$BODY$
 
EXECUTE'
 
CREATE OR REPLACE FUNCTION generated(p2 int)
RETURNS integer AS

$BODY$
   
 BEGIN
   return p2 + ' || p1 || '; 
 END;

$BODY$
  
LANGUAGE ''plpgsql'' VOLATILE
  ';
 
 ERROR: syntax error at or near "BEGIN"
SQL state: 42601
Character: 156 
 
 
Thanks,
 
Marc 
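
The error comes from reusing the same $BODY$ delimiter inside the outer function: the parser ends the outer body at the first nested $BODY$. Nesting distinct dollar-quote tags avoids this. A sketch (hypothetical tag names $outer$/$inner$, and a BEGIN/END added so both functions are valid plpgsql):

```sql
CREATE OR REPLACE FUNCTION test(p1 int)
  RETURNS integer AS
$outer$
BEGIN
  -- The generated function uses its own dollar-quote tag, so it does
  -- not terminate the enclosing function body.
  EXECUTE '
    CREATE OR REPLACE FUNCTION generated(p2 int)
    RETURNS integer AS
    $inner$
    BEGIN
      RETURN p2 + ' || p1 || ';
    END;
    $inner$
    LANGUAGE plpgsql VOLATILE';
  RETURN 0;
END;
$outer$
LANGUAGE plpgsql VOLATILE;
```

After SELECT test(5), calling generated(1) should return 6.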


[SQL] EXECUTE 'EXECUTE ...' or how to use dynamic prepared statements ?

2007-09-04 Thread Marc Mamin

Hello,
 
I have a large update to perform on tables which are dynamically
generated (dynamic names).
 
In this simplified example, the operation should replace, in each family,
the "mygroup" of the item with rang=0 by the "mygroup" value of the
item with rang=1:
(the * indicates the modified values)
 
id  family  rang  mygroup
 
 1    1      0     1
 2    1      1     2
 3    1      2     3
 4    1      3     4
 
 5    2      0     6
 6    2      1     6
 7    2      2     7
 8    2      3     7
 
 9    3      0    10
10    3      1    20
11    3      2    21
 
After the update:
 
 1    1      0     2 *
 2    1      1     2
 3    1      2     3
 4    1      3     4
 
 5    2      0     6
 6    2      1     6
 7    2      2     7
 8    2      3     7
 
 9    3      0    20 *
10    3      1    20
11    3      2    21
 

In the following function, I would like to use a prepared statement for
the update command, but I get stuck with the two different meanings of
EXECUTE ...
 
Is there a way to achieve this ?
 
Thanks,
 
Marc
 
 
 

CREATE OR REPLACE FUNCTION test_function(tablename varchar)
  RETURNS integer AS
$BODY$
 
DECLARE
rec record;
top_group int;
top_family character(16);
top_id int;
 
BEGIN
 
   /*
   the prepared statement must be generated dynamically in order to
include the table name.
   */
   EXECUTE 'PREPARE update_stmt (int, int) AS
   update '||tablename||' set mygroup= $1 where id = $2';
 

   
   /*
   using "select distinct on" allows to retrieve and sort the required
information for the update.
   this is faster than a self join on the table
   */
   for rec in execute 
  'select DISTINCT  on (family,rang) 
  family, rang, mygroup, id
  from '||tablename||'
  where  rang < 2
  order by family, rang'
   
   loop
 
 IF rec.rang = 0 THEN
 
   top_group  := rec.mygroup;
   top_family := rec.family;
   top_id := rec.id;
 
 ELSIF rec.family = top_family AND rec.mygroup <> top_group THEN
 
 
   /*
   Update without using a prepared statement
   EXECUTE 'update '||tablename||' set mygroup=
'||rec.mygroup||' where id = '||top_id;
   */
   
   -- This works, but the command has to be computed for each
iteration
   EXECUTE 'EXECUTE
update_stmt('||rec.mygroup||','||top_id||')';
 
   /*
   Following syntax would be fine
   PERFORM EXECUTE update_stmt(rec.mygroup,top_id);
   */
 
 END IF;
 

   end loop;
 
   DEALLOCATE update_stmt;
 
   RETURN 0; 
 

END;
 
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;
 

/* 
test data:
=== */
 
--drop table test_table;
create table test_table(id int,family int,rang int,mygroup int);
 
insert into test_table values (1,1,0,1); 
insert into test_table values (2,1,1,2); 
insert into test_table values (3,1,2,3); 
insert into test_table values (4,1,3,4);
 
insert into test_table values (5,2,0,6); 
insert into test_table values (6,2,1,6); 
insert into test_table values (7,2,2,7); 
insert into test_table values (8,2,3,7);
 
insert into test_table values (9, 3,0,10); 
insert into test_table values (10,3,1,20); 
insert into test_table values (11,3,2,21);
 
select test_function('test_table');
 
select * from test_table order by id;
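
On PostgreSQL 8.4 and later, EXECUTE ... USING passes parameters directly to a dynamically built command, which removes the need for the PREPARE / EXECUTE 'EXECUTE ...' workaround entirely. A sketch of how the update step inside the loop could look (same hypothetical tablename, rec and top_id as in the function above):

```sql
-- Parameters are bound by USING, so the command string is built once
-- per call and no explicit PREPARE/DEALLOCATE is needed.
EXECUTE 'update ' || tablename || ' set mygroup = $1 where id = $2'
  USING rec.mygroup, top_id;
```

plpgsql also caches the plan of this statement across iterations, so the per-iteration string concatenation of the original workaround goes away.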


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] EXECUTE 'EXECUTE ...' or how to use dynamic prepared statements ?

2007-09-05 Thread Marc Mamin
Hello Dirk,


I have to disagree.

Your first update query is very slow: it probably runs the sub-select
statement once for each row to be updated.

Following update statement is already much faster: (using UPDATE FROM)

   update test_table
      set mygroup = t.mygroup
   from test_table as t
   where t.family = test_table.family
   and t.rang = 1
   and test_table.rang = 0
   -- perform the update only when required
   and test_table.mygroup <> t.mygroup;

But when you are dealing with "parent - child" relations within a
single table, as in my case,
a single table scan with SELECT DISTINCT ON and a row-by-row comparison
on the result set appears to be faster.

I tested both approaches on tables with ca. 14'000'000 rows where 25% of
them needed to be updated.

The above update statement ran in 5h30 whereas my function did the job in
2h.
(as my tables are very large, much time is lost in I/O wait)



Cheers,

Marc





Re: [SQL] SQL stored function inserting and returning data in a row.

2008-01-14 Thread Marc Mamin

> What about
> $$
> INSERT INTO  ;
> select currval('seq_matchmaking_session_id');
> $$ language sql;
> 
> ?

Hello,

I'm not sure that this would return the correct id in case of concurrent
calls to your function.
I'm using following kind of function to manage reference tables:

HTH,

Marc Mamin


CREATE TABLE xxx
(
  id serial NOT NULL,
  mycolumn character varying,
  CONSTRAINT xxx_pk PRIMARY KEY (id) ,
  CONSTRAINT xxx_uk UNIQUE (mycolumn)
)



CREATE OR REPLACE FUNCTION get_or_insert_id_xxx( input_value varchar)
RETURNS INT AS $$

DECLARE
  id_value int;

BEGIN
  select into id_value id from xxx where mycolumn =  input_value;
  IF FOUND THEN
return id_value;
  ELSE
    insert into xxx ( mycolumn ) values ( input_value );
    return (select id from xxx where mycolumn = input_value);
  END IF;

EXCEPTION WHEN unique_violation THEN
    return (select id from xxx where mycolumn = input_value);

END;
$$ LANGUAGE plpgsql; 
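
On PostgreSQL 9.5 and later, the same get-or-insert pattern can be written without the exception handler, using INSERT ... ON CONFLICT. A sketch against the xxx table above:

```sql
CREATE OR REPLACE FUNCTION get_or_insert_id_xxx(input_value varchar)
RETURNS int AS $$
  -- The no-op DO UPDATE makes RETURNING yield the id for both the
  -- newly inserted and the pre-existing row.
  INSERT INTO xxx (mycolumn) VALUES (input_value)
  ON CONFLICT (mycolumn) DO UPDATE SET mycolumn = EXCLUDED.mycolumn
  RETURNING id;
$$ LANGUAGE sql;
```

This is also race-safe under concurrent calls, which was the concern with the currval() approach.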



Re: [SQL] SQL question: Highest column value of unique column pairs

2008-01-14 Thread Marc Mamin

Hello Kevin,

I would use "select distinct on" to first isolate the candidates in (1)
and (2), and then reiterate the query on this sub-result:
(the query below will retrieve the last score, not the best one...)


something like (not tested):

select distinct on (name)
date, name, score
from
   (
   (select distinct on (date, LName1)
   date, LName1 as name, score1 as score
   from table
   order by date desc, LName1)
   
   union all
   
   (select distinct on (date, LName2)
   date, LName2 as name, score2 as score
   from table
   order by date desc, LName2)
   ) foo
   
order by name, date desc


regards,

Marc Mamin



 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Kevin Jenkins
Sent: Saturday, January 12, 2008 1:10 AM
To: pgsql-sql@postgresql.org
Subject: [SQL] SQL question: Highest column value of unique column pairs

Hi,

I have the following table which holds the result of 1 on 1 matches:

FName1, LName1, Score1, FName2, LName2, Score2, Date
John,   Doe,85  Bill,   Gates,  20  Jan 1.
John,   Archer, 90  John,   Doe,120 Jan 5
Bob,Barker, 70  Calvin, Klien   8   Jan 8
John,   Doe,60  Bill,   Gates,  25  Jan 3.

So columns 1 and 2 hold the first person. Column 3 holds his score. 
Columns 4 and 5 hold the second person. Column 6 holds his score.

I want to return the most recent score for each person (be they an
opponent or myself).  And the resultant table shouldn't care if they are
person 1 or 2.

So the end result would be

FName,   LName, Score, Date
John,Doe,   120Jan 5.
John,Archer 90 Jan 5.
Bob, Barker 70 Jan 8
Bill,Gates  25 Jan 3
Calvin   Klien  8  Jan 8

Thanks for any help!



Re: [SQL] dynamic events categorization

2008-06-26 Thread Marc Mamin

Hello,

I guess that the time offsets (now-21 and now-28) are evaluated each
time the corresponding condition is met.
It may be faster to put them into a separate sub query. I'm not sure
about putting "now" itself within the sub query...

It may also be better to put your query in a procedure where you can keep
these constants in variables instead of using a sub query.

Depending on the distribution of a) 2_past, 1_future, 0_current and ''
and b) t.type, it may be worth having separate queries bound with UNION
ALL. This would simplify the "CASE" construct and at least part of the
tests could happen on indexes only.


If the query is run very often, you may want to add a boolean column
is_past on show_date, and have a separate job that sets the concerned
records to true every x minutes ...


HTH,

Marc Mamin





SELECT s.id_event_subtype, s.subtype, t.id_event_type, t.type,
e.id_event, e.created_by, e.created_on, e.modified_by,
e.modified_on,
e.id_image, e.show_name, e.length, d.id_date,
d.start_date, d.end_date, d.low_price, d.high_price,
d.id_location,
d.showtime,
CASE
WHEN d.start_date <= 'now'::text::date
AND 
CASE
WHEN t.type = 'movie'::text THEN
d.start_date >= c.a
WHEN t.type = 'book'::text THEN
e.created_on >= c.b
ELSE d.end_date >=
'now'::text::date OR d.end_date IS NULL
END THEN '0_current'::text
WHEN d.start_date > 'now'::text::date
THEN '1_future'::text
WHEN d.start_date IS NOT NULL THEN
'2_past'::text
ELSE ''::text
END AS timing
FROM 
-- added sub query:
(select 'now'::text::date - 21 as a,  'now'::text::date - 28 as
b) c,
event e
NATURAL JOIN event_type2 t
LEFT JOIN event_subtype2 s USING (id_event_subtype)
LEFT JOIN show_date d USING (id_event);






Re: [SQL] exclude part of result

2008-06-27 Thread Marc Mamin

Hi,

Two other ideas...

SELECT DISTINCT p.a, p.b, p.c, now(), count(item.item_pk) 
FROM product p 
 JOIN
 (select distinct a,b,c from  products
  except
  select distinct a,b,c from  navigation
 )foo  
 USING (a,b,c)
 LEFT JOIN item ON item.product_fk = product_pk
WHERE ...
GROUP BY p.a, p.b, p.c


or maybe 

SELECT foo.*, now(), count(item.item_pk) 
FROM (select distinct a,b,c from  products
  WHERE ...
  except
  select distinct a,b,c from  navigation
 )foo  
 LEFT JOIN item ON item.product_fk = product_pk
WHERE ...
GROUP BY foo.a, foo.b, foo.c

HTH,

Marc Mamin



[SQL] varchar::timezone conversion

2008-09-22 Thread Marc Mamin

Hi,

I'm facing an issue where I build time stamps from a set of strings.


In this non-working example, I'm looking for a way to cast my strings
to the expected timestamp type and time zone...


CREATE OR REPLACE FUNCTION "public"."timestamp_to_utcms"
(varchar,varchar) RETURNS int8 AS
$body$
  SELECT EXTRACT(EPOCH FROM $1 AT TIME ZONE $2)::int8*1000;
$body$
LANGUAGE 'sql' IMMUTABLE RETURNS NULL ON NULL INPUT SECURITY INVOKER;


Thanks for your help,



Marc Mamin;
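
Casting the first argument to timestamp before applying AT TIME ZONE should make the example work, since AT TIME ZONE is not defined for varchar. A sketch:

```sql
CREATE OR REPLACE FUNCTION public.timestamp_to_utcms(varchar, varchar)
RETURNS int8 AS
$body$
  -- $1 is parsed as a timestamp, then interpreted in time zone $2.
  SELECT EXTRACT(EPOCH FROM $1::timestamp AT TIME ZONE $2)::int8 * 1000;
$body$
LANGUAGE sql IMMUTABLE RETURNS NULL ON NULL INPUT SECURITY INVOKER;

-- e.g.: SELECT timestamp_to_utcms('2008-09-22 12:00:00', 'UTC');
```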



Re: [SQL] Query how-to

2008-10-03 Thread Marc Mamin

Hi,

What about something like that ?

select adate, sum(openedCount) as openedCount, sum(closedCount) as closedCount
from
(
  select sum(case when status ='Closed' then stop_date else start_date end) as 
adate,
   sum(case when status ='Closed' then 1 else 0 end) as closedCount
   sum(case when status ='New' then 1 else 0 end) as openedCount
  from  Yourtable
  where status in ('Closed','New')
)x
group by adate
order by adate

HTH,

Marc


Re: [SQL] Query how-to

2008-10-03 Thread Marc Mamin
this was silly of me!
 
it should naturally look like this:
 

  select case when status ='Closed' then stop_date else start_date end
as adate,
 sum(case when status ='Closed' then 1 else 0 end) as
closedCount,
 sum(case when status ='New' then 1 else 0 end) as openedCount
  from  Yourtable
  where status in ('Closed','New')
  group by case when status ='Closed' then stop_date else start_date end

  order by adate

Marc

> Hi,

> What about something like that ?

> select adate, sum(openedCount) as openedCount, sum(closedCount) as
closedCount
> from
> (
>   select sum(case when status ='Closed' then stop_date else start_date
end) as adate,
>sum(case when status ='Closed' then 1 else 0 end) as
closedCount
>sum(case when status ='New' then 1 else 0 end) as openedCount
>   from  Yourtable
>   where status in ('Closed','New')
> )x
> group by adate
> order by adate



[SQL] aggregation problem: first/last/count(*)

2009-01-26 Thread Marc Mamin
Hello,

I have a query to aggregate data which is too slow :-)

Here a simplified example:

create table test 
(
time int8, --store the time as epoch
a_group  varchar,
category varchar
)


For each group, I need the first/last times and categories , the number
of distinct categories and the number of records.


Here my best solution until now:


SELECT
FIRST.a_group,
FIRST.time as first_time,
FIRST.category as first_category,
LAST.time  as last_time,
LAST.category  as last_category,
AGG.c_count,
AGG.c_all
FROM 

(
 select distinct on (a_group) 
 a_group,
 time,
 category
 from test
 order by a_group, time
) FIRST,

(
 select distinct on (a_group) 
 a_group,
 time,
 category
 from test
 order by a_group, time DESC
) LAST,

(
 select a_group, 
 count(distinct category) as c_count,
 count(*) as c_all
 from test
 group by a_group
 order by a_group 
 ) AGG
 
 where FIRST.a_group = LAST.a_group
 and LAST.a_group=AGG.a_group



each sub query is quite fast -- thanks to the DISTINCT ON feature :-) ,
but the whole is really slow as Postgres starts to swap due to the large
amount of data to merge.


I guess there must be a better solution, as the three sub queries return
exactly one row for each 'a_group'
and are moreover already sorted (the table does not contain any NULL
values).
But in the query plan below, most of the cost comes from the merges.


I imagine there must be a way using custom aggregate functions, but
I'm not confident with those:

Is it possible to define an aggregate in order to retrieve the first/last
values of an ordered result set? This would allow a single scan
of the table,
something like

select a_group,
first(category) as first_category,
last(category)  as last_category,
...
from test 
order by a_group,time


Many thanks for any hints.

Marc Mamin





Here are some dummy values if you'd like to play with this issue:

insert into test select s,'G'||s , 'C1' from(select
generate_series(1,1)as s)s;
insert into test select s+10,'G'||s , 'C2' from(select
generate_series(1,1)as s)s;
insert into test select s+13,'G'||s , 'C3' from(select
generate_series(1,1)as s)s;
insert into test select s+1,'G'||s , 'C2' from(select
generate_series(1,1,5)as s)s;
insert into test select s,'G'||s%10 , 'C3' from(select
generate_series(1,1,5)as s)s;
insert into test select s+1,'G'||s%5 , 'C2' from(select
generate_series(1,1,5)as s)s;
insert into test select s+1,'G'||s , 'C1' from(select
generate_series(1,100)as s)s; --10^6 !!

create index test_i on test(a_group);
analyze test;

=>

Merge Join  (cost=259000.31..34904377039.75 rows=1550421099181
width=128)
  Merge Cond: ((test.a_group)::text = (last.a_group)::text)
  ->  Merge Join  (cost=129500.16..17814340.14 rows=783387153 width=120)
Merge Cond: ((test.a_group)::text = (test.a_group)::text)
->  GroupAggregate  (cost=0.00..53681.23 rows=395825 width=10)
  ->  Index Scan using test_i on test  (cost=0.00..39973.53
rows=1036043 width=10)
->  Materialize  (cost=129500.16..133458.41 rows=395825
width=72)
  ->  Unique  (cost=119965.87..125146.08 rows=395825
width=18)
->  Sort  (cost=119965.87..122555.97 rows=1036043
width=18)
  Sort Key: test.a_group, test."time"
  ->  Seq Scan on test  (cost=0.00..16451.43
rows=1036043 width=18)
  ->  Materialize  (cost=129500.16..133458.41 rows=395825 width=72)
->  Subquery Scan last  (cost=119965.87..129104.33 rows=395825
width=72)
  ->  Unique  (cost=119965.87..125146.08 rows=395825
width=18)
->  Sort  (cost=119965.87..122555.97 rows=1036043
width=18)
  Sort Key: test.a_group, test."time"
  ->  Seq Scan on test  (cost=0.00..16451.43
rows=1036043 width=18)
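
Such first/last aggregates can indeed be defined with CREATE AGGREGATE, enabling the single ordered scan asked for above. A sketch (this mirrors the approach in the postgresonline article linked in the follow-up; the ORDER BY inside the subquery is what makes "first"/"last" meaningful):

```sql
-- first(): keeps the first non-null state; last(): always takes the
-- most recent value seen.
CREATE OR REPLACE FUNCTION first_agg(anyelement, anyelement)
RETURNS anyelement AS $$ SELECT COALESCE($1, $2) $$ LANGUAGE sql IMMUTABLE;

CREATE OR REPLACE FUNCTION last_agg(anyelement, anyelement)
RETURNS anyelement AS $$ SELECT $2 $$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE first(anyelement) (sfunc = first_agg, stype = anyelement);
CREATE AGGREGATE last(anyelement)  (sfunc = last_agg,  stype = anyelement);

SELECT a_group,
       first(time)     AS first_time,
       first(category) AS first_category,
       last(time)      AS last_time,
       last(category)  AS last_category,
       count(distinct category) AS c_count,
       count(*)                 AS c_all
FROM (SELECT * FROM test ORDER BY a_group, time) ordered
GROUP BY a_group;
```

Caveat: relying on subquery ordering to feed the aggregates works in practice but is not guaranteed by the SQL standard; on 9.0+ the explicit `first(category ORDER BY time)` aggregate syntax is the safe form.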











Re: [SQL] aggregation problem: first/last/count(*)

2009-01-26 Thread Marc Mamin
 

> I think the problem in here is that you want to collect the first and
last values in the same row


Your idea is ok, but it just postpones the problem. And I need the result
within the DB for further calculations / aggregations.

What I need is really something like:

test=# SELECT foo.ts, foo.grp, foo.val,foo2.val
 FROM (SELECT grp, MAX(ts) AS max_ts, MIN(ts) AS min_ts
 FROM foo
 GROUP BY grp)
   AS bar
   INNER JOIN foo
   ON foo.grp = bar.grp
  AND foo.ts = bar.min_ts
   INNER JOIN foo2
   ON foo2.grp = bar.grp
  AND foo2.ts = bar.max_ts 

I've tested different solutions and the DISTINCT ON clause was better.
(I guess the best solution depend of the distribution of grp and val).


I've also just found aggregate functions for first/last:
http://www.postgresonline.com/journal/index.php?/archives/68-More-Aggreg
ate-Fun-Whos-on-First-and-Whos-on-Last.html


But it is slightly slower than my solution.

I'll still run a test with more data, as I guess that swapping will grow
faster with my query than with the first/last aggregate functions.


cheers,

Marc Mamin









-Original Message-
From: Volkan YAZICI [mailto:yazic...@ttmail.com] 
Sent: Monday, January 26, 2009 4:27 PM
To: Marc Mamin
Cc: pgsql-sql@postgresql.org
Subject: Re: aggregation problem: first/last/count(*)

On Mon, 26 Jan 2009, "Marc Mamin"  writes:
> create table test
> (
> time int8, --store the time as epoch
> a_group  varchar,
> category varchar
> )
>
> ...
>
> SELECT
> FIRST.a_group,
> FIRST.time as first_time,
> FIRST.category as first_category,
> LAST.time  as last_time,
> LAST.category  as last_category,
> AGG.c_count,
> AGG.c_all
> FROM
> ...

I think the problem in here is that you want to collect the first and
last values in the same row. Instead, splitting them into two sequential
rows would suit better to your database schema design, and you can
rebuild the data structure as you want in the application tier later.
For instance, consider below example:

test=# SELECT ts, grp, val FROM foo;
 ts | grp | val
+-+-
  1 |   1 |   1
  2 |   1 |   2
  3 |   1 |   3
  4 |   2 |   1
  4 |   2 |   2
  5 |   3 |   1
(6 rows)

test=# SELECT foo.ts, foo.grp, foo.val
 FROM (SELECT grp, MAX(ts) AS max_ts, MIN(ts) AS min_ts
 FROM foo
 GROUP BY grp)
   AS bar
   INNER JOIN foo
   ON foo.grp = bar.grp
  AND (foo.ts = bar.min_ts OR foo.ts = bar.max_ts);  ts | grp |
val
+-+-
  1 |   1 |   1
  3 |   1 |   3
  4 |   2 |   1
  4 |   2 |   2
  5 |   3 |   1
(5 rows)

After receiving above output, you can traverse returned rows one by one
in the application layer and output desired results.


Regards.



[SQL] select regexp_matches('a a a', '([a-z]) a','g');

2009-05-07 Thread Marc Mamin
Hello,

I wonder if someone has an idea for this problem:

I have a string that contains a series of chars, separated by single
spaces.

e.g. 'a b x n r a b c b'

Having such a string, I'd like to get a list of all predecessors of a
given character.
In the example, the predecessors of b are a, a, c.

If I now have the string 'a a a', the predecessors of 'a' are a, a.

I tried to use regexp_matches for this:

select regexp_matches('a a a', '([a-z]) a','g');
=> {"a "} only

As the second parameter of the function matches the first two 'a's,
only the trailing ' a' will be used to seek further matches...

Cheers,

Marc Mamin
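
A zero-width lookahead keeps the separator and the following character out of the consumed match, so overlapping occurrences are all found. A sketch (PostgreSQL's advanced regular expressions support `(?=...)`):

```sql
-- (?= a) asserts that ' a' follows, without consuming it,
-- so the engine resumes scanning right after the captured char.
SELECT regexp_matches('a a a', '([a-z])(?= a)', 'g');

-- For the original question, all predecessors of 'b':
SELECT regexp_matches('a b x n r a b c b', '([a-z])(?= b)', 'g');
```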



[SQL] constraint/restrict

2001-02-13 Thread Olaf Marc Zanger

hi there,

with two tables i want to set up some constraint restrictions

create table address ( id serial, country_id int4, ... );
and
create table country ( id serial, ... );

to make sure that no country row is deleted if there is still a matching
country_id in the address table.

e.g.

address: 1, 2, ...
country: 2, ...

now country 2 wouldn't be allowed to be deleted.

how to do that?

thanks fo help

olaf
-- 
soli-con Engineering Zanger, Dipl.-Ing. (FH) Olaf Marc Zanger
Lorrainestrasse 23, 3013 Bern / Switzerland
fon:+41-31-332 9782, mob:+41-76-572 9782
mailto:[EMAIL PROTECTED], http://www.soli-con.com
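
A foreign key with ON DELETE RESTRICT (the default NO ACTION behaves similarly) enforces exactly this. A sketch:

```sql
CREATE TABLE country (
  id serial PRIMARY KEY
);

CREATE TABLE address (
  id serial PRIMARY KEY,
  country_id int4 REFERENCES country (id) ON DELETE RESTRICT
);

-- DELETE FROM country WHERE id = 2;
-- fails with a foreign-key violation while any address row
-- still references country 2.
```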



[SQL] cannot create sequence

2001-02-14 Thread Olaf Marc Zanger

hi there,


as it seems, postgresql 7.0 has trouble creating

ver_id_seq

and 

own_id_seq

there is nothing visible with \dt \ds \di

are these names reserved for other use?

thanks for help 

olaf


-- 
soli-con Engineering Zanger, Dipl.-Ing. (FH) Olaf Marc Zanger
Lorrainestrasse 23, 3013 Bern / Switzerland
fon:+41-31-332 9782, mob:+41-76-572 9782
mailto:[EMAIL PROTECTED], http://www.soli-con.com



[SQL] sum(bool)?

2001-02-24 Thread Olaf Marc Zanger

hi there,

i want to add up the "true" values of a comparison like

sum(a>b)

it just doesn't work like this

any workaround?

it is postgresql 7.0 under linux

thanks

olaf



-- 
soli-con Engineering Zanger, Dipl.-Ing. (FH) Olaf Marc Zanger
Lorrainestrasse 23, 3013 Bern / Switzerland
fon:+41-31-332 9782, mob:+41-76-572 9782
mailto:[EMAIL PROTECTED], http://www.soli-con.com
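
Two common workarounds: wrap the comparison in a CASE expression (works on 7.0), or on later versions cast the boolean to integer or filter in WHERE. A sketch (hypothetical table t with columns a and b):

```sql
-- Portable: count rows where a > b.
SELECT sum(CASE WHEN a > b THEN 1 ELSE 0 END) FROM t;

-- On later PostgreSQL versions, equivalently:
SELECT sum((a > b)::int) FROM t;
-- or simply:
SELECT count(*) FROM t WHERE a > b;
```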




[SQL] a tricky one

2001-02-24 Thread Olaf Marc Zanger

hi there, 

something brainboggling :-)

three tables:
fac with:
 id  | integer | not null default nextval('fac_id_seq'::text)
pre with:
  id  | integer | not null default nextval('pre_id_seq'::text)
 fac_id  | integer |
 date| date|  
 production   | float8|  

prd with:
 id| integer | not null default nextval('prd_id_seq'::text)
 date  | date|
 fac_id| integer |
 prediction   | float8 |

the last two have constraints as foreign keys  "fac_id int4 references fac 
(id)"

pre has only one row per month
prd has one value per day

if i do a 

select

count(fac.id)
 as fac_id,
sum(pre.production)
 as pre_production,
sum(prd.prediction)
 as prd_prediction

from
fac,
pre,
prd

where
date_part('year',timestamp(prd.date))=date_part('year',timestamp(pre.date))
and
date_part('month',timestamp(prd.date))=date_part('month',timestamp(pre.date))
and
pre.fac_id=fac.id
and
prd.fac_id=fac.id

group by
fac.id

what i actually want is a result that shows the summed production of a year 
and the summed prediction of a year at the same time 


i got values of 365 for fac_id before i had the constraints :-), which was 
perfectly fine.

now it shows 
372 (like 31x12) if i keep it as is
4392 (like 366x12) if i omit the month-where-clause.

what's wrong?

can anybody help, this goes over my know-how :-)
happy weekend

and thanks for the great help lately from the list

olaf

-- 
soli-con Engineering Zanger, Dipl.-Ing. (FH) Olaf Marc Zanger
Lorrainestrasse 23, 3013 Bern / Switzerland
fon:+41-31-332 9782, mob:+41-76-572 9782
mailto:[EMAIL PROTECTED], http://www.soli-con.com
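
The inflated counts come from the cross join: each monthly pre row pairs with every matching daily prd row in the same month, so both sums (and the count) are multiplied by the number of matches. Aggregating each table separately before joining avoids the fan-out. A sketch:

```sql
-- Aggregate pre and prd independently per facility, then join the
-- one-row-per-facility results, so no multiplication occurs.
SELECT f.id,
       p1.production_sum,
       p2.prediction_sum
FROM fac f
JOIN (SELECT fac_id, sum(production) AS production_sum
      FROM pre GROUP BY fac_id) p1 ON p1.fac_id = f.id
JOIN (SELECT fac_id, sum(prediction) AS prediction_sum
      FROM prd GROUP BY fac_id) p2 ON p2.fac_id = f.id;
```

A per-year breakdown would add date_part('year', date) to each subquery's GROUP BY and to the join condition.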



[SQL] 2 tables, joins and same name...

2001-08-31 Thread Marc André Paquin

Hello,

Here is 2 tables:

airport
-
airport_id
name
code
city_id

destination
---
destination_id
dest_name
...
airport_dep_id  // using airport.airport_id (departure)
airport_arr_id  // using airport.airport_id  has well (arrival)

I have 2 columns in the second table that uses the same name column in
the first table...

I don't know how to formulate my SQL query... I want to select the
destinations in the destination table with the name of each airport,
not its ID. I can do a join with one, but with the second one I get
no results... And this is confusing!

select dest.dest_name, air.name as airport1, air.name as airport2 from
destination dest, airport air where dest.airport_dep_id=air.airport_id and
dest.airport_arr_id=air.airport_id;

This is not good...

Any help?

Thanks!
--
Marc Andre Paquin
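
The fix is to join the airport table twice, once per role, each time under its own alias; a single alias forces both conditions onto the same row, which only matches when departure and arrival airports are identical. A sketch:

```sql
-- Each alias (dep, arr) is an independent scan of airport, so the two
-- foreign keys can resolve to different rows.
SELECT dest.dest_name,
       dep.name AS departure_airport,
       arr.name AS arrival_airport
FROM destination dest
JOIN airport dep ON dest.airport_dep_id = dep.airport_id
JOIN airport arr ON dest.airport_arr_id = arr.airport_id;
```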




[SQL] List archives moved and cleaned up ...

2001-08-28 Thread Marc G. Fournier


Finally figuring that enough is enough, I've been spending the past few
days working on the list archives ...

I've reformatted, so far, the following lists into a cleaner format:

pgsql-hackers
pgsql-sql
pgsql-bugs
pgsql-general
pgadmin-hackers
pgadmin-support

With more lists to be worked on over the next few days ...

Major changes include the following:

Replaced the wide banner in the center with two smaller, 120x120
banners in the corners ...

Provide a search facility incorporated into each page that
searches the mhonarc pages themselves ...

Change the colors to better match the main site ...

Moved the archives to its own URL/Domain so that it is no
longer part of the general mirror of the site ...

There is still a lot of work that I'm planning to do on this, but I want
to get all of the archives moved first ...

To access any of the archives that have been moved, go to:

http://archives.postgresql.org/

I've been modifying the links from the main web site for those lists that
I've moved, as I've moved them, so getting there through 'normal channels'
should also work ...

Once finished, there will also be links to the OpenFTS search facility
that we have online, which uses a different way of formatting/displaying
the messages, so you will have the choice of using either ...





[SQL] pgsql-performance mailing list / newsgroup created

2002-09-09 Thread Marc G. Fournier


Morning all ...

Josh Berkus the other day shook my cage a bit and reminded me to
create the -performance list that had been discussed awhile back ... so I
did :)

[EMAIL PROTECTED]

or

comp.databases.postgresql.performance

archives won't show it yet, still have to reconfigure all of that
stuff, but the list is there and ready to go ... or should be.  If there
are any problems, please let me know ...







[SQL] Testing gateway

2003-08-14 Thread Marc G. Fournier

In theory, the news2mail gateway is back in place ...




[SQL] removing precision from timestamp (microseconds) ..

2003-10-14 Thread Marc G. Fournier

From the docs, if you do:

traffic=# select CURRENT_TIMESTAMP(0);
      timestamptz
------------------------
 2003-10-13 11:04:09-03
(1 row)

the 0 reduces the precision of the time to get rid of the microseconds ...
is there a way of having this done by default, ideally on a per-connection
basis?  For instance, I want to get rid of the microseconds
from:

traffic=# select now();
              now
-------------------------------
 2003-10-13 11:02:20.837124-03
(1 row)
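
Besides the per-call precision argument, a cast or date_trunc gives the same effect in any expression, without changing server defaults. A sketch:

```sql
-- Round to whole seconds via the typmod cast:
SELECT now()::timestamptz(0);

-- Or truncate (rather than round) to the second:
SELECT date_trunc('second', now());
```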




Re: [SQL] off subject - pg web hosting

2003-11-08 Thread Marc G. Fournier

http://www.hub.org
http://www.pghoster.com
http://www.commandprompt.com

On Thu, 6 Nov 2003, chester c young wrote:

> can anybody recomend web hosting that provides postgresql?  I have
> found a couple, but their pricing is several times the going rate using mySql.
>
> __
> Do you Yahoo!?
> Protect your identity with Yahoo! Mail AddressGuard
> http://antispam.yahoo.com/whatsnewfree
>


Re: [SQL] [ADMIN] Field Size

2003-12-09 Thread Marc A. Leith


If you define the field as CHAR(1000), 1000 bytes will be consumed.



If you define the field as VARCHAR, only sufficient bytes to store the contents will be used.



Marc



--Original Message Text---

From: Ganesan Kanavathy

Date: Mon, 8 Dec 2003 20:56:06 +0800






Can anyone tell me, what harm would setting a field size to 1000 characters do to the size?  



   



By increasing the size, will the database require more space? Or does it only consume space if there is data in the field only?  



   



Regards,  



Ganesan  



   












Marc A. Leith
President
redboxdata inc.

e-mail: [EMAIL PROTECTED]
cell:(416) 737 0045


[SQL] simple LEFT JOIN giving wrong results ...

2003-12-11 Thread Marc G. Fournier

I've got to be missing something obvious here ... I have two tables, one
containing traffic stats, the other disk usage ... I want to do a JOIN in
one query to give me the sum of traffic and the average of storage ...
separately, the results are right ... put together, the traffic values are
way off, while storage is still correct ...

So, I'm doing my LEFT JOIN wrong *somehow*, but it's eluding me as to
what/how .. :(

ams=# select ct.ip_id, sum(ct.bytes) as traffic,
 avg(cs.bytes)::bigint as storage
from company_00186.traffic ct
   left join company_00186.storage cs ON ( ct.ip_id = cs.ip_id AND
   month_trunc(cs.runtime) = '2003-12-01')
   where month_trunc(ct.runtime) = '2003-12-01' group by ct.ip_id;
 ip_id |   traffic|   storage
---+--+-
  1194 | 76761728 |  1839676259
  1226 |   5744576925 |
  1134 | 17042528 |24794553
  1089 | 311779796360 | 10814211187
  1200 |  82535202840 |  3165073628
  1088 |   1969333472 |  2119206061
  1227 |  44816947957 |  4891683299
  1179 |   3867502285 |
(8 rows)

where, individually, the results should be:

ams=# select ip_id, avg(bytes)::bigint from company_00186.storage
   where month_trunc(runtime) = '2003-12-01' group by ip_id;
 ip_id | avg
---+-
  1227 |  4891683299
  1255 |   0
  1134 |24794553
  1194 |  1839676259
  1089 | 10814211187
  1088 |  2119206061
  1200 |  3165073628
(7 rows)

and

ams=# select ip_id, sum(bytes) from company_00186.traffic
   where month_trunc(runtime) = '2003-12-01' group by ip_id;
 ip_id | sum
---+-
  1194 | 9595216
  1226 |  5744576925
  1134 | 2130316
  1089 | 38972474545
  1200 | 10316900355
  1088 |   246166684
  1227 | 44816947957
  1179 |  3867502285
(8 rows)


the storage/avg values come out right in the JOIN, but the traffic/sum
values get royally screwed ...
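The inflated traffic sums are the classic join fan-out: each traffic row is repeated once per matching storage row before sum() runs. One way to avoid it, keeping the post's table and column names, is to aggregate each side first and then join the results (a sketch, not tested against the original schema):

```sql
-- Aggregate before joining so traffic rows are not duplicated by the storage match
SELECT t.ip_id, t.traffic, s.storage
FROM (SELECT ip_id, sum(bytes) AS traffic
        FROM company_00186.traffic
       WHERE month_trunc(runtime) = '2003-12-01'
       GROUP BY ip_id) AS t
LEFT JOIN
     (SELECT ip_id, avg(bytes)::bigint AS storage
        FROM company_00186.storage
       WHERE month_trunc(runtime) = '2003-12-01'
       GROUP BY ip_id) AS s
  ON s.ip_id = t.ip_id;
```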


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [SQL] simple LEFT JOIN giving wrong results ...

2003-12-11 Thread Marc G. Fournier
On Thu, 11 Dec 2003, Tom Lane wrote:

> "Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> > So, I'm doing my LEFT JOIN wrong *somehow*, but its eluding me as to
> > what/how .. :(
>
> > ams=# select ct.ip_id, sum(ct.bytes) as traffic,
> >  avg(cs.bytes)::bigint as storage
> > from company_00186.traffic ct
> >left join company_00186.storage cs ON ( ct.ip_id = cs.ip_id AND
> >month_trunc(cs.runtime) = '2003-12-01')
> >where month_trunc(ct.runtime) = '2003-12-01' group by ct.ip_id;
>
> I suspect you do not want the month_trunc constraint to be included
> in the JOIN ON condition, only in WHERE.

'k, but then wouldn't that take in all storage for all dates, since I'm only
then joining on the ip_id?  Right now, I only have storage #s for Dec, so
it wouldn't make any difference for this one, but ..

results are still way off though, even with removing it:

 ip_id |   traffic|   storage
---+--+-
  1088 |   1979325872 |  2119206061
  1200 |  84004842024 |  3165073628
  1227 |  45591571353 |  4891683299
  1179 |   3893192839 |
  1194 | 77360968 |  1839676259
  1134 | 17357504 |24794553
  1226 |   5836213601 |
  1089 | 315424415080 | 10814211187
(8 rows)

By changing the query to:

ams=# select ip_id, sum(bytes),
 (select avg(bytes)
from company_00186.storage cs
   where month_trunc(runtime) = '2003-12-01'
 and cs.ip_id = ct.ip_id)::bigint as storage
from company_00186.traffic ct
   where month_trunc(runtime) = '2003-12-01' group by ip_id;
 ip_id | sum |   storage
---+-+-
  1194 | 9670121 |  1839676259
  1134 | 2169688 |24794553
  1226 |  5836213601 |
  1089 | 39428051885 | 10814211187
  1088 |   247415734 |  2119206061
  1200 | 10500605253 |  3165073628
  1227 | 45591571353 |  4891683299
  1179 |  3893192839 |
(8 rows)

I can get the right results again, it just doesn't seem as clean ;(

 
Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] Skip dups on INSERT instead of generating an error ...

2003-12-12 Thread Marc G. Fournier

I need to be able to run an INSERT INTO / SELECT FROM UNION which combines
two tables into one ... *but* ... the INTO table has a primary key on the
first column, so if the result of the UNION generates dups, by default, of
course, it will generate errors ... what I'd like is to have it so that it
just skips over those records.

First thought would be to write a quick plpgsql function that would do a
SELECT first, to see if the value already exists, and if not, then do the
INSERT ... but am wondering if maybe there is a cleaner way that I'm not
thinking of?
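One statement can do it without a function: filter the UNION through NOT EXISTS against the target's primary key. A sketch with hypothetical table and column names:

```sql
-- Skip rows whose key already exists in the target (names are illustrative)
INSERT INTO target (id, val)
SELECT s.id, s.val
  FROM (SELECT id, val FROM src_a
        UNION
        SELECT id, val FROM src_b) AS s
 WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
```

Note the UNION itself can still yield two rows with the same key but different values; a DISTINCT ON (id) in the subquery would be needed to guard against that case.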


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [SQL] Ok, what am I doing wrong here?

2004-06-03 Thread Marc G. Fournier
On Tue, 17 Feb 2004, Karl Denninger wrote:

> I want to insert values from one table into another, and add some "default"
> values (that are not defaults on the table different reasons - that is, this
> is maintenance function and in normal operation there would be "real" values
> there - and null is valid)
>
> So, I want to do, for example, the following:
>
> insert into table (id, time, type) values (select id, now(), '1' from secondtable);
>
> Postgres's command line pukes on this, complaining that "select" is invalid
> inside the values part of the definition.
>
> SQL's language specification says otherwise, as does "\h insert" from the
> command line.
>
> The query stand-alone returns a table with values that are valid for the
> table I wish to insert into.
>
> Where's my brain-fade on this?

INSERT INTO table (id, time, type) SELECT id, now(), '1' FROM secondtable;


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] How do FKs work?

2004-10-09 Thread Marc G. Fournier
Got a problem here, and this is a new area for me ... analyzing FKs and 
improving their performance :)

Have a table with two FKs on it ... 2 different fields in the table point 
to the same field in another table ...

When I do an 'EXPLAIN ANALYZE DELETE FROM table WHERE field = #;', it 
never comes back ... or, at least, takes a *very* long time ...

If I do a count(1) for that #, there are 1639 rows ...
Now, 'ON DELETE' is set to 'NO ACTION', so my *guess* is that all it does 
is a 'SELECT FROM table WHERE field = value' on the referenced table, to 
make sure it exists ...

Is this correct?   So, it's effectively having to do 3278 "SELECTs" against 
the REFERENCED table? (two fields have constraints on them, 1639 rows to be 
deleted) ... ?
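As the follow-ups in this thread suggest, the per-row FK-checking queries turn into sequential scans unless the referencing columns are indexed and exactly type-matched. A sketch of the usual fix, with illustrative table and column names:

```sql
-- FK checks generate one lookup per affected row; index the referencing
-- columns so each lookup is an index probe rather than a seq scan
-- (table and column names here are hypothetical)
CREATE INDEX child_ref1_idx ON child (ref1);
CREATE INDEX child_ref2_idx ON child (ref2);
```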



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: [SQL] How do FKs work?

2004-10-09 Thread Marc G. Fournier
On Sat, 9 Oct 2004, Tom Lane wrote:
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
Have a table with two FKs on it ... 2 different fields in the table point
to the same field in another table ...

When I do an 'EXPLAIN ANALYZE DELETE FROM table WHERE field = #;', it
never comes back ... or, at least, takes a *very* long time ...
Do you have indexes on the referencing columns?  Are they exactly the
same datatype as the referenced column?  You can get really awful plans
for the FK-checking queries if not.
Yup, that was my first thought ... running SELECT's joining the two tables 
on the FK fields shows indices being used, and fast times ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: [SQL] How do FKs work?

2004-10-09 Thread Marc G. Fournier
One thing I failed to note here, that is probably critical ... it's a 7.3 
database ...

On Sat, 9 Oct 2004, Tom Lane wrote:
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
Have a table with two FKs on it ... 2 different fields in the table point
to the same field in another table ...

When I do an 'EXPLAIN ANALYZE DELETE FROM table WHERE field = #;', it
never comes back ... or, at least, takes a *very* long time ...
Do you have indexes on the referencing columns?  Are they exactly the
same datatype as the referenced column?  You can get really awful plans
for the FK-checking queries if not.
    regards, tom lane

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


[SQL] Easier way to 'strip' on multiple matches?

2005-01-09 Thread Marc G. Fournier
I need to strip out all spaces, and all \' from a string ... is there an 
easier way then doing:

select lower(replace(replace(name, ' ', ''), '\\\'', '')) from business;
Thanks ...
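translate() handles a whole set of characters in one pass, which reads more cleanly than nested replace() calls; a sketch of the same cleanup (quoting per standard SQL, which may differ from the escape syntax used in the post):

```sql
-- translate() deletes every character that has no counterpart in the third
-- argument: here the space and the single quote (doubled for SQL quoting)
SELECT lower(translate(name, ' ''', '')) FROM business;
```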

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


[SQL] PL/PgSQL - returning multiple columns ...

2005-02-02 Thread Marc G. Fournier
I have a function that I want to return 'server_name, avg(load_avg)' ... 
if I wanted to return matching rows in a table, I can do a 'setof 
', with a for loop inside ... but what do I set the 'RETURNS' to if 
I want to return the results of a query that returns only two fields of a 
table, or, in the case of the above, one column and one 'group by' column?

thanks ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: [SQL] PL/PgSQL - returning multiple columns ...

2005-02-03 Thread Marc G. Fournier
Perfect, worked like a charm ... but the RETURNS still needs to be a 
SETOF, other than that, I'm 'away to the races' ... thanks :)

On Wed, 2 Feb 2005, George Weaver wrote:
Hi Marc,
One option is to create a simple data type and return the rowtype of the 
datatype

eg CREATE TYPE tserverload AS ("server_name"  text,  "load_avg"  int4);
CREATE FUNCTION getserverload()
RETURNS tserverload
AS
'DECLARE
  r   tserverload%rowtype;
etc.
You would then return r, comprised of  r.server_name and r.load_avg.
George
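Filling out that sketch into a runnable set-returning function (the table name server_stats is assumed; old-style string quoting matches the PostgreSQL versions of the era):

```sql
CREATE TYPE tserverload AS (server_name text, load_avg int4);

CREATE FUNCTION getserverload() RETURNS SETOF tserverload AS '
DECLARE
  r tserverload%rowtype;
BEGIN
  FOR r IN SELECT server_name, avg(load_avg)::int4
             FROM server_stats
            GROUP BY server_name LOOP
    RETURN NEXT r;  -- emit one result row per server
  END LOOP;
  RETURN;
END;
' LANGUAGE plpgsql;
```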

- Original Message - From: "Marc G. Fournier" 
<[EMAIL PROTECTED]>
To: 
Sent: Wednesday, February 02, 2005 3:10 PM
Subject: [SQL] PL/PgSQL - returning multiple columns ...


I have a function that I want to return 'server_name, avg(load_avg)' ... if 
I wanted to return matching rows in a table, I can do a 'setof ', 
with a for loop inside ... but what do I set the 'RETURNS' to if I want to 
return the results of query that returns only two fields of a table, or, in 
the case of the above, one column and oen 'group by' column?

thanks ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


[SQL] Major flood of mail to lists ...

2005-05-31 Thread Marc G. Fournier


Due to moderator error (namely, mine), several hundred messages (spread 
across all the lists) were just approved ...


Sorry for all the incoming junk :(


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] UPDATEABLE VIEWS ... Examples?

2005-06-16 Thread Marc G. Fournier


Reading through the docs, both the CREATE VIEW and CREATE RULE pages refer 
to how you can use a RULE to 'simulate' an updateable VIEW ... but I can't 
seem to find any examples of this ...


Does anyone know of an online example of doing this that I can read 
through?


Thanks ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] Putting an INDEX on a boolean field?

2005-06-16 Thread Marc G. Fournier


Does that make sense?  Would it ever get used?  I can't see it, but 
figured I'd ask ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [SQL] Putting an INDEX on a boolean field?

2005-06-16 Thread Marc G. Fournier

On Fri, 17 Jun 2005, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

Does that make sense?  Would it ever get used?


It could get used if one of the two values is far less frequent than the
other.  Personally I'd think about a partial index instead ...


Hrm, hadn't thought of that ... wouldn't you have to build two indexes 
(one for true, one for false) for this to be completely effective?  Unless 
you know all your queries are going to search for one, but not the other?
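A sketch of the partial-index idea, assuming the true value is the rare one (table and column names are hypothetical):

```sql
-- Index only the rare rows: queries with "WHERE is_active" can use it,
-- while "WHERE NOT is_active" would indeed need a second partial index
CREATE INDEX orders_active_idx ON orders (id) WHERE is_active;
```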



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] using 'zoneinfo' to manipulate dates ...

2005-06-25 Thread Marc G. Fournier


Does anyone have a 'table of timezones' that could be used to "localize" 
as part of a query?


For instance, something like:

SELECT date
  FROM table
 WHERE ( time || ' ' || ( SELECT timezone
FROM zones
   WHERE id = table.timezone )) = '2004-12-12';

Something like this, but that works:

# select ( now() || ' ' || 'PST8PDT' )::timestamp;
ERROR:  invalid input syntax for type timestamp: "2005-06-26 00:23:29.789703-03 
PST8PDT"
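The working spelling of that last attempt is AT TIME ZONE, which converts the value instead of gluing text onto it:

```sql
-- Convert a timestamptz to local time in the named zone
SELECT now() AT TIME ZONE 'PST8PDT';
```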




Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] ARRAYs and INDEXes ...

2005-08-15 Thread Marc G. Fournier


Can't seem to find anything concerning this in the docs, and I don't think 
it is something that can be done, but figure I'll double check before I 
write it off completely ...


If I create a table:

CREATE TABLE customers (
  customer_id SERIAL,
  monthly_balance DECIMAL(7,2)[12]
);

Is it possible to create an INDEX on customers.monthly_balance such that I 
could do something like:


SELECT * FROM customers WHERE monthly_balance[6] = 0.00;

As an example ... or

SELECT * FROM customers WHERE 0.00 = any (monthly_balance);
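The first form can be supported with an expression index over the single element (PostgreSQL allows indexes on expressions; the doubled parentheses are required syntax). The = ANY(...) form generally can't use a plain btree index:

```sql
-- Index just the sixth element so "WHERE monthly_balance[6] = 0.00" can use it
CREATE INDEX customers_june_balance_idx ON customers ((monthly_balance[6]));
```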




Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] pl/PgSQL: Samples doing UPDATEs ...

2005-08-18 Thread Marc G. Fournier


I have an older PostgreSQL book here that has a chapter on pl/PgSQL in it, 
but no good samples ...


What I'm looking for is a sample of a function that returns # of rows 
updated, so that I can make a decision based on that ... does anyone know 
where I could find such (and others, would be great) online?


Thanks ...
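In pl/pgsql the row count of the last statement comes from GET DIAGNOSTICS; a sketch (the table name products is assumed, and old-style string quoting matches the era):

```sql
CREATE FUNCTION bump_prices() RETURNS integer AS '
DECLARE
  n integer;
BEGIN
  UPDATE products SET price = price * 1.1;
  GET DIAGNOSTICS n = ROW_COUNT;  -- rows touched by the UPDATE above
  RETURN n;                       -- caller can branch on this count
END;
' LANGUAGE plpgsql;
```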


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] PL/SQL Function: self-contained transaction?

2005-08-22 Thread Marc G. Fournier


In PostgreSQL, as everyone knows, a QUERY == a transaction, unless wrapped 
in a BEGIN/END explicitly ... how does that work with a function?  Is 
there an implicit BEGIN/END around the whole function, or around each QUERY 
within the function itself?


If the whole function (and all QUERYs inside of it) are considered one 
transaction, can you do a begin/end within the function itself to 'force' 
commit on a specific part of the function?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [SQL] [HACKERS] ALTER TABLE ( smallinto -> boolean ) ...

2005-09-01 Thread Marc G. Fournier


Moved off of -hackers, since it's long gotten out of that realm :)

On Thu, 1 Sep 2005, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

On Mon, 29 Aug 2005, Tom Lane wrote:

No, because there's no built-in cast from smallint to bool.



'k, I just took a read through the "CREATE CAST" man page, and don't think
I can use that for this,


Sure you can.  Make a SQL or PLPGSQL function that does the conversion
you want and then create a cast using it.


Ah, okay, I just re-read the man page and think I stumbled upon what I 
overlooked the first time ...


all I want to do is:

CREATE CAST ( 0 AS boolean )
   WITH FUNCTION 
 AS ASSIGNMENT;

And then each time I try to insert a '0' into a BOOLEAN field, it will 
auto convert that (based on my function) to 'f' ...


And I'd need to do a second one for 1 -> 't' ...

Am I reading it right this time ... ?
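Roughly, though CREATE CAST takes a pair of types rather than a literal, so one cast covers both 0 and 1; a sketch of the smallint-to-boolean direction (the helper function name is made up):

```sql
-- Helper doing the actual conversion: 0 -> false, anything else -> true
CREATE FUNCTION smallint_to_bool(smallint) RETURNS boolean AS
  'SELECT $1 <> 0' LANGUAGE sql IMMUTABLE;

-- One cast per type pair, not per value; AS ASSIGNMENT makes it fire
-- implicitly when a smallint is inserted into a boolean column
CREATE CAST (smallint AS boolean)
  WITH FUNCTION smallint_to_bool(smallint)
  AS ASSIGNMENT;
```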


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] a "catch all" type ... such a thing?

2005-09-08 Thread Marc G. Fournier


Are there any data types that can hold pretty much any type of character? 
UTF-16 isn't supported (or it's missing from the docs), and UTF-8 doesn't 
appear to have a big enough range ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] convert timezone to string ...

2005-10-24 Thread Marc G. Fournier


I know that the server knows that ADT == -0400, and AST == -0300 ... is 
there any way of reversing that?  Basically, I want to say:


SELECT timezone_str(-0400, 'not dst');

and have it return ADT ... I've got a method of doing it right now, using 
a function, but just find it looks so messy, just wondering if there is a 
clean way of doing it ...


Thanks ...



Re: [SQL] convert timezone to string ...

2005-10-25 Thread Marc G. Fournier

On Tue, 25 Oct 2005, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

I know that the server knows that ADT == -0400, and AST == -0300 ...


Other way around isn't it?  Unless Canada observes a pretty strange
variety of daylight saving time ;-)


I knew I was going to get that backwards :(


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] pl/* overhead ...

2005-10-25 Thread Marc G. Fournier


Does anyone know of, or have, any comparisions of the overhead going with 
something like pl/perl or pl/php vs using pl/pgsql?


Thanks ...


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] padding an integer ...

2005-10-25 Thread Marc G. Fournier


Is there any way of "padding" an integer, similar to how, in perl, I would 
do:


printf("%03d", 1);

to get:

001

Specifically, I'm looking to do this in a pl/pgsql function ... like to 
avoid moving to pl/php or pl/perl if I can ... but, from what I've been 
able to find, I suspect I'm not going to have much of a choice ...
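to_char covers this without leaving pl/pgsql; the FM prefix suppresses the leading space that the numeric format otherwise reserves for a sign:

```sql
SELECT to_char(1, 'FM000');  -- '001', the equivalent of printf("%03d", 1)
```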



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [SQL] pl/* overhead ...

2005-10-26 Thread Marc G. Fournier

On Wed, 26 Oct 2005, Michael Fuhr wrote:


On Wed, Oct 26, 2005 at 12:58:13AM -0300, Marc G. Fournier wrote:

Does anyone know of, or have, any comparisions of the overhead going with
something like pl/perl or pl/php vs using pl/pgsql?


Benchmark results will probably depend on the type of processing
you're doing.  I'd expect PL/pgSQL to be faster at database operations
like looping through query results, and other languages to be faster
at non-database operations like text munging and number crunching,
depending on the particular language's strengths.

[Does quick test.]

Whale oil beef hooked.  PL/pgSQL just outran PL/Perl when I expected
the latter to win.  Hang on, let me play with it until it comes back
with the results I want


'k, let's rephrase the question :)

Overall, I'd expect pl/pgsql to have less overhead, since it's "built into" 
the server ... in the case of something like pl/php or pl/perl, assuming 
that I don't use any external modules, is it just as 'built in', or am I 
effectively calling an external interpreter each time I run that function?


For instance, if there wasn't something like to_char() (thanks for 
pointing that one out), then I could write a simple pl/perl function that 
'simulated' it, but itself did no db queries, just a simple:


RETURN sprintf("%04d", intval);

Don't know if that made much more sense ... ?



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] # of 5 minute intervals in period of time ...

2005-12-13 Thread Marc G. Fournier


Is there a simpler way of doing this then:

select (date_part('epoch', now()) -
date_part('epoch', now() - '30 days'::interval)) / ( 5 *  60 );
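Slightly simpler: extract the epoch of the interval itself, rather than subtracting two timestamps:

```sql
-- Number of 5-minute slots in 30 days
SELECT extract(epoch FROM interval '30 days') / (5 * 60);  -- 8640
```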


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] FOREIGN KEYs ... I think ...

2006-01-04 Thread Marc G. Fournier


I'm not sure if I'm looking at (for) the right thing or not, but if I am, 
then I'm not finding any useful examples :(


I have two tables, simplified as:

CREATE TABLE incident_summary (
  id serial,
  subject text,
  status boolean
);

CREATE TABLE incident_comments (
  id serial,
  incident_id int4,
  body text,
  comment_date timestamp,
  status boolean
);

Now, what I want to do is add a FOREIGN KEY (again, I think) that when 
incident_summary.status is changed (either closed, or reopened), the 
associated records in incident_comments are changed to the same state ...


It *looks* like it should be simple enough, I want 
incident_comments.status to change to incident_summary.status whenever 
incident_summary.status changes ... since I'm finding nothing searching on 
FOREIGN KEYS, I'm guessing that I'm looking at the wrong thing ...


So, what should I be searching on / reading for this one?  Pointers 
preferred, especially one with some good examples :)


Thanks ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [SQL] FOREIGN KEYs ... I think ...

2006-01-04 Thread Marc G. Fournier

On Wed, 4 Jan 2006, Tom Lane wrote:


"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

Now, what I want to do is add a FOREIGN KEY (again, I think) that when
incident_summary.status is changed (either closed, or reopened), the
associated records in incident_comments are changed to the same state ...


Why not just get rid of the status column in incident_comments, and
treat incident_summary.status as the sole copy of the state?  When you
need to get to it from incident_comments, you do a join.


I may end up getting to that point ...


The foreign key you really ought to have here is from
incident_comments.incident_id to incident_summary.id
(assuming that I've understood your schema correctly).


'k, where I'm getting lost here is how do I get status changed in 
_comments on UPDATE of incident_summary.id?  There doesn't seem to be 
anything for ON UPDATE to 'run SQL query' or some such ... or I'm reading 
old docs :(  This is the part that I'm having a bugger of a time wrapping 
my head around ...
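What does this is a trigger rather than a foreign key; a sketch against the schema from the first post (old-style quoting; IS DISTINCT FROM assumes 7.4 or later; untested):

```sql
-- Propagate a status change on the summary row to its comments
CREATE FUNCTION sync_comment_status() RETURNS trigger AS '
BEGIN
  IF NEW.status IS DISTINCT FROM OLD.status THEN
    UPDATE incident_comments
       SET status = NEW.status
     WHERE incident_id = NEW.id;
  END IF;
  RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER incident_status_sync
  AFTER UPDATE ON incident_summary
  FOR EACH ROW EXECUTE PROCEDURE sync_comment_status();
```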



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



Re: [SQL] FOREIGN KEYs ... I think ...

2006-01-05 Thread Marc G. Fournier


Thanks to everyone for the responses ... ended up doing a trigger on the 
comments table that updates another table to maintain a "pointer" to the 
active record ... sped up the query that was hampering us from ~26,000ms 
to 47ms ... the killer part of the query was that each time it was having 
to figure out the 'active remark record' doing a 'max(create_time)' ...




On Wed, 4 Jan 2006, Marc G. Fournier wrote:



I'm not sure if I'm looking at (for) the right thing or not, but if I am, 
then I'm not finding any useful examples :(


I have two tables, simplified as:

CREATE TABLE incident_summary (
 id serial,
 subject text,
 status boolean
);

CREATE TABLE incident_comments (
 id serial,
 incident_id int4,
 body text,
 comment_date timestamp,
 status boolean
);

Now, what I want to do is add a FOREIGN KEY (again, I think) that when 
incident_summary.status is changed (either closed, or reopened), the 
associated records in incident_comments are changed to the same state ...


It *looks* like it should be simple enough, I want incident_comments.status 
to change to incident_summary.status whenever incident_summary.status changes 
... since I'm finding nothing searching on FOREIGN KEYS, I'm guessing that 
I'm looking at the wrong thing ...


So, what should I be searching on / reading for this one?  Pointers 
preferred, especially one with some good examples :)


Thanks ...



Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664





Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] Update counter when row SELECT'd ... ?

2006-03-20 Thread Marc G. Fournier


I have a simple table:

name, url, counter

I want to be able to do:

SELECT * FROM table ORDER BY counter limit 5;

But, I want counter to be incremented by 1 *if* the row is included in 
that 5 ... so that those 5 basically move to the bottom of the list, and 
the next 5 come up ...


I've checked CREATE TRIGGER, and that doesn't work 'ON SELECT' ... is 
there anything that I *can* do, other than fire back an UPDATE based on 
the records I've received?


Thanks ...
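Short of a trigger, later PostgreSQL releases (8.2 and up) can fold the two steps into one statement with UPDATE ... RETURNING (the table name links stands in for the post's table):

```sql
-- Bump the counter on the five least-used rows and return them in one pass
UPDATE links
   SET counter = counter + 1
 WHERE name IN (SELECT name FROM links ORDER BY counter LIMIT 5)
RETURNING name, url, counter;
```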


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



[SQL] Reverse Index ... how to ...

2006-04-05 Thread Marc G. Fournier


I'm still searching through Google and whatnot, but not finding anything 
off the bat ... is there some way of creating a 'REVERSE INDEX' on a 
column in a table?


For instance, when I do a 'CLUSTER' to sort a table based on an INDEX, I'd 
like to sort it in reverse order, so would need the INDEX to go from 
'GREATEST to LOWEST', vs 'LOWEST to GREATEST' ...
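Later versions (8.3 and up) support descending key order directly in CREATE INDEX, which CLUSTER can then follow (modern CLUSTER ... USING syntax; names are illustrative):

```sql
-- Build the index greatest-first, then physically order the table by it
CREATE INDEX t_col_desc_idx ON t (col DESC);
CLUSTER t USING t_col_desc_idx;
```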


Thx


Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664



  1   2   >