[SQL] photos and OID's

2001-05-24 Thread Mark

Hi all,

Thanks for the great product.

We store photos in the db and serve them up from web servers.  We're
using java servlets on separate web server boxes that are load balanced.
NFS is not an option for us (unless nothing else is available).  This
works with byte streams using SQL Server from MS.  But we don't want to
use SQL Server any more; we want to use PostgreSQL.

We need to get the photo data from the db without any file system
access.  In other words, we need the bytes, not the OID.

So I read docs and found lo_import/lo_export.  But that would only work
for us over NFS and would cause major complications.  lo_export won't
cut it for us unless we make some major changes.  Can I use the standard
JDBC Blob type to retrieve the photos?  What other options do I have?

Thanks,

Mark


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
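
A common alternative to large objects for this use case is a bytea column,
which keeps the image bytes in an ordinary table row so any client connection
can fetch them without touching the server's file system; a minimal sketch
(table and column names are illustrative, and JDBC handling of bytea varies
by driver version):

```sql
-- Store photos inline as raw bytes instead of a large-object OID
CREATE TABLE photos (
    id    integer PRIMARY KEY,
    image bytea    -- the photo data itself, not a reference to a server file
);

-- A servlet can then select the bytes directly over JDBC, e.g.
--   ResultSet rs = st.executeQuery("SELECT image FROM photos WHERE id = 1");
--   byte[] data = rs.getBytes("image");   -- no NFS or lo_export involved
SELECT image FROM photos WHERE id = 1;
```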



[SQL] Case Insensitive Queries

2001-05-29 Thread Mark

Is it possible to execute a query using a where clause that allows
case-insensitive comparison between a field and a text literal?

For example:

select * from account where username = 'test'

where username could be 'Test', which would be a match.  As is, this
compare is case sensitive.

I grep'd the source, but stricmp is only used for keywords, not actual
column data.

Any help would be greatly appreciated.

Thanks,

Mark





Re: [SQL] Case Insensitive Queries

2001-05-29 Thread Mark

We tried these, but they didn't work.  However, that's because username is
a bpchar and not a varchar, so it's padded with blanks.  So we tried
where lower(trim(username)) = 'test' and it works.  We'll change that
column to varchar.  The real problem was the datatype of username.

Thanks,

On 29 May 2001 12:35:53 -0400, ANDREW PERRIN wrote:
> Try:
> 
> - The ILIKE operator, for example,
> 
> SELECT * FROM account WHERE username ILIKE 'test';
> 
> - upper() or lower(), for example,
> 
> SELECT * FROM account WHERE lower(username) = 'test';
> 
> -
>Andrew J. Perrin - Assistant Professor of Sociology
> University of North Carolina, Chapel Hill
> 269 Hamilton Hall CB#3210, Chapel Hill, NC 27599-3210 USA
>[EMAIL PROTECTED] - http://www.unc.edu/~aperrin
> 
> On 29 May 2001, Mark wrote:
> 
> > Is it possible to execute a query using a where clause that allows case
> > insensitive comparison between a field and text.
> > 
> > For example:
> > 
> > select * from account where username = 'test'
> > 
> > where username could be 'Test', which would be a match.  As is, this
> > compare is case sensitive.
> > 
> > grep'd the source, but stricmp is only used for keywords and not actual
> > column data.
> > 
> > Any help would be greatly appreciated.
> > 
> > Thanks,
> > 
> > Mark
> > 
> > 
> > 
> 





Re: [SQL] Case Insensitive Queries

2001-05-30 Thread Mark

It appears that the behavior of a bpchar compare with a string literal
is not implicitly trimming the bpchar before the compare, which IMHO is
incorrect behavior.  Is my opinion valid?  If so, how difficult a fix
would this be in terms of time and effort?  Should I submit a bug report
to another list, or is a developer receiving this?  Is this a feature?

This is an important issue for me, because I am converting a db from MS
SQL to postgresql.  The MS SQL database uses bpchar (or just char in MS
SQL terms) because performance is slightly better; the compares
automatically trim the blanks off of the char at compare time.  I have
over 150 tables to work with, and I would rather not have to change them
from bpchar to varchar, not to mention the performance decrease this
might incur.

You might be thinking, 'just use trim(username) everywhere you compare'.
Yes, that is a solution, but not a practical one in my case.  If this is
a bug, I don't want to hack around it: I'd rather wait for the fix.
Varchars would incur performance penalties I want to try to avoid if at
all possible.

Thanks,

Mark

On 29 May 2001 09:55:18 -0700, Dan Lyke wrote:
> Mark writes:
> > Is it possible to execute a query using a where clause that allows case
> > insensitive comparison between a field and text.
> 
> select * from account where upper(username) = upper('test')
> 
> (Upper used because, as has been remarked on this list and in other places,
> folding from richer character sets is likely to get better matches this way).
> 
> And yes, you can create an index on upper(fieldname).
> 
> Dan
> 



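
Pulling the thread's suggestions together, a sketch of the
lower()/functional-index approach (table and index names are illustrative):

```sql
-- Case-insensitive match: fold both sides to one case
SELECT * FROM account WHERE lower(username) = lower('Test');

-- A functional index lets the planner serve that predicate efficiently
CREATE INDEX account_username_lower_idx ON account (lower(username));

-- ILIKE is the operator alternative; note single quotes for string literals
SELECT * FROM account WHERE username ILIKE 'test';
```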









Re: [SQL] Case Insensitive Queries

2001-05-30 Thread Mark

On 30 May 2001 11:16:35 -0700, Stephan Szabo wrote:
> On Wed, 30 May 2001, Tom Lane wrote:
> 
> > Mark <[EMAIL PROTECTED]> writes:
> > > It appears that the behavior of a bpchar compare with a string literal
> > > is not implicitly trimming the bpchar before the compare, which IMHO is
> > > incorrect behavior.  Is my opinion valid?
> > 
> > regression=# create table foo (f1 char(20));
> > CREATE
> > regression=# insert into foo values ('zz');
> > INSERT 800569 1
> > regression=# select * from foo;
> >   f1
> > --
> >  zz
> > (1 row)
> > 
> > regression=# select * from foo where f1 = 'zz';
> >   f1
> > --
> >  zz
> > (1 row)
> > 
> > regression=#
> > 
> > You'll need to be more specific about what you're unhappy about.
> 
> Given the thread, I think the problem he's having is tied up in
> upper and lower implicitly converting to text.
> 
> select * from foo where upper(f1)='ZZ';
> gives no rows but if you put 18 spaces after the ZZ you get the
> row.
> 
> 


Could I cast from text to something else?



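
Following Stephan's diagnosis, the padding can be stripped explicitly before
the case fold; a sketch assuming f1 is char(20) as in Tom's example:

```sql
-- upper() yields text and keeps the padding blanks, so trim them first
SELECT * FROM foo WHERE upper(rtrim(f1)) = 'ZZ';
```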



Re: [SQL] Case Insensitive Queries

2001-05-30 Thread Mark


On 30 May 2001 12:53:22 -0400, Tom Lane wrote:

> 
> You are operating under misinformation about what's efficient or not.
> There are no performance penalties that I know of for varchar ... if
> anything, bpchar is the less efficient choice, at least in Postgres.
> The extra I/O costs for those padding blanks add up, and there's no
> compensatory savings anywhere.

With varchars, as I understand it (and PostgreSQL may be different), each
varchar field has a header that stores the length of that particular
entry.  Further, if a varchar field precedes another field, the system
loses the ability to use fixed-length addressing to access the field
after the varchar: it must determine on a case-by-case basis where the
next field starts, by taking the start of the varchar and adding its size
(plus the header length).  With fixed-length char there is no such
calculation, so selects and updates operate much more quickly.  Even the
PostgreSQL documentation asserts something similar to this:

'Both TEXT and VARCHAR() store only the number of characters in the
string. CHAR(length) is similar to VARCHAR(), except it always stores
exactly length characters. This type pads the value with trailing spaces
to achieve the specified length, and provides slightly faster access
than TEXT or VARCHAR().'

Perhaps I am misinformed.

> 
> In any case, if your data is really variable-length strings, forcing
> it into a datatype that doesn't match its semantics because of dubious
> micro-efficiency considerations is just plain bad database design.
> Rather than having blanks that you want to pretend aren't there, you
> should not have the blanks in the first place.  IMHO anyway.
> 

Point well taken.  If the gain from using bpchar over varchar is small
and the data really is variable length up to a maximum, the argument for
bpchar is unfounded.

So, what to make of all of this?  It depends on the performance
gain/loss of using varchar.  We originally used fixed-length chars
because of the performance gain.  We try to avoid varchars for that
reason.  Now, if postgresql is different, then we'll use varchars, as
that precisely models our data.

Thanks,
Mark





[SQL] Date manipulation

2001-05-31 Thread Mark


How does one perform date manipulation within SQL?  For example, SQL
Server has a dateadd() function that takes a date part, scalar, and the
date to manipulate.

I have a query that determines the number of days that are between now
and a particular date that looks something like this:

select datediff (dd, now (), column) as difference_in_days from ...

Anything available short of coding my own function?

Thanks,

Mark
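
In PostgreSQL the equivalent is plain date/interval arithmetic rather than a
dateadd()/datediff() pair; a sketch, assuming a date column named due_date in
a hypothetical table orders:

```sql
-- datediff(dd, now(), col): subtracting two dates yields an integer day count
SELECT due_date - CURRENT_DATE AS difference_in_days FROM orders;

-- dateadd(dd, 7, now()): add an interval instead
SELECT now() + interval '7 days';
```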





[SQL] create function atof?

2004-02-20 Thread mark
Hello,

Is it possible to create a database function that mimics the C function atof?
I'm guessing it should look something like this:

create function atof(varchar) returns float
as '??'
language 
returns null on null input;

Thanks,

Mark
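
If the goal is just the numeric value of a string, a cast wrapped in a
SQL-language function is probably enough; a sketch (note the behavior differs
from C's atof, which returns 0 on non-numeric input where this cast raises an
error):

```sql
CREATE FUNCTION atof(varchar) RETURNS float8
AS 'SELECT CAST($1 AS float8);'
LANGUAGE sql
RETURNS NULL ON NULL INPUT;

SELECT atof('3.14');  -- 3.14
```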



[SQL] Requirement for PostgreSQL Database Developer

2006-06-14 Thread Mark



Hi,

This is Mark with ProV International.  This email is in regards to the
requirement we have with one of our direct clients in San Diego, CA.

PostgreSQL Database Developer

This position involves creating tables, views, functions and stored
procedures to support front end OLTP and reporting applications.  The ideal
developer will have thorough knowledge of SQL (PL/pgSQL), experience with at
least one other PostgreSQL language (e.g. PL/Perl), and extensive experience
with complex stored procedures, code optimization, and index tuning in
PostgreSQL.

The ideal candidate will have the following qualifications:

5+ years database development with PostgreSQL
Knowledge of at least one other language in addition to PL/pgSQL, such as
PL/Perl or PL/Java.
Experience implementing PostgreSQL replication using Slony-I.
Some experience with either SQL Server 2000 or Oracle 9i/10g.
Significant background in creating complex stored procedures and SQL scripts
Understanding of database normalization concepts
Some experience in logical and physical database design and implementation
Prior experience working in a project oriented environment and meeting
deadlines under tight time constraints
Strong analytical skills
Capable of working independently with minimal supervision.

Location: San Diego, CA
Duration: 6+ months.

If you find yourself comfortable with this job profile and find it
interesting, please send me your resume in MS Word format.

Kindest Regards,
Mark, ProV International
Tampa, FL 33607
Tel 408-241-7795 Ext. 27
[EMAIL PROTECTED]
www.provintl.com


[SQL] Requirement for PostgreSQL Database Developer

2006-11-06 Thread Mark



 
Hi,

Location: San Diego, CA [You can also TeleCommute...]
Duration: 6+ months.

This is Mark with ProV International.  This email is in regards to the
requirement we have with one of our direct clients in San Diego, CA.

PostgreSQL Database Developer

This position involves creating tables, views, functions and stored
procedures to support front end OLTP and reporting applications.  The ideal
developer will have thorough knowledge of SQL (PL/pgSQL), experience with at
least one other PostgreSQL language (e.g. PL/Perl), and extensive experience
with complex stored procedures, code optimization, and index tuning in
PostgreSQL.

The ideal candidate will have the following qualifications:

5+ years database development with PostgreSQL
Knowledge of at least one other language in addition to PL/pgSQL, such as
PL/Perl or PL/Java.
Experience implementing PostgreSQL replication.
Some experience with either SQL Server 2000 or Oracle 9i/10g.
Significant background in creating complex stored procedures and SQL scripts
Understanding of database normalization concepts
Some experience in logical and physical database design and implementation
Prior experience working in a project oriented environment and meeting
deadlines under tight time constraints
Strong analytical skills
Capable of working independently with minimal supervision.

If you find yourself comfortable with this job profile and find it
interesting, please send me your resume in MS Word format.

Thanks,

Mark, ProV International
Tampa, FL 33607
Tel 408-241-7795 Ext. 27
[EMAIL PROTECTED]
www.provintl.com


[SQL] Requirement for PostgreSQL Database Developer

2006-12-18 Thread Mark


Hi,



Location: San Diego, CA [You can also TeleCommute but you have to be on the 
client side for 2 Weeks in a month]

Duration: 6+ months.



This is Mark with ProV International, This email is in regards to the 
requirement we have with one of our direct client in San Diego, CA.

 

PostgreSQL Database Developer 

This position involves creating tables, views, functions and stored procedures 
to support front end OLTP and reporting applications. The ideal developer will 
have thorough knowledge of SQL (PL/pgSQL), experience with at least one other 
PostgreSQL language (e.g. PL/Perl), and extensive experience with complex 
stored procedures, code optimization, and index tuning in PostgreSQL.

 

Ideal candidate will have the following qualifications:

5+ years database development with PostgreSQL

Knowledge of at least one other language in addition to PL/pgSQL, such as 
PL/Perl or PL/Java.

Experience implementing PostgreSQL replication.

Some experience with either SQL Server 2000 or Oracle 9i/10g.

Significant background in creating complex stored procedures and SQL scripts

Understanding of database normalization concepts

Some experience in logical and physical database design and implementation

Prior experience working in a project oriented environment and meeting 
deadlines under tight time constraints

Strong analytical skills

Capable of working independently with minimal supervision.





If you find yourself comfortable with this job profile and find it
interesting, please send me your resume in MS Word format.



Thanks,



Mark,
ProV International
Tampa, FL 33607
Tel 408-241-7795 Ext. 27
[EMAIL PROTECTED]
www.provintl.com


[SQL] Requirement for PostgreSQL Database Developer

2007-01-10 Thread Mark

Hi,



Location: San Diego, CA [You can also TeleCommute but you have to be on the 
client side for 2 Weeks in a month]

Duration: 6+ months.



This is Mark with ProV International, This email is in regards to the 
requirement we have with one of our direct client in San Diego, CA.

 

Position :- PostgreSQL Database Developer 

This position involves creating tables, views, functions and stored procedures 
to support front end OLTP and reporting applications. The ideal developer will 
have thorough knowledge of SQL (PL/pgSQL), experience with at least one other 
PostgreSQL language (e.g. PL/Perl), and extensive experience with complex 
stored procedures, code optimization, and index tuning in PostgreSQL.

 

Ideal candidate will have the following qualifications:

5+ years database development with PostgreSQL

Must have experience with the replication tool Slony-I, including implementing
failover redundancy with Slony-I and pgpool.

Knowledge of at least one other language in addition to PL/pgSQL, such as 
PL/Perl or PL/Java.

Experience implementing PostgreSQL replication.

Some experience with either SQL Server 2000 or Oracle 9i/10g.

Significant background in creating complex stored procedures and SQL scripts

Understanding of database normalization concepts

Some experience in logical and physical database design and implementation

Prior experience working in a project oriented environment and meeting 
deadlines under tight time constraints

Strong analytical skills

Capable of working independently with minimal supervision.





If you find yourself comfortable with this job profile and find it
interesting, please send me your resume in MS Word format.

Thanks,



Mark,
ProV International
Tampa, FL 33607
Tel 408-241-7795 Ext. 27
[EMAIL PROTECTED]
www.provintl.com


[SQL] NULL function arguments?

2000-08-21 Thread Mark Volpe

It seems that when I provide a NULL argument to a PL/pgSQL function,
it makes the rest of the arguments NULL, too!

Consider this function:

CREATE FUNCTION callme(text, text) RETURNS boolean AS
'
BEGIN
RAISE NOTICE ''$1: %'', $1;
RAISE NOTICE ''$2: %'', $2;
RETURN TRUE;
END;
' LANGUAGE 'plpgsql';

So that when I try SELECT callme('hello', 'world');
I get back:

NOTICE:  $1: hello
NOTICE:  $2: world

But when I do SELECT callme('hello', NULL);
I get back:

NOTICE:  $1: 
NOTICE:  $2: 

I'm using Postgres 7.0. Possible bug?

Mark
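
This matches a known limitation of the pre-7.1 function-call interface, which
carried a single NULL flag for the whole call rather than one per argument;
in later releases each argument keeps its own null-ness and can be guarded
explicitly, e.g. (a sketch):

```sql
-- Each argument is independently null; coalesce() makes that visible
CREATE FUNCTION callme(text, text) RETURNS boolean AS '
BEGIN
    RAISE NOTICE ''$1: %'', coalesce($1, ''<null>'');
    RAISE NOTICE ''$2: %'', coalesce($2, ''<null>'');
    RETURN TRUE;
END;
' LANGUAGE 'plpgsql';
```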



Re: [SQL] Time Help

2000-08-22 Thread Mark Volpe

I'm not sure at all what you are asking, but I'm thinking you're trying to
convert a "timespan" to a "time". Try adding it to a time like this:
SELECT '0:00:00'::time + '02:10:06'::timespan;

Mark

"Brian C. Doyle" wrote:
> 
> Hello all,
> 
> I have a query result of @ 2 hours 10 mins 6 secs and I would like to
> change that to 02:10:06.  Currently the field is listed as "timespan"  This
> allows me to input date as 02:10:06 or 4h 10m 6s or even 2hrs 10 min 6 sec
> which are all the formats that I will be entering the time formats! How do
> I convert it into a the format of 02:10:06
> 
> Thanks to you all



Re: [SQL] Time Help

2000-08-22 Thread Mark Volpe

I tried it on a box with postgres 6.5.3 and I got the result you did. On
postgres 7.0 I get 02:10:06 for both results. You should upgrade to 7.0 - it
is _much_ better.

Mark

"Brian C. Doyle" wrote:
> 
> Mark,
> 
> I tried that and had to change it to:
> 
> SELECT '0:00:00'::timespan + '02:10:06'::timespan;
> 
> To get any response.  the response i got was
> 
> @ 2 hours 10 mins 6 secs
> 
> Still in the wrong format
> If is use :
> 
> SELECT '0:00:00'::time + '02:10:06'::timespan;
> 
> It get
> 
> No such function 'time_timespan' with the specified attributes
> 
> So i guess what I want to do is convert a timespan into time
> How would I do that?
> 
> At 09:55 AM 8/22/00 -0400, Mark Volpe wrote:
> >I'm not sure at all what you are asking, but I'm thinking you're trying to
> >convert a "timespan" to a "time". Try adding it to a time like this:
> >SELECT '0:00:00'::time + '02:10:06'::timespan;
> >
> >Mark
> >
> >"Brian C. Doyle" wrote:
> > >
> > > Hello all,
> > >
> > > I have a query result of @ 2 hours 10 mins 6 secs and I would like to
> > > change that to 02:10:06.  Currently the field is listed as "timespan"  This
> > > allows me to input date as 02:10:06 or 4h 10m 6s or even 2hrs 10 min 6 sec
> > > which are all the formats that I will be entering the time formats! How do
> > > I convert it into a the format of 02:10:06
> > >
> > > Thanks to you all



Re: [SQL] Time Help

2000-08-22 Thread Mark Volpe

SELECT '0:00:00'::time + '@ 2 hours 10 mins 6 secs'::timespan;

 ?column? 
----------
 02:10:06

Mark


"Brian C. Doyle" wrote:
> 
> Mark,
> 
> On your 7.0 box would you do:
> 
> SELECT '0:00:00'::time + '@ 2 hours 10 mins 6 secs':timespan;
> 
> For me and see if it will convert it! Need to decide if the upgrade will be
> with it and if it does this then it is
> 
> Thanks for your help Mark
> 
>   At 10:36 AM 8/22/00 -0400, you wrote:
> >I tried it on a box with postgres 6.5.3 and I got the result you did. On
> >postgres 7.0 I get 02:10:06 for both results. You should upgrade to 7.0 - it
> >is _much_ better.
> >
> >Mark
> >
> >"Brian C. Doyle" wrote:
> > >
> > > Mark,
> > >
> > > I tried that and had to change it to:
> > >
> > > SELECT '0:00:00'::timespan + '02:10:06'::timespan;
> > >
> > > To get any response.  the response i got was
> > >
> > > @ 2 hours 10 mins 6 secs
> > >
> > > Still in the wrong format
> > > If is use :
> > >
> > > SELECT '0:00:00'::time + '02:10:06'::timespan;
> > >
> > > It get
> > >
> > > No such function 'time_timespan' with the specified attributes
> > >
> > > So i guess what I want to do is convert a timespan into time
> > > How would I do that?
> > >
> > > At 09:55 AM 8/22/00 -0400, Mark Volpe wrote:
> > > >I'm not sure at all what you are asking, but I'm thinking you're trying to
> > > >convert a "timespan" to a "time". Try adding it to a time like this:
> > > >SELECT '0:00:00'::time + '02:10:06'::timespan;
> > > >
> > > >Mark
> > > >
> > > >"Brian C. Doyle" wrote:
> > > > >
> > > > > Hello all,
> > > > >
> > > > > I have a query result of @ 2 hours 10 mins 6 secs and I would like to
> > > > > change that to 02:10:06.  Currently the field is listed as
> > "timespan"  This
> > > > > allows me to input date as 02:10:06 or 4h 10m 6s or even 2hrs 10
> > min 6 sec
> > > > > which are all the formats that I will be entering the time formats!
> > How do
> > > > > I convert it into a the format of 02:10:06
> > > > >
> > > > > Thanks to you all
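
In later PostgreSQL releases the same formatting is available directly via
to_char() on the interval, without the time-addition trick (for spans under
24 hours):

```sql
SELECT to_char('2 hours 10 mins 6 secs'::interval, 'HH24:MI:SS');
-- 02:10:06
```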



Re: [SQL] Multiple Index's

2000-09-21 Thread Mark Volpe

CREATE TABLE user_info(user_id name, entry_date date, info text);
CREATE UNIQUE INDEX user_info_key ON user_info(user_id, entry_date);

"Brian C. Doyle" wrote:
> 
> Hello all,
> 
> How would I prevent a user from submitting information to a table once they
> have already done so for that day.  I would need them to be able
> information on future dates as well as have information in the table from
> past dates from that user.
> 
> I am looking for something like insert user_id, date, info where user_id
> and date are not the same... does that make sense?
> 
> Brian C. Doyle
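
With that unique index in place, the database itself rejects a second
submission for the same user and day; for example:

```sql
INSERT INTO user_info VALUES ('brian', '2000-09-21', 'first entry');
-- succeeds
INSERT INTO user_info VALUES ('brian', '2000-09-21', 'second entry');
-- fails with a duplicate-key error on user_info_key (wording varies by version)
```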



[SQL] Typecast a user-defined type?

2000-10-04 Thread Mark Volpe

Hi,

I've created my own datatype for Postgres. I have found it necessary to be
able to convert it to text, so I could match it up with a text column in a
UNION. I figured Postgres would do this for me, but then, I'm naive. Can
someone give me a hint, or point me to the appropriate material on how to get
"value::text" to work with my new type?

Thanks,
Mark
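
In PostgreSQL 7.3 and later the standard route is a conversion function
registered with CREATE CAST; a sketch, where mytype, the shared-library path,
and the C symbol mytype_to_text are illustrative names for the user-defined
type and its converter:

```sql
-- A function producing the text form of the type (implemented in C here)
CREATE FUNCTION text(mytype) RETURNS text
    AS '/path/to/mytype.so', 'mytype_to_text'
    LANGUAGE C STRICT;

-- Register it as a cast so that value::text works in queries and UNIONs
CREATE CAST (mytype AS text) WITH FUNCTION text(mytype);
```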



[SQL] AFTER triggers, short question

2000-10-11 Thread Mark Volpe

If I create a trigger that runs AFTER a DELETE, and then I delete several
rows, will the trigger function see the same thing every time it's called,
namely that all the rows I deleted are actually gone?

Mark



Re: [SQL] dynamic object creation

2000-10-12 Thread Mark Volpe

You may want to think about creating your table like this (for example):

CREATE TABLE data
(
key text,
field_type char,
value text
);

CREATE UNIQUE INDEX data_key ON data(key, field_type, value);

So this way each "record" takes up several rows in the table, and each "field"
can take up as many rows as you need. A table like this, with two columns
being arrays:

key | field1  | field2
----+---------+--------
a   | [x,y,z] | [a,d,f]
b   | [m,n]   | (NULL)

Can be represented like this instead:

key | field_type | value
----+------------+------
a   | 1          | x
a   | 1          | y
a   | 1          | z
a   | 2          | a
a   | 2          | d
a   | 2          | f
b   | 1          | m
b   | 1          | n


I'm not sure what your data looks like, but I hope this helps.

Mark

Indraneel Majumdar wrote:
> 
> Hi,
> 
> I'm not sure if the subject line has been proper. I have this following
> problem which I hope PostgreSQL can handle.
> 
> I'm converting a complex flatfile where records are arranged serially.
> some fields are as 'n' times repeating blocks of multiple lines. Some
> subfields within these are also 'n' time repeating blocks of multiple
> lines. So in my main table I do not know (until at run time) how many
> fields to create (same for any sub tables). How can I do this dynamically?
> 
> I tried using arrays, but retrieval from that is causing some problems. I
> have already checked the array utilities in the contrib section and have
> extended the operator list for other types (I'll send the file to it's
> original author so that he may include it if he wishes).
> 
> I think there must be some object-oriented way of doing this without
> creating too many keys. or are keys the only and best method? Using this
> is causing a performance hit. If it's any help, what I'm trying to convert
> are biological databases distributed in 'SRS' flatfile format from
> ftp.ebi.ac.uk/pub/databases/
> 
> Thank you,
> Indraneel
> 
> /------------------------------------------------------------------------\
> # Indraneel Majumdar  |  E-mail: [EMAIL PROTECTED]                        #
> # Bioinformatics Unit (EMBNET node),  |  URL: http://scorpius.iwarp.com   #
> # Centre for DNA Fingerprinting and Diagnostics,                          #
> # Hyderabad, India - 500076                                               #
> \------------------------------------------------------------------------/
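
Querying the decomposed layout is then a matter of filtering on key and
field_type; a sketch against the data table defined above:

```sql
-- All field1 values for record 'a' (x, y and z in the example above)
SELECT value FROM data WHERE key = 'a' AND field_type = '1';
```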



Re: [HACKERS] Re: [SQL] Rules with Conditions: Bug, or Misunderstanding

2000-11-30 Thread Mark Hollomon

On Wednesday 29 November 2000 19:42, Tom Lane wrote:
>
> Hm.  Perhaps the "cannot update view" test is too strict --- it's not
> bright enough to realize that the two rules together cover all cases,
> so it complains that you *might* be trying to update the view.  As the
> code stands, you must provide an unconditional DO INSTEAD rule to
> implement insertion or update of a view.

The idea was to check just before the update occurred to see if the
destination was a view.  Maybe the test is too high up, before all
rewriting occurs.

It is in InitPlan, the same place we check to make sure that we are not 
changing a sequence or a toast table. (actually initResultRelInfo called from 
InitPlan). I gathered from the backend flowchart that this wasn't called 
until all rewriting was done. Was I wrong?

If all rewriting _is_ done at that point, why is the view still in the 
ResultRelInfo ?

-- 
Mark Hollomon



Re: [SQL] Re: NULL

2000-10-24 Thread Mark Volpe

\N is normally used to represent NULL in a text file, however
you can change that to another string (or an empty string) using
COPY FROM ... WITH NULL AS

Mark

Sandis Jerics wrote:
> 
> Hello,
> 
> how must i write a NULL value in a text file for the \copy command to
> understand it?
> NULL, \0 doesnt work..
> 
> how must i write a boolean value in a text file for the \copy command to
> understand it?
> t doesnt work, however in a file written by /copy command it looks as
> t or f ..
> 
> tried to search archives, but it seems deadly slow...
> 
> Thanks in advance,
> sandis



[SQL] BTP_CHAIN errors fixed?

2001-01-27 Thread Mark Volpe

Hi,
I have been using PostgreSQL-7.0.0 and have had the problem that, when
searching a btree index that contains large numbers of duplicate keys,
Postgres crashes with a BTP_CHAIN error. Now that I have installed 7.1beta3
the problem has seemingly been fixed. Was this problem actually fixed
somewhere between 7.0.0 and 7.1beta3, or am I just getting lucky right now
(just want to know before I put this into production :-))?

Thanks,
Mark



Re: [SQL] BTP_CHAIN errors fixed?

2001-01-27 Thread Mark Volpe

Tom Lane wrote:
> 
> Mark Volpe <[EMAIL PROTECTED]> writes:
> > I have been using PostgreSQL-7.0.0 and have had the problem that, when
> > searching a btree index that contains large numbers of duplicate keys,
> > Postgres crashes with a BTP_CHAIN error. Now that I have installed 7.1beta3
> > the problem has seemingly been fixed. Was this problem actually fixed
> > somewhere between 7.0.0 and 7.1beta3, or am I just getting lucky right now
> 
> Well, it was worked on ;-) ... that whole chunk of code was rewritten.
> Whether it has new bugs remains to be seen, but the old bugs are gone...
> 
> regards, tom lane

Thanks for the quick response. The new code will get plenty of testing from
me!



Re: [SQL] interval query.

2001-01-31 Thread Mark Volpe

Try

SELECT * FROM Towns WHERE id= OR id BETWEEN 3 AND 12


Antti Linno wrote:
> 
> Good morning.
> 
> Is there some way to make interval query?
> 
> Towns table(estonia towns, heh :P)
> 
> id | name
> 
> 1   Elva
> 2   Tartu
> Tallinn
> 3   Tallinn/Haabersti
> 4   Tallinn/Mustamae
> ...
> etc.
> 
> What I need is when the town's id= I want to make query
> where id= OR id=[3..12] for example. I could generate bunch of OR's
> like id=3 OR id=4 ... but is there some more elegant solution?
> 
> Greetings,
> Antti



Re: [SQL] Permissions for foreign keys

2001-01-31 Thread Mark Volpe

The problem is fixed in the 7.1 beta series.

Rick Delaney wrote:
> 
> I'm using 7.0 and have noticed that I need to grant SELECT and UPDATE
> permissions on any referentially-related tables.  Can/should I get
> around this?  A somewhat contrived example:
> 
> CREATE TABLE emp (
>  id integer PRIMARY KEY,
>  salary integer
> );
> CREATE TABLE proj (
>  id integer PRIMARY KEY,
>  emp_id integer REFERENCES emp
> );
> CREATE TABLE bill (
>  id integer PRIMARY KEY,
>  proj_id integer REFERENCES proj
> );
> INSERT INTO emp  VALUES (1, 10);
> INSERT INTO proj VALUES (1, 1);
> INSERT INTO bill VALUES (1, 1);
> 
> GRANT ALL ON proj TO someone;
> 
> Connect as someone:
> => INSERT INTO proj VALUES (2, 1);
> ERROR:  emp: Permission denied.
> => UPDATE proj SET id = 2;
> ERROR:  bill: Permission denied.
> 
> It appears that I need to grant:
>SELECT,UPDATE on  emp to UPDATE or INSERT into proj.
>SELECT,UPDATE on bill to UPDATE proj.
> 
> When I grant these permissions, the above statements succeed.
> 
> If I don't want users to have UPDATE (or even SELECT) access on the
> other tables (bill and emp), but I want referential integrity, what can
> I do?
> 
> --
> Rick Delaney
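
(Editorial aside: later PostgreSQL releases separate this out into a dedicated REFERENCES privilege, so that foreign keys can be created without granting SELECT or UPDATE on the referenced tables. A sketch, assuming the tables above:)

```sql
GRANT REFERENCES ON emp  TO someone;  -- allow foreign keys pointing at emp
GRANT REFERENCES ON proj TO someone;
```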



Re: [SQL] Index scan

2001-01-31 Thread Mark Volpe

Since you are selecting all the rows of media, there is no reason to use the
index to do this as it would just slow things down.

Mark

Najm Hashmi wrote:
> 
> Hi all,
>  I am unable to understand why my indices are not used in the query.
> I have following indices:
> index on categories.root
> index on media.category
> unique index on categories.id
>  Here is my query :
> mondo=# explain select m.id
>    from media m, categories c
> where c.id=m.category and c.root like 'B%';
> NOTICE:  QUERY PLAN:
> 
> Hash Join  (cost=22.55..116.80 rows=11 width=28)
>   ->  Seq Scan on media m  (cost=0.00..60.40 rows=840 width=16)
>   ->  Hash  (cost=22.50..22.50 rows=20 width=12)
> ->  Index Scan using cat_id_root on categories c
> (cost=0.00..22.50 rows=20 width=12)
> EXPLAIN
> 
>  I simply don't understand why it uses seq scan on media.
> Thanks all for your help.
> -Najm



[SQL] PL/PGSQL Cook Book

2001-02-12 Thread mark proctor

I've just spent the last day or two trying to get to grips with plpgsql and can't
believe how abysmal the documentation and examples are.
I've been trawling through the mailing list archives and I notice there was talk back in 1999
about a PL/PGSQL Cook Book - did anything come of this?

If no one is maintaining something like this and people think it's a good idea, I think
we should have another crack at it.
I'd be happy to maintain something like this and put it up on the web, although I'm
only a newbie and would rely upon user contributions.

Here are some possible sections to help get people thinking. Even if you don't know
the answer, send me the questions and I'll add them to the list.
   How can I create tree structures?
   Are recursive functions supported?
   What are the advantages and disadvantages of the different scripting languages PL/PGSQL,
PL/Perl, PL/Tcl?
   How do variable scopes work in PL/PGSQL?
   Can I output variables from a function to the command line for debugging purposes?
   How do I debug PL/PGSQL?
   Various examples for each of the statements

Anyway, let's discuss this; a lot could be done just from piecing together relevant tips
from this mailing list,
i.e. there are some good posts on tree structures, which I'm willing to piece
together if people think this project is worthwhile.

Regards

Mark




[SQL] Recursive Functions in 7.0.3

2001-02-12 Thread mark proctor

If I remove the line calling PERFORM, it works without problems, moving the child
nodes for the passed integer into test and then exiting.
I can then repeat the process, going through the output of test iteratively, and do not
have any problems.
However, if I put the PERFORM line back in to create a recursive function, it just goes
on forever, and I only have 6 nodes.

CREATE FUNCTION get_children (integer) RETURNS integer AS
'
DECLARE
    pnode_parent ALIAS FOR $1;
    rec RECORD;
BEGIN
    FOR rec IN SELECT * FROM tree_adjacency_matrix
               WHERE node_parent = pnode_parent LOOP
        INSERT INTO test (node1, node2)
            VALUES (rec.node_child, rec.node_parent);
        PERFORM get_children(rec.node_child);
    END LOOP;
    RETURN 0;
END;
' LANGUAGE 'plpgsql';

Mark
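
(Editorial sketch, not from the thread: a defensive variant passes a depth counter so that a cycle in the tree data fails loudly instead of recursing forever. The name get_children_safe and the limit of 100 are hypothetical.)

```sql
CREATE FUNCTION get_children_safe (integer, integer) RETURNS integer AS '
DECLARE
    pnode_parent ALIAS FOR $1;
    depth ALIAS FOR $2;
    rec RECORD;
BEGIN
    -- guard against cycles in the adjacency data
    IF depth > 100 THEN
        RAISE EXCEPTION ''get_children_safe: recursion too deep'';
    END IF;
    FOR rec IN SELECT * FROM tree_adjacency_matrix
               WHERE node_parent = pnode_parent LOOP
        INSERT INTO test (node1, node2)
            VALUES (rec.node_child, rec.node_parent);
        PERFORM get_children_safe(rec.node_child, depth + 1);
    END LOOP;
    RETURN 0;
END;
' LANGUAGE 'plpgsql';
```

Called as get_children_safe(root_id, 0); a rogue row that points back into its own subtree then raises an error instead of hanging the backend.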


On Saturday 10 February 2001 03:29, Tom Lane wrote:
> <[EMAIL PROTECTED]> writes:
> > Are recursive functions allowed in in 7.0.3
>
> Sure.
>
> play=> create function myfactorial(int) returns int as '
> play'> begin
> play'>   if $1 > 1 then
> play'> return $1 * myfactorial($1 - 1);
> play'>   end if;
> play'>   return $1;
> play'> end;' language 'plpgsql';
> CREATE
> play=> select myfactorial(1);
>  myfactorial
> -
>1
> (1 row)
>
> play=> select myfactorial(10);
>  myfactorial
> -
>  3628800
> (1 row)
>
> play=>
>
> I get a stack overflow crash at about myfactorial(7500), but that seems
> like a sufficient level of recursion depth for normal purposes ...
>
> > as I seem to be unable to
> > get them to work in plpgsql,
>
> Are you sure you aren't asking for infinite recursion, eg by invoking
> the same function with the same argument?
>
>   regards, tom lane



[SQL] Re: Recursive Functions in 7.0.3

2001-02-12 Thread mark proctor

Ahh, I found what I was doing wrong: there was a rogue value being returned, causing
the infinite loop. It's fixed now.
Creating that script you recommended set my thinking process straight.

many thanks

Mark

On Saturday 10 February 2001 08:42, Tom Lane wrote:
> mark proctor <[EMAIL PROTECTED]> writes:
> > However if I put the PERFORM line back in to create a recursive
> > function it just goes on forever, and I only have 6 nodes.
>
> Hm.  There may be a bug here, or maybe you're still confused... but I'm
> not eager to reverse-engineer your table declarations and data from this
> sketch.  Would you mind providing a complete example, ie a SQL script to
> reproduce the problem starting from an empty database?
>
>   regards, tom lane



Re: [SQL] What's wrong with this function

2001-02-12 Thread mark proctor

The select query returns the first row to rec. You can then access its values with:
rec.field_name
At END LOOP it jumps back to the FOR, checks to see if there are any more rows, and if so
moves to the next row and repeats the loop.
It also looks like you're missing a LOOP keyword at the end of the FOR line.
Here is an example that works.

CREATE FUNCTION get_children (integer) RETURNS integer AS
'
DECLARE
    pnode_parent ALIAS FOR $1;
    rec RECORD;
BEGIN
    FOR rec IN SELECT * FROM tree_adjacency_matrix
               WHERE node_parent = pnode_parent LOOP
        INSERT INTO test (node1, node2)
            VALUES (rec.node_child, rec.node_parent);
    END LOOP;
    RETURN 0;
END;
' LANGUAGE 'plpgsql';

Mark

On Saturday 10 February 2001 20:23, Najm Hashmi wrote:
> Jie Liang wrote:
> > I just know you can use implict cursor inside the plpgsql
> > e.g
> > declare
>
> result text;
> tcount int4;
>
> > rec record;
> > begin
> > FOR rec IN select_clause LOOP
> >  statements
> > END LOOP;
> > end;
>
> Thank you Jie for your help. I am bit confused about how it works. I want
> for each row , obtained by select statment,  get certain values and then do
> some calculations and out put that resulst  eg
>  for rec IN select title, dcount from songs where  artist='xyz'
>  tcount:= tcount+rec.dcount;
> END LOOP;
>  return tcount;
> would this work ?
> Thanks again for your help.
> Regards, Najm



[SQL] Datetime Query

2001-02-16 Thread Mark Byerley

I need to create a query which will select a request_no between Date1 and
Date2 so...

SELECT request_no FROM request where status_code ='C' and (completed_date 
between 01/01/2000 and 01/01/2001);

The problem I have run into is that the completed_date field is a datetime 
format (not by my own design) and I am having some problems extracting just 
the request_no's between those dates.
I have tried a few extract month,day,year clauses with no success. If 
anyone has an idea I would appreciate it!
Thanks in advance.
Mark
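
(Editorial sketch: one likely culprit in the query above is that the date literals are unquoted, so 01/01/2000 is parsed as integer division rather than a date. With quoted literals, using the table and column names from the post:)

```sql
SELECT request_no FROM request
WHERE status_code = 'C'
  AND completed_date BETWEEN '2000-01-01' AND '2001-01-01';
```

Since completed_date is a datetime, the upper bound matches only midnight on 2001-01-01; use completed_date < '2001-01-02' if the whole final day should be included.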




[SQL] On Clusters

2001-03-06 Thread Mark Kirkwood

A previous posting mentioning clusters prompted me to revisit some earlier
tests done on clustered and unclustered data.

It appears that currently ( 7.1beta5 )  the optimizer is unaware of any 
clustering on a table - how important is that ?

To answer this question I used my "pet" data warehouse tables :

 Table "fact1"  3,000,000 rows ~ 350Mb
 Attribute |  Type   | Distribution
---+-+-
 d0key | integer | 3000 distinct values 0-9000 clustered 
 d1key | integer |
 val   | integer |
 filler| text|
Index: fact1_pk ( d0key,d0key ) cluster "key"
 
 Table "fact2" 3,000,000 rows ~ 350Mb
 Attribute |  Type   | Distribution
---+-+-
 d0key | integer | 3000 distinct values 0-9000 uniformly spread
 d1key | integer |
 val   | integer |
 filler| text|
Index: fact2_pk  ( d0key,d0key )

The sample queries used to shed some light on the nature of the difference 
are : firstly the index scan -

explain select count(*) 
from fact1 where d0key between 200 and 279;

Aggregate  (cost=58664.62..58664.62 rows=1 width=0)
  ->  Index Scan using fact1_pk on fact1  (cost=0.00..58598.72 rows=26360 
width=0)

and the sequential scan -

explain select count(*) 
from fact1 where d0key between 200 and 280;

Aggregate  (cost=59020.73..59020.73 rows=1 width=0)
  ->  Seq Scan on fact1  (cost=0.00..58954.00 rows=26693 width=0)

and analogous versions for fact2 ( with the same execution plan )

On the unclustered table fact2 the optimizer correctly assesses the point at which to
switch between an index scan and a sequential scan - both queries take about
30 s.

However on the clustered table fact1, the (same) choice results in a jump
from 1 s for the index scan to 30 s for the sequential scan.

(this was the guts of the previous research... bear with me those of you who 
read the last article )

So how long should an index scan be used for? Some experimentation led me to
adjust the "where" clause in my queries to

where d0key between 0 and 4500

This produces a query plan of :

Aggregate  (cost=62692.75..62692.75 rows=1 width=0)
  ->  Seq Scan on fact1  (cost=0.00..58954.00 rows=1495498 width=0)

coercing the optimizer with a brutal set of cpu_tuple_cost = 0.4 gives :

Aggregate  (cost=868673.82..868673.82 rows=1 width=0)
  ->  Index Scan using fact1_pk on fact1  (cost=0.00..864935.08 rows=1495498 
width=0)

(note that these scan 1,500,000 rows, i.e. half the data )

Testing these queries on fact1 gives run times of about 35 s for both -

thus it is worthwhile to keep using index scans up to 50%
of the ( clustered ) table data.

I found this result interesting, as I was thinking more like 15-20% of the 
table data would be the limit.

The answer to the original question ( finally ) is "it is pretty important",
as knowledge of the clustering drastically changes the optimal access path.

So what to do if you know you have clustered data ? ( either via cluster or 
"it just happens to go in that way" ).

Tampering with the various *cost type parameters to encourage index scans
seems to be the only solution (other suggestions welcome here), but tends to
be too global in effect ( for example trying the previous query on
( unclustered ) fact2 with cpu_tuple_cost=0.4 takes more than 300 s - I got
tired of waiting...) .

So be careful out there...

Cheers

Mark
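
(For reference, the physical clustering discussed above is established with the CLUSTER command; pre-8.0 syntax, sketched against the tables in the post:)

```sql
CLUSTER fact1_pk ON fact1;  -- rewrites fact1 in fact1_pk index order
```

Note that clustering is a one-time rewrite; subsequent inserts and updates do not maintain the order.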



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[SQL] Re: Select very slow

2001-03-20 Thread Mark Kirkwood

>
>That 'count(*)' is going to be slow.
>Try counting a column that's indexed (p.doc might work?)

That is not the case, you can convince yourself with explain - you get the 
same plan(s)  :

e.g :  ( table dim0 with indexed column d0key ) ...

explain select count(*) from  dim0 where d0key < 1000; 

Aggregate  (cost=96.13..96.13 rows=1 width=0)
  ->  Index Scan using dim0_pk on dim0  (cost=0.00..93.63 rows=1000 width=0)

explain select count(d0key) from  dim0 where d0key < 1000;
 
Aggregate  (cost=96.13..96.13 rows=1 width=4)
  ->  Index Scan using dim0_pk on dim0  (cost=0.00..93.63 rows=1000 width=4)

>  (the tables and query snipped..).
>  anyone help-me ?
>

I would try to make the primary key ( doc) a fixed length varchar(n) instead 
of text  if possible, as text is intended for very long strings and btree 
indexes usually perform best on (small) fixed length columns.

If your key is going to be a really long string then maybe rethink the design 
( use a "synthetic" key like a sequence or integer checksum of doc to index 
on ).

perform a vacuum analyze and then post the output of the explain below to 
this list !

explain select  p.city,count(*) from sales s, person p where s.doc = p.doc
group by p.city;  


Cheers

Mark
 

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



[SQL] Memory And Performance

2001-04-06 Thread Mark Kirkwood

> >The system that I'm developing, I have about 25000 (persons) x 8 (exams)
> > x 15 (answers per exam) = 3,000,000 records to process and it is VERY SLOW.
>
>If you need to import large quantities of data, look at the copy
>command, that tends to be faster.


By way of example for the level of improvement COPY gives:

a 3,000,000 row table ( 350Mb dump file -> 450Mb table ) can be loaded via copy
in 7 minutes. To insert each row (say using a perl prog to read the file and
DBD-Pg to insert, committing every 1 rows ) takes about 75 minutes. I used
a PII 266Mhz/192Mb and Postgresql 7.1b5 for these results. Postgresql 7.0.2
is slower ( 20-30% or so...), but should still display a similar level of
improvement with copy.

Good loading

Mark
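
(Editorial sketch of the COPY route: the file path and table name here are hypothetical. COPY ... FROM reads a server-side file with tab-delimited columns by default, while psql's \copy reads a client-side file.)

```sql
COPY exam_answers FROM '/tmp/answers.tab';
-- or, from the client side in psql:
-- \copy exam_answers from answers.tab
```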






[SQL] Inheritance Semantics

2001-04-09 Thread Mark Butler

Could someone (Chris Mead?) post an update on the status of fixing
PostgreSQL's inheritance semantics in the following ways:

Has a decision been made to implementing true inheritance via INHERITS or an
alternative keyword?

By true inheritance, I mean first and foremost that any query on a super-class
should query *all members* of that class by default regardless of which table
they are stored in.  Any other behavior violates the very natural expectation
that a table called "PERSON" actually implements the class of all persons.  

Second, for performance reasons, there needs to be a way for an index on a
parent class attribute to be shared by all the tables that implement or
inherit from that parent class.  This is also necessary to enforce unique
constraints on all members of a class.

I imagine that the current implementation of "SELECT column FROM table*" is a
logical UNION ALL of the select statement applied to each sub table, using
different indexes for each one - Is this correct?

Third, all declarative constraints on a parent class should be enforced
against all members of all sub-classes without exception.  

Fourth, someday it would be nice to be able to create object methods & member
functions that operate in the context of a single object.  Does anyone know if
the OQL supports this capability?

I understand the backwards compatibility issue with the current semantics. 
Rather than adding some sort of run-time setting, I think it would be much
better to add a new keyword / extension to the DDL syntax so that true ODMG
style inheritance can be implemented correctly without breaking old
applications.

Any comments would be appreciated.

 - Mark Butler




[SQL] Semantics of Typed Numeric Constants

2001-04-10 Thread Mark Butler

Thomas Lockhart wrote in comp.databases.postgresql.hackers:

> The parser does not know that your int4 constant "0" can be represented
> as an int2. Try
> 
>   SELECT * FROM access_log WHERE method_num = int2 '0';
> 
> (note the type coersion on the constant; there are other ways of
> specifying the same thing).

Surely this is something that should be fixed.  An int2 column ought to behave
exactly like an int4 with a CHECK() constraint forcing the value to be in
range. 

In object oriented terms:

  a smallint isA integer
  an integer isA bigint

Likewise:

  an integer isA smallint if it falls in -32768..32767
  a bigint isA integer if it falls in -2147483648..2147483647

Similar promotion rules should apply for all other numeric types. Any floating
point value without a fractional part should be treated exactly like a big
integer.

The issues here are closely related to the 7.1 changes in INHERITS semantics.
If any operator treats a smaller precision (more highly constrained) type in
a materially different way than a compatible higher precision type, it is
fundamentally broken for exactly the same reason that we expect a query on a
super-class would be if it did not return all matching instances of every sub
class.

If a function is overloaded with multiple compatible scalar data types, the
database should be free to call any matching implementation after performing
an arbitrary number of *lossless* compatible type conversions.

i.e. if you have f(smallint), f(integer), and f(double) the actual function
called by f(0) should be undefined.  The distinction between smallint '0',
integer '0', and double '0' is meaningless and should be explicitly ignored.

This is a little extreme, but I do not think it makes a lot of sense to
maintain semantic differences between different representations of the same
number. (Oracle certainly doesn't)

Any comments?


 - Mark Butler

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://www.postgresql.org/search.mpl



[SQL] a select statement that sometimes joins

2001-04-10 Thread Mark Stosberg




Here's a situation I've run into more than once with SQL:

I want to select all the rows in a table that match a criteria, where one
of the criteria is possibly having a related entry in a second table. For
my example, let's say I have a table named 'messages' and another named
'message_attachments'. The former has a primary key of msg_id, the latter
also contains msg_id, and has an attachment_id as its primary key.

This statement shows me all the messages that also have attachments:

SELECT
messages.msg_id,
message_attachments.attachment_id
FROM messages,message_attachments
WHERE messages.msg_id = message_attachments.msg_id;

But I want a statement that says: "Show me all the messages, and include
information about an attachment if they have one"

(Let's further assume that a message will have only one attachment).

Is this possible? Anyone like to share an example? Much thanks.

  -mark

http://mark.stosberg.com/
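
(The construct being asked for is an outer join, which PostgreSQL supports as of 7.1; a sketch using the tables from the question:)

```sql
SELECT m.msg_id, a.attachment_id
FROM messages m
LEFT OUTER JOIN message_attachments a ON m.msg_id = a.msg_id;
```

Every message appears once; messages without an attachment come back with attachment_id set to NULL.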



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



[SQL] function to format floats as money?

2001-04-17 Thread Mark Stosberg


Hello,

  I'm curious to know if there is a function available in Postgres 7.0.3
(or 7.1) that will format a float style number as "money". I understand
that the money type is going away in the future, so using a float type
that is sometimes formatted like money seems like a good alternative. So
ideally, I'm looking for a solution that won't go away when the money type
does. :) Thanks!

  -mark

http://mark.stosberg.com/





Re: [SQL] function to format floats as money?

2001-04-17 Thread Mark Stosberg

Peter Eisentraut wrote:
> 
> Mark Stosberg writes:
> 
> >   I'm curious to know if there is a function available in Postgres 7.0.3
> > (or 7.1) that will format a float style number as "money". I understand
> > that the money type is going away in the future, so using a float type
> > that is sometimes formatted like money seems like a good alternative. So
> > ideally, I'm looking for a solution that won't go away when the money type
> > does. :) Thanks!
> 
> to_char() for formatting.
> 
> numeric for storage.
> 
> Using floats for monetary amounts is not only an extremely bad idea
> because of the inexactness of storage and arithmetic, it might even be
> illegal if you're using it for official purposes.

Thanks Peter.

  So if the money type is going away, and floats can be illegal, what's
the best way to store money?

  -mark

personal website }  Summersault Website Development
http://mark.stosberg.com/{  http://www.summersault.com/




Re: [SQL] function to format floats as money? (removing space padding)

2001-04-19 Thread Mark Stosberg


Now that I've figured out that numeric is good for storing money, and
that I can format it like this:

to_char(price, '9,999,999.99') as price

Then I discovered that sometimes this returns leading spaces I don't
want. I can get rid of them like this:

trim(to_char(price, '9,999,999.99')) as price

Is that the recommended money formatting style, for amounts less than
9,999,999.99? (assuming I'll tack on my own currency symbol? ). Other
there other general styles that folks like for this? Thanks,

  -mark

personal website }  Summersault Website Development
http://mark.stosberg.com/{  http://www.summersault.com/
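
(Editorial aside, not from the thread: an alternative to wrapping the call in trim() is to_char's FM "fill mode" prefix, which suppresses the leading padding directly. The table name items is hypothetical.)

```sql
SELECT to_char(price, 'FM9,999,999.99') AS price FROM items;
-- FM drops the leading blanks that a plain '9' pattern reserves
```

In fill mode, trailing digit positions written with 9 may also be dropped; use 0 in positions that must always print, e.g. 'FM9,999,990.00' to keep two decimal places.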




Re: [SQL] Client/Server Security question

2001-04-20 Thread Mark Stosberg

Lonnie Cumberland wrote:
> 
> Hello All,
> 
> We are developing an application that will allow our websites to talk to our
> database.
> 
> In the interest of security, I am wondering if it is possible to turn off some
> of the functions in the SQL command list such that a user can only communicate
> to the database through our functions.
> 
> What I mean is this. We have built a number of "C" extensions and PL/pgSQL
> proceedures that will work on our database, but I only want to allow an outside
> query to only one or two of our selected entry points.
> 
> The webserver interface query statement might, for example, be able to only
> call "select register_user(...)" or "select login_user()" and NONE of
> the other PostgreSQL command functions.
> 
> I only want to allow access to these functions from the outside world, but the
> server needs to be able to execute all of the original functions without
> restrictions.

Lonnie,

  Have you checked the Postgres docs on security and access? It offers a
lot of flexibility. For example, you can use a different postgres
username to access the database from the outside world, in conjunction
with using "grant" statements and views to give that user only the
ability to perform specific actions on specific tables and views. If
after reading the docs you still have specific questions about details
that are not clear them, send a follow-up post with a more specific
question and we can give you a more useful answer. :) 

  -mark




[SQL] creating tables that are visible across databases

2001-04-20 Thread Mark Stosberg


Hello,

  I'd like to create some tables that would be visible across databases,
much like the postgres system tables. These would be for "static" data,
such as state and country codes, and geo-spatial data. I couldn't find
this mentioned in the docs, but unless this feature of the system tables
is magical, it appears to be possible. Did I miss an explanation in some
docs, or could someone give me a pointer? 
  Thanks!

   -mark

personal website }  Summersault Website Development
http://mark.stosberg.com/{  http://www.summersault.com/




[SQL] RE: Table corrupted and data lost (second time in one month!!)

2001-04-24 Thread Mark Kirkwood


Previously...
>FATAL 1: Memory exhausted in AllocSetAlloc()
>pqReadData() -- backend closed the channel unexpectedly.
>This probably means the backend terminated abnormally
>before or while processing the request.
>
>A table has been corrupted and i don't know why...
>It's really hard to recover the table, and last time i lost some data ;-(
>
I have never used 7.0.2 for any length of time, but I would recommend
upgrading to 7.1 - many bugs have been fixed in this release - and hopefully
not too many more introduced :)

also adding more RAM would be a good idea, as 64Mb is not much these days ( I 
use apache 1.3.19 + mod perl 1.25 and regularly see 150Mb RSS when I run gtop 
)   

This may be a Red Herring, but check to see you are not exhausting your swap
space, as all sorts of odd errors can be encouraged by this...

Cheers

Mark




[SQL] problems with pl/pgsql

2001-04-25 Thread Mark Nielsen

hey guys,

I am writing an article about using Perl inside sql commands.
I am not having a problem with perl, but with pl/pgsql.
The documentation for pl/pgsql isn't helping me out, although I am sure
I will figure it out eventually.

Here is the perl function,
CREATE FUNCTION search_name(employee,text,integer) RETURNS text AS '
my $emp = shift;
my $Text = shift;
my $Case = shift;

if (($Case > 0) && ($emp->{''name''} =~ /\\Q$Text\\E/i)) 
  { return $emp->{''name''}; }
elsif ($Case > 0) {return "";}
elsif ($emp->{''name''} =~ /\\Q$Text\\E/) 
   {return $emp->{''name''}; }
else { return "";}
' LANGUAGE 'plperl';

insert into EMPLOYEE values ('John Doe',1,1);
insert into EMPLOYEE values ('Jane Doe',1,1);

select name,search_name(employee,'j',0) from employee;
select name,search_name(employee,'j',1) from employee;

select name from employee where search_name(employee,'j',1) = name;
select name from employee where search_name(employee,'j',0) = name;

I know these functions aren't elegant, but oh well.
Here is the pl/pgsql

CREATE FUNCTION insert_name(text)
RETURNS integer AS '
DECLARE
    rec1 record;
    text1 text;
BEGIN
    text1 := $1;
    SELECT INTO rec1 count(name)
        FROM employee
        WHERE search_name(employee, text1, 0) = name
        LIMIT 1;
    IF rec1.count = 0 THEN
        INSERT INTO employee (name) VALUES (text1);
        RETURN 1;
    END IF;
    RETURN 0;
END;
' LANGUAGE 'plpgsql';

What I am trying to do is set something up where it will only
insert a value if it doesn't exist. 
I want it to return either 0 or 1.
However, it returns 6 rows of 0s and 1s if there are 6 entries.
In my perl statement, it doesn't return anything if a row
doesn't match. I want the pl/pgsql statement to not return
anything for any rows whatsoever, and to return either a 1 or 0
at the very end of the function.

How do I execute sql commands inside pl/pgsql so that
they remain hidden?

I plan on using pl/pgsql a lot now. I really want to combine
perl and pl/pgsql as standard sql options aren't that great.

Thanks!
Mark
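
(Editorial sketch of the existence test: in later PostgreSQL releases PERFORM sets the FOUND variable, which reads more directly than counting. The function name insert_name2 is hypothetical; other names are from the post.)

```sql
CREATE FUNCTION insert_name2(text) RETURNS integer AS '
DECLARE
    text1 ALIAS FOR $1;
BEGIN
    -- probe for an existing row; PERFORM discards the result
    -- but sets FOUND
    PERFORM 1 FROM employee WHERE name = text1;
    IF NOT FOUND THEN
        INSERT INTO employee (name) VALUES (text1);
        RETURN 1;
    END IF;
    RETURN 0;
END;
' LANGUAGE 'plpgsql';
```

Called as SELECT insert_name2('John Doe'); this returns a single row containing 0 or 1, rather than one row per employee.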




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]




Re: [SQL] Cannot build PL/Perl ...

2001-05-11 Thread Mark Nielsen

cd /usr/local/src
lynx --source http://www.tcu-inc.com/perl5.6.1.tgz > perl-5.6.1.tgz
tar -zxvf perl-5.6.1.tgz
cd perl-5.6.1
rm -f config.sh Policy.sh
sh Configure

Change the default prefix to "/usr" instead of "/usr/local". Also, when it asks the 
question "Build a
shared libperl.so (y/n) [n] ", answer y. Press enter for any other question. 

make
make install

Then I downloaded and install postgresql.
./configure --prefix=/usr/local/pg711 --with-perl --with-tcl --with-CXX --with-python 
--enable-odbc 

cd /usr/local/src/postgresql-7.1.1/src/interfaces/perl5
perl Makefile.PL
make 
make install

ln -s /usr/lib/perl5/5.6.1/i686-linux/CORE/libperl.so \
  /usr/local/pg711/lib/libperl.so
su -c 'createlang plperl template1' postgres

Then I started psql as postgres, and ran the CREATE LANGUAGE command.





[SQL] behavior of ' = NULL' vs. MySQL vs. Standards

2001-06-06 Thread Mark Stosberg


Hello,

I'm a long time Postgres user who uses MySQL when I have to. I recently
ran into an issue with MySQL where this construct didn't do what I expect:

WHERE date_column = NULL

I expected it to work like "date_column IS NULL", like it does in
Postgres 7.0.2, but instead it returned an empty result set.

After conversing with some folks on the MySQL list, it was mentioned that:

 * "NULL is *NOT* a value. It's an absence of a value, and doing *any*
comparisons with NULL is invalid (the result must always be NULL, even
if you say "foo = NULL")." 

 * Postgres handling is non-standard (even if it's intuitive.) 

My questions then are: 1.) What IS the standard for handling NULLs? and
then 2.) If Postgres handling is different than the standard, what's the
reason? 

To me, having " = NULL" be the same as " IS NULL" is intuitive and thus
useful, but I also like appeal of using standards when possible. :) 

Thanks!

  -mark

http://mark.stosberg.com/
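
(Editorial illustration of the three-valued logic in question, as specified by SQL92; later Postgres releases follow it. The table and column names are hypothetical.)

```sql
SELECT 1 = NULL;       -- per the standard: NULL (unknown), not false
SELECT NULL = NULL;    -- also NULL, not true
-- the standard test for a missing value:
SELECT * FROM messages WHERE date_column IS NULL;
```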




Re: [SQL] About table column names.

2001-06-07 Thread Mark Stosberg

David BOURIAUD wrote:
> 
> Hi the list !
> As far as I know, column names for a table can't contain any space,
> tabs, and other sort of "exotic" characters. 

In fact, I know you can have at least spaces in your column names, like this:

mark=> create table t ("column one" text);
CREATE

Just put quotes around them. 

> Is there a way to add a
> description of a table column anywhere in postgres tables, or does it
> have to be handled manually by creating a custum table handling this
> kind of datas ? Thanks by advance for any suggestion.

I'm interested in this, too. It seems more useful than having them in a
SQL file, which can sometimes get out of sync with the database. :)
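
As an aside on the column-description question: PostgreSQL does have a
built-in COMMENT command for this. A minimal sketch, reusing the table from
the example above (the description text is made up):

```sql
-- Attach free-text descriptions to the table and its column:
COMMENT ON TABLE t IS 'demonstration table';
COMMENT ON COLUMN t."column one" IS 'what this column holds';

-- psql can display stored descriptions, e.g. with \dd
```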

  -mark

http://mark.stosberg.com/




RE: [SQL] Select most recent record?

2001-06-21 Thread Mark Hamby

Marc,
Did you ever get your problem solved to your satisfaction?
We have a very similar problem with a historical database
containing typically 5K ids and updates to data every few
seconds.  We tried unsuccessfully to optimize queries
such as those already suggested to you.  We found the best
means to quickly query the data valid at any given time was
to:

1. Have a TIME and ENDTIME column in your table.
   The ENDTIME column held when the data became invalid.
   The initial value for the ENDTIME column was 'infinity'.

2. Have an INSERT rule that set the ENDTIME of all previous
   records with same ID to TIME.  Here is the rule:

CREATE RULE d_people_ON_INSERT AS
ON INSERT
TO d_people
DO UPDATE d_people
SET endtime = new.time
WHERE   ( id = new.id )
AND ( endtime = 'infinity' )
;

3. Selects for any given time are then like the ones below
   and very fast.

/* For time '2000-11-20 15:56' */
SELECT * FROM d_people
WHERE ( time <= '2000-11-20 15:56' )
AND ( endtime > '2000-11-20 15:56' );

/* For latest time */
SELECT * FROM d_people
WHERE ( time <= now())
AND ( endtime > now());


Granted, INSERTs take a little longer since they trigger an UPDATE.
But optimized indices help greatly with this.

I highly recommend the following book on the problems and
solutions of temporal data in databases written by the man
who is defining the temporal functionalities of SQL3.
Richard Snodgrass, "Developing Time-Oriented Database Applications in SQL"
http://www.amazon.com/exec/obidos/ASIN/1558604367/qid=993149249/sr=1-4/ref=sc_b_4/103-3746626-6461410

I hope this helps.  It may be overkill, depending on the type and
quantity of your data.

Thanks,
Mark Hamby


> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Marc Sherman
> Sent: Wednesday, May 16, 2001 6:28 AM
> To: pgsql-sql List
> Subject: [SQL] Select most recent record?
>
>
> Hi, I was hoping I could get some help with a select statement.
>
> I have a log table with three columns: id int4, timestamp datetime,
> value int4.
>
> For any given ID, there will be a large number of rows, with
> different timestamps and values.
>
> I'd like to select the newest (max(timestamp)) row for each id,
> before a given cutoff date; is this possible?
>
> The best I've been able to come up with is the rather ugly (and
> very slow):
>
> select * from log as l1 where timestamp in
>   (select max(timestamp) from log where id=l1.id and
>   timestamp<'2001-01-01' group by id);
>
> There must be a better way to do this; any tips?
>
> Thanks,
> - Marc
>
>





[SQL] How Postgresql Compares For Some Query Types

2001-07-17 Thread Mark kirkwood

Dear list,

With the advent of version 7.1.2 I thought it would be interesting to compare 
how Postgresql does a certain class of queries (star queries) and data loads 
with some of the other leading databases (which were, in my humble opinion, 
Mysql 3.23.39, Db2 7.2, Oracle 9.0 [OK - possibly SQL Server 2000 too, but I 
don't run Windows]). 

The results were overall very encouraging :

Postgresql can clearly hold its own when compared to the "big guys".

The full details (together with a wee rant) are available at:

http://homepages.slingshot.co.nz/~markir

(if anyone asks I can submit the entire results... but I figured, let's cut to 
the chase here)

There were two areas where Postgresql was slower, and I thought it would be 
informative to discuss these briefly :


1  Star query scanning a significant portion of a fact table 

SELECT 
   d0.f1,
   count(f.val)
FROM dim0 d0,
 fact1 f
WHERE d0.d0key = f.d0key
AND   d0.f1 between '2007-07-01' AND '2018-07-01'
GROUP BY d0.f1

This query requires summarizing a significant proportion of the 300 row ( 
700Mb ) fact1 table.

Postgres 7.1.2 executed this query like :

 Aggregate  (cost=2732703.88..2738731.49 rows=120552 width=20)
  -> Group  (cost=2732703.88..2735717.69 rows=1205521 width=20)
  -> Sort  (cost=2732703.88..2732703.88 rows=1205521 width=20)
 -> Hash Join  (cost=1967.52..2550188.93 rows=1205521 width=20)
   -> Seq Scan on fact1 f  (cost=0.00..1256604.00 rows=300 
width=8)
   -> Hash  (cost=1957.47..1957.47 rows=4018 width=12)
 -> Index Scan using dim0_q1 on dim0 d0  (cost=0.00..1957.47 
rows=4018 width=12) 

for an elapsed time of 3m50s

Wheras Oracle 9.0 used :

 SELECT STATEMENT Optimizer=CHOOSE (Cost=5810 Card=4020 Bytes =60300)
 SORT (GROUP BY) (Cost=5810 Card=4020 Bytes=60300)
   HASH JOIN (Cost=5810 Card=2989644 Bytes=44844660)
 TABLE ACCESS (FULL) OF 'DIM0' (Cost=4 Card=4020 Bytes= 40200)
 TABLE ACCESS (FULL) OF 'FACT1' (Cost=5806 Card=2990089 
Bytes=14950445)

for an elapsed time of 50s.

It would seem that Oracle's execution plan is the more efficient one.


2   Bulk loading data

Bulk load times for a 300 row (700Mb) fact table were:

Postgresql  9m30s   (copy)
Db2 2m15s   (load)
Oracle  5m  (sqlldr)
Mysql   2m20s   (load)


(Note that Db2 is faster than Mysql here ! )

While I left "fsync = on" for this test... I still think the difference was 
worth noting.

Any comments on these sort of results would be welcome.

regards

Mark








[SQL] Re: performance issue with distance function

2001-07-28 Thread Mark kirkwood

Hi Ryan,

There is a bit of a strange way around the distance overhead issue :

Create another table with structure like 
(lat1,long1,zip1,lat2,long2,zip2,distance)

and precalculate the distance for each possibility. This means n*(n-1) rows 
if you have n location rows. You would then include this table in your query 
and use distance like you wanted to initially ( should work fast provided you 
index it on lat1,long1,distance)

The calculation overhead of distance is then removed from your query ( at the 
expense of some disk space ). The insert of each new location requires n 
calculations of distance - you could perform this in the background I guess !
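
A sketch of the precalculated-distance idea described above, using the column
layout suggested in the message (the table name, index name, and literal
values are invented):

```sql
-- One row per ordered pair of locations, with the distance precomputed
-- at insert time rather than at query time:
CREATE TABLE location_distance (
    lat1  numeric,  long1 numeric,  zip1 varchar(10),
    lat2  numeric,  long2 numeric,  zip2 varchar(10),
    distance numeric
);
CREATE INDEX location_distance_idx
    ON location_distance (lat1, long1, distance);

-- "Everything within 50 units of a given location", with no distance()
-- function call in the query:
SELECT zip2, distance
FROM location_distance
WHERE lat1 = 40.7 AND long1 = -86.9
  AND distance <= 50
ORDER BY distance;
```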

regards

Mark





Re: [SQL] Re: Help!!! Trying to "SELECT" and get a tree structure back.

2001-08-16 Thread Mark Stosberg

--CELKO-- wrote:
> 
> >> The table causing my headache:
> 
>  CREATE TABLE app_components
> (idNUMERIC(7) NOT NULL PRIMARY KEY,
>  name  VARCHAR(100) NOT NULL,
>  description   VARCHAR(500) NULL,
>  parent_id NUMERIC(7) NULL
>REFERENCES app_components(id)
>ON DELETE CASCADE,
>  CONSTRAINT appcomp_name_u UNIQUE (name, parent_id)); <<

I first tried the above approach to model trees in SQL, which also
caused me 
headaches. The recursion needed to find all the ancestors for a given
id was slow.  So I bought and looked through Joe Celko's book (who recently
posted on this topic). I implemented his ideas, and found that they were
better than the method above (and faster, as he says), but I still
wasn't satisfied. First, I didn't like that the notation wasn't easily
parsable for me. Updating and deleting categories felt like hacks, and
moving a category seemed like too much work. So I kept looking for new
ideas to model trees in SQL. On my third try, I found a solution I was
happy with, which I'll call the "sort key" method. I first read about it here:

http://philip.greenspun.com/wtr/dead-trees/53013.htm
(Search for "Sort keys deserve some discussion") on this page

The sort key is a single string that gives you the location of a node in
a tree. 
Used in conjunction with a parent_id, I found that most of the questions
I was asking were easy to answer: Who is my parent? Who are all my
ancestors? Who are my immediate children? How many descendants do I
have? Who are siblings? Furthermore, it's fairly straightforward to
manipulate items using this structure, and queries are fast -- most
questions can be answered with one SQL statement. Finally, the sort_keys
are fairly human-parsable, which is nice. The trade-off for all these
features is that you have a fixed number of immediate children for any
parent (based on how many characters are used for each piece of the sort
key). I think in my application to categorize data, each parent can only
have 62 immediate children. I can live with that. 
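
The scheme described above can be sketched roughly as follows (hypothetical
table and key values; one character per tree level drawn from [a-zA-Z0-9],
which gives the 62-children-per-parent limit mentioned):

```sql
CREATE TABLE category (
    id        integer PRIMARY KEY,
    parent_id integer REFERENCES category (id),
    name      text NOT NULL,
    sort_key  text NOT NULL UNIQUE   -- e.g. 'a', 'ab', 'abQ'
);

-- All descendants of the node whose sort_key is 'ab':
SELECT * FROM category
WHERE sort_key LIKE 'ab%' AND sort_key <> 'ab';

-- Immediate children only (exactly one character deeper):
SELECT * FROM category WHERE sort_key LIKE 'ab_';

-- All ancestors of 'abQ': every proper prefix of its key ('a', 'ab'):
SELECT * FROM category
WHERE 'abQ' LIKE sort_key || '%' AND sort_key <> 'abQ';
```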

Cascade is a complete (free) Perl/Postgres application using this scheme
if you are interested in seeing these ideas in action. It's homepage is here:
http://summersault.com/software/cascade/

You'll be able to get a demo and source code from there. 

Thanks,

  -mark
http://mark.stosberg.com/




[SQL] Sequential select queries...??

2001-08-20 Thread Mark Mikulec

Hello,

At first I thought what I was trying to do was simple and could be done
easily - but alas, I've spent way too much time and could not figure out
how to get the results in question.

Let's say I have a table T comprised of  id of type integer, name and
description both of type text.

What i'd like to do is the following:

Select id from T where name = 'bleh';

and

Select id from T where description = 'bleh';

and combine both results into the same result set -- that is, keeping
duplicate id's if they appear. So then I could do a GROUP BY and a COUNT
to see how many appeared in only one, and how many appeared in both.

Could someone help me? I've tried countless different sql queries, can't
seem to get one to work. If I can just get those duplicate id's in the
query.. then I'd be laughing and then I can complete my task.
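
A sketch of one way to get that combined result set, using the table T
described above (UNION ALL keeps the duplicate id's that plain UNION would
remove):

```sql
-- Each branch contributes a row per match, duplicates preserved:
SELECT id FROM T WHERE name = 'bleh'
UNION ALL
SELECT id FROM T WHERE description = 'bleh';

-- Count how many of the two columns matched for each id
-- (2 = appeared in both, 1 = appeared in only one):
SELECT id, count(*) AS matches
FROM (SELECT id FROM T WHERE name = 'bleh'
      UNION ALL
      SELECT id FROM T WHERE description = 'bleh') AS m
GROUP BY id;
```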

Thanks in advance,

Mark





[SQL] Different Choices For Index/Sequential Scan With And Without A Join In 7.2

2001-08-26 Thread Mark kirkwood

Dear List,

I have been doing a little investigation on when the optimizer chooses a 
sequential scan over an index access. I have come across some interesting 
behaviour in the current 7.2 sources (2001-08-17):

Consider two types of query on my "usual" tables:

SELECT 
   f.d0key,
   count(f.val)
FROM fact0 f
WHERE f.d0key BETWEEN  270 AND <value>
GROUP BY f.d0key;

and

SELECT
   d0.f1,
   count(f.val)
FROM dim0 d0,
 fact0 f
WHERE d0.d0key = f.d0key
AND   d0.f1 BETWEEN  '2000-01-26' AND <'date'>
GROUP BY d0.f1;


Note that  'f1' = '2000-01-26'  corresponds to 'd0key' = 270 in the table 
'dim0';

I wanted to find the values for <value> and <date> for which the optimizer 
changed from an index access to a seq scan of the 'fact0' table.

I used cpu_tuple_cost = 0.4, but everything else was fairly standard.

For the first query the value of <value> (i.e. 'd0key') was 627.
For the second, the value of <date> (i.e. 'f1') was '2000-02-05' 
(corresponds to d0key = 279).

I guess I was expecting the value that made the first query change from 
index to seq scan to be "close" to the value that made the second query use a 
sequential scan, as the fact0 access of the second query is essentially the 
first query. However the results are vastly different - have I missed 
something obvious here ?


The script and explain output are listed below.

regards

Mark

<--script

SET cpu_tuple_cost=0.4;
SHOW cpu_tuple_cost;

--  show what keys are for what dates...
--
SELECT d0.d0key,
   d0.f1
FROM dim0 d0
WHERE d0.d0key IN ('270','279','280','626','627')
;


--  show when index scans change to sequential
--  for the fact0 table alone...
--
EXPLAIN
SELECT 
   f.d0key,
   count(f.val) 
FROM fact0 f
WHERE f.d0key BETWEEN  270 AND 626
GROUP BY f.d0key
;


EXPLAIN
SELECT 
   f.d0key,
   count(f.val)
FROM fact0 f
WHERE f.d0key BETWEEN  270 AND 627
GROUP BY f.d0key
;


--  show when index scans change to sequential
--  for the two table join
--
EXPLAIN
SELECT
   d0.f1,
   count(f.val)
FROM dim0 d0,
 fact0 f
WHERE d0.d0key = f.d0key
AND   d0.f1 BETWEEN  '2000-01-26' AND '2000-02-04'
GROUP BY d0.f1
;


EXPLAIN
SELECT
   d0.f1,
   count(f.val)
FROM dim0 d0,
 fact0 f
WHERE d0.d0key = f.d0key
AND   d0.f1 BETWEEN  '2000-01-26' AND '2000-02-05'
GROUP BY d0.f1
;

<--results

SET VARIABLE
NOTICE:  cpu_tuple_cost is 0.4
SHOW VARIABLE
 d0key |   f1
---+
   270 | 2000-01-26 00:00:00+13
   279 | 2000-02-04 00:00:00+13
   280 | 2000-02-05 00:00:00+13
   626 | 2001-01-16 00:00:00+13
   627 | 2001-01-17 00:00:00+13
(5 rows)
   
NOTICE:  QUERY PLAN:

Aggregate  (cost=0.00..1308177.10 rows=33453 width=8)
  ->  Group  (cost=0.00..1307340.77 rows=334533 width=8)
->  Index Scan using fact0_pk on fact0 f  (cost=0.00..1306504.44 
rows=334533 width=8)

EXPLAIN
NOTICE:  QUERY PLAN:

Aggregate  (cost=1308030.21..1309707.21 rows=33540 width=8)
  ->  Group  (cost=1308030.21..1308868.71 rows=335400 width=8)
->  Sort  (cost=1308030.21..1308030.21 rows=335400 width=8)
  ->  Seq Scan on fact0 f  (cost=0.00..1272693.00 rows=335400 
width=8)

EXPLAIN
NOTICE:  QUERY PLAN:

Aggregate  (cost=0.00..1155870.07 rows=268 width=20)
  ->  Group  (cost=0.00..1155863.36 rows=2684 width=20)
->  Nested Loop  (cost=0.00..1155856.65 rows=2684 width=20)
  ->  Index Scan using dim0_q1 on dim0 d0  (cost=0.00..6.63 
rows=9 width=12)
  ->  Index Scan using fact0_pk on fact0 f  (cost=0.00..117117.99 
rows=3 width=8)

EXPLAIN
NOTICE:  QUERY PLAN:

Aggregate  (cost=1281572.52..1281587.43 rows=298 width=20)
  ->  Group  (cost=1281572.52..1281579.97 rows=2982 width=20)
->  Sort  (cost=1281572.52..1281572.52 rows=2982 width=20)
  ->  Hash Join  (cost=7.06..1281400.41 rows=2982 width=20)
->  Seq Scan on fact0 f  (cost=0.00..1257693.00 
rows=300 width=8)
->  Hash  (cost=7.04..7.04 rows=10 width=12)
  ->  Index Scan using dim0_q1 on dim0 d0  
(cost=0.00..7.04 rows=10 width=12)

EXPLAIN






[SQL] On Differing Optimizer Choices ( Again)

2001-08-28 Thread Mark kirkwood

Dear all,

Tom's comments on my previous posting encouraged me to think some more about 
this... and now I believe I have got to the heart of what I was attempting to 
get across before. 
I have a fresh and hopefully clear example.

OK, let's start with a small table called 'dim0' that has a unique key called 
'd0key'  ( 1 unique values ). Add to this a large table called 'fact2', 
which has 1000 of these 'd0key' values. There are 3000 duplicates for each 
value uniformly distributed throughout it. ( total of 300 rows ).

Consider the query :

SELECT
   f.d0key,
   count(f.val)
FROM fact2 f
WHERE f.d0key BETWEEN  270 AND 350
GROUP BY f.d0key

which has execution plan :

Aggregate  (cost=0.00..102500.80 rows=2592 width=8)
  ->  Group  (cost=0.00..102436.00 rows=25920 width=8)
->  Index Scan using fact2_pk on fact2 f  (cost=0.00..102371.20 
rows=25920 width=8)

If we use 351 instead of 350 we get a sequential scan.

Now examine a similar query, but with 'dim0' joined :

SELECT
   f.d0key,
   count(f.val)
FROM dim0 d0,
 fact2 f
WHERE d0.d0key = f.d0key
AND   f.d0key BETWEEN 270 AND 350
GROUP BY f.d0key

this has plan :

Aggregate  (cost=0.00..103127.60 rows=2592 width=12)
  ->  Group  (cost=0.00..103062.80 rows=25920 width=12)
->  Merge Join  (cost=0.00..102998.00 rows=25920 width=12)
  ->  Index Scan using dim0_pk on dim0 d0  (cost=0.00..213.00 
rows=1 width=4)
  ->  Index Scan using fact2_pk on fact2 f  (cost=0.00..102371.20 
rows=25920 width=8)

No surprises there 

(If we use 351, again a sequential scan is used instead.)

So far this is all as one would expect. However suppose we substitute 
'd0.d0key' in the 'AND' clause instead of 'f.d0key' to obtain :

SELECT
   f.d0key,
   count(f.val)
FROM dim0 d0,
 fact2 f
WHERE d0.d0key = f.d0key
AND   d0.d0key BETWEEN 270 AND 350
GROUP BY f.d0key

Suddenly the plan is :

Aggregate  (cost=103530.27..104293.15 rows=2624 width=12)
  ->  Group  (cost=103530.27..104227.54 rows=26244 width=12)
->  Merge Join  (cost=103530.27..104161.93 rows=26244 width=12)
  ->  Index Scan using dim0_pk on dim0 d0  (cost=0.00..213.00 
rows=1 width=4)
  ->  Sort  (cost=103530.27..103530.27 rows=26244 width=8)
->  Seq Scan on fact2 f  (cost=0.00..101604.00 rows=26244 
width=8)

Now this is interesting; I would have expected an index scan to still be 
used... This behaviour was what I was seeing (in disguised form) in the 
queries of the previous posting.

( In fact to encourage an index scan changing 350 down to 313 is required )

I wonder how 7.1.x behaves when faced with this situation?... a build of an 
extra 7.1.3 database reveals the corresponding plan for this query is 
(note that for 7.1.3 the magic number for index -> sequential scan is 369 
instead of 350, but bear with me):

Aggregate  (cost=0.00..118850.17 rows=2970 width=12)
  ->  Group  (cost=0.00..118775.91 rows=29703 width=12)
->  Nested Loop  (cost=0.00..118701.66 rows=29703 width=12)
  ->  Index Scan using dim0_pk on dim0 d0  (cost=0.00..67.99 
rows=99 width=4)
  ->  Index Scan using fact2_pk on fact2 f  (cost=0.00..1194.45 
rows=300 width=8)

So this version is using an index scan for this query (in fact it will 
keep using one until after d0key=445 - so in some sense a behaviour opposite 
to 7.2dev is being exhibited).

Now the obvious question to ask here is "why are you griping about using 
a seq scan...?". Timing the queries reveals that the index scan is 
considerably faster : specifically  10s against 60s. Additionally 7.1.3 
performs the above query in 10s also - and even "out" at the "extreme end" 
using d0.d0key=445 the elapsed time is just 15s .

Why am I pointing this out ? - well I hope that "field testing" the optimizer 
will occasionally provide food for thought ! 

regards

Mark

P.s : (I am using 7.2 dev 2001-08-17 and all parameters are default apart 
from shared_buffers and sort_mem)





Re: [SQL] Out of free buffers... HELP!

2001-09-18 Thread Mark kirkwood


Previously:
>psql ids -c 'select src,dst,count(*) from brick* where src_port=135
>group by src,dst' > /tmp/135.dat

This is just a guess, increasing the parameters shared_buffers and sort_mem 
might help. 

For example if your table is about 1Gb in size then try shared_buffers=1 
and sort_mem=2 ( you need an extra 100Mb ram for this )

Cheers

Mark




Re: [SQL] Possible Bug regarding temp tables (sql or psql?)

2002-07-03 Thread Mark Frazer

Forgot to mention that adding
DROP TABLE v_idx ;
before the END WORK will fix things.  However, I was under the impression that
temporary tables would go away after a transaction in which they were created
was committed.









[SQL] Possible Bug regarding temp tables (sql or psql?)

2002-07-03 Thread Mark Frazer

When using the attached script in psql, the temp tables disappear as 
far as \distv shows, but running the script a second time fails.

To reproduce, save the following script as bug.sql, then start psql on an
test database.
\i bug.sql
\distv
-- no relations should be shown
\i bug.sql 
-- this will fail.

However, if you run psql again, you can
\i bug.sql
successfully.  It will only fail if run twice in the same session.

cheers
-mark

-- 
Hardy Boys: too easy. Nancy Drew: too hard! - Fry


CREATE TABLE foo (
foo_idx SERIAL PRIMARY KEY,
foo INTEGER ) ;
CREATE TABLE bar (
bar_idx SERIAL PRIMARY KEY,
foo_idx INTEGER REFERENCES foo,
bar INTEGER ) ;
INSERT INTO foo ( foo ) VALUES ( 111 ) ;
INSERT INTO foo ( foo ) VALUES ( 222 ) ;
INSERT INTO foo ( foo ) VALUES ( 333 ) ;

BEGIN WORK ;
SELECT foo_idx INTO TEMP v_idx FROM foo WHERE foo.foo = 222 ;
INSERT INTO bar ( foo_idx, bar ) VALUES ( v_idx.foo_idx, 888 ) ;
END WORK ;

DROP TABLE foo ;
DROP SEQUENCE foo_foo_idx_seq ;
DROP TABLE bar ;
DROP SEQUENCE bar_bar_idx_seq ;






Re: [SQL] Possible Bug regarding temp tables (sql or psql?)

2002-07-04 Thread Mark Frazer

Any idea why the table can't be seen with \d in psql then?

Christopher Kings-Lynne <[EMAIL PROTECTED]> [02/07/04 00:21]:
> 
> No - they go away at the end of a _connection_.  However, there is now a
> patch floating around on -hackers that would add an ' ON COMMIT DROP;'
> option to CREATE TEMP TABLE.

-- 
In the event of an emergency, my ass can be used as a flotation
device. - Bender








Re: [SQL] Possible Bug regarding temp tables (sql or psql?)

2002-07-05 Thread Mark Frazer



Bruce Momjian <[EMAIL PROTECTED]> [02/07/04 22:10]:
> 
> TODO has:
> 
>   * Allow psql \d to show temporary table structure
> 
> Looks like it works fine now with schemas:
>   
> I will mark the TODO as done.

It doesn't work with select into though:

config=> select 5 into temp v_tmp ;
SELECT
config=> \d v_tmp 
Did not find any relation named "v_tmp".
config=> select 4 into temp v_tmp ;
ERROR:  Relation 'v_tmp' already exists
config=> select version() ;
   version   
-
 PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96


-- 
I heard one time you single-handedly defeated a hoard of rampaging somethings
in the something something system. - Fry








[SQL] 7.3 features and enhancements

2002-07-22 Thread mark carew

Hi All,

Can somebody direct me to a list of the above? It would be nice to know in
advance of its release.

Regards Mark






[SQL] need assistance with multi-row matching expression

2002-08-19 Thread Mark Stosberg


Hello,

I'm using Postgres 7.1 and have been enjoying the upgraded "sub-select"
support in this release versus older versions.

At the moment, I'm stuck with a SQL issue that I haven't run into
before.

I need to select the data for all the "parks" that match some search
criteria. The parks are stored in a "parks" table with a park_id as a
primary key.

Part of the search includes the logic of "match parks that include all
these features". The "features" are stored in their own table, and are
related to the parks table with a park_feature_map table, which contains
a park_id column and a feature_id column.

A user can select 0 to N features, and each park might have 0 to N
entries in the park_feature_map table.

Where I'm stuck is that I'm used to putting together SQL statements to
match a given row. This is different -- to create a successful match for
a park_id, I need to match against N rows, where N is the
number of feature_ids provided.

How do I do that? Can I do it in one query?

Thanks!

  -mark

http://mark.stosberg.com/





Re: [SQL] need assistance with multi-row matching expression

2002-08-19 Thread Mark Stosberg


On Mon, 19 Aug 2002, Nick Fankhauser wrote:
>
> This may not be the best way, but I couldn't resist taking a shot at it...

Thanks for the response Nick. If only I knew I was going to get a
response from a block away, I would have just come down to say hi. :)

I had an "a ha" moment about this over lunch. I was making the problem
much harder than it needed to be, having assured myself I was going to
need some advanced SQL feature to solve the problem. Some testing seems to
reveal that I can address this problem simply by joining against the
park_feature_map table N times. This way I only need to match against one
row in each of these tables, which is easy in SQL. Here's the statement I
tested with for N=2:

SELECT p.park_id, park_name
FROM parks p
JOIN park_feature_map map_4
ON (p.park_id = map_4.park_id AND map_4.feature_id=4)
JOIN park_feature_map map_15
ON (p.park_id = map_15.park_id AND map_15.feature_id=15);

In this way, I'm only returned the parks that match all the features.
Thanks again for your help!

   -mark

http://mark.stosberg.com/





Re: [SQL] Event recurrence - in database or in application code ????

2002-08-20 Thread Mark Stosberg


Hello Darrin,

I recently implemented what I would consider the "hard part" of a
solution to this using Perl and Postgres. My solution handles multi-day
events and recurring events, including events that are both multi-day
and recurring. Here's an overview of how I did it:

A table called "calendar" has just one column, "date". I inserted
10,000 rows into the table, one for every day starting a couple of years
back and going _way_ into the the future. This is so that when I
construct a SELECT statement to say "show me every day in May, 2002",
I get back a row for every day, regardless of whether or not there was
an event.

A second table "events", holds my events including
an event_id, and start and end dates and times. There is one row for
each event, no matter if it recurs or is multi-day.

A third table "events_calendar" is built based on the "events" table.
In this table, a row is inserted for every day that an event occurs. So
if an event spans 3 days and occurs a total of 3 times, there are 9 rows
added to this table. For recurring events, the start and end dates and
times are adjusted to be "local" to this occurrence, not the original
start date and time. In addition to the fields contained in the "events"
table, the events_calendar table also has "date" column to denote which
date is being refered to. Now with a simple SELECT statement that joins
the calendar table with the events_calendar table, I can easily build a
public view of the data with events appearing on as many dates as they
should.

On the administrative side, I have a few functions to make this work:

- a function to build the entire events_calendar table initially
- some functions to handle inserting events into events_calendar
- some functions to handle deleting events from events_calendar

When I make an insert in the events table, I run the functions to create
the inserts for the events_calendar. When I delete from the events
table, the related rows from events_calendar table get deleted.
When updating the events table, I delete from events_calendar, and then
re-insert into it.  I'm sure this piece could be done with triggers, but
I'm much better at writing Perl, so I did it that way. :)

I've been happy with this solution. I think the Perl turned out to be
fairly easy to understand and maintain, the SQL that needs to be used
ends up being fairly straightforward, and the performance is good
because the selects to view the calendar are fairly simple. The one
drawback is that sometime before 2028, I have to remember to add some
rows to the calendar table. :)
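
The three-table layout described above can be sketched roughly like this (the
prose only fixes "date", "event_id", and the start/end pairs; all other names
and the sample date range are invented):

```sql
CREATE TABLE calendar (
    date date PRIMARY KEY            -- one row per day, pre-populated
);

CREATE TABLE events (
    event_id   serial PRIMARY KEY,   -- one row per event, recurring or not
    title      text,
    start_date date,  start_time time,
    end_date   date,  end_time   time
);

CREATE TABLE events_calendar (
    event_id   integer REFERENCES events,
    date       date REFERENCES calendar,  -- the single day this row covers
    start_date date,  start_time time,    -- adjusted per occurrence
    end_date   date,  end_time   time
);

-- "Show me every day in May, 2002": one row per day, event or not.
SELECT c.date, ec.event_id
FROM calendar c
LEFT JOIN events_calendar ec ON ec.date = c.date
WHERE c.date BETWEEN '2002-05-01' AND '2002-05-31'
ORDER BY c.date;
```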

  -mark

http://mark.stosberg.com/


On Tue, 20 Aug 2002, Darrin Domoney wrote:

> One of the features that I am attempting to implement in the system that I
> am building is
> the capability to schedule events (with recurrence). My question to those of
> you that are
> more experienced in postgresql is whether you would implement this
> functionality in the
> database level using triggers or at the application code level (PHP).
>
> Ideally I would like to be able to generate an event off a template
> appointment (initial appt)
> and have it schedule identical appointments hourly, daily, weekly, monthly,
> or by selectable
> range (ie: first tuesday every month). I would also like to have the
> capability to remove an
> appointment and leave others (much like using C pointers - I think)or to
> remove all
> (cascading delete).
>
> Any suggestions, etc gratefully appreciated.
>
> Darrin
>
>
>
>






Re: [SQL] Event recurrence - in database or in application code ????

2002-08-21 Thread Mark Stosberg

On 21 Aug 2002, Robert Treat wrote:

> On Tue, 2002-08-20 at 22:15, Mark Stosberg wrote:
> >
> > Hello Darrin,
> >
> 
> >
> > I've been happy with this solution. I think the Perl turned out to be
> > fairly easy to understand and maintain, the SQL that needs to be used
> > ends up being fairly straightforward, and the performance is good
> > because the selects to view the calendar are fairly simple. The one
> > drawback is that sometime before 2028, I have to remember to add some
> > rows to the calendar table. :)
> >
>
> You need to add rows as well as re-populate a bunch of info for
> recurring dates that are not listed forward right?

Perhaps this will answer your question, Robert -- one point I didn't
mention before is that I don't allow events to recur forever; they
have to end after some finite number of times. You could add a
birthday and tell it to repeat it once a year for the next 100 years for
example. I wouldn't have to go and add rows for these later though-- the
rows needed for the next 100 years would already be generated in the
events_calendar table.
The only thing that "expires" with my solution is the dates in the
calendar table. I could make the dates run for the next 100 years just
as easy as 28 years, I just figured the system would probably get a significant
revamp sometime in the next quarter century.  :)

  -mark

http://mark.stosberg.com/






Re: [SQL] Separating data sets in a table

2002-08-25 Thread Mark Stosberg

On Sun, 25 Aug 2002, Andreas Tille wrote:

> On Sat, 24 Aug 2002, Mark Stosberg wrote:
>
> > On Thu, 22 Aug 2002, Andreas Tille wrote:
> > > Hello,
> > >
> > > I want to solve the following problem:
> > >
> > > CREATE TABLE Ref( Id int ) ;
> > > CREATE TABLE Import ( Id    int,
> > >   Other varchar(42),
> > >   Flag  int,
> > >   Ts    timestamp ) ;
> > > CREATE TABLE Data   ( Id    int,
> > >   Other varchar(42) ) ;
> > larger problem. I get the sense that you have data you are importing on a
> > regular basis from outside Postgres, and you want to check it before
> > it gets moved into production, but I'm not exactly sure what's happening.
>
> You are completely right.  I just do an import from an external database.
> The person I obtain the data from does an output of the table in a form
> to do a "COPY FROM".  The problem is that it might happen that there are
> some data rows which infringe referential integrity and I have to ask
> back the data provider for additional data which describe additional data
> which are referenced by the Id mentioned above.  So I have to sort out those
> data sets who have no known Id in my production data.

Andreas,

Thanks for the clarification. Here's an idea about how to solve your
problem. As you are importing your data, instead of doing it all at
once, try importing it a row at a time into a table that has the RI turned
on. Check each insert to see if it's successful. If it's not
successful, then insert that row into a table that /doesn't/ have RI
(maybe "import_failures"), perhaps also including the error that
Postgres returned. (This may be stored in $DBI::errstr.) Then when you
are done, you can look in the
import_failures for a report of which rows need some assistance. If you
need every row to succeed that's imported into the production table, you
can do all this inside of a transaction, and roll it back if any of the
inserts fail. [ thinks for a moment. ] Of course, that would normally
rollback your inserts into import_failures too, so perhaps you can use a
second database connection to make sure those always happen.
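The row-at-a-time idea above can be sketched end-to-end. This is a minimal
Python example with SQLite standing in for Postgres; the table and column
names ("ref", "data", "import_failures") are illustrative, not from the
original schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce RI, as Postgres would
conn.execute("CREATE TABLE ref (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE data (id INTEGER REFERENCES ref(id), other TEXT)")
# the failure table has no RI constraint, so inserts into it always succeed
conn.execute("CREATE TABLE import_failures (id INTEGER, other TEXT, error TEXT)")
conn.execute("INSERT INTO ref VALUES (1), (2)")

rows = [(1, "ok"), (3, "unknown id"), (2, "ok too")]
for row in rows:
    try:
        conn.execute("INSERT INTO data VALUES (?, ?)", row)
    except sqlite3.IntegrityError as e:
        # record the offending row together with the database's error text
        conn.execute("INSERT INTO import_failures VALUES (?, ?, ?)",
                     row + (str(e),))

good = conn.execute("SELECT count(*) FROM data").fetchone()[0]
bad = conn.execute("SELECT id, error FROM import_failures").fetchall()
print(good, len(bad))  # 2 1
```

Afterwards, import_failures holds exactly the rows that need a second look.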

I hope that helps. Perhaps thinking in terms of "row-at-a-time
processing" will help you solve your problem.

-mark

http://mark.stosberg.com/


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [SQL] how to refer to tables in another database( or schema)

2002-08-25 Thread Mark Stosberg

On Mon, 19 Aug 2002, Stephan Szabo wrote:

> On Mon, 19 Aug 2002, Jiaqing wrote:
>
> > Hello,
> > I'm still new here and new to PostgreSQL, I'd like to know that after I
> > have created two databases on my site, such as one is called backend, and
> > another one is called admin, how do I refer(query) the table from backend
> > while I'm connected to admin database, or is it possible to do that in
> > PostgreSQL? any answer is appreciated.
>
> In addition to previous answers (dblink related), in 7.3 schemas will
> be implemented and you may be able to use one database with two schemas
> in which case normal sql should work.  This isn't out yet, so it's a
> future concern.

One problem space that I think either of these solutions might address
is the issue of having static "country code" and "state code" tables
reproduced on many databases throughout an installation. Would anyone
recommend either of these solutions, or another one, for addressing this
issue?

I'm not looking forward to the day when a new country appears,
and I have to find all the places I have country code lists to add it.
:)

  -mark

http://mark.stosberg.com/


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] Problems with version 7.1, could they be fixed in 7.2?

2002-08-28 Thread mark carew

Hi Ligia,

Are you running VACUUM ANALYSE or is it VACUUM ANALYZE? (Can never
remember, though I'm reasonably sure that it's the former.)

Regards Mark Carew
Brisbane Australia



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [SQL] Hairy question - transpose columns

2002-10-28 Thread mark carew
Hi Andres,

For mine, I would read the data in using any language that I was
familiar with, parse it into an array or some other form in the required
format, then create the "copy" statement for postgresql,
connect and apply.

    Regards Mark



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] escape single quote in INSERT command

2002-11-26 Thread mark carew
Hi Hunter,

From my xbase++ program suite, sqlcontext class.
*
cValue := strtran(cValue,['],[\'])
 *
It's called masquarading: simply replace the single quote with
backslash + single quote.
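A sketch of both escaping styles in Python. (Note: standard SQL escapes a
single quote by doubling it; the backslash style is a PostgreSQL extension,
and parameterized queries avoid the issue entirely.)

```python
def escape_backslash(s: str) -> str:
    # the approach from the message: prefix each single quote with a backslash
    return s.replace("'", "\\'")

def escape_standard(s: str) -> str:
    # standard SQL instead doubles the quote
    return s.replace("'", "''")

print(escape_backslash("O'Brien"))  # O\'Brien
print(escape_standard("O'Brien"))   # O''Brien
```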

Regards
Mark Carew
Brisbane Australia





---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] escape single quote in INSERT command

2002-11-26 Thread mark carew
Woops should have been  masquerading



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [SQL] import sql script

2002-12-10 Thread mark carew
no



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[SQL] Slow self-join on a 100 million record table

2003-01-01 Thread Mark Davies
I have a database containing 100 million records, in which each record
contains (in sequence) all of the words in a 100 million word
collection of texts.  There are two columns: TheID (offset value) and
TheWord (sequential words), e.g.:

TheID      TheWord
-----      -------
1          I
2          saw
3          the
4          man
5          that
6          came
. . .
100000000  xxx

To extract strings, I then use self-joins on this one table, in which
[ID], [ID-1], [ID+1] etc are used to find preceding and following
words, e.g.:

select count(*),w1.w1,w2.w1,w3.w1 from 
((select w1, ID+1 as ID from seq where w1 in ('the','that','this')) w1
inner join 
(select w1, ID as ID from seq where w1 in ('man','woman','person')) w2
on w2.ID = w1.ID) 
inner join 
(select w1, ID-1 as ID from seq where w1 in ('who','that','which')) w3
on w3.ID=w1.ID 
group by w1.w1,w2.w1,w3.w1 

This would yield results like "the man that" (words 3-5 above), "that
woman who", "this man which", etc.

The problem is, the self-join solution is extremely slow.  I have a
SQL Server 7.0 database with a clustered index on TheWord (sequential
words) and a normal index on TheID.  Even with all of this, however, a
self-join query like the one just listed takes about 15 seconds on my
machine (dual CPU 1.2GHz, 4GB RAM, three 10K rpm SCSI HD w/ RAID-0).

Any suggestions?  Have I messed up in terms of the SQL statement? 
Thanks in advance for any help that you can give.

Mark Davies
Illinois State University

P.S. Yes, I know about Full-Text Indexing in SQL Server, but it's not
adequate for my purposes -- there's a lot more to the project than
what I've described here.

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



[SQL] Cancelling Queries

2003-03-06 Thread Mark Mitchell
I have a Perl program that executes PostgreSQL queries through DBI.
Is there any way to cancel a query once it's started? If I could at least
somehow get the PID of the child process postmaster starts, I could kill
that.

This may be a better question for the Perl programming list, but I
thought I'd ask you guys too.

Mark Mitchell




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly


[SQL] unsubscribe

2003-07-29 Thread Mark Roberts



unsubscribe 


___
Disclaimer: Great Ormond Street Hospital for Children NHS Trust

SECURITY WARNING RE: PATIENT OR OTHER CONFIDENTIAL DATA. Please note
that Internet E-mail is simply not a secure communication medium.
We strongly advise that you understand & observe this lack of security
when e-mailing us.

This email and any files transmitted with it are intended solely for
the use of the individual to whom they are addressed.  If you have
received this email in error please notify your email administrator.

Any views or opinions are solely those of the author of this email
and do not represent those of Great Ormond Street Hospital for
Children NHS Trust unless specifically stated.

VIRUSES: This email message has been checked for the presence of
computer viruses by Sophos antivirus software.  However, this does
not guarantee that this email is free of viruses, and the recipient
should perform their own check. 



[SQL] function returning setof performance question

2003-07-29 Thread Mark Bronnimann

  I have a question regarding the performance of a function returning a 
set of a view as opposed to just selecting the view with the same 
where clause. Please, if this should go to the performance list instead, 
let me know. I'm just wondering about this from the sql end of things. 

  Here's the environment:

  I'm working from PHP, calling on the query. 

  I have a view that joins 12 tables and orders the results. 

  From PHP, I do a select on that view with a where clause. 

  I created a function that queries the view with the where clause 
included in the function. The function is returning a setof that 
view taking one variable for the where clause (there are several 
other static wheres in there).

  I have found that querying the view with the where clause is 
giving me quicker results than if I call the function. 

  The performance hit is tiny, we're talking less than 1/2 a second, 
but when I've done this sort of thing in Oracle I've seen a performance 
increase, not a decrease. 

  Any ideas? 

  Thanks folks... I'm new to the list. 


-- 

Mark Bronnimann
[EMAIL PROTECTED]
  
-- Let's organize this thing and take all the fun out of it. --

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] function returning setof performance question

2003-07-29 Thread Mark Bronnimann

  Thanks for the reply. 

  I was hoping to eliminate the parse call on the view because I was doing 
the where clause on the view instead of putting the where in the view. 
In all, I was hoping to keep a single view called from multiple functions 
with different where clauses. Yep... I shoulda known better...

  Thanks again!


And Rod Taylor ([EMAIL PROTECTED]) said...:

> > The performance hit is tiny, we're talking less than 1/2 a second, 
> > but when I've done this sort of thing in Oracle I've seen a performance 
> > increase, not a decrease. 
> 
> Thats just plain strange (never tried on Oracle).  Why in the world
> would adding the overhead of a function call (with no other changes)
> increase performance?
> 
> The function has additional overhead in the form of the plpgsql
> interpreter.  You may find a c function will give close to identical
> performance as with the standard view so long as the query is the same.
> 
> 
> One thing to keep in mind is that the view can be rearranged to give a
> better query overall. The exact work completed for the view may be
> different when called from within a different SQL statement.  Most
> functions -- some SQL language based functions are strange this way --
> cannot do this
> 



-- 

Mark Bronnimann
[EMAIL PROTECTED]
  
-- Let's organize this thing and take all the fun out of it. --

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] recursive sql (using the sort_key method)

2003-09-06 Thread Mark Stosberg
In article <[EMAIL PROTECTED]>, <[EMAIL PROTECTED]> wrote:
> 
> can anyone recommend a good reference source for doing recursive sql on
> postgresql? i want to do something similar to a BOM expansion. (i.e. i need
> to traverse a self-referencing table that stores a tree structure and answer
> a question like "Get me A and all of A's descendents")

Floyd,

When building Cascade ( http://summersault.com/software/cascade ), I
struggled with a few different models for storing a tree structure in
Postgres. Here are some bits of how the system I settled on works. 

I've been really happy with it, both of in terms of performance, but
also in terms of ease of writing queries that make use of it.

 category_id | integer                | not null default nextval('"cas_category_category_id_seq"'::text)
 parent_id   | integer                |
 sort_key    | character varying(255) |

The 'parent_id' is not strictly needed, but makes some queries easier.  
The 'sort_key' is the real crux of the system. It may be best explained by
illustration. Each node in the tree has a two-letter code associated with it.

For the root node in the tree, this is 'aa'. Each child node forms its
"sort_key" value by taking its parent's value and appending its own.

So the first child of the root node would have:

aaaa

And the second child would have:

aaab

Here's an actual snapshot of my database using this: 
(from Skatepark.org )

 category_id | parent_id | sort_key |        name
-------------+-----------+----------+---------------------
   0 |   | aa   | Top
  10 | 0 | aaab | Propaganda
  43 |10 | aaabaa   | Quotes
  12 |10 | aaabab   | Presentations
  64 |10 | aaabac   | Public Parks
  65 |10 | aaabad   | Private Parks
  66 |10 | aaabae   | Essays
  67 |10 | aaabaf   | Letters
  69 |10 | aaabah   | Surveys
  70 |10 | aaabai   | Waivers
   4 |10 | aaabaj   | Legislation
  54 | 4 | aaabajaa | Youth in Politics
  36 |10 | aaabak   | Statistics
   3 |10 | aaabal   | Media Coverage
  30 | 3 | aaabalaa | Success Stories
  19 |10 | aaabam   | Sarcastic Rants
   8 |10 | aaaban   | Web Services
  37 | 0 | aaag | Fund-raising
  46 |37 | aaagaa   | Grants
   9 | 0 | aaai | Design and Building

###

Answering a question like "Get me all descendants of the 'Propaganda'
category" becomes very easy:

SELECT category_id, name from cas_category WHERE sort_key like 'aaab%';

By using "LIKE" above, and checking the length of the sort_key, just
about any tree related query becomes easy, especially when you have the
parent_id as well. You can look at the Cascade source code for more
examples that use this.

The one 'drawback' to this system is that it doesn't support trees
of infinite size. If I'm doing my math right, I think the design above
'only' supports 676 children per node. I've never run into that
limitation. :) Of course, you could always make each piece of the
sort_key longer, if you needed to support more children per node.
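A sketch of how such keys can be generated, and of the 676-children-per-node
limit (Python; the function name is illustrative, not taken from Cascade):

```python
import string

ALPHABET = string.ascii_lowercase  # 26 letters -> 26 * 26 = 676 keys per level

def child_key(parent_key: str, n: int) -> str:
    """Sort key for the n-th child (0-based) of the node with key parent_key."""
    if n >= len(ALPHABET) ** 2:
        raise ValueError("more than 676 children per node")
    return parent_key + ALPHABET[n // 26] + ALPHABET[n % 26]

root = "aa"
print(child_key(root, 0))  # aaaa
print(child_key(root, 1))  # aaab

# "all descendants of X" is a prefix match, i.e. sort_key LIKE 'aaab%'
keys = [root, "aaab", "aaabaa", "aaabab", "aaag"]
descendants = [k for k in keys if k.startswith("aaab") and k != "aaab"]
print(descendants)  # ['aaabaa', 'aaabab']
```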

Mark
















> 
> Regards,
> 
> Floyd Shackelford
> 4 Peaks Technology Group, Inc.
> VOICE: 334.735.9428
> FAX:   702.995.6462
> EMAIL: [EMAIL PROTECTED]
> ICQ #: 161371538
> PGP Key ID: 0x2E84F2F2
> PGP Fone at private.fwshackelford.com on request
> 
> Shackelford Motto: ACTA NON VERBA - Actions, not words
> 
> Alabama StateMotto: AUDEMUS JURA NOSTRA DEFENDERE - We Dare Defend Our
> Rights
> 
> The Philosophy of Liberty: http://www.isil.org/resources/introduction.swf
> 
> "We have allowed our constitutional republic to deteriorate into a virtually
> unchecked direct democracy. Today's political process is nothing more than a
> street fight between various groups seeking to vote themselves other
> people's money. Individual voters tend to support the candidate that
> promises them the most federal loot in whatever form, rather than the
> candidate who will uphold the rule of law." --Rep. Ron Paul
> 
> 
> ---(end of broadcast)---
> TIP 9: the planner will ignore your desire to choose an index scan if your
>   joining column's datatypes do not match
> 


-- 
--
http://mark.stosberg.com/ 


---(end of broadcast)---
TIP 8: explain analyze is your friend


[SQL] Unsubscribe

2003-10-01 Thread Mark Roberts
please remove my email from your database contacts.

Kind Regards,
Mark.



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] A tricky sql-query...

2003-11-02 Thread Mark Stosberg
On 2003-10-22, Timo <[EMAIL PROTECTED]> wrote:
>
> You can't have any recursion in an pure sql-query, can you?

It depends on how you think of recursion, I'd say. You join on the same
table a number of times, by giving it a different alias each time. You 
have to manually specify (or generate with application code) all these 
aliases and joins, though. Sometimes people use this technique to
implement tree structures in SQL.  

Mark

-- 
http://mark.stosberg.com/ 


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Mark Gibson
Hello,
   I've been playing around with a simple solution for a bag or sparse
matrix using rules, but have encountered a few problems I wish to discuss.
The bag pattern is commonly used for shopping baskets (item => quantity).
This solution can also be used for a sparse matrix (row,col => value).

Example:

CREATE TABLE bag_test
(
 item  text PRIMARY KEY,
 qty  integer
);
To add/modify/del items in the above table is tedious:
you need to first check for the existence of an item, then choose your SQL
statement (INSERT/UPDATE/DELETE/do nothing).
I want to be able to add/modify/del an item using only INSERT.
eg:
INSERT INTO bag_test VALUES ('apple', 1);
INSERT INTO bag_test VALUES ('apple', 12);
In the second statement we have a choice, though, of whether to
increase the quantity of 'apple' by 12, or set the quantity of 'apple'
to 12.

So, for the absolute option (set 'apple' to 12), we can use the 
following rule:

   CREATE RULE bag_abs AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = NEW.qty WHERE item = NEW.item;
I also want the item to be deleted if it's quantity is <= 0:

   CREATE RULE bag_del AS ON UPDATE TO bag_test
   WHERE
 NEW.qty <= 0
   DO INSTEAD
   DELETE FROM bag_test WHERE item = NEW.item;
Alternatively, for the relative option (increase 'apple' by 12), replace 
the 'bag_abs' rule with:

   CREATE RULE bag_rel AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
(You still need the 'bag_del' rule if you want quantity <= 0 to be deleted)

Unfortunately there is a problem with 'bag_rel':
When the item already exists, it works fine, the item's quantity
is increased by the amount given in the INSERT statement.
BUT, if the item doesn't exist it gets double the quantity given in the 
statement.
eg:

   > SELECT * FROM bag_test;
    item | qty
   ------+-----
   (0 rows)

   > INSERT INTO bag_test VALUES ('apple', 12);
   INSERT 0 1
   > SELECT * FROM bag_test;
    item  | qty
   -------+-----
    apple |  24
   (1 row)

This is double the expected value!

   > INSERT INTO bag_test VALUES ('apple', 12);
   INSERT 0 0
   > SELECT * FROM bag_test;
    item  | qty
   -------+-----
    apple |  36
   (1 row)

But, this worked fine (increased by 12)!

   > INSERT INTO bag_test VALUES ('apple', -36);
   INSERT 0 0
   > SELECT * FROM bag_test;
    item | qty
   ------+-----
   (0 rows)

Deleting works fine too.
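The intended insert-or-accumulate semantics can also be sketched outside the
rule system. A minimal Python/SQLite sketch (this is not the rule-based
approach from the message: a two-statement INSERT OR IGNORE + UPDATE emulates
'bag_rel', and the trailing DELETE emulates 'bag_del'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bag_test (item TEXT PRIMARY KEY, qty INTEGER)")

def bag_add(item, qty):
    # relative semantics: create the row if missing, then accumulate
    conn.execute("INSERT OR IGNORE INTO bag_test VALUES (?, 0)", (item,))
    conn.execute("UPDATE bag_test SET qty = qty + ? WHERE item = ?", (qty, item))
    # emulate the 'bag_del' rule: drop items once qty falls to zero or below
    conn.execute("DELETE FROM bag_test WHERE qty <= 0")

bag_add("apple", 12)
bag_add("apple", 12)
qty = conn.execute("SELECT qty FROM bag_test WHERE item = 'apple'").fetchone()[0]
print(qty)  # 24 -- no doubled initial insert

bag_add("apple", -24)
count = conn.execute("SELECT count(*) FROM bag_test").fetchone()[0]
print(count)  # 0 -- deleted upon reaching zero
```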

Does anyone know how to prevent the problem with the initial insert?
I've read 'The Rule System' chapter several times, it's fairly heavy going,
and results in much head scratching, but I still can't work out how to 
fix it.

Any suggestions on improving the rules?
Other than the problem mentioned, can anyone see a flaw in this method?
Cheers

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Mark Gibson
Robert Creager wrote:

When grilled further on (Mon, 09 Feb 2004 12:42:10 +),
Mark Gibson <[EMAIL PROTECTED]> confessed:
 

   CREATE RULE bag_abs AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = NEW.qty WHERE item = NEW.item;
   CREATE RULE bag_rel AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
   

I'm no expert, just up early.  I believe both of these rules are tripping.
bag_abs is likely going first, then bag_rel, so bag_abs is inserting the
record, then bag_rel is updating it.  You could verify this by deleting
the two rules, then re-creating them in the opposite order, and seeing
if your inserted values change.

How would you expect the system to choose one of the two rules, which is
what you apparently expect?
 

I probably didn't make this clear enough:

The system doesn't choose, YOU choose EITHER 'bag_abs' OR 'bag_rel' 
depending
on which behaviour is most appropriate for your application.
'bag_del' can be used in combination with either, to remove empty items.

The 'bag_abs'/'bag_del' rules work perfectly - I've provided them for 
feedback,
and hopefully others will find them useful.

It's only the 'bag_rel' rule that is giving me a headache.

Also, IIRC, rules are applied in alphabetical order, NOT the order in 
which they were created.

Cheers.

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[SQL] timestamptz - problems

2004-02-09 Thread Mark Roberts

Hi, I'm using 'timestamptz' in a function called 'getdate' to enter the
start and finish time into a db field. However, the date appears to be
inserted into the db in a random format; I wish for it to only be
entered into the db as DD-MM-YYYY (European, UK). I understand this is
probably a very simple problem to resolve, but thus far I have failed.
Can anyone please help?

  'getdate' function is as follows: ***

CREATE FUNCTION getdate() RETURNS timestamptz AS '
BEGIN
RETURN now();
END; '  LANGUAGE 'plpgsql';

 Inserted using the following function: *

CREATE FUNCTION newmess(int4, text, varchar) RETURNS varchar AS '
DECLARE
userid ALIAS for $1;
message ALIAS for $2;
touser ALIAS for $3;
enttime DATETIME;
touserid INTEGER;
rdset BIT;
from VARCHAR;

BEGIN
rdset = 0;
touserid=(select id from users where lastname=touser);
enttime=(select getdate());
from=(select lastname from users where id = userid);
INSERT INTO CallLog.message(message, fromuser, touser, txtime, rd,
fromusern) values(message, userid, touserid, enttime, rdset, from);
END;
'  LANGUAGE 'plpgsql';

*****

Kind Regards, Mark.  





---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Mark Gibson
Richard Sydney-Smith wrote:

Mark,

love the idea, guess I should have read it somewhere but haven't. Obvious
and beautiful. Please let me know if you or someone else solves the initial
double value.
 

I used to use functions for this kind of thing,
and was thinking that what SQL really needed was an 'UPDATE OR INSERT' 
command,
then it suddenly came to me last night, it could be done with rules or 
triggers.
[I've posted a trigger solution for the relative values separately, in 
response to Tom Lanes help]

Got me thinking of all the places I could have used this instead of coding
select/insert/update/delete.
Also, have you worked out a solution where both the abs and relative inserts
apply to the same bag,
eg insert another apple vs set apples to 5?

 

Hmmm, yeah, I'm wondering about that one. It would be handy.
Custom datatype maybe - an integer with a flag to indicate absolute or 
relative???
eg:
INSERT INTO bag_test VALUES ('orange', '10 abs');
INSERT INTO bag_test VALUES ('orange', '-5 rel');

or views that modify an underlying table???
eg:
INSERT INTO bag_test_abs VALUES ('orange', 10);
INSERT INTO bag_test_rel VALUES ('orange', -5);
I have no idea yet whether these are possible though, any ideas?

Much of my attitude to triggers has been non-committal. Your example changes
that.
 

Triggers, rules and functions ROCK. It's allowed us to move all the 
business logic into the
database itself so we can create really simple clients easily in any 
language/environment.

Right, I'm off home now :)

Cheers

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] Implementation of a bag pattern using rules

2004-02-09 Thread Mark Gibson
Tom Lane wrote:

Mark Gibson <[EMAIL PROTECTED]> writes:
 

   CREATE RULE bag_rel AS ON INSERT TO bag_test
   WHERE
 EXISTS (SELECT 1 FROM bag_test WHERE item = NEW.item)
   DO INSTEAD
   UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
   

This can't work because an ON INSERT rule fires after the INSERT itself
is executed.
 

I suspected that it may be impossible with rules, but I thought I'd ask,
I'm still trying to get to grips with them.
I think you need to use a BEFORE INSERT trigger instead.
You could also extend the trigger to handle the
delete-upon-reaching-zero logic.
 

So, here's my proof-of-concept trigger for the relative quantities:

CREATE OR REPLACE FUNCTION bag_rel_trigger() RETURNS TRIGGER AS '
DECLARE
    oldqty bag_test.qty%TYPE;
BEGIN
    IF NEW.qty <> 0 THEN
        SELECT INTO oldqty qty FROM bag_test WHERE item = NEW.item;
        IF NOT FOUND AND NEW.qty > 0 THEN
            RETURN NEW;
        END IF;
        IF oldqty + NEW.qty <= 0 THEN
            DELETE FROM bag_test WHERE item = NEW.item;
        ELSE
            UPDATE bag_test SET qty = qty + NEW.qty WHERE item = NEW.item;
        END IF;
    END IF;
    RETURN NULL;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER bag_rel BEFORE INSERT ON bag_test
FOR EACH ROW EXECUTE PROCEDURE bag_rel_trigger();
I think it should be possible to make the trigger generic for any table:
the quantity column could be passed as a parameter to the trigger,
but it would require some horribly complex code to determine
the primary key, and lots of EXECUTE calls -
a lot of overhead each time the trigger is called :(
I was thinking maybe of a function that creates a trigger optimized for
the table. Any ideas?

Cheers

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


[SQL] Date format problems

2004-02-16 Thread Mark Roberts

Hi, I'm using the function below to insert data into my db. I'm using
now() to get the timestamptz; however, when inserted in the db the format
seems to vary. The majority of the time it's in the required European
style, but it does spontaneously change to various other formats. Can
anyone throw any light on this problem?

Further info:
DATESTYLE is currently set to European. 
db table type is 'timestamptz'

###

CREATE FUNCTION newmess(int4, text, varchar) RETURNS varchar AS '
DECLARE
userid ALIAS for $1;
message ALIAS for $2;
touser ALIAS for $3;
enttime DATETIME;
touserid INTEGER;
rdset BIT;
from VARCHAR;

BEGIN
rdset = 0;
touserid=(select id from users where lastname=touser);
enttime=(select now());
from=(select lastname from users where id = userid);
INSERT INTO CallLog.message(message, fromuser, touser, txtime, rd,
fromusern) values(message, userid, touserid, enttime, rdset, from);
END;
'  LANGUAGE 'plpgsql';

*

Im getting desperate, please help if you can, and thx to those that
replied to my previous mail.

Many Thanks in advance,

Kind Regards, Mark.  





---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[SQL] Tip: a function for creating a remote view using dblink

2004-02-16 Thread Mark Gibson
Hello,
   I'm posting a function here in the hope others may find it useful
and/or correct my mistakes/make improvements :)
This creates a view of a remote table, using dblink:

CREATE OR REPLACE FUNCTION dblink_create_view(text, text, text)
RETURNS VOID
LANGUAGE plpgsql
STRICT
AS '
DECLARE
 connstr ALIAS FOR $1;
 remote_name ALIAS FOR $2;
 local_name  ALIAS FOR $3;
 schema_name text;
 table_name  text;
 rec RECORD;
 col_names   text := '''';
 col_defstext := '''';
 sql_str text;
BEGIN
 schema_name := split_part(remote_name, ''.'', 1);
 table_name := split_part(remote_name, ''.'', 2);
 FOR rec IN
   SELECT * FROM dblink(connstr,
 ''SELECT
 a.attname,
 format_type(a.atttypid, a.atttypmod)
   FROM
 pg_catalog.pg_class c INNER JOIN
 pg_catalog.pg_namespace n ON (c.relnamespace = n.oid) INNER JOIN
 pg_catalog.pg_attribute a ON (a.attrelid = c.oid)
   WHERE
 n.nspname = '' || quote_literal(schema_name) || '' AND
 c.relname = '' || quote_literal(table_name) || '' AND
 a.attisdropped = false AND
 a.attnum > 0'')
 AS rel (n name, t text)
 LOOP
   col_names := col_names || quote_ident(rec.n) || '','';
   col_defs  := col_defs  || quote_ident(rec.n) || '' '' || rec.t || '','';
 END LOOP;
 sql_str := ''CREATE VIEW '' || local_name ||
   '' AS SELECT * FROM dblink('' || quote_literal(connstr) || '','' ||
   quote_literal(''SELECT '' || trim(trailing '','' from col_names) ||
 '' FROM '' || quote_ident(schema_name) || ''.'' || 
quote_ident(table_name)) ||
   '') AS rel ('' || trim(trailing '','' from col_defs) || '')'';

 EXECUTE sql_str;
 RETURN;
END
';
Usage example:
SELECT dblink_create_view('host=... dbname=... user=...', 
'schema.remote_table', 'local_view');
SELECT * FROM local_view;

The schema MUST be specified for the remote table name.
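The statement the function assembles can be illustrated in miniature. A
hedged Python sketch that builds the same CREATE VIEW string from an
already-fetched column list (it omits the quote_ident/quote_literal escaping
the real function performs, and the names shown are illustrative):

```python
def dblink_view_sql(connstr, schema, table, local_name, cols):
    """cols: list of (name, type) pairs fetched from the remote pg_catalog."""
    col_names = ", ".join(n for n, _ in cols)
    col_defs = ", ".join(f"{n} {t}" for n, t in cols)
    remote_q = f"SELECT {col_names} FROM {schema}.{table}"
    # dblink requires an explicit column definition list on the result
    return (f"CREATE VIEW {local_name} AS SELECT * FROM "
            f"dblink('{connstr}', '{remote_q}') AS rel ({col_defs})")

sql = dblink_view_sql("dbname=prod", "public", "users", "users_v",
                      [("id", "integer"), ("name", "text")])
print(sql)
```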

Suggestions for improvement welcome. Any ideas?

Is there any existing site (a wiki for example) for posting PostgreSQL 
specific tips?
(Wasn't sure if pgsql-sql is the right place for this kind of thing)

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [SQL] Date format problems

2004-02-16 Thread Mark Roberts
Sure, sorry; I'm using postgres version 7.2.1, and the column data type
is 'timestamptz'.
Data examples: 

13/02/04 12:35:27 appears in the column as 02/04/13 12:35:27, or
13/04/02 12:35:27

70% of the time it is inserted in the correct format. 
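The ambiguity of two-digit date styles is easy to demonstrate. A short
Python sketch showing the same instant rendered three ways (ISO 8601 is
unambiguous, and PostgreSQL accepts it under any DateStyle):

```python
from datetime import datetime

ts = datetime(2004, 2, 13, 12, 35, 27)
# two-digit styles are ambiguous: is 13/02/04 DD/MM/YY or YY/MM/DD?
print(ts.strftime("%d/%m/%y %H:%M:%S"))  # 13/02/04 12:35:27
print(ts.strftime("%y/%m/%d %H:%M:%S"))  # 04/02/13 12:35:27
# ISO 8601 avoids the guessing entirely
print(ts.isoformat(sep=" "))             # 2004-02-13 12:35:27
```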

The function shown in the previous email is called by a C++ Builder 5
program using the PostgreSQL ODBC driver version 7.02.00.05 (Insight
Distribution Systems).

Any other info required ?

Sorry for the stupid questions, but I'm a bit of a n00b; no excuse I
guess, but I just can't figure out what's going on.

Thanks for all your efforts,

Kind Regards,

Mark.


>>> Tom Lane <[EMAIL PROTECTED]> 02/16/04 03:34pm >>>
"Mark Roberts" <[EMAIL PROTECTED]> writes:
> Hi, I'm using the function below to insert data into my db; I'm using
> now() to get the timestamptz. However, when inserted in the db the
> format seems to vary; the majority of the time it's in the required
> European style but does spontaneously change to various other types.
> Can anyone throw any light on this problem.

This is way too vague for anyone to help.  What PG version are you
using?  What is the actual datatype of the column you're inserting
into?
Can you provide a specific example of a misformatted data value?

regards, tom lane
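A common cause of this symptom is the session DateStyle setting varying
between connections (the ODBC driver may set it per connection). A sketch
of two ways to make the format deterministic (the settings shown are one
plausible choice, not taken from this thread):

```sql
-- Pin the session's date output to an unambiguous European style:
SET datestyle TO 'SQL, DMY';

-- Or format explicitly on output, independent of any DateStyle setting:
SELECT to_char(now(), 'DD/MM/YY HH24:MI:SS');
```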

___
Disclaimer: Great Ormond Street Hospital for Children NHS Trust

SECURITY WARNING RE: PATIENT OR OTHER CONFIDENTIAL DATA. Please note
that Internet E-mail is simply not a secure communication medium.
We strongly advise that you understand & observe this lack of security
when e-mailing us.

This email and any files transmitted with it are intended solely for
the use of the individual to whom they are addressed.  If you have
received this email in error please notify your email administrator.

Any views or opinions are solely those of the author of this email
and do not represent those of Great Ormond Street Hospital for
Children NHS Trust unless specifically stated.

VIRUSES: This email message has been checked for the presence of
computer viruses by Sophos antivirus software.  However, this does
not guarantee that this email is free of viruses, and the recipient
should perform their own check. 




Re: [SQL] [HACKERS] Materialized View Summary

2004-02-25 Thread Mark Gibson
Jonathan M. Gardner wrote:

You can view my summary at
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
Comments and suggestions are definitely welcome.

Fantastic. I was planning a bit of materialized view investigation myself
when time permits; I'm pleased to see you've started the ball rolling.

I was thinking about your problem with mutable functions used in a
materialized view.

How about eliminating the mutable functions as much as possible from the
underlying view definition, and creating another view on top of the
materialized view that has the mutable bits! Giving you the best of both
worlds.

I haven't tried this or thought it through very much - too busy - but I
thought I'd throw it in for a bit o' head scratching, and chin stroking :)
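The layering could look something like this (a sketch with hypothetical
table and column names, not taken from the thread):

```sql
-- Snapshot the expensive, stable part into a table (the "materialized"
-- part); refresh it however suits (cron job, trigger, etc.):
CREATE TABLE orders_snapshot AS
  SELECT order_id, placed_at, total
  FROM orders;

-- Compute the mutable bits in a thin view on top, at query time:
CREATE VIEW orders_with_age AS
  SELECT order_id,
         placed_at,
         total,
         now() - placed_at AS age   -- mutable: depends on current time
  FROM orders_snapshot;
```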

Cheers
--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.


Re: [SQL] Elegant way to monitor for changes in a trigger and migrate

2004-05-22 Thread Mark Gibson
David B wrote:
Folks,
Perhaps you can help... hell, I'm sure you can!
I want to monitor for changes in a table and migrate the OLD. record to
audit table.
Is there an elegant or generic way to do this so I can use across multiple
tables with little change.
 

You can use a rule to do this:
CREATE RULE cust_audit AS ON UPDATE OR DELETE TO cust DO
 INSERT INTO cust_hist SELECT OLD.*;
cust_hist should be identical to cust without a primary key or any 
constraints/foreign keys etc.
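One way to create such a table (a sketch; LIMIT 0 copies the column
definitions without copying any rows, keys, or constraints):

```sql
-- Clone cust's columns only: no data, no primary key, no constraints.
CREATE TABLE cust_hist AS SELECT * FROM cust LIMIT 0;
```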

I'm currently working on an auditing system at present, and will be 
releasing it
soon if anyone is interested. It needs some cleaning up first, when I 
have time.

--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.


[SQL] create function atof?

2004-06-05 Thread Mark Tabash
Hello,

Is it possible to create a database function that mimics the C function atof?
I'm guessing it should look something like this:

create function atof(varchar) returns float
as '??'
language 
returns null on null input;
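For reference, one way such a function can be written (a sketch; a plain
SQL cast does the parsing, and STRICT is equivalent to RETURNS NULL ON
NULL INPUT):

```sql
CREATE FUNCTION atof(varchar) RETURNS float8 AS
  'SELECT CAST($1 AS float8);'
LANGUAGE sql
STRICT;
```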

Thanks,

Mark



Re: [SQL] problems using phpPgAmin

2004-10-19 Thread Mark Gibson
beyaNet wrote:
Hi,
to those of you that may be using the above named admin tool, any ideas
why I am unable to log in with the postgres username even though I have
amended the pg_hba.conf file? Are there any other admin tools out there
that I could use on a unix box?
Have you restarted PostgreSQL?
Have you copied 'conf/config.inc.php-dist' to 'conf/config.inc.php'
and configured it?
BTW, there is a mailing list for phpPgAdmin at:
[EMAIL PROTECTED]
--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.


Re: [SQL] Aggregate Function with Argument

2004-10-19 Thread Mark Gibson
David Siegal wrote:
I would like to create an aggregate function that returns a concatenation
of grouped values. It would be particularly useful if I could pass an
optional delimiter into the aggregate function.
I've managed to do this in two stages:
1. Collect the set of values into an array.
   This can be done using a custom aggregate function, array_accum,
   which is demonstrated within the PostgreSQL manual:
   http://www.postgresql.org/docs/7.4/interactive/xaggr.html
   But here it is again:
   CREATE AGGREGATE array_accum (
 sfunc = array_append,
 basetype = anyelement,
 stype = anyarray,
 initcond = '{}'
   );
   It makes me wonder why this isn't a built-in aggregate???
2. Convert the array to a string.
   Using the built-in function array_to_string:
   http://www.postgresql.org/docs/7.4/interactive/functions-array.html
Example:
  SELECT
team_number,
array_to_string(array_accum(member_name), ', ') AS members
  FROM team
  GROUP BY team_number;
You can also go full round-trip (delimited string -> set) using the
builtin function: string_to_array, and a custom pl/pgSQL function:
CREATE FUNCTION array_enum(anyarray) RETURNS SETOF anyelement AS '
  DECLARE
array_a ALIAS FOR $1;
subscript_v integer;
  BEGIN
FOR subscript_v IN array_lower(array_a,1) .. array_upper(array_a,1)
LOOP
  RETURN NEXT array_a[subscript_v];
END LOOP;
RETURN;
  END;
' LANGUAGE 'plpgsql'
STRICT IMMUTABLE;
Example:
  SELECT * FROM array_enum(string_to_array('one,two,three',','));
--
Mark Gibson 
Web Developer & Database Admin
Cromwell Tools Ltd.
Leicester, England.


[SQL] Explicitly rolling back transaction from within a C-Language function

2004-11-24 Thread Mark Dilger
Hello,

I have a transactional system built on top of
BerkeleyDB which I would like to call from within
Postgres through a C-Language function.  The behavior
of the function will be read-only.  Even so, the
BerkeleyDB-based system's transaction will sometimes
fail.  What I would like to do is propogate that
transactional failure into the Postgres transaction
that called the C-Language function.  Is this
possible?

To clarify what I think I want:  I would like the
C-Language function to return an error condition to
Postgres that will cause the current Postgres
transaction to abort.  I do not know how to do this.

I am not particularly worried about needing to
rollback the BerkeleyDB transaction when the Postgres
transaction fails, because the BerkeleyDB transaction
was read-only anyway, and there are no updates to roll
back.  However, I might want to do this in the future
so information on this subject is also welcome.
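For reference, the usual way a C-language function makes the calling
PostgreSQL transaction abort is to raise an error with ereport(). A sketch
(bdb_lookup and bdb_txn_failed are hypothetical placeholder names, not
from this message):

```c
#include "postgres.h"
#include "fmgr.h"

extern bool bdb_txn_failed(void);   /* hypothetical BerkeleyDB-side check */

PG_FUNCTION_INFO_V1(bdb_lookup);

Datum
bdb_lookup(PG_FUNCTION_ARGS)
{
    /* Raising ERROR here longjmps out of the function and aborts the
     * surrounding PostgreSQL transaction. */
    if (bdb_txn_failed())
        ereport(ERROR,
                (errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),
                 errmsg("BerkeleyDB transaction failed")));

    PG_RETURN_NULL();
}
```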

Thank you for any help,

mark






[SQL] Updating column to link one table to another

2004-12-17 Thread Mark Collette
I have two tables, with a many to one relationship.
Table  InfrequentTable
- timestamp  ts
- int  infrequentId (newly added key)
Table  FrequentTable  (many FrequentTable entries per one InfrequentTable entry)
- timestamp  ts
- int  infrequentId (newly added column)

The link is chronological, in that when an InfrequentTable entry 
happens, then subsequent FrequentTable entries should be linked to it, 
until the next InfrequentTable event happens, in which case old 
FrequentTable entries are left alone, but new ones are linked to the 
newest InfrequentTable entry.

Now, I've added the infrequentId columns to both, as an optimization, 
so that I can quickly find the InfrequentTable entry for a given 
FrequentTable entry.  I've done this because I want to speed up 
SELECTs.  Any new INSERTs are working just fine.  But, all my legacy 
entries, are not linked up yet.  I need to know how I can UPDATE the 
FrequentTable rows, where their infrequentId is zero, to point at the 
last InfrequentTable entry whose timestamp ts is before its own 
timestamp ts.

Can anyone help me with this?  Thank you.
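One way the backfill could be written (a sketch using the table and column
names above; it assumes unlinked rows have infrequentId = 0):

```sql
UPDATE FrequentTable
SET infrequentId = (
    SELECT i.infrequentId
    FROM InfrequentTable i
    WHERE i.ts <= FrequentTable.ts   -- latest infrequent event at or before this row
    ORDER BY i.ts DESC
    LIMIT 1
)
WHERE infrequentId = 0;
```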
- Mark Collette


[SQL] Postgres 8 - Database access, new install.

2005-02-24 Thread Mark Roberts


Hi all, I've just installed the latest version of Postgres 8 on a RedHat 9 server. The problem I'm having is that when I try to log in to the database, i.e. 'psql -U postgres template1', I'm getting the following message:
 
psql: relocation error: psql: undefined symbol: PQsetErrorVerbosity
 
Upon reading an answer to a previous question, this could be caused by a version conflict, i.e. using old 7.x libraries. So I've checked this, and yes, there is another version 7.3 installed, which is the default RedHat install. However, when checking using 'ldd `which psql`', the following libraries are displayed, which I believe to be correct:
 
libpq.so.3 => /usr/lib/libpq.so.3 (0x40026000)
libpam.so.0 => /lib/libpam.so.0 (0x4003c000)
libssl.so.4 => /lib/libssl.so.4 (0x40044000)
libcrypto.so.4 => /lib/libcrypto.so.4 (0x40079000)
libcom_err.so.2 => /lib/libcom_err.so.2 (0x4016a000)
libz.so.1 => /usr/lib/libz.so.1 (0x4016c000)
libreadline.so.4 => /usr/lib/libreadline.so.4 (0x4017a000)
libtermcap.so.2 => /lib/libtermcap.so.2 (0x401a7000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x401ab000)
libresolv.so.2 => /lib/libresolv.so.2 (0x401d8000)
libnsl.so.1 => /lib/libnsl.so.1 (0x401ea000)
libdl.so.2 => /lib/libdl.so.2 (0x401ff000)
libm.so.6 => /lib/tls/libm.so.6 (0x40203000)
libc.so.6 => /lib/tls/libc.so.6 (0x4200)
libkrb5.so.3 => /usr/kerberos/lib/libkrb5.so.3 (0x40226000)
libgssapi_krb5.so.2 => /usr/kerberos/lib/libgssapi_krb5.so.2 (0x40284000)
libcom_err.so.3 => /usr/kerberos/lib/libcom_err.so.3 (0x40297000)
libk5crypto.so.3 => /usr/kerberos/lib/libk5crypto.so.3 (0x40299000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x4000)
Anyway, I'm now desperately stuck. I have 2 versions of Postgres installed (my mistake), and I don't know how to remove them and start a fresh install, because:
 
rpm -qa | grep postg   (shows current rpms)
postgresql-server-8.0.1-1PGDG
postgresql-8.0.1-1PGDG
 
Does anyone have any idea how to resolve this problem? I can connect using PGAdminIII, but I really would like to be able to log in properly from the command line as well.
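A sketch of how the conflict might be narrowed down (the library path shown
is the one from the ldd output above; commands are standard, but the exact
owning package is not verified):

```shell
# Which libpq does psql actually resolve at run time?
ldd "$(which psql)" | grep libpq

# Which installed package owns that library? If it belongs to the old
# 7.3 RPMs, that is the version conflict.
rpm -qf /usr/lib/libpq.so.3
```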
 
Many Thx, Mark.





[SQL] default value for select?

2005-05-09 Thread Mark Fenbers




I want to update a column in myTable.  The value this column is set to
depends on a nested select statement which sometimes returns 0 rows
instead of 1.  This is a problem since the column I'm trying to update
is set to refuse nulls.  Here's a sample:

update myTable set myColumn = (Select altColumn from altTable where
altColumn != 'XXX' limit 1) where myColumn = 'XXX';

MyColumn cannot accept nulls, but sometimes "Select altColumn ..."
returns 0 rows, and thus, the query fails.  

Is there a way to set a default value to be inserted into myColumn if
and when "select altColumn ..." returns zero rows?

Mark


begin:vcard
fn:Mark Fenbers
n:Fenbers;Mark
org:DoC/NOAA/NWS/OHRFC
adr:;;1901 South SR 134;Wilmington;OH;45177-9708;USA
email;internet:[EMAIL PROTECTED]
title:Sr. HAS Meteorologist
tel;work:937-383-0430 x246
x-mozilla-html:TRUE
url:http://weather.gov/ohrfc
version:2.1
end:vcard



