[SQL] Matching a column against values in code

2005-02-16 Thread Tim
Hello all.
I sometimes find myself needing an SQL query that will return all the 
rows of a table in which one column equals any one of a list of values I 
have in an array in code.

Does anyone know of a better way to do this than to loop through the 
array and append an "OR" comparison to the SQL statement, like this?

sqlString = sqlString + " or this_column='" + arrayOfValues[i] +"' ";
If someone knows a command or function I can look up in the docs, just 
say the name and I'll look there. 
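For the record, the standard IN predicate expresses this in a single clause (a sketch with placeholder table and values; application code would still interpolate or bind each array element):

```sql
-- Equivalent to a chain of OR comparisons:
SELECT *
FROM some_table
WHERE this_column IN ('value1', 'value2', 'value3');
```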

Thanks a lot everyone.
--
Tim
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[SQL] Web Hosts (off-topic)

2000-09-13 Thread Tim Quinlan

The Hermit Hacker can ignore this (thanks).

I'm looking for a web host that provides Postgres, and runs on *nix or
*BSD.  Does anyone have any positive, unsolicited testimonials (or,
vice versa, any horror stories)?  If you have a story to tell me, but you
don't feel that it is appropriate to post to the group, please don't
e-mail the group, just e-mail me.

Any feedback is welcome.




[SQL] PL/pgSQL "compilation error"

2001-03-14 Thread Tim Perdue

Hello all - I apologize for the newbie-esque question, but the debug output
from Postgres when you have a bug in your PL/pgSQL procedure is none too
detailed.

I've created the following procedure and am getting an error when I try to
update the table. The error is something like "parse error near ; on line 50".
Line 50 is the last line.

There's probably something glaringly wrong in here that I'm not seeing, but any
help would be appreciated.

I don't know if the \ at the end of the line is a problem, but those were
added late in the game and didn't change the error message ;-)

Tim

CREATE FUNCTION artifactgroup_update_agg () RETURNS OPAQUE AS '
BEGIN
   --
   -- see if they are moving to a new artifacttype
   -- if so, it's a more complex operation
   --
   IF NEW.group_artifact_id <> OLD.group_artifact_id THEN
  --
  -- transferred artifacts always have a status of 1
  -- so we will increment the new artifact type's sums
  --
  UPDATE artifact_counts_agg SET count=count+1, open_count=open_count+1 \
 WHERE group_artifact_id=NEW.group_artifact_id;

  --
  --  now see how to increment/decrement the old type's sums
  --
  IF NEW.status_id <> OLD.status_id THEN 
 IF OLD.status_id = 2 THEN
UPDATE artifact_counts_agg SET count=count-1 \
   WHERE group_artifact_id=OLD.group_artifact_id;
 --
 --  no need to do anything if it was in deleted status
 --
 END IF;
  ELSE
 --
 --  Was already in open status before
 --
 UPDATE artifact_counts_agg SET count=count-1, open_count=open_count-1 \
WHERE group_artifact_id=OLD.group_artifact_id;
  END IF;
   ELSE
  --
  -- just need to evaluate the status flag and 
  -- increment/decrement the counter as necessary
  --
  IF NEW.status_id <> OLD.status_id THEN
 IF new.status_id = 1 THEN
UPDATE artifact_counts_agg SET open_count=open_count+1 \
   WHERE group_artifact_id=new.group_artifact_id;
 ELSE IF new.status_id = 2 THEN
UPDATE artifact_counts_agg SET open_count=open_count-1 \
   WHERE group_artifact_id=new.group_artifact_id;
 ELSE IF new.status_id = 3 THEN
UPDATE artifact_counts_agg SET open_count=open_count-1,count=count-1 \
   WHERE group_artifact_id=new.group_artifact_id;
 END IF;
  END IF; 
   END IF;
   RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER artifactgroup_update_trig AFTER UPDATE ON artifact
   FOR EACH ROW EXECUTE PROCEDURE artifactgroup_update_agg();


-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




Re: [SQL] PL/pgSQL "compilation error"

2001-03-14 Thread Tim Perdue

On Wed, Mar 14, 2001 at 10:33:49AM -0500, Bruce Momjian wrote:
> It is hard to test this without the table schema, but I think you are
> missing some END IF's in the code above.  Those are not elif's, but
> actual new IF's that need their own END IF, I think.

Oh wow - this is almost like going back to my COBOL days.

IF NEW.status_id <> OLD.status_id THEN
  IF new.status_id = 1 THEN
UPDATE artifact_counts_agg SET open_count=open_count+1 \
  WHERE group_artifact_id=new.group_artifact_id;
  ELSE 
IF new.status_id = 2 THEN 
  UPDATE artifact_counts_agg SET open_count=open_count-1 \
WHERE group_artifact_id=new.group_artifact_id;
ELSE 
  IF new.status_id = 3 THEN
UPDATE artifact_counts_agg SET open_count=open_count-1,count=count-1 \
  WHERE group_artifact_id=new.group_artifact_id;
  END IF;
END IF;
  END IF;
END IF; 
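As an aside, PL/pgSQL also has an ELSIF keyword, which expresses the same three-way branch without the extra nesting (a sketch of the status logic above; untested):

```sql
IF new.status_id = 1 THEN
    UPDATE artifact_counts_agg SET open_count=open_count+1
        WHERE group_artifact_id=new.group_artifact_id;
ELSIF new.status_id = 2 THEN
    UPDATE artifact_counts_agg SET open_count=open_count-1
        WHERE group_artifact_id=new.group_artifact_id;
ELSIF new.status_id = 3 THEN
    UPDATE artifact_counts_agg SET open_count=open_count-1, count=count-1
        WHERE group_artifact_id=new.group_artifact_id;
END IF;
```

Each ELSIF continues the same IF, so only one END IF is needed.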

Yes - nesting those IFs did the trick. Are there more examples of PL/pgSQL 
anywhere beyond those on your website?

Thanks, Bruce!

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




[SQL] Rule/currval() issue

2001-03-14 Thread Tim Perdue

This is related to the plpgsql project I was working on this morning. I'm
trying to create a rule, so that when a row is inserted into a certain table,
we also create a row over in a "counter table". The problem lies in getting
the primary key value (from the sequence) so it can be inserted in that
related table.

I tried a couple different approaches. Neither works.

artifact_group_list is a table where the primary key, group_artifact_id is
SERIAL type. When I insert a row, I want to get that new value in my rule.

--
--  Define a rule so when you create a new ArtifactType
--  You automatically create a related row over in the counters table
--
CREATE RULE artifactgroup_insert_agg AS
ON INSERT TO artifact_group_list
DO INSERT INTO
artifact_counts_agg (group_artifact_id,count,open_count)
VALUES (currval('artifact_grou_group_artifac_seq'),0,0);

I get this:

ERROR: artifact_grou_group_artifac_seq.currval is not yet defined in this session


If I write the rule this way:

CREATE RULE artifactgroup_insert_agg AS
ON INSERT TO artifact_group_list
DO INSERT INTO
artifact_counts_agg (group_artifact_id,count,open_count)
VALUES (new.group_artifact_id,0,0);

...it doesn't fail with an error, but the sequence increments twice.

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




Re: [SQL] Rule/currval() issue

2001-03-14 Thread Tim Perdue

On Wed, Mar 14, 2001 at 01:09:18PM -0500, Tom Lane wrote:
> Tim Perdue <[EMAIL PROTECTED]> writes:
> > This is related to the plpgsql project I was working on this morning. I'm
> > trying to create a rule, so that when a row is inserted into a certain table,
> > we also create a row over in a "counter table". The problem lies in getting
> > the primary key value (from the sequence) so it can be inserted in that
> > related table.
> 
> You probably should be using a trigger, not a rule at all.

OK - so another rule like this one is probably ill-advised as well? It seems
a lot easier than getting into triggers:

CREATE RULE forum_delete_agg AS
ON DELETE TO forum
DO UPDATE forum_agg_msg_count SET count=count-1
WHERE group_forum_id=old.group_forum_id;
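For comparison, the trigger form of that delete-side bookkeeping might look like this (a sketch, untested, reusing the table names above):

```sql
CREATE FUNCTION forum_delete_agg () RETURNS OPAQUE AS '
BEGIN
    UPDATE forum_agg_msg_count SET count=count-1
        WHERE group_forum_id=OLD.group_forum_id;
    RETURN OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER forum_delete_trig AFTER DELETE ON forum
    FOR EACH ROW EXECUTE PROCEDURE forum_delete_agg();
```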

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




Re: [SQL] PIVOT of data

2001-03-14 Thread Tim Perdue

Probably worth noting that this could be normalized into at least 3 tables
from what I can tell.

Tim



On Wed, Mar 14, 2001 at 11:03:01PM +, Oliver Elphick wrote:
> Srikanth Rao wrote:
>   >Hi, 
>   >I have a table like this:
>   >   location| numbrochures | marketing 
>   >---+--+---
>   > 101 Asheville, NC |4 | NEWS
>   > 101 Asheville, NC |1 | TV
>   > 101 Asheville, NC |3 | RADIO
>   > 101 Asheville, NC |2 | OTHER
>   > 101 Asheville, NC |3 | null
>   > 101 Asheville, NC |1 | TV
>   > 102 'Charlotte', 'NC' |1 | SIGN
>   > 104 'Colfax', 'NC'|5 | SIGN
>   > 109 'Moyock', 'NC'|1 | BROCHURE
>   >(9 rows)
>   >
>   >
>   >I want the headings to be  like:
>   >
>   >location | NEWS | TV | RADIO | OTHER | 
>   >
>   >How to get this done using sql for postgresql backend?
> 
> SELECT location,
>CASE WHEN marketing = 'NEWS'
> THEN numbrochures
> ELSE NULL
>END AS "NEWS",
>CASE WHEN marketing = 'TV'
> THEN numbrochures
> ELSE NULL
>END AS "TV",
>...
> 
> but it's a clumsy hack and won't work if you don't know the
> contents of "marketing" in advance.
> 
> -- 
> Oliver Elphick[EMAIL PROTECTED]
> Isle of Wight  http://www.lfix.co.uk/oliver
> PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
> GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
>  
>  "Let your light so shine before men, that they may see 
>   your good works, and glorify your Father which is in 
>   heaven." Matthew 5:16 
> 
> 
> 
> ---(end of broadcast)---
> TIP 3: if posting/reading through Usenet, please send an appropriate
> subscribe-nomail command to [EMAIL PROTECTED] so that your
> message can get through to the mailing list cleanly
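A common refinement of the CASE approach quoted above (a sketch; the source table name is hypothetical) is to aggregate, so each location collapses to a single row:

```sql
SELECT location,
       SUM(CASE WHEN marketing = 'NEWS'  THEN numbrochures ELSE 0 END) AS "NEWS",
       SUM(CASE WHEN marketing = 'TV'    THEN numbrochures ELSE 0 END) AS "TV",
       SUM(CASE WHEN marketing = 'RADIO' THEN numbrochures ELSE 0 END) AS "RADIO",
       SUM(CASE WHEN marketing = 'OTHER' THEN numbrochures ELSE 0 END) AS "OTHER"
FROM marketing_responses
GROUP BY location;
```

It still requires knowing the contents of "marketing" in advance, as the quoted reply notes.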

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




[SQL] Invalid (null) int8, can't convert to float8

2001-03-19 Thread Tim Pizey

Hi, 

I have been lurking for a while and am very appreciative of the effort 
put in by the answers on this list, and for psql itself. 

I am using Postgres version 7.0.2

I have a table defined thus:

CREATE TABLE "chapter" (
"id" int4 NOT NULL,
"book" int4 NOT NULL,
"sequence" int4 NOT NULL,
"textid_old" int4,
"indexdate" timestamp,
"title" text NOT NULL,
"path" text NOT NULL,
"filename" text NOT NULL,
"part" text,
"lastencached" timestamp,
"longtitle" text,
"layout" int4,
"messageboard" int4,
"textid" int8
);
CREATE UNIQUE INDEX "chapter_id_index" on "chapter" using btree ( "id"
"int4_ops" );
CREATE  INDEX "chapter_book_index" on "chapter" using btree ( "book"
"int4_ops" );
CREATE  INDEX "chapter_sequence_index" on "chapter" using btree (
"sequence"
"int4_ops" );
CREATE UNIQUE INDEX "chapter_textid_index" on "chapter" using btree (
"textid" "int8_ops" );   


If there is a a record with a Null textid in the table then psql reports
the error:

Invalid (null) int8, can't convert to float8

to a query of the form 

select id from chapter where textid = 9057599501;

It does seem as though the textid in the query needs to be large to
produce the error.

Can anyone suggest what is going on?

I have temporarily fixed the problem by eliminating all records with a
null textid, but null is a sensible value, so it will recur.

thanks in advance
timp

-- 
Member of http://www.paneris.org/




Re: [SQL] Invalid (null) int8, can't convert to float8

2001-03-19 Thread Tim Pizey

Thanks a lot Tom, 

Tom Lane wrote:
> 
> Tim Pizey <[EMAIL PROTECTED]> writes:
> We have had some discussions about teaching the parser to be smarter
> about choosing the type of numeric constants depending on context,
> but for now you need to force the issue:
> 
> select id from chapter where textid = 9057599501::int8;
> 
> If you want the index to be used then you'd better do this all the
> time, not only for values that are too big to be int4.
> 
This problem is actually happening through JDBC, using PreparedStatements;
shouldn't JDBC be able to deal with the cast transparently?

I am calling this with a string "chapter = ?" which automatically quotes 
and escapes string values correctly.

Hope this isn't too wrong
timp


-- 
Member of http://www.paneris.org/




[SQL] Vacuum Error

2001-03-30 Thread Tim Perdue

We recently upgraded SourceForge to 7.1b6 from our nice old stable November
CVS snapshot, and we hit this error last night:


NOTICE:  FlushRelationBuffers(groups, 2140): block 2057 is referenced (private
0, global 1)
Error: ERROR:  VACUUM (repair_frag): FlushRelationBuffers returned -2


I tend to fear any error message that says "Frag" in it ;-)

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




Re: [SQL] Memory and performance

2001-04-05 Thread Tim Perdue

I thought this was an interesting thread because we're running into IO
problems under 7.1 during vacuum and a couple of scheduled aggregation jobs.

Our database is about 1GB in total size, the machine has 4GB, but the entire
system is only using 1.2 GB, even during vacuum or a daily, extremely large 
query that requires a lot of grouping and sorting.

Any suggestions as to how we can make more use of our RAM, particularly during
vacuum, which is when we get killed? We've actually half-seriously considered
mounting the database on a RAM drive.

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




Re: [SQL] Memory and performance

2001-04-05 Thread Tim Perdue

On Thu, Apr 05, 2001 at 10:46:07AM -0400, Tom Lane wrote:
> Tim Perdue <[EMAIL PROTECTED]> writes:
> > Our database is about 1GB in total size, the machine has 4GB, but the entire
> > system is only using 1.2 GB, even during vacuum or a daily, extremely large 
> > query that requires a lot of grouping and sorting.
> 
> What have you got the SortMem parameter (backend -S switch) set to?
> That's about the only use I can think of for RAM beyond what's needed to
> cache the whole database ...

-i -o -F -S 32768 -B 2048 is our startup line.

It appears to me that when you're vacuuming, it's physically moving all the
bits around the disk; it doesn't read in the table, clean out the cruft, then
flush it out. So I think this is where we're getting IO bound. We only have a
5x36 RAID - that must not be sufficient.

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




[SQL] Re: select substr???

2001-04-09 Thread Tim Johnson

Hi,

I have postgres 6.x (where x is something).

I have the following list of data

data

ABC*
ABC
ABC-
ABC+
ABC
QWE~
ASD+
ASD#
KLASDHK-
KLASDHK+
KLASDHK
KLASDHK*


What I want to do is 'select distinct(data) [ignoring non-alphanumeric
characters] order by data'.

is there a way to do that? Changing the data stored in the table is not an
option as the suffixes are needed elsewhere..
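One option (a sketch, assuming the suffix characters are a small known set; the table name is hypothetical) is translate(), which deletes characters that have no replacement:

```sql
-- Remove the known suffix characters, then de-duplicate and sort.
SELECT DISTINCT translate(data, '*-+~#', '')
FROM mytable
ORDER BY 1;
```

If the set of trailing characters isn't fixed, extracting the leading alphanumeric run would be needed instead.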

Please help !!

Thanks,
Tim.





[SQL] Full outer join

2001-04-12 Thread Tim Perdue

Looking at the docs for 7.1, it appears that full outer joins are supported:
http://www.postgresql.org/devel-corner/docs/user/queries.html

However, I keep getting this error, and I don't know what it means:
ERROR:  FULL JOIN is only supported with mergejoinable join conditions

I've tried several variations and keep getting the same error.

Anyone have any ideas?

Details below.

Thanks,

Tim


SELECT *
FROM (stats_http_downloads sh 
FULL OUTER JOIN stats_ftp_downloads sf USING (day,filerelease_id)) ;

ERROR:  FULL JOIN is only supported with mergejoinable join conditions

SELECT day,filerelease_id
FROM (stats_http_downloads sh
FULL OUTER JOIN stats_ftp_downloads sf USING (day,filerelease_id)) ;

ERROR:  FULL JOIN is only supported with mergejoinable join conditions


alexandria=# \d stats_http_downloads
  Table "stats_http_downloads"
   Attribute|  Type   |   Modifier   
+-+--
 day| integer | not null default '0'
 filerelease_id | integer | not null default '0'
 group_id   | integer | not null default '0'
 downloads  | integer | not null default '0'

alexandria=# \d stats_ftp_downloads
   Table "stats_ftp_downloads"
   Attribute|  Type   |   Modifier   
+-+--
 day| integer | not null default '0'
 filerelease_id | integer | not null default '0'
 group_id   | integer | not null default '0'
 downloads  | integer | not null default '0'


-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




Re: [SQL] Full outer join

2001-04-13 Thread Tim Perdue

On Fri, Apr 13, 2001 at 03:02:32AM -0400, Tom Lane wrote:
> Tim Perdue <[EMAIL PROTECTED]> writes:
> > However, I keep getting this error, and I don't know what it means:
> > ERROR:  FULL JOIN is only supported with mergejoinable join conditions
> 
> Works for me:
> 
> regression=# create table stats_http_downloads (day int, filerelease_id int);

create table stats_http_downloads (day int, filerelease_id int,group_id int,
downloads int);

> CREATE
> regression=# create table stats_ftp_downloads (day int, filerelease_id int);

create table stats_ftp_downloads (day int, filerelease_id int,group_id int, 
downloads int);

> CREATE
> regression=# SELECT * FROM (stats_http_downloads sh
> regression(# FULL OUTER JOIN stats_ftp_downloads sf USING (day,filerelease_id));
>  day | filerelease_id
> -+
> (0 rows)
> 
> 
> What version are you using?

That's RC3 and RC1.

I wonder if the problem is because stats_http_downloads and
stats_ftp_downloads both have group_id and downloads, which I don't want to
use as part of the join.

Does it still work with those revised CREATE statements?

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




Re: [SQL] Full outer join

2001-04-13 Thread Tim Perdue

On Fri, Apr 13, 2001 at 11:11:26AM -0400, Tom Lane wrote:
> Tim Perdue <[EMAIL PROTECTED]> writes:
> > Does it still work with those revised CREATE statements?
> 
> Yes, works fine here.
> 
> >> What version are you using?
> 
> > That's RC3 and RC1.
> 
> Curious.  I'd expect this to happen for column datatypes whose '='
> operator isn't marked mergejoinable, but plain old int certainly doesn't
> have that problem.  I think there's something you haven't told us.  Is
> either of these tables actually a view?

Hehe - no. I sent the \d of both tables at the bottom of that email.

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com
Lead Developer - SourceForge
VA Linux Systems




[SQL] Fwd: line datatype

2002-07-14 Thread Tim Hart

Probably the most succinct explanation would be to copy & paste from the 
terminal...

tjhart=> create table a_line( foo line );
CREATE
tjhart=> insert into a_line ( foo ) values( '(0,0), (1,1)' );
ERROR:  line not yet implemented
tjhart=> select version();
version
-
  PostgreSQL 7.2.1 on powerpc-apple-darwin5.3, compiled by GCC 2.95.2
(1 row)


The documentation (datatype-geometric.html) indicates both a 'line' type 
and an 'lseg' type in the summary table at the top of the page. The same 
code above using the type 'lseg' in place of 'line' works just fine.

Why can I create a table with a column of type 'line' if I can't insert 
into it?
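For contrast, the same statements succeed with lseg in place of line, as noted above (a sketch):

```sql
CREATE TABLE a_seg ( foo lseg );
INSERT INTO a_seg ( foo ) VALUES ( '(0,0),(1,1)' );  -- lseg is implemented
```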








[SQL] Latitude / Longitude

2002-09-12 Thread Tim Perdue

Hi Folks -

I'm working on a global weather database for a client and have hit an issue
which I'm sure has been solved before. Unfortunately, the site search doesn't
pull up anything.

Essentially, I've got two tables, one with city/county/lat/long and another
with lat/long/weather data.

None of the lat/longs in the two tables match up directly, so I can't do a
simple join of the two tables. I need to join on closest proximity on the
lat/long fields.

Any suggestions? It seems to me this will be pretty expensive on CPU resources
unless there's a really elegant trick uncovered.

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com / SourceForge
GPG Public Key: http://www.perdue.net/personal/pgp.php
Perdue, Inc. / Immortal LLC
515-554-9520





Re: [SQL] Latitude / Longitude

2002-09-12 Thread Tim Perdue

On Thu, Sep 12, 2002 at 12:49:21PM -0500, Larry Rosenman wrote:
> Look at contrib/earthdistance, I **think** it does what you need.

Thanks, yeah this starts to get me into the realm of what I need. It still
has to run every possible permutation to figure out which one is closest for
each postal code. What I'll do is run that once and build a third table which
can be used to join the other two together using a view.
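For reference, the per-row lookup could be sketched with earthdistance's point-based <@> operator (which returns a distance in statute miles); the table and column names here are hypothetical:

```sql
-- Closest weather observation point for one city (hypothetical schema).
SELECT w.*
FROM weather w, latlong c
WHERE c.postalcode = '50309'
ORDER BY point(c.longitude, c.latitude) <@> point(w.longitude, w.latitude)
LIMIT 1;
```

Precomputing this once per city into a third table, as described above, avoids re-running the full comparison on every query.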

Thanks,

Tim

-- 
Founder - PHPBuilder.com / Geocrawler.com / SourceForge
GPG Public Key: http://www.perdue.net/personal/pgp.php





[SQL] Foreign character struggles

2002-10-25 Thread Tim Perdue
I compiled postgres with --enable-multibyte and --enable-recode, and it
doesn't appear to help with my problem.

I have a database which contains "foreign" characters in city names, like "São
Paulo" (Sao Paulo). 

If an end-user types plain-english Sao Paulo, I want the database to pull up
"São Paulo", essentially just treating the accented characters as if they were 
regular ASCII.

select to_ascii(city) from latlong where ccode='BR';
ERROR:  pg_to_ascii(): unsupported encoding from SQL_ASCII

select convert(city,'UNICODE', 'LATIN1') from latlong where ccode='BR';
ERROR:  Could not convert UTF-8 to ISO8859-1

Also, my "Up Arrow" and "Delete" keys no longer work since I recompiled 7.2.3
on Debian.

Thanks for any help,

Tim Perdue




Re: [SQL] Foreign character struggles

2002-10-25 Thread Tim Perdue
On Fri, Oct 25, 2002 at 10:37:59AM -0400, Tom Lane wrote:
> Tim Perdue <[EMAIL PROTECTED]> writes:
> > I compiled postgres with --enable-multibyte and --enable-recode, and it
> > doesn't appear to help with my problem.
> 
> I think this is a locale issue, not a character set issue.  You
> definitely need --enable-locale, but I doubt you need either of the
> above (unless you need to deal with Unicode or Far-Eastern languages).

I skipped --enable-locale because I feared I would have to dump/restore
all my databases and re-test the application. Is that fear unfounded?
 
> > Also, my "Up Arrow" and "Delete" keys no longer work since I recompiled 7.2.3
> > on debian.
> 
> You are missing libreadline.

Thanks. libreadline is there, it just isn't being picked up by psql. Any
suggestions?

Tim Perdue




Re: [SQL] Foreign character struggles

2002-10-25 Thread Tim Perdue
On Fri, Oct 25, 2002 at 12:24:43PM -0400, Tom Lane wrote:
> If you skipped enable-locale then you are outta luck.  The fact that
> there is a connection between "a" and "accented a" is purely a locale
> issue.

What I meant was: if I recompile with --enable-locale and install over the
current builds, I would have to dump/restore everything and re-test the app.
Or so I wondered.

> >> You are missing libreadline.
> 
> > Thanks. libreadline is there, it just isn't being picked up by psql. Any
> > suggestions?
> 
> Do you have both libreadline and libreadline headers (libreadline-devel
> rpm, usually)?

Nope it wasn't, but it is now. When I get the clarification on the above, I'll
rebuild everything.

Tim

-- 
Founder - SourceForge.net / PHPBuilder.com / Geocrawler.com
Perdue, Inc.
515-554-9520




Re: [SQL] Different size in the DATA directory

2002-11-01 Thread Tim Perdue
Maurício Sessue Otta wrote:


Hi, in my production server a "du -hs" in the DATA directory
returns 21GB
in a newly installed server, after I restored the dumps from the
production server, the "du -hs" gives me just 8GB

why is there such a difference???

what should I do (besides buying bigger and bigger HDs) ???

[]'s Mauricio



I had this same problem with 7.1.x, even doing full vacuums on 
SourceForge.net last year.

I assumed after a while that there was some hidden bug where file bloat 
occurred, despite the vacuum. After 3 months or so, you'd have to 
dump/restore the entire db and you'd be golden for a while again. After 
the dump/restore process things seemed much snappier too, and vacuum ran 
almost instantly.

I haven't verified if this problem still occurs in 7.2.x, using vacuum full.

Tim




[SQL] Old "Feature" - Copy table ignores column defaults

2002-11-13 Thread Tim Perdue
I see there are a handful of reports of this, but it does seem broken to 
me. If I am copying a table and a column is blank, surely it makes sense 
to substitute the column default from the table? Right now, it is 
substituting 0 for a null column instead of the -99 I wished for, and here 
I was complaining that it was the National Weather Service's bug.

Re: [HACKERS] Bug or feature? COPY ignores column defaults
http://archives.postgresql.org/pgsql-hackers/1999-01/msg00667.php
http://archives.postgresql.org/pgsql-hackers/1999-01/msg00650.php

Alas, I will have to run perl regex on this awful weather data to make 
it work right.
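A SQL-side workaround (a sketch; the table and column names are hypothetical) is to COPY into a staging table and apply the default explicitly on the way into the real table:

```sql
-- Stage the raw file, then map null readings to the -99 default.
CREATE TEMP TABLE weather_stage (station text, temp_f integer);
COPY weather_stage FROM '/tmp/weather.dat';
INSERT INTO weather (station, temp_f)
SELECT station, COALESCE(temp_f, -99) FROM weather_stage;
```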

Tim




[SQL] ON DELETE CASCADE

2002-12-12 Thread Tim Perdue
I'm trying to comb through my database and add ON DELETE CASCADE to a 
number of tables where I already have fkeys in place, but I'm having a 
hard time.

ALTER TABLE project_task DROP CONSTRAINT 
"project_task_group_project_id_f" RESTRICT;

ERROR:  ALTER TABLE / DROP CONSTRAINT: project_task_group_project_id_f 
does not exist

ALTER TABLE project_task
	ADD CONSTRAINT projecttask_groupprojectid_fk
	FOREIGN KEY (group_project_id)
	REFERENCES project_group_list(group_project_id) ON DELETE CASCADE;
NOTICE:  ALTER TABLE will create implicit trigger(s) for FOREIGN KEY 
check(s)
ALTER

That command works, but now I think I have 2x as many triggers as I 
want. How do I get rid of the original triggers?
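For what it's worth, the implicit RI triggers can be dropped individually once their names are known (a sketch; the quoted name comes from the \d output below):

```sql
DROP TRIGGER "RI_ConstraintTrigger_4305846" ON project_task;
```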


alexandria=# \d project_task
Table "project_task"
  Column  |   Type   | Modifiers
--+--+---
 project_task_id  | integer  | not null default 
nextval('project_task_pk_seq'::text)
 group_project_id | integer  | not null default '0'
 summary  | text | not null default ''
 details  | text | not null default ''
 percent_complete | integer  | not null default '0'
 priority | integer  | not null default '0'
 hours| double precision | not null default '0.00'
 start_date   | integer  | not null default '0'
 end_date | integer  | not null default '0'
 created_by   | integer  | not null default '0'
 status_id| integer  | not null default '0'
Indexes: projecttask_projid_status
Primary key: project_task_pkey
Triggers: RI_ConstraintTrigger_51030049,
  RI_ConstraintTrigger_51030047,
  RI_ConstraintTrigger_4305858,
  RI_ConstraintTrigger_4305852,
  RI_ConstraintTrigger_4305846

After adding the new foreign key:

Triggers: RI_ConstraintTrigger_51364957, ***new
  RI_ConstraintTrigger_51030049,
  RI_ConstraintTrigger_51030047,
  RI_ConstraintTrigger_4305858,
  RI_ConstraintTrigger_4305852,
  RI_ConstraintTrigger_4305846





Re: [SQL] ON DELETE CASCADE

2002-12-15 Thread Tim Perdue
Marie G. Tuite wrote:

Here is a function that I use to list the FK triggers on a table.  Execute the
function to get the trigger name and then: drop trigger "trigger_name" on
table_name;


Thanks for the function. Strangely enough, it appears the "extra" 
triggers are gone all by themselves. It seems Postgres appropriately 
deletes or updates the old triggers when you alter the table a second 
time, even if you did not drop the prior triggers.

I'm trying to comb through my database and add ON DELETE CASCADE to a
number of tables where I already have fkeys in place, but I'm having a
hard time.

ALTER TABLE project_task DROP CONSTRAINT
"project_task_group_project_id_f" RESTRICT;

ERROR:  ALTER TABLE / DROP CONSTRAINT: project_task_group_project_id_f
does not exist

ALTER TABLE project_task
	ADD CONSTRAINT projecttask_groupprojectid_fk
	FOREIGN KEY (group_project_id)
	REFERENCES project_group_list(group_project_id) ON DELETE CASCADE;
NOTICE:  ALTER TABLE will create implicit trigger(s) for FOREIGN KEY
check(s)
ALTER

That command works, but now I think I have 2x as many triggers as I
want. How do I get rid of the original triggers?







[SQL] pl/pgsql question

2002-12-17 Thread Tim Perdue
I have created a function in pl/pgsql to modify a row before it gets put 
into the database, but it seems my modification is being ignored, and 
the unmodified row is being inserted.

I have confirmed with this RAISE EXCEPTION that my "NEW" row is modified 
properly, however it is not being stored in the db.

NEW.start_date := NEW.start_date+delta;
--  RAISE EXCEPTION ''new start date: % '',NEW.start_date;
NEW.end_date := NEW.end_date+delta;

It's probably something very obvious, but I'm mystified.

Tim


--
--  Function to enforce dependencies in the table structure
--
CREATE OR REPLACE FUNCTION projtask_insert_depend () RETURNS OPAQUE AS '
DECLARE
dependent RECORD;
delta INTEGER;
BEGIN
--
--  First make sure we start on or after end_date of tasks
--  that we depend on
--
FOR dependent IN SELECT * FROM project_depend_vw
WHERE 
project_task_id=NEW.project_task_id LOOP
--
--  See if the task we are dependent on
--  ends after we are supposed to start
--
IF dependent.end_date > NEW.start_date THEN
delta := dependent.end_date-NEW.start_date;
--  RAISE EXCEPTION ''delta: % '',delta;
NEW.start_date := NEW.start_date+delta;
--  RAISE EXCEPTION ''new start date: % 
'',NEW.start_date;
NEW.end_date := NEW.end_date+delta;
END IF;

END LOOP;
RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER projtask_insert_depend_trig AFTER INSERT ON project_task
FOR EACH ROW EXECUTE PROCEDURE projtask_insert_depend();




Re: [SQL] pl/pgsql question

2002-12-18 Thread Tim Perdue
Ludwig Lim wrote:


  Try changing the "AFTER" to "BEFORE"

CREATE TRIGGER projtask_insert_depend_trig BEFORE...

Changes made to "NEW" will not be reflected in an
AFTER trigger, since the row is already inserted.


Thanks, however this seems to present a different problem now.

FOR dependon IN SELECT * FROM project_depend_vw
WHERE 
project_task_id=NEW.project_task_id LOOP


That loop apparently does not find any matching rows, which would have 
been inserted just before this row was, inside the same transaction.

It was successfully finding those rows before, when the trigger was 
AFTER INSERT. If I manually select those rows after the query is 
committed, I am able to pull up the matching rows.

Tim




Re: [SQL] pl/pgsql question

2002-12-18 Thread Tim Perdue
Josh Berkus wrote:

Tim,



That loop apparently does not find any matching rows, which would
have been inserted just before this row was, inside the same
transaction.

It was successfully finding those rows before, when the trigger was
AFTER INSERT. If I manually select those rows after the query is
committed, I am able to pull up the matching rows.



I think that triggers are probably not a good strategy for the kind of
calculation you're doing.  I'd suggest instead a middleware module or a
"data push" function which would bundle all of the calculation logic
before calling any of the inserts.


Yeah, but this is so much cooler. ;-)

Essentially this would be like recursion to push back/pull forward tasks 
which are dependent on each other. The "UPDATE" trigger I wrote is about 
5x longer.

I guess I can push this back into the PHP code and do a recursive 
function call, but that seems less sexy.

Tim





Re: [SQL] About primary keys.

2003-08-15 Thread Tim Andersen
I'm new to PostgreSQL but I am familiar with DB2,
Oracle and Sybase.  I must say, I am impressed with
PostgreSQL so far!

In order to compare databases across DBMS platforms,
we need to create a view that queries from the system
catalog tables.  This view returns all of the columns
in the database listed by schema, table, and
columnname with some additional information about the
column (such as a primary key indicator).

These are the columns in the view:
creator (schema), tname (tablename), cname
(columnname), coltype (datatype), nulls (nullable),
length, syslength (precision), in_primary_key, colno
(columnumber), default_value, comments

I looked in the archives at postgresql.com, and I
found someone else with the same problem that I had
but no solution was posted.

I have made some good progress on creating a view that
selects from system catalog tables, but I am having
trouble with the in_primary_key and the
length/precision columns.  Many of our tables have
complex primary keys.

The query I have so far only gets columns that are
part of a primary key.  I need to return all of the
columns listed and a Y/N indicator for whether or not
the column is a part of the table's primary key.
Here's what I have:
/*---//
// This view shows all rows that //
// are part of a primary key://
//---*/
select upper(pgt1.schemaname) as "creator",
   upper(pgt1.tablename) as "tname",
   upper(pga1.attname) as "cname",
   case smmtsys.v_datatype.typname
 when 'bpchar' then 'char'
 else smmtsys.v_datatype.typname
   end as "coltype",
   case pga1.attnotnull
 when true then 'N'
 when false then 'Y'
   end as "nulls",
   i.indisprimary as "in_primary_key",
   pga1.atttypmod as "length",
   pga1.attndims as "syslength",
   pga1.attnum as "colno"
  from pg_tables pgt1,
   pg_class pgc1,
   pg_attribute pga1,
   pg_attribute pga2,
   pg_type,
   smmtsys.v_datatype,
   pg_index i,
   pg_namespace n
 where pgc1.relname = pgt1.tablename
   and pg_type.typname = pgt1.tablename
   and pga1.attrelid = pgc1.relfilenode
   and  pga1.attnum > 0
   and pga1.atttypid = smmtsys.v_datatype.oid
   and pgc1.oid = i.indrelid
   and i.indisprimary = 't'
   and n.oid = pgc1.relnamespace
   and pgt1.tablename = pgc1.relname
   and pga2.attrelid = i.indexrelid
   and pga1.attrelid = i.indrelid
   and pga1.attnum = i.indkey[pga2.attnum-1];

/*---//
// this is a quick and dirty //
// view to get the datatypes //
// used in the above query:  //
//---*/ 
create view smmtsys.v_datatype as (
select oid, typname from pg_type)
;


__
Do you Yahoo!?
Yahoo! SiteBuilder - Free, easy-to-use web site design software
http://sitebuilder.yahoo.com



Re: [SQL] About primary keys.

2003-08-15 Thread Tim Andersen
I looked in the info.c on line 2891 of the
psqlodbc-7.2.5 to find this SQL logic (courtesy of Tom
Lane)
select ta.attname, ia.attnum
 from pg_attribute ta, pg_attribute ia, pg_class c,
pg_index i, pg_namespace n
 where c.oid = i.indrelid
 AND n.oid = c.relnamespace
 AND i.indisprimary = 't'
 AND ia.attrelid = i.indexrelid
 AND ta.attrelid = i.indrelid
 AND ta.attnum = i.indkey[ia.attnum-1];

The above SQL retrieves each and every column in the
database that is a part of a complex primary key.
I need to join this to a list of all of the columns in
the database so I can have the primary key indicator. 


Here's another variation of the above SQL that shows
schema, table, column, colum_num, and a primary key
indicator:

select pg_tables.schemaname, pg_tables.tablename,
ta.attname, ia.attnum, i.indisprimary
from pg_attribute ta, pg_attribute ia, pg_class c,
pg_index i, pg_namespace n , pg_tables
where c.oid = i.indrelid
AND n.oid = c.relnamespace
AND i.indisprimary = 't'
AND ia.attrelid = i.indexrelid
AND ta.attrelid = i.indrelid
AND ta.attnum = i.indkey[ia.attnum-1]
AND pg_tables.tablename = c.relname;

So, shouldn't there be an easy way to retrieve all of
the columns for all tables with a primary key
indicator using this strategy?

If creating another view will simplify syntax, that's
fine too.





Re: [SQL] About primary keys -- made some progress

2003-08-15 Thread Tim Andersen
This might not be the cleanest solution, but it runs
fast and it retrieved the information I need.
I broke it down into pieces and created several views
to query from to simplify it for myself.
The first four statements are views and the last one
is the query I was originally trying to get.  (note
that smmtsys is a schema I created, everything else is
dealing with system catalog tables)
Here's the SQL:
---
create view smmtsys.v_datatype as (
select oid, typname from pg_type)
;

create view smmtsys.v_primarykeys as(
select pg_tables.schemaname, pg_tables.tablename,
ta.attname, ta.attrelid, ia.attnum, i.indisprimary
from pg_attribute ta, pg_attribute ia, pg_class c,
pg_index i, pg_namespace n , pg_tables
where c.oid = i.indrelid
AND n.oid = c.relnamespace
AND i.indisprimary = 't'
AND ia.attrelid = i.indexrelid
AND ta.attrelid = i.indrelid
AND ta.attnum = i.indkey[ia.attnum-1]
AND pg_tables.tablename = c.relname
AND (pg_tables.schemaname = 'summit' or 
pg_tables.schemaname = 'uhelp' or 
pg_tables.schemaname = 'smmtsys' or 
pg_tables.schemaname = 'smmtsec' or 
pg_tables.schemaname = 'smmtccon' )
and ta.attnum > 0
)
;

create view smmtsys.v_allcolumns as (
select pg_tables.schemaname,
   pg_tables.tablename,
   pg_attribute.attname
from pg_tables, 
 pg_class, 
 pg_attribute, 
 smmtsys.v_datatype 
where (schemaname = 'smmtccon' or 
   schemaname = 'smmtsec' or 
   schemaname = 'smmtsys' or 
   schemaname = 'summit' or 
   schemaname = 'uhelp' ) and 
  pg_class.relname = pg_tables.tablename and 
  pg_type.typname = pg_tables.tablename and 
  pg_attribute.attrelid = pg_class.relfilenode and

  pg_attribute.attnum > 0 and 
  pg_attribute.atttypid = smmtsys.v_datatype.oid
)
;

create view smmtsys.v_primarykeyind as (
select cols.schemaname ,
   cols.tablename ,
   cols.attname,
   case pks.indisprimary
 when true then 'Y'
 else 'N'
   end as in_primary_key
from smmtsys.v_allcolumns cols left outer join
smmtsys.v_primarykeys pks
on (cols.schemaname = pks.schemaname
and cols.tablename = pks.tablename
and cols.attname= pks.attname)
);

select upper(tbls.schemaname) as "creator", 
   upper(tbls.tablename) as "tname", 
   upper(cols.attname) as "cname", 
   case smmtsys.v_datatype.typname
 when 'bpchar' then 'char'
 else smmtsys.v_datatype.typname
   end as "coltype", 
   case cols.attnotnull
 when true then 'N'
 when false then 'Y'
   end as "nulls",
   length(cols.attrelid) as "length",  
   cols.attndims as "syslength", 
   vpk.in_primary_key,
   cols.attnum as "colno"
from pg_tables tbls, 
 pg_class, 
 pg_attribute cols, 
 pg_type, 
 smmtsys.v_datatype,
 smmtsys.v_primarykeyind vpk
where (tbls.schemaname = 'smmtccon'
   or tbls.schemaname = 'smmtsec'
   or tbls.schemaname = 'smmtsys'
   or tbls.schemaname = 'summit'
   or tbls.schemaname = 'uhelp')
  and pg_class.relname = tbls.tablename
  and pg_type.typname = tbls.tablename
  and cols.attrelid = pg_class.relfilenode
  and cols.attnum > 0
  and cols.atttypid = smmtsys.v_datatype.oid
  and vpk.schemaname = tbls.schemaname
  and vpk.tablename = tbls.tablename
  and vpk.attname = cols.attname
;

This retrieves all of the columns and shows a primary
key indicator for each column.  If someone could put
this logic all into one SQL query, I'd really like to
see it!

I still have a question about how to get the
information about length and precision of a column
from pg_attribute.atttypmod.  Are there built-in
functions for PostgreSQL to extract this information?
Additionally, I need to get the column default value
and the comments on the column, but I think I can
figure that out with a little more time.






Re: [SQL] Migration from db2 to postgres'

2003-08-19 Thread Tim Andersen
> I wanted to know how I can migrate all my data,
> including the schema and its definitions, from
> IBM's DB2 to Postgres.
> 

I have done this with other DBMSs before by writing a
program (using PowerBuilder).  The core concept is
this:

two ODBC datasources (source, target)

for (loop through source's schemas){
  for (loop through source's tables){
for(loop through source's rows){
  "select [row] from [source table]"
  "insert into [target table]"
}
  }
}

The key is this:  I have a view in both databases to
compare system catalog tables across DBMS platforms.
It is actually the view I was trying to create if you
look back at the SQL questions I asked to this mailing
list a few days ago. the subject was: "Re: [SQL] About
primary keys -- made some progress"

In the past, I ran into several problems with
different datatypes and have had to write workarounds
for those (mainly blobs and timestamps).

Concerning referential integrity:  Two options.  You
could wait to add all of your foreign keys until after
all of the data has transferred from the source to the
target.  The other option is to run the nested loops
several times (this "fills in the gaps" and allows
other tables to be filled each time you run it).  I
prefer the second.
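A third option, sketched below on my own initiative rather than taken from the original program, is to copy tables in dependency order so that a single pass suffices (assuming the foreign-key graph has no cycles; the table names in the example are hypothetical):

```python
def copy_order(tables, fks):
    """Topologically sort tables so that every table is copied after
    the tables it references via foreign keys (fks: child -> parents)."""
    order, seen = [], set()

    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for parent in fks.get(t, []):   # copy referenced tables first
            visit(parent)
        order.append(t)

    for t in tables:
        visit(t)
    return order

# Hypothetical schema: orders references customers, items references orders.
order = copy_order(["items", "orders", "customers"],
                   {"orders": ["customers"], "items": ["orders"]})
# order == ['customers', 'orders', 'items']
```

With the source's FK metadata loaded into such a mapping, the outer table loop can simply follow this order instead of being re-run.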

I also have a feature to compare the rowcounts of the
source and target so I can be sure all of my data has
been transferred.

I have not yet implemented PostgreSQL into this
application, but I intend to do that over the next few
months.

Tim





Re: [SQL] Performance issue

2005-09-12 Thread Tim Goodaire
On Tue, Aug 30, 2005 at 03:38:52PM +0700, Ricky Sutanto wrote:
> I use Apache Web Server and PostgreSQL 7.3 to collect data every day. It
> has now been 5 months since I installed that server. 
> 
> I wonder why my web site is now very slow to retrieve and display data? 
> When I check the memory, I found that the PostgreSQL client does not seem
> to release it after allocating. I tried to find a bug in my script, but
> everything looks clean to me. 
> 
> Anyone have experience like me.. please share info with me

You haven't really provided much information on your problem. Are you
vacuuming regularly?

Tim

> 
> Thanks anyway, 
> 
> 
> Ricky Sutanto

-- 
Tim Goodaire    416-673-4126    [EMAIL PROTECTED]
Database Administrator, Afilias Canada Corp.




Re: [SQL] How to get list of days between two dates?

2006-06-06 Thread Tim Middleton
This is going to be ugly, and I can't even say for sure it's right (and if by 
chance it is right, I imagine it still might be more efficient broken up into a 
function), but intrigued by learning about generate_series() from Scott 
Marlowe's response, I fiddled until I got the results specified, like this...

SELECT dt, event_name 
FROM (
SELECT (mn.d + s.d) AS dt 
FROM (
SELECT min(start_time) FROM test_events) AS mn(d),
generate_series(0, (
SELECT (extract('epoch' from age(max(end_time),
min(start_time)))/86400)::integer 
FROM test_events)) 
AS s(d)) 
AS x 
JOIN test_events AS y ON (dt BETWEEN start_time AND end_time) 
ORDER BY dt, event_name;

     dt     |  event_name
------------+--------------
 2006-05-01 | First Event
 2006-05-02 | First Event
 2006-05-02 | Second Event
 2006-05-03 | First Event
 2006-05-04 | First Event
 2006-05-04 | Third Event
 2006-05-05 | Third Event
 2006-05-07 | Fourth Event
(8 rows)
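For comparison, the day-by-day expansion the query performs can be sketched client-side; this Python version is my illustration, with event data made up to mirror the table above:

```python
from datetime import date, timedelta

def expand_events(events):
    """Yield (day, event_name) for every day in each event's
    [start, end] range, sorted like the SQL output above."""
    rows = []
    for name, start, end in events:
        d = start
        while d <= end:
            rows.append((d, name))
            d += timedelta(days=1)
    return sorted(rows)

rows = expand_events([
    ("First Event", date(2006, 5, 1), date(2006, 5, 4)),
    ("Second Event", date(2006, 5, 2), date(2006, 5, 2)),
])
# rows[0] == (date(2006, 5, 1), "First Event")
```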

-- 
Tim Middleton | Vex.Net| "Who is Ungit?" said he, still holding
[EMAIL PROTECTED] | VexTech.ca | my hands. --C.S.Lewis (TWHF)




[SQL] Partitioned tables not using index for min and max 8.2.7?

2009-07-01 Thread Tim Haak

Hi

I am running the following query against a partitioned table in 8.2.7. It 
does sequential scans, which is unexpected, as there are indexes on the 
log_date column.


select min(log_date) from data.table

"Aggregate  (cost=739932.02..739932.02 rows=1 width=8)"
"  ->  Append  (cost=0.00..685106.21 rows=21930321 width=8)"
"        ->  Seq Scan on table  (cost=0.00..33827.10 rows=1215710 width=8)"
"        ->  Seq Scan on table_yy2009mm03 table  (cost=0.00..88056.39 rows=438839 width=8)"
"        ->  Seq Scan on table_yy2009mm04 table  (cost=0.00..204606.67 rows=7344967 width=8)"
"        ->  Seq Scan on table_yy2009mm05 table  (cost=0.00..159210.91 rows=5735091 width=8)"
"        ->  Seq Scan on table_yy2009mm06 table  (cost=0.00..199393.74 rows=7195574 width=8)"
"        ->  Seq Scan on table_yy2009mm07 table  (cost=0.00..11.40 rows=140 width=8)"


Though if I run it against only the one table, it is significantly faster 
and uses the index:


select min(log_date) from only data.table

"Result  (cost=0.06..0.07 rows=1 width=0)"
"  InitPlan"
"->  Limit  (cost=0.00..0.06 rows=1 width=8)"
"        ->  Index Scan using idx_table_log_date_only on table  (cost=0.00..68272.93 rows=1215710 width=8)"

"Filter: (log_date IS NOT NULL)"

Am I doing something wrong, or is this expected?

I tried the old method of

SELECT col FROM table ORDER BY col DESC LIMIT 1

But it does not work either.



--
Tim Haak
Email:  t...@lucidview.net
Tel:  +27 12 658 9019

begin:vcard
fn:Timothy Haak
n:Haak;Timothy
email;internet:t...@lucidview.net
tel;work:+27 12 658 9019
tel;cell:+27 83 778 7100
x-mozilla-html:TRUE
version:2.1
end:vcard


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Partitioned tables not using index for min and max 8.2.7?

2009-07-02 Thread Tim Haak
Cool, thanks; then it's not something I'm doing wrong :). Is this going to be 
changed, or is it changed in a later version of Postgres? (Do I need to do 
an upgrade or write a workaround? :))


Tom Lane wrote:

Tim Haak  writes:
  
I am running the following query against a partitioned table in 8.2.7. It 
does sequential scans, which is unexpected, as there are indexes on the 
log_date column.



The index min/max optimization only works on single tables at the
moment.  Sorry.

regards, tom lane

  


--
Tim Haak
Email:  t...@lucidview.net
Tel:  +27 12 658 9019





[SQL] Comparing a string against an XPath result set

2009-07-16 Thread Tim Landscheidt
Hi,

I'm trying to check whether a given string is part of an
XPath result set. I have encountered some performance
problems in the process, but before I track those down further,
I'd like to see if I've chosen the right path to begin with:

| SELECT * FROM $TABLE
|   WHERE $COLUMN = ANY((xpath($EXPR, xmlparse(DOCUMENT $DOC)))::TEXT[]);

In this case, xpath() returns an array of XML fragments made
of a "pure" text child each. I cannot compare XML fragments
with anything, so the conversion to TEXT[] seems to be the
only way I could check whether $COLUMN is a part of those.

  Is there any other way I could tackle this?
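For illustration only (this is my sketch, not a proposed replacement for the in-database check): the membership test has the following shape in client code, using the Python standard library's limited XPath subset:

```python
import xml.etree.ElementTree as ET

def value_in_xpath(doc, path, value):
    """True if `value` equals the text of any node matched by `path`
    (ElementTree supports only a limited XPath subset)."""
    root = ET.fromstring(doc)
    return any((el.text or "") == value for el in root.findall(path))

doc = "<users><name>ann</name><name>bob</name></users>"
value_in_xpath(doc, "name", "bob")   # True
value_in_xpath(doc, "name", "eve")   # False
```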

Tim




Re: [SQL] Tweak sql result set... ?

2009-07-28 Thread Tim Landscheidt
(anonymous) wrote:

> I have a problem where I want to tweak a simple select in an
> "unobtrusive way". Imagine I have the following select statement:
> "SELECT name FROM customer LIMIT 1" and I get a normal result set from
> this. But, could I,maybe by defining some other function or similar,
> change the result set *without* changing the query? Suppose I get the
> result from the query above, saying: "Peter Peterson". I would
> (sometimes) like to get the result "Peter Peterson" but I
> should not have to change the original query.

> I know I could write "SELECT '' || name || '' as name FROM
> customer" but then I have altered the original query and I cannot do
> this since it is supposed to function different in two different
> situations.

> Any ideas on how to achieve this? I would like to let the original sql
> code stay original. I can prepare postgres before executing the sql if
> this makes it easier to achieve the goal

Have a look at CREATE RULE.

Tim




Re: [SQL] Tweak sql result set... ?

2009-07-30 Thread Tim Landscheidt
(anonymous) wrote:

>> > Any ideas on how to achieve this? I would like to let the original sql
>> > code stay original. I can prepare postgres before executing the sql if
>> > this makes it easier to achieve the goal

>> Have a look at CREATE RULE.

> Ok, you mean I could create a rule for the table, then I let the
> script go into my "black box",
> do the original selects, but get a manipulated result set back, then I
> drop the rule so
> that the blackbox of scripts can get the not manipulated result set
> back?

No, I meant that you should have a look at CREATE RULE. From
a design perspective, I'd probably rename the old table and
put an updatable view in its place.

> I need to sometimes get the result "output from query" and sometimes not,
> and it would be awesome to get it with the same query, but maybe by setting
> a rule or similar. Best from a performance view would be to make something
> more permanent. Is it possible to activate/deactivate a rule?

Of course, DROP RULE.

Tim




Re: [SQL] mail alert

2009-08-13 Thread Tim Landscheidt
Alvaro Herrera  wrote:

>> > It's on Windows

>> I'd go with notify and a listener written in C using c-client to send
>> emails, but only because I've used those before.

> I wouldn't write it in C but rather Perl or Python, but whatever suits
> your fancy should work (Visual Basic anyone?).  The advantages to using
> a listener program instead of doing it in a trigger or something like
> that are:

> - transaction semantics are kept; you don't send an email only to find
> out your transaction has been rolled back for whatever reason, and then
> send a second email when the transaction is replayed

> - you don't block the database system just because your mail server is
> down

> - the email can be sent on whatever schedule fits the listener program

> - the listener client can run elsewhere, not only in the database server

> - any further external processing can take place at that time, without
> bothering the database server

> - other stuff I don't recall ATM

The main disadvantage in using a listener is that it is your
responsibility to make sure that the listener is listening
24/7 - from before the database accepts other connections,
through network failures, bugs, etc. - otherwise
notifications will be lost. Therefore I find it much more reliable
(and easier to program) to copy the relevant data to a table
"mailqueue" (or whatever) and then process that queue every
other minute.
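The queue-processing step can be sketched as pure logic; names and shapes here are mine, not from the thread. Anything whose send fails simply stays in the queue for the next run, so a mail-server outage loses nothing:

```python
def process_mailqueue(queue, send):
    """Try to send every queued message; return the ids of messages
    that were sent and can be deleted from the queue table."""
    sent = []
    for msg_id, payload in queue:
        try:
            send(payload)          # e.g. hand off to an SMTP client
            sent.append(msg_id)
        except Exception:
            pass                   # leave in queue for the next run
    return sent

queue = [(1, "hello"), (2, "boom"), (3, "bye")]
def send(p):
    if p == "boom":
        raise RuntimeError("mail server down")
sent_ids = process_mailqueue(queue, send)   # [1, 3]
```

In practice `queue` would come from a SELECT on the mailqueue table, and the returned ids would drive a DELETE marking those rows as handled.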

Tim




Re: [SQL] simple? query

2009-08-14 Thread Tim Landscheidt
"Relyea, Mike"  wrote:

>> The goal is, where uid not equals to 'janvleuven10' a new
>> record should be inserted with the uid, and registered=0

> So if a record is found you want to update it and if a record isn't
> found you want to insert it.  I think you'll probably want to use
> plpgsql http://www.postgresql.org/docs/8.4/static/plpgsql.html or some
> other language like Jasen suggested.  I don't know of a way to do this
> with straight sql.

Something along the lines of:

| UPDATE table SET attribute = 'something' WHERE primary_key = 'id';
| INSERT INTO table (primary_key, attribute) SELECT 'id', 'something'
|   WHERE 'id' NOT IN (SELECT primary_key FROM table);

should achieve that.
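The pair behaves as "update if present, otherwise insert". As a sketch (mine, using SQLite's in-memory database as a stand-in for PostgreSQL; the table and column names echo the thread's example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (uid TEXT PRIMARY KEY, registered INT)")
con.execute("INSERT INTO t VALUES ('janvleuven10', 1)")

def upsert(uid, registered):
    # The two-statement pattern: the UPDATE touches an existing row;
    # the INSERT's WHERE clause makes it a no-op when the key exists.
    con.execute("UPDATE t SET registered = ? WHERE uid = ?", (registered, uid))
    con.execute(
        "INSERT INTO t (uid, registered) SELECT ?, ? "
        "WHERE ? NOT IN (SELECT uid FROM t)",
        (uid, registered, uid))

upsert("newuser", 0)        # not present -> inserted
upsert("janvleuven10", 0)   # present -> updated, no duplicate
rows = sorted(con.execute("SELECT uid, registered FROM t"))
# [('janvleuven10', 0), ('newuser', 0)]
```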

Tim




Re: [SQL] FW: simple? query

2009-08-18 Thread Tim Landscheidt
Jan Verheyden  wrote:

> Thanks for the suggestion, the only problem is, if primary key is used then 
> each row should be unique what is not true; since I have a column 
> 'registered' what only can be 1 or 0...
> [...]

I have no idea what you are trying to say.

Tim




Re: [SQL] Lag and lead window functions order by weirdness

2009-10-18 Thread Tim Landscheidt
Thom Brown  wrote:

> [...]
> Having a look around, it looks as if Postgres might be misbehaving.
>  According to this page,
> http://my.safaribooksonline.com/0596004818/sqlnut2-CHP-4-SECT-3, the ORDER
> BY in the window function's clause shouldn't be having this ordering effect:

> "Furthermore, the order within these groups is defined by an ordering
> clause, but that order only affects function evaluation, and has no effect
> on the order in which rows are returned by the query."

> The behaviour is unexpected from my perspective, but obviously there are
> workarounds.  Is anyone able to confirm any of this?

AFAIR, others have already pointed out that without an
"ORDER BY" clause PostgreSQL can return the result set in *any*
"order" it deems fit. So why don't you use one?

Tim




Re: [SQL] loading a file into a field

2009-12-31 Thread Tim Landscheidt
Brian Sherwood  wrote:

> I am looking to play around with the postgres XML functions, but can't
> seem to find a way to load my data.

> I have a number of XML files: file1.xml, file2.xml, etc  All files
> follow the same xml format (DTD?)

> I would like to create the following table:

> CREATE TABLE configs  (
> filenamevarchar(80) PRIMARY KEY,
> config  xml
> );


> and load each file into the xml field, but can't seem to find a way.
> (I think I need something similiar to lo_import, but have not found anything)

There is no function to do that in the standard
distribution. You have to solve that in your client application.
Theoretically, you could try to do it in a server-side user
function, but permissions, local vs. remote file systems &
Co. usually only create major headaches.

Tim




Re: [SQL] Proper case function

2009-12-31 Thread Tim Landscheidt
Michael Gould  wrote:

> Based on what I read it wouldn't handle cases where the result should be

> MacDonald from macdonald.  There are other cases such as the sentence below

> ('it''s crazy! i couldn''t believe kate mcdonald, leo dicaprio, (terrence)
> trent d''arby (circa the 80''s), and jada pinkett-smith all showed up to
> [cHris o''donnell''s] party...donning l''oreal lIpstick! They''re heading to
> o''neil''s pub later on t''nite. the_underscore_test. the-hyphen-test)

> This wouldn't handle the quotes and proper case all of the words.
> [...]

Based on your requirements, you want to hire some cheap
English native speaker with lots of cultural knowledge.

Tim




Re: [SQL] complex join question

2010-02-12 Thread Tim Landscheidt
Louis-David Mitterrand  wrote:

> Here is my schema:

> cruise_line -> ship(id_line) -> cruise_type(id_ship) -> cruise(id_cruise_type)
>   -> price(id_cruise, id_currency) <- currency (USD,GBP,EUR,CAD)

> (a 'cruise' is a 'cruise_type' + a date)

> I am trying to display a count of cruise's for each ship and each
> currency even if that count is 0.

> But I am having trouble building the query, as some 'cruise's might not
> (yet) have a 'price' in all currencies and so no link to 'currency'.

You're probably looking for a "LEFT JOIN".

Tim




Re: [SQL] Can i force deletion of dependent rows?

2010-02-12 Thread Tim Landscheidt
Shruthi A  wrote:

> I have 2 tables (A and B) where the table B has a foreign key reference to
> table A.   Like this:

> create table A (x int primary key);
> create table B (y int primary key, z int references A (x) );

> As you can see, I've not specified any further 'on delete' clause to the
> foreign key constraint like 'on delete cascade' or 'on delete restrict' or
> 'on delete set null' or 'on delete set default'.   This means that the
> default behavior is followed ie 'on delete restrict' holds by default.

> Now I want to delete a particular row from table A.  Is there any way I can
> force deletion of all the dependent rows in table B?

> Note that the tables have already been created in the above manner and
> populated.  I want to do this deletion through DML only ie without any
> change in the table definition.

"DELETE FROM B WHERE z = 'your x';"?

Tim




Re: [SQL] Can i force deletion of dependent rows?

2010-02-13 Thread Tim Landscheidt
Shruthi A  wrote:

> Thanks people, but the trigger doesn't really solve my problem.  You see,
> there is actually a CHAIN of dependencies where B references A, C references
> B, D and E reference C and so on...   So if I start writing triggers for all
> these, there'll be a huge process to follow.  I'm not strictly against it, I
> might use it as a last resort, but I'm asking if a simple DML statement
> would solve my problem.

> Dear Shoaib, I did mention in a note that I don't want to recreate the 2
> tables with the 'on delete cascade' clause.  The tables are ALREADY created
> and populated.  And they are huge.  So i cannot afford to drop and recreate
> them.  I want an option in DML only.  If that is possible that is.

You don't have to drop and recreate them, you could just
temporarily change the foreign key definition (untested):

| BEGIN WORK;
| ALTER TABLE B DROP CONSTRAINT B_z_FKey;
| ALTER TABLE B ADD CONSTRAINT B_z_FKey FOREIGN KEY(z) REFERENCES A(x)
|   ON DELETE CASCADE;
| DELETE FROM A WHERE x = 'your x';
| ALTER TABLE B DROP CONSTRAINT B_z_FKey;
| ALTER TABLE B ADD CONSTRAINT B_z_FKey FOREIGN KEY(z) REFERENCES A(x)
|   ON DELETE NO ACTION;
| COMMIT WORK;

But obviously this is no better than a simple "DELETE FROM B
WHERE z = 'your x';" as you still have to name the
dependency chain yourself.

  You can query the information_schema to build this chain
(e. g. cf.
<http://sqlserver2000.databases.aspfaq.com/schema-how-do-i-find-all-the-foreign-keys-in-a-database.html>),
but I would invest more time in rethinking your processes.
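Once the foreign-key pairs are in hand, building the chain is a small graph walk. A sketch (my code) over the dependency chain named in this thread, where B references A, C references B, and D and E reference C:

```python
def delete_order(root, referencing):
    """Tables whose rows must be deleted before `root`, deepest
    dependents first. `referencing` maps a table to the tables that
    hold foreign keys pointing at it (its child tables)."""
    order, seen = [], set()

    def visit(t):
        for child in referencing.get(t, []):
            if child not in seen:
                seen.add(child)
                visit(child)
        order.append(t)

    visit(root)
    return order

# The chain from this thread: B -> A, C -> B, D and E -> C.
refs = {"A": ["B"], "B": ["C"], "C": ["D", "E"]}
order = delete_order("A", refs)   # ['D', 'E', 'C', 'B', 'A']
```

Issuing one DELETE per table in that order has the same effect as a cascade, without touching the constraint definitions.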

Tim




Re: [SQL] UNION or LEFT JOIN?

2010-02-16 Thread Tim Landscheidt
Louis-David Mitterrand  wrote:

> Here is the basic schema:

>   -->id_ship>---
>   ||
> [SHIP]->id_ship->[CABIN]->id_cabin->[PRICE]<-id_cruise<-[CRUISE]

> It's a database of cruise prices.

> Each 'price' object has a reference to 'cabin' and 'cruise'

> 'cabin' belongs to a 'ship', so does 'cruise'

> I'm trying to select all cabins of cruise N°1 with prices OR nothing if
> there is no price (meaning cabin not available). I want all cabins
> listed, price or no price.

> Also when doing the query I don't have the id_ship, only the id_cruise.

> What is the best way of doing it? UNION or LEFT JOIN? I tried the latter
> without success and am unsure on how do do the former.

What does "without success" mean? The objective seems to be
straight-forward:

- Select all cabins that belong to the ship that belongs to
  the cruise id_cruise.
- Left join that with the prices of the cruise id_cruise.

Tim




Re: [SQL] How to max() make null as biggest value?

2010-04-20 Thread Tim Landscheidt
Feixiong Li  wrote:

> I am a newbie at SQL, and I have a problem when using the max()
> function: I need to get null when there is a null in the value
> list, or return the largest value as usual. Who can do
> this?

> i.e.  max([1,2,3,4,5]) => 5
> max([1,2,3,4,5,null]) => null

You can cheat a bit:

| tim=# CREATE TABLE MaxTest (i INT);
| CREATE TABLE
| tim=# INSERT INTO MaxTest (i) VALUES (1), (2), (3), (4), (5), (NULL);
| INSERT 0 6
| tim=# SELECT Maxi FROM (SELECT i AS Maxi FROM MaxTest ORDER BY i DESC NULLS FIRST LIMIT 1) AS SubSelect;
|   maxi
| --------
|  (null)
| (1 Zeile)

| tim=# DELETE FROM MaxTest WHERE i IS NULL;
| DELETE 1
| tim=# SELECT Maxi FROM (SELECT i AS Maxi FROM MaxTest ORDER BY i DESC NULLS FIRST LIMIT 1) AS SubSelect;
|  maxi
| ------
|     5
| (1 Zeile)
| tim=#

You can also use FIRST_VALUE() (or LAST_VALUE()) if that's
more to your liking. Be careful though with empty sets:

| tim=# DELETE FROM MaxTest;
| DELETE 5
| tim=# SELECT Maxi FROM (SELECT i AS Maxi FROM MaxTest ORDER BY i DESC NULLS 
FIRST LIMIT 1) AS SubSelect;
|  maxi
| ------
| (0 Zeilen)
| tim=#

Tim




Re: [SQL] regexp_replace and search/replace values stored in table

2010-04-27 Thread Tim Landscheidt
Leif Biberg Kristensen  wrote:

> [...]
> So far, so good. But when I try to do the actual expansion, I'm stumped.

> pgslekt=> select regexp_replace((select source_text from sources where
> source_id=23091), (select quote_literal(short_link) from short_links where
> link_type = 'sk'), (select quote_literal(long_link) from short_links where
> link_type = 'sk'), 'g');
>  regexp_replace
> ----------------
>  [sk=25658|67|side 66a]. Vabakken under Klyve vestre i Solum 07.07.1784:
> [p=6947|Isach Jonsen].
> (1 row)

> What am I missing?

For starters, omit the call to quote_literal ().

Tim




Re: [SQL] import ignoring duplicates

2010-05-16 Thread Tim Landscheidt
Mark Fenbers  wrote:

> I am using psql's \copy command to add records to a database
> from a file.  The file has over 100,000 lines.
> Occasionally, there is a duplicate, and the import ceases
> and an internal rollback is performed.  In other words, no
> data is imported even if the first error occurs near the end
> of the file.

> I am looking for an option/switch to tell psql (or the \copy
> command) to skip over any duplicate key constraint
> viloations and continue to load any data that doesn't
> violate a duplicate key constraint.  Is there such an
> option?

No. You can either disable the constraint temporarily, import
the data, fix any duplicates and re-enable the constraint, or
you can load the data in a temporary table and then transfer
only the valid data. With only 100000 records I would opt for
the latter.
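The staging-table variant can be sketched like this (table and column names are invented; SQLite stands in for PostgreSQL, but the same statements work there):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE target  (k INTEGER PRIMARY KEY, v TEXT);
CREATE TABLE staging (k INTEGER, v TEXT);          -- no constraints
INSERT INTO target  VALUES (1, 'old');
INSERT INTO staging VALUES (1, 'dup'), (2, 'b'), (2, 'dup'), (3, 'c');
""")
# Transfer only rows whose key is not already present, picking one
# arbitrary row (here: MIN(v)) per duplicated key within the staging
# data itself.
cur.execute("""
    INSERT INTO target (k, v)
      SELECT k, MIN(v) FROM staging
        WHERE k NOT IN (SELECT k FROM target)
        GROUP BY k
""")
conn.commit()
result = cur.execute("SELECT k, v FROM target ORDER BY k").fetchall()
print(result)
```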

Tim




Re: [SQL] How to get CURRENT_DATE in a pl/pgSQL function

2010-05-18 Thread Tim Landscheidt
Kenneth Marshall  wrote:

> It works using 'now' and I assume that since curtime is
> of type DATE that the assignment casts the return automatically
> to type DATE. Thank you for the ideas.
> [...]

What's wrong with Pavel's correct and to-the-point answer?

Tim




Re: [SQL] How to get CURRENT_DATE in a pl/pgSQL function

2010-05-19 Thread Tim Landscheidt
Brian Modra  wrote:

>>> It works using 'now' and I assume that since curtime is
>>> of type DATE that the assignment casts the return automatically
>>> to type DATE. Thank you for the ideas.
>>> [...]

>> What's wrong with Pavel's correct and to-the-point answer?

> No need actually to cast... just use current_date without the quotes.
> Its not a string.

Where did Pavel suggest to cast or use a string?

Tim




Re: [SQL] safely exchanging primary keys?

2010-05-24 Thread Tim Landscheidt
Louis-David Mitterrand  wrote:

> I have this function which swaps primary keys for cabin_types (so that
> id_cabin_type ordering reflects natural data ordering):

>   CREATE OR REPLACE FUNCTION swap_cabin_types(id1 integer, id2 integer) 
> RETURNS integer
>   AS $$
>   declare
>   tmp integer;
>   begin
>   tmp := nextval('cabin_type_id_cabin_type_seq');
>   update cabin_type set id_cabin_type=tmp where id_cabin_type=id1;
>   update cabin_type set id_cabin_type=id1 where id_cabin_type=id2;
>   update cabin_type set id_cabin_type=id2 where id_cabin_type=tmp;
>   return tmp;
>   end;
>   $$
>   LANGUAGE plpgsql;

> 'id_cabin_type' is a foreign key for two other tables, 'cabin_category'
> and 'alert_cabin_type', which have an "on update cascade" clause.

> When I run that function it seems the foreign keys are not properly
> updated and the data ends up in a mess.

> Did I forget something?

What does "are not properly updated" mean? Anyhow, why don't
you use something simple like (untested):

| UPDATE cabin_type
|   SET id_cabin_type =
| CASE
|   WHEN id_cabin_type = id1 THEN
| id2
|   ELSE
| id1
| END
|   WHERE id_cabin_type IN (id1, id2);

Tim




Re: [SQL] safely exchanging primary keys?

2010-05-24 Thread Tim Landscheidt
Louis-David Mitterrand  wrote:

>> > you use something simple like (untested):
>> >
>> > | UPDATE cabin_type
>> > |   SET id_cabin_type =
>> > | CASE
>> > |   WHEN id_cabin_type = id1 THEN
>> > | id2
>> > |   ELSE
>> > | id1
>> > | END
>> > |   WHERE id_cabin_type IN (id1, id2);

>> Nice, thanks.

> Ah, but this won't work as the UNIQUE PK constraint is in force.

Oh, yes, you're right, I didn't have that premise in mind.

Tim




Re: [SQL] inner join and limit

2010-05-26 Thread Tim Landscheidt
Michele Petrazzo - Unipex  wrote:

> I have two table that are so represented:
> t1:
> id int primary key
> ... other

> t2:
> id int primary key
> t1id int fk(t1.id)
> somedate date
> ... other

> data t1:
> 1 | abcde
> 2 | fghi

> data t2:
> 1 | 1 | 2010-05-23
> 2 | 1 | 2010-05-24
> 3 | 1 | 2010-05-25
> 4 | 2 | 2010-05-22
> 5 | 2 | 2010-05-26

> I'm trying to create a query where the data replied are:
> join t1 with t2 and return only the LIMIT 1 (or N) of the
> t2, with date order (of t2).
> Data should be:

> t1.id | t2.id | t2,somedate
> 1 | 3 | 2010-05-25
> 2 | 5 | 2010-05-26

> As said, I'm trying, but without success...
> Can be done for you?

In addition to Luigi's and Rolando's responses, there is of
course the ever-glorious "DISTINCT ON" for the "LIMIT 1"
case:

| SELECT DISTINCT ON (t1.id) t1.id, t2.id, t2.somedate FROM t1 JOIN t2 ON t1.id 
= t2.t1id ORDER BY t1.id, somedate DESC;

and window functions for the generic one:

| SELECT t1_id, t2_id, t2_somedate FROM
|   (SELECT t1.id AS t1_id,
|   t2.id AS t2_id,
|   t2.somedate AS t2_somedate,
|   ROW_NUMBER() OVER (PARTITION BY t1.id ORDER BY t2.somedate DESC) AS 
rn
|  FROM t1 JOIN t2 ON t1.id = t2.t1id) AS SubQuery
|   WHERE rn <= 2;

Tim




Re: [SQL] Extending Regular Expression bounds limit of 255.

2010-05-29 Thread Tim Landscheidt
Jasen Betts  wrote:

>> It looks like most of our tools are using the Perl version of regular
>> expressions with an upper limit of a bound being 32766.  Is there any way to
>> change this in PG?  Or can I change from POSIX to Perl?

> perhaps you can do something in pl-perl?

> posix regular expressions are different to perl regular expressions in
> several ways.

Another last resort possibility would of course be to
"precompile" the regular expressions from "A{2000}" to
"A{255}A{255}A{255}A{255}A{255}A{255}A{255}A{215}" (with the
headaches of "A{1000,2000}" left as an exercise to the
reader :-)).
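The mechanical part of that precompilation can be sketched in a few lines (the range form "A{1000,2000}" is, as noted above, left out):

```python
import re

def expand_bound(atom: str, n: int, limit: int = 255) -> str:
    """Rewrite atom{n} as chained chunks of at most atom{limit}."""
    full, rest = divmod(n, limit)
    parts = [f"{atom}{{{limit}}}"] * full
    if rest:
        parts.append(f"{atom}{{{rest}}}")
    return "".join(parts)

print(expand_bound("A", 2000))  # A{255} seven times, then A{215}
```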

Tim




Re: [SQL] what does this do

2010-06-10 Thread Tim Landscheidt
John  wrote:

> I have a piece of python code that excutes a SQL statement:

> apitempCur.execute("select * from jfcs_balancedue('%s') f(enrolleeid varchar,
> course_cost decimal, paid_amt decimal)" % (enrollIds,));

> The "enrollids" is a list of primary keys and the "jfcs_balancedue" is a user
> defined function.  What I don't understand is the "f(enrolleeid
> varchar, ...)"   I have no idea what it's for?  Would some kind soul educate
> me.

You can omit the "AS" from "table_name AS alias
(column_alias, ...)", but AFAIK PostgreSQL doesn't support
specifying a data type for each column. Which DBMS is this
code used for?

Tim




Re: [SQL] what does this do

2010-06-10 Thread Tim Landscheidt
Richard Broersma  wrote:

>>> I have a piece of python code that excutes a SQL statement:

>>> apitempCur.execute("select * from jfcs_balancedue('%s') f(enrolleeid 
>>> varchar,
>>> course_cost decimal, paid_amt decimal)" % (enrollIds,));

>>> The "enrollids" is a list of primary keys and the "jfcs_balancedue" is a 
>>> user
>>> defined function.  What I don't understand is the "f(enrolleeid
>>> varchar, ...)"   I have no idea what it's for?  Would some kind soul educate
>>> me.

>> You can omit the "AS" from "table_name AS alias
>> (column_alias, ...)", but AFAIK PostgreSQL doesn't support
>> specifying a data type for each column. Which DBMS is this
>> code used for?

> Well, it doesn't support data-types in the alias declaration for all
> set returning relations with the exception of a set returning function
> (i.e. store procedure).  The from clause has a give-away that this is
> a set returning function: "jfcs_balancedue('%s')" since it has a
> parameter.

> Notice the function name section taken from the from clause:

> http://www.postgresql.org/docs/9.0/static/sql-select.html#SQL-FROM

Another lesson learned :-). But it applies strictly to
*record* returning functions, doesn't it? Because I had tested
generate_series() prior to my reply:

| tim=# SELECT ID FROM generate_series(1, 2) AS G(ID);
|  id
| ----
|   1
|   2
| (2 Zeilen)

| tim=# SELECT ID FROM generate_series(1, 2) AS G(ID INT);
| ERROR:  a column definition list is only allowed for functions returning 
"record"
| ZEILE 1: SELECT ID FROM generate_series(1, 2) AS G(ID INT);
| ^
| tim=#

but didn't follow the (now obvious) clue ...

Tim




Re: [SQL] error on line 1 trying to execute a script using psql

2010-06-21 Thread Tim Landscheidt
Steven Dahlin  wrote:

> I am attempting to generate a temporary function to execute some dml with
> the following script:

> create or replace function setuplicense() returns integer as $$
> declare
>   hwcustid  integer := 0;
>   retvalinteger := 0;
> begin
>   insert into license.customer
>   ( customer_id ) values ( hwcustid );
>   commit;
>   return retval;
> end;
> $$
> LANGUAGE plpgsql;

> select setuplicense();

> When running this with psql I get the following:

> Error: syntax error at or near "create"

> Does anyone have a suggestion as to what is causing this?

Your editor prepends the file with a byte-order mark ("BOM")
that PostgreSQL chokes on (bug #5398). This will be fixed in
9.0 (cf.
http://developer.postgresql.org/pgdocs/postgres/release-9-0.html#AEN99331);
until then you either have to configure your editor not to
save the BOM or chop off the first three bytes yourself
(with tail, sed, Perl & Co.).
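The chop-off can be done in a few lines of Python as well (a sketch; the script content is invented):

```python
import codecs

# A UTF-8 script file saved with a BOM, as some Windows editors do:
raw = codecs.BOM_UTF8 + b"create or replace function setuplicense() ..."

# Strip the three BOM bytes before feeding the script to psql:
if raw.startswith(codecs.BOM_UTF8):
    raw = raw[len(codecs.BOM_UTF8):]
print(raw)
```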

Tim




Re: [SQL] enforcing constraints across multiple tables

2010-06-25 Thread Tim Landscheidt
Andrew Geery  wrote:

> [...]
> My questions are:
> (A) Is there a way to check (2) above using a constraint and not a trigger?
> (B) Is there an easier way to solve this problem?  Does the complicated
> nature of the solution make the design poor?
> (C) Should I not worry about this constraint at the DB level and just
> enforce it at the application level?
> [...]

I don't know about (A) and (B), but (C) is the solution of
choice in most complex cases. Enforcing such rules at the
application level is usually *much* easier than with
constraints and triggers (you have to do it at the
application level anyway, as you probably don't want to pass
PostgreSQL's errors directly to the user), and your mind
doesn't get twisted too much. I would schedule regular tests
on the data set though to ensure that you notice problems
early on.

Tim




Re: [SQL] How to select text field as interger

2010-06-29 Thread Tim Landscheidt
Jerrel Kemper  wrote:

> My table consist off the following fields

> CREATE TABLE test
> (
>   id bigserial NOT NULL,
>   name character varying(100),
>CONSTRAINT logs_pkey PRIMARY KEY (log_id)
> )

> with value :

> ID  Name
> 1   001
> 2   01
> 3   1
> 4   11

> Select name from test where name = '1' ;

> results in :

> Name
>   1

> If the 'Name' field was define as a serial the result set ought to be

> Name
>   001
> 01
>   1

> Question?

> How can I have the same result set as above

> Name
>   001
> 01
>   1

> with the name field defined as character varying.

Ah! Your eccentric formatting and the reference to "serial"
had me wondering if the indentation had any meaning. You can
select the rows where the "Name" field is numerically equal
to 1 by casting it to an integer and then comparing it to 1,
i. e.:

| SELECT Name FROM Test WHERE Name::INT = 1;

Tim




Re: [SQL] How to find events within a timespan to each other?

2010-07-07 Thread Tim Landscheidt
(anonymous) wrote:

> is there a way to find events in a log that happen within a
> certain timespan to each other?

> Log is like this
> event_id       integer   not null   default 0  primary key
> event_type_id  integer   not null   default
> user_id        integer   not null   default 0
> event_ts       timestamp(0)

> I need every event of a type that happened more often than
> one time within 5 minutes of another one of the same user.

> 17  3  1 ... 12:00
> 18  2  ...
> 19  3  1 ... 13:03
> 20  3  2 ... 13:03
> 21  3  1 ... 13:04
> 22  2  ...
> 23  3  1 ... 13:05
> 24  2  1 ... 13:06
> E.g. the checked event_typ_id may be 3 then the result
> should be line 19, 21, 23

You can use window functions and check whether the preceding
or following timestamp is within range:

| tim=# SELECT event_id, event_type_id, user_id, event_ts
| tim-#   FROM (SELECT event_id,
| tim(#    event_type_id,
| tim(#        user_id,
| tim(#event_ts,
| tim(#LAG(event_ts) OVER (PARTITION BY user_id, event_type_id 
ORDER BY event_ts ASC)  AS PrecedingTimestamp,
| tim(#LAG(event_ts) OVER (PARTITION BY user_id, event_type_id 
ORDER BY event_ts DESC) AS FollowingTimestamp
| tim(#   FROM TestEvents) AS SubQuery
| tim-#   WHERE (PrecedingTimestamp IS NOT NULL AND event_ts - 
PrecedingTimestamp <= '5 minutes') OR
| tim-# (FollowingTimestamp IS NOT NULL AND FollowingTimestamp - 
event_ts <= '5 minutes');
|  event_id | event_type_id | user_id |  event_ts
| --+---+-+-
|23 | 3 |   1 | 2010-01-01 13:05:00
|21 | 3 |   1 | 2010-01-01 13:04:00
|    19 | 3 |   1 | 2010-01-01 13:03:00
| (3 Zeilen)

| tim=#

Tim




Re: [SQL] grouping subsets

2010-07-22 Thread Tim Landscheidt
Richard Huxton  wrote:

>>> What I want to get is the values grouped by "subset", where a subset is a 
>>> set of rows with identical column until the colum changes.
>>> Is there a way to get

>>> | 2 | B |
>>> | 4 | C |
>>> | 4 | B |
>>> | 3 | D |

>>> by SQL only?

>> I think, the problem is that there are 2 identical groups. I think, you
>> can write a pl/pgsql-proc, selecting all ordered by the date-field and
>> walking through the result to do the grouping, checking if the 2nd
>> column is different from the previous.

>> With plain SQL it's maybe possible too, but i don't know how ...

> It should be do-able in 8.4 onwards, look into windowing
> functions. In particular the lag() function:

> SELECT
> mycode,
> mydate,
> lag(mycode) OVER (ORDER BY mydate) AS prev_code
> FROM
> mytable
> ORDER BY mydate;

> It should be possible to use that as a subquery with an
> outer query that compares mycode=prev_code to get a run
> length.

Hmmm. Can the outer query be done without using "WITH
RECURSIVE"?

Tim




Re: [SQL] grouping subsets

2010-07-22 Thread Tim Landscheidt
Rainer Stengele  wrote:

> yes, the date is always incremented - but anyway the date
> column is not really the point! Actually the first two
> columns are relevant. I want them grouped together as
> indicated, adding up column 1 in the blocks with identical
> second column, but not adding up over all the rows.
> [...]

If the date column wasn't relevant, how would you group the
first two columns?

Tim




Re: [SQL] Aggregates (last/first) not behaving

2010-07-22 Thread Tim Landscheidt
Wes Devauld  wrote:

> I believe I lost the flavour of what I'm doing when I constructed this
> example.  I'm not interested in the timepoint as much as the value that is
> attached to it.  I need to be able to find the last chronological record for
> a given day.

> I can get the value for which I am looking in two steps:

> select max(t) as t into table last_of_day from test group by extract(day
> from t);
> select last_of_day.t, test.v from last_of_day, test where last_of_day.t =
> test.t;

> I was fairly happy when first() and last() were discovered, as these two
> steps could be merged, until the table grows too large and the query planner
> decides to sort the results before they are aggregated.

> I was searching for a way to keep using last() and keeping the extraction to
> a single step, although the more I fight with it, the less I think that it
> is worth it.  If you have any further suggestions, I would appreciate
> hearing them.
> [...]

Perhaps you could /concisely/ rephrase your problem. Finding
the first/last value per group with/without window functions
is a common problem, and there are lots of solutions to it.
But few people will wade through lots of text to find out
what's bothering you.

  For example, you can query the "last" values per day along
the lines of (untested):

| SELECT EXTRACT(day FROM t), v
|   FROM test
|   WHERE t IN (SELECT MAX(t) FROM test
| GROUP BY EXTRACT(day FROM t));

Obviously, this doesn't "keep using last()", so I don't know
whether it's good or bad for you.

Tim




[SQL] Generating Rows from a date and a duration

2010-09-03 Thread Tim Schumacher
Hi List,

I'm kind of stuck: I have a timestamp representing a
startdate and a duration in days, and I want to expand this so I have a
row for every day beginning at the startdate. I have created an
example below; maybe I'm approaching it from the wrong angle and you can come
up with some better ideas:

BEGIN TRANSACTION;

CREATE TABLE example
(
   id serial NOT NULL,
   startdate timestamp without time zone,
   duration int_unsigned NOT NULL,
   CONSTRAINT pq_example_id PRIMARY KEY (id)
) WITH (OIDS=FALSE)
;

insert into example(id,startdate,duration) values (1,'2010-09-03',4);
insert into example(id,startdate,duration) values (2,'2010-09-03',6);

CREATE OR REPLACE FUNCTION bloat_duration(IN id integer,
  IN startdate  timestamp
without time zone,
  IN duration integer,
  OUT id integer,
  OUT duration_date date)
RETURNS SETOF RECORD AS
$$
BEGIN
  RETURN QUERY SELECT
id,to_date(to_char(startdate,'YYYY-MM-DD'),'YYYY-MM-DD')+s.a AS
stockdate FROM generate_series(0,duration-1) AS s(a);
END;
$$
LANGUAGE 'plpgsql';

-- This works, but not what I want
SELECT * FROM bloat_duration(1,'2010-09-03',4);

-- This does not work

SELECT * FROM example AS ex
INNER JOIN bloat_duration(ex.id,ex.startdate,ex.duration) AS bd ON bd.id
= ex.id

ROLLBACK TRANSACTION;


greetings

Tim





Re: [SQL] Generating Rows from a date and a duration

2010-09-08 Thread Tim Schumacher
Hi Brian, Hi List,


At Sat, 4 Sep 2010 09:20:53 -0400,
Brian Sherwood wrote:

> On Fri, Sep 3, 2010 at 5:40 AM, Tim Schumacher <
> tim.daniel.schumac...@gmail.com> wrote:
> 
> > I'm kinda stuck situation, I have a timestamp which resambles a
> > startdate and a duration in days and I want to bloat this, so I have a
> > row for every day beginning from the startdate. I have created an
> > example bellow, maybe I'm doing it on the wrong angle and you can come
> > up with some better ideas:
> >
> > BEGIN TRANSACTION;
> >
> > CREATE TABLE example
> > (
> >   id serial NOT NULL,
> >   startdate timestamp without time zone,
> >   duration int_unsigned NOT NULL,
> >   CONSTRAINT pq_example_id PRIMARY KEY (id)
> > ) WITH (OIDS=FALSE)
> > ;
> >
> > insert into example(id,startdate,duration) values (1,'2010-09-03',4);
> > insert into example(id,startdate,duration) values (2,'2010-09-03',6);
> >
> > CREATE OR REPLACE FUNCTION bloat_duration(IN id integer,
> >  IN startdate  timestamp
> > without time zone,
> >  IN duration integer,
> >  OUT id integer,
> >  OUT duration_date date)
> >RETURNS SETOF RECORD AS
> > $$
> > BEGIN
> >  RETURN QUERY SELECT
> > id,to_date(to_char(startdate,'YYYY-MM-DD'),'YYYY-MM-DD')+s.a AS
> > stockdate FROM generate_series(0,duration-1) AS s(a);
> > END;
> > $$
> > LANGUAGE 'plpgsql';
> >
> > -- This works, but not what I want
> > SELECT * FROM bloat_duration(1,'2010-09-03',4);
> >
> > -- This does not work
> >
> > SELECT * FROM example AS ex
> > INNER JOIN bloat_duration(ex.id,ex.startdate,ex.duration) AS bd ON bd.id
> > = ex.id
> >
> > ROLLBACK TRANSACTION;

> Take a look at the generate_series function in the "set returning functions"
> section of the manual.
> http://www.postgresql.org/docs/8.4/interactive/functions-srf.html

As you can see in my example, I'm already using it, and this is my
dilemma: I cannot pass the values of the FROM table as
parameters to my function.
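For what it's worth, one way around that limitation (no LATERAL-style calls of a function against FROM rows in 8.x) is to expand the rows with a recursive CTE instead of a set-returning function. A sketch, with SQLite standing in for PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE example (id INTEGER PRIMARY KEY,"
            " startdate TEXT, duration INTEGER)")
cur.executemany("INSERT INTO example VALUES (?, ?, ?)",
                [(1, '2010-09-03', 4), (2, '2010-09-03', 2)])
# Each seed row fans out into one row per day of its duration.
rows = cur.execute("""
    WITH RECURSIVE days(id, day, remaining) AS (
      SELECT id, startdate, duration - 1 FROM example
      UNION ALL
      SELECT id, date(day, '+1 day'), remaining - 1
        FROM days WHERE remaining > 0)
    SELECT id, day FROM days ORDER BY id, day
""").fetchall()
print(rows)
```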

Greetings

Tim

-- 
Compassion -- that's the one things no machine ever had.  Maybe it's
the one thing that keeps men ahead of them.
-- McCoy, "The Ultimate Computer", stardate 4731.3



Re: [SQL] Duplicates Processing

2010-10-08 Thread Tim Landscheidt
Gary Chambers  wrote:

> I've been provided a CSV file of parts that contains duplicates of
> properties (e.g. resistors have a wattage, tolerance, and temperature
> coefficient property) of those parts that differ by a manufacturer
> part number.  What I'd like to do is to process this file and, upon
> encountering one of the duplicates, take that part with its new part
> number and move it to a part substitutes table.  It seems like it
> should be pretty simple, but I can't seem to generate a query or a
> function to accomplish it.  I'd greatly appreciate any insight or
> assistance with solving this problem.  Thank you very much in advance.

You can - for example - create a query with a call to
ROW_NUMBER() and then process the matching rows (untested):

| INSERT INTO substitutes ([...])
|   SELECT [...] FROM
| (SELECT *,
| ROW_NUMBER() OVER (PARTITION BY wattage, tolerance, temperature
| ORDER BY part_number) AS RN
|  FROM parts) AS SubQuery
|   WHERE RN > 1;

| DELETE FROM parts
| WHERE primary_key IN
|   (SELECT primary_key FROM
| (SELECT *,
| ROW_NUMBER() OVER (PARTITION BY wattage, tolerance, temperature
|ORDER BY part_number) AS RN
|      FROM parts) AS SubQuery
|WHERE RN > 1);

Tim




Re: [SQL] How to remove a set of characters in text-columns ?

2011-06-30 Thread Tim Landscheidt
(anonymous) wrote:

> how can I remove a set of characters in text-columns ?
> Say I'd like to remove { } ( ) ' " , ; . : !
> Of course I can chain replace ( replace ( replace ( replace
> ( ... , '' )  and replace the chars one by one against
> an empty string ''.

> There might be a more elegant way.
> Is there ?

Besides the regexp_replace() solution mentioned by Charlie
and Steve, you can also use TRANSLATE():

| tim=# SELECT TRANSLATE('a{b''c"d!f', '{}()''",;.:!', '');
|  translate
| ---
|  abcdf
| (1 Zeile)

| tim=#
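For comparison, the same one-pass deletion in Python (an illustration added here, not part of the original reply):

```python
# Delete the character set { } ( ) ' " , ; . : ! in a single pass,
# analogous to SQL's TRANSLATE(col, '{}()''",;.:!', '').
table = str.maketrans("", "", "{}()'\",;.:!")
cleaned = "a{b'c\"d!f".translate(table)
print(cleaned)
```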

Tim




Re: [SQL] Help with regexp-query

2011-08-02 Thread Tim Landscheidt
Johann Spies  wrote:

> I am struggling a bit to do the following type of update in a table.

> I want the content of a field updated like this:

> Original:
> '0894396e-16bf-4e63-aa52-97fe7031eec9||50a6b47c-f69c-414d-bcb6-14bbe403de5f|||'

> After update:
>  '|0894396e-16bf-4e63-aa52-97fe7031eec9|50a6b47c-f69c-414d-bcb6-14bbe403de5f|'

> in other words: change all multiple adjacent occurences of '|' to only 1.

> I have tried the following query but it fails:

> select id, regexp_replace(category, (E'\|{2,}'), E'\|') as category from
> akb_articles limit 100

> This ends with 'ERROR: invalid regular expression: quantifier operand
> invalid'.

> I would apreciate some help with this one please.

You need to double the backslashes (e. g. "E'\\|{2,}'");
otherwise the parser will "eat" the first backslash and pass
just "|{2,}" as the second argument to regexp_replace().

Tim




Re: [SQL] Confused about writing this stored procedure/method.

2011-08-24 Thread Tim Landscheidt
(anonymous) wrote:

> [...]
> I intend pass the number of the month(say 1 for January , 2 for February
> etc>) as the parameter to this method and return a number that corresponds
> to the index of the month in the new order , say I pass 8 for August , I
> return 11 , in order to get the  index of August in he financial year
> calendar somewhat like this
> [...]

You don't need any function for that, just use "ORDER BY
Month < 4, Month".
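The trick can be checked quickly (SQLite here; a fiscal year starting in April is assumed, as the "Month < 4" cut implies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE months (month INTEGER)")
cur.executemany("INSERT INTO months VALUES (?)",
                [(m,) for m in range(1, 13)])
# "month < 4" is 0 for April..December and 1 for January..March,
# so those three sort last; within each group, "month" orders normally.
ordered = [m for (m,) in cur.execute(
    "SELECT month FROM months ORDER BY month < 4, month")]
print(ordered)
```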

Tim




Re: [SQL] need magic to shuffle some numbers

2011-08-24 Thread Tim Landscheidt
(anonymous) wrote:

> there is a table that has among others a integer primary key
> "id" and another integer column "prio" as well as an integer
> "group_id".

> I'd like to invert the values of the prio-column for one of the groups.
> The prio numbers start with 3 and there are 1159 different
> prios in this group.
> At least every value appeares only once.   :)

> Is there an elegant way to switch the prio values around so
> that every record with the first prio gehts the last and
> vice versa?
> Then the records with the second smallest prio get the
> second-to-last biggest value and v.v.

If you just want to reverse the priorities:

| UPDATE TestTable
|   SET prio = (SELECT MIN(prio) FROM TestTable WHERE group_id = 'testgroup') +
|  (SELECT MAX(prio) FROM TestTable WHERE group_id = 'testgroup') -
|  prio
|   WHERE group_id = 'testgroup';
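A quick check of that reversal (hypothetical table and group names; SQLite stands in for PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY,"
            " group_id TEXT, prio INTEGER)")
cur.executemany("INSERT INTO tasks VALUES (?, ?, ?)",
                [(1, 'g', 3), (2, 'g', 7), (3, 'g', 9), (4, 'other', 1)])
# min + max - prio maps the smallest prio to the largest and vice versa;
# rows outside the group are untouched.
cur.execute("""
    UPDATE tasks
      SET prio = (SELECT MIN(prio) FROM tasks WHERE group_id = 'g')
               + (SELECT MAX(prio) FROM tasks WHERE group_id = 'g')
               - prio
      WHERE group_id = 'g'
""")
flipped = cur.execute("SELECT id, prio FROM tasks ORDER BY id").fetchall()
print(flipped)
```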

Tim




Re: [SQL] Add one column to another

2011-08-25 Thread Tim Landscheidt
(anonymous) wrote:

> I have to deal with a table which contains:

> first_name
> surname
> email1
> email2

> ... and I would like to create a view which combines both email columns thus:

> first_name
> surname
> email

> It looks simple but I can't think of an obvious query.

Try:

| SELECT first_name, surname, email1 AS email FROM testtable WHERE email1 IS 
NOT NULL UNION ALL
| SELECT first_name, surname, email2 AS email FROM testtable WHERE email2 IS 
NOT NULL;
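The same UNION ALL, exercised end to end (sample names invented; SQLite used for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE people (first_name TEXT, surname TEXT,"
            " email1 TEXT, email2 TEXT)")
cur.executemany("INSERT INTO people VALUES (?, ?, ?, ?)",
                [('Ann', 'Ash', 'a@x', 'a@y'),
                 ('Bob', 'Beech', 'b@x', None)])
# One output row per non-NULL email column.
rows = cur.execute("""
    SELECT first_name, surname, email1 AS email
      FROM people WHERE email1 IS NOT NULL
    UNION ALL
    SELECT first_name, surname, email2 AS email
      FROM people WHERE email2 IS NOT NULL
    ORDER BY first_name, email
""").fetchall()
print(rows)
```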

Tim




Re: [SQL] Window function sort order help

2011-09-16 Thread Tim Landscheidt
Dianna Harter  wrote:

> Having trouble getting a window function to sort correctly.

> Given this data

> consumer_id | move_date  | history_timestamp
> ------------+------------+----------------------------
>12345| 2008-01-05 | 2007-12-11 06:02:26.842171
>12345| 2008-02-29 | 2008-02-05 07:22:38.04067
>12345| 2008-02-29 | 2008-07-11 09:03:42.44044
>23456| 2009-01-01 | 2008-11-12 07:33:32.656658 <--
>23456| 2009-01-28 | 2008-11-14 01:57:40.264335
>23456| 2009-01-01 | 2008-12-04 17:14:20.27 <--
>23456| 2009-01-01 | 2008-12-31 00:33:37.204968
>23456| 2009-01-01 | 2011-06-08 04:16:41.646521
>34567| 2010-05-07 | 2010-06-08 05:14:43.842172

> I'm trying to get the timestamp when the consumer last changed their 
> move_date.  (Notice consumer_id 23456 set their move_date to 2009-01-01 then 
> changed and then changed it back.  In the end, I want the timestamp from when 
> they changed it to 2009-01-01 the second time.)

> My thought was to do an intermediary step to find the timestamp for each time 
> it switched.  From there I can grab the max(timestamp) for each consumer.

> [...]

> Any suggestions to get the order by to occur first then the partition by or 
> maybe there another approach that I could use?

If I understand the question correctly, try:

| SELECT DISTINCT ON (consumer_id) consumer_id, move_date, history_timestamp
|FROM (SELECT consumer_id, move_date, history_timestamp,
| LAG(move_date) OVER (PARTITION BY consumer_id ORDER BY 
consumer_id, history_timestamp) AS previous_move_date
| FROM consumer_hist) AS SubQuery
|WHERE move_date IS DISTINCT FROM previous_move_date
|ORDER BY consumer_id, history_timestamp DESC;
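The same idea against the sample data, with ROW_NUMBER() emulating PostgreSQL's DISTINCT ON (SQLite 3.25+ for window functions; timestamps shortened):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE hist (consumer_id INTEGER,"
            " move_date TEXT, ts TEXT)")
cur.executemany("INSERT INTO hist VALUES (?, ?, ?)", [
    (23456, '2009-01-01', '2008-11-12 07:33:32'),
    (23456, '2009-01-28', '2008-11-14 01:57:40'),
    (23456, '2009-01-01', '2008-12-04 17:14:20'),
    (23456, '2009-01-01', '2008-12-31 00:33:37'),
    (23456, '2009-01-01', '2011-06-08 04:16:41'),
])
# Keep only rows where move_date differs from the previous row, then
# take the latest such change per consumer.
rows = cur.execute("""
    SELECT consumer_id, move_date, ts FROM (
      SELECT consumer_id, move_date, ts,
             ROW_NUMBER() OVER (PARTITION BY consumer_id
                                ORDER BY ts DESC) AS rn
        FROM (SELECT consumer_id, move_date, ts,
                     LAG(move_date) OVER (PARTITION BY consumer_id
                                          ORDER BY ts) AS prev
                FROM hist)
        WHERE prev IS NULL OR move_date <> prev)
      WHERE rn = 1
""").fetchall()
print(rows)
```

The row returned is the second change to 2009-01-01, as the question asked.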

Tim




Re: [SQL] Passing function parameters to regexp_replace

2011-09-17 Thread Tim Landscheidt
Leif Biberg Kristensen  wrote:

>> UPDATE sources SET source_text = regexp_replace(source_text,
>> E'n="(.*?)$1(.*?)"', E'n="\\1$2\\2"', 'g') where source_text like
>> '%n="%$2%">%';

> Sorry, I pasted a literal replacement, and substituted the parameters by hand.
> The expression should of course be

> UPDATE sources SET source_text = regexp_replace(source_text,
> E'n="(.*?)$1(.*?)"', E'n="\\1$2\\2"', 'g') where source_text like
> '%n="%$1%">%'

Try:

> UPDATE sources SET source_text = regexp_replace(source_text,
> CONCAT(E'n="(.*?)', $1, E'(.*?)"'), CONCAT(E'n="\\1', $2, E'\\2"'), 'g') where 
> source_text like
> CONCAT('%n="%', $1, '%">%')

If $1 and $2 (can) include meta characters, you have to
escape them properly.

  Please consider that regexp_replace() uses POSIX Regular
Expressions while LIKE uses a different syntax. If possible,
I would replace the LIKE expression with its "~" equivalent
so chances of confusion are minimized.

Tim




Re: [SQL] Passing function parameters to regexp_replace

2011-09-17 Thread Tim Landscheidt
Leif Biberg Kristensen  wrote:

> On Saturday 17. September 2011 19.07.03 Tim Landscheidt wrote:
>> Leif Biberg Kristensen  wrote:

>> > UPDATE sources SET source_text = regexp_replace(source_text,
>> > E'n="(.*?)$1(.*?)"', E'n="\\1$2\\2"', 'g') where source_text like
>> > '%n="%$1%">%'

>> Try:
>> > UPDATE sources SET source_text = regexp_replace(source_text,
>> > CONCAT(E'n="(.*?)', $1, E'(.*?)"'), CONCAT(E'n="\\1', $2, '\\2"'), 'g')
>> > WHERE source_text LIKE CONCAT('%n="%', $1, '%">%')

> The function CONCAT doesn't exist in PostgreSQL.

Eh, yes, of course.

> And I can't get it to work
> with EXECUTE and standard concatenation either:
> [...]

What do you need the EXECUTE for? Just drop it.

Tim




Re: [SQL] how to calculate differences of timestamps?

2011-09-26 Thread Tim Landscheidt
(anonymous) wrote:

> How could I calculate differences of timestamps in a log-table?

> Table log ( user_id integer, login boolean, ts timestamp )

> So login = true would be a login-event and login = false a logout.
> Is there a way to find the matching login/logout to
> calculate the difference?

This is similar to the question Dianna asked some time ago:

| SELECT user_id,
|        prev_ts AS login_ts,
|        ts AS logout_ts
|   FROM (SELECT user_id,
|                LAG(login) OVER (PARTITION BY user_id ORDER BY ts) AS prev_login,
|                LAG(ts) OVER (PARTITION BY user_id ORDER BY ts) AS prev_ts,
|                login,
|                ts
|           FROM log) AS SubQuery
|   WHERE prev_login AND NOT login;

> Or is there a better table "design" to do this?

That depends on your requirements and your application de-
sign. The query above requires a full table scan which may
kill performance in some circumstances.

  Of course, any design has to deal with the possibility of
an event not having been logged, multiple logins, etc. The
query above just forms pairs based on temporal proximity.
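The pairing-by-proximity idea can be sketched outside SQL as well (an illustration with hypothetical log data; ts as epoch seconds):

```python
# (user_id, login, ts) events, ordered by ts within each user --
# the same ordering the window function relies on.
log = [
    (1, True, 100), (1, False, 160),
    (1, True, 200), (1, False, 260),
    (2, True, 500), (2, False, 530),
]

sessions = []
prev = {}  # user_id -> (login, ts) of that user's previous event
for user_id, login, ts in log:
    p = prev.get(user_id)
    if p is not None and p[0] and not login:  # previous was login, now logout
        sessions.append((user_id, p[1], ts, ts - p[1]))
    prev[user_id] = (login, ts)

# (user_id, login_ts, logout_ts, duration)
print(sessions)  # [(1, 100, 160, 60), (1, 200, 260, 60), (2, 500, 530, 30)]
```

As in the SQL version, unmatched events (double logins, missing logouts) simply produce no pair.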

Tim




Re: [SQL] postgres sql help

2011-10-17 Thread Tim Landscheidt
Harald Fuchs  wrote:

>> hi, i am fairly new in postgresql, so if anyone can help me would be great
>> if i simply do:

>> select ver_no
>> from version
>> order by ver_no

>> the result will be something like this:

>> .1.3.1
>> .1.3.2.5.
>> .1.4.1.7.12
>> .1.4.11.14.7.
>> .1.4.3.109.1.
>> .1.4.8.66.

>> so as you can see first 3 lines are ok, but how to make 1.4.3.109 come
>> before 1.4.11 because the third level "3" is smaller than "11".

> The query

>   SELECT ver_no
>   FROM version
>   ORDER BY string_to_array(ver_no, '.', '')::int[]

> should do what you want.

Really neat! :-) For pre-9.1, you have to "ORDER BY
string_to_array(TRIM('.' FROM ver_no), '.')::int[];",
though.
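The ORDER BY string_to_array(...)::int[] trick amounts to sorting on a tuple of integers; the same idea, sketched with the sample values from the question:

```python
versions = [".1.3.1", ".1.3.2.5.", ".1.4.1.7.12", ".1.4.11.14.7.",
            ".1.4.3.109.1.", ".1.4.8.66."]

def key(v):
    # TRIM('.' FROM ver_no), then string_to_array(..., '.')::int[]:
    # strip surrounding dots, split, and compare component-wise as ints.
    return [int(part) for part in v.strip(".").split(".")]

for v in sorted(versions, key=key):
    print(v)
# .1.4.3.109.1. now sorts before .1.4.11.14.7., as requested.
```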

Tim




[SQL] Getting a list of a table's attributes that are sortable sorted by uniqueness

2011-12-29 Thread Tim Landscheidt
Hi,

I frequently use pg_dump to dump databases and compare them
with diff.  To get rid of most "false positives", I'd like
to patch pg_dump to sort the table so that its dumped order
isn't changed more than necessary by insertions & Co.

  So I'm looking for a query that will return a list of a
table's attributes that are sortable (e. g. no XML fields)
and sorted by "uniqueness", i. e. first attributes repre-
senting the primary key, then other unique keys, then the
rest.

  Before I dive into the depths of PostgreSQL's system cata-
logues, has anyone already solved this problem?

TIA,
Tim




Re: [SQL] lo_import

2012-01-03 Thread Tim Landscheidt
Maurício Cruz  wrote:

> I'm trying to use lo_import to import a file into my
> database. If I execute it from the postgres running on my local
> machine, it works perfectly, but if I do it on the postgres running
> on the server, it says "No such file or directory".

> I guess postgres only sees files on the machine it is running on,
> and not across the network...
> Will I have to upload the file to the server and then use
> lo_import? Is there any other way?

Create a large object and then write to it?  What driver are
you using?

Tim




Re: [SQL] Fwd: i want small information regarding postgres

2012-01-12 Thread Tim Landscheidt
Alvaro Herrera  wrote:

>> would someone with the appropriate authority please unsubscribe this
>> person's email address from this list so we don't all get a bounce message
>> after every email we send to the list?  Thanks.

> Just did it.  In the future, please email sysadm...@postgresql.org with
> mailing list complaints, as I don't read this list (or indeed many
> others)

I had reported this to pgsql-sql-ow...@postgresql.org.
Where does that end up?

Tim




[SQL] Token separation

2012-01-15 Thread Tim Landscheidt
Hi,

I just tried to input a hexadecimal number in PostgreSQL
(8.4) and was rather surprised by the result:

| tim=# SELECT 0x13;
|  x13
| -
|0
| (1 Zeile)

| tim=# SELECT 0abc;
|  abc
| -
|0
| (1 Zeile)

| tim=#

The documentation says:

| A token can be a key word, an identifier, a quoted identifi-
| er, a literal (or constant), or a special character symbol.
| Tokens are normally separated by whitespace (space, tab,
| newline), but need not be if there is no ambiguity (which is
| generally only the case if a special character is adjacent
| to some other token type).

Is this behaviour really conforming to the standard?  Even
stranger is what MySQL (5.1.59) makes out of it:

| mysql> SELECT 0x40;
| +--+
| | 0x40 |
| +--+
| | @|
| +--+
| 1 row in set (0.00 sec)

| mysql> SELECT 0abc;
| ERROR 1054 (42S22): Unknown column '0abc' in 'field list'
| mysql>
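The "token ends when the next character no longer fits" behaviour can be imitated with a greedy, maximal-munch tokenizer (an illustration only, not PostgreSQL's actual flex rules):

```python
import re

# A number is digits with an optional fraction and exponent; an
# identifier starts with a letter.  There is no hexadecimal rule,
# so '0x13' splits into the number '0' and the identifier 'x13'.
TOKEN = re.compile(r"\d+(?:\.\d+)?(?:[eE][+-]?\d+)?|[A-Za-z_]\w*")

def tokenize(s):
    return TOKEN.findall(s)

print(tokenize("0x13"))  # ['0', 'x13']
print(tokenize("1E1"))   # ['1E1'] -- the exponent rule absorbs 'E1'
print(tokenize("1D1"))   # ['1', 'D1']
```

This also reproduces the SELECT 1D1/1E1 asymmetry discussed later in the thread: only 'E' participates in the number rule.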

Tim




Re: [SQL] Token separation

2012-01-16 Thread Tim Landscheidt
Tom Lane  wrote:

>> [ "0x13" is lexed as "0" then "x13" ]

>> Is this behaviour really conforming to the standard?

> Well, it's pretty much the universal behavior of flex-based lexers,
> anyway.  A token ends when the next character can no longer sensibly
> be added to it.

I know, but - off the top of my head - in most other lan-
guages "0abc" will then give a syntax error.

> Possibly the documentation should be tweaked to mention the
> number-followed-by-identifier case.

Especially if you consider such cases:

| tim=# SELECT 1D1; SELECT 1E1; SELECT 1F1;
|  d1
| 
|   1
| (1 Zeile)

|  ?column?
| --
|10
| (1 Zeile)

|  f1
| 
|   1
| (1 Zeile)

| tim=#

I don't think it's common to hit this, but the documentation
surely could use a caveat.  I will write something up and
submit it to -docs.

Thanks,
Tim




Re: [SQL] Getting a list of a table's attributes that are sortable sorted by uniqueness

2012-01-26 Thread Tim Landscheidt
I wrote:

> I frequently use pg_dump to dump databases and compare them
> with diff.  To get rid of most "false positives", I'd like
> to patch pg_dump to sort the table so that its dumped order
> isn't changed more than necessary by insertions & Co.

>   So I'm looking for a query that will return a list of a
> table's attributes that are sortable (e. g. no XML fields)
> and sorted by "uniqueness", i. e. first attributes repre-
> senting the primary key, then other unique keys, then the
> rest.

>   Before I dive into the depths of PostgreSQL's system cata-
> logues, has anyone already solved this problem?

Progress report: The query:

| SELECT attname,
|attnum,
|keyrank,
|columnrankinkey
| FROM pg_attribute
| LEFT JOIN
|   (SELECT DISTINCT ON (tableid,
|columnnr) indrelid as tableid,
|  indkey [subscript] AS columnnr,
| CASE
| WHEN indisprimary THEN 0
| WHEN indisunique THEN 1
| ELSE 2
| END as keyrank,
| subscript as columnrankinkey
|FROM
|  (SELECT indrelid,
|  indkey,
|  generate_subscripts(indkey, 1) as subscript,
|  indisprimary,
|  indisunique
|   FROM pg_index
|   ORDER BY indrelid,
|indkey,
|indisprimary DESC, indisunique DESC) AS s
|ORDER BY tableid, columnnr, CASE
|WHEN indisprimary THEN 0
|WHEN indisunique THEN 1
|ELSE 2
|END, columnrankinkey) AS s2 ON attrelid = tableid
| AND attnum = columnnr
| WHERE attrelid = 'tablename'::regclass
|   AND NOT attisdropped
|   AND attnum > 0
| ORDER BY keyrank,
|  columnrankinkey,
|  attnum;

does almost what I want except:

- Attributes that can't be sorted (XML) aren't skipped, and
- "UNIQUE(A, B)" and "UNIQUE(C, D)" would give "A, C, B, D"
  (untested) so the "rank" of a non-primary key has yet to
  be included.

Stay tuned.

Tim
(looking forward to "UNNEST ... WITH ORDINALITY")




Re: [SQL] time interval math

2012-02-08 Thread Tim Landscheidt
"Edward W. Rouse"  wrote:

> Hehe, thanks, I played around and ended up with this:

> round(SUM(extract('epoch' from (time_out - time_in

> I will have to do the division outside of the query, but that's really a
> minor issue.

You can always use subqueries.

>  Knowing the total in seconds was the big roadblock. And
> converting back is easier (a / 3600 + ":" + (a % 3600) / 60 + ":" + a % 60)
> [...]

PostgreSQL has also:

| tim=# SELECT 3661::TEXT::INTERVAL;
|  interval
| --
|  01:01:01
| (1 Zeile)

| tim=#
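The same decomposition that 3661::TEXT::INTERVAL performs can be written as plain integer arithmetic (illustration only, no PostgreSQL involved):

```python
def hms(total):
    """Render a seconds total as HH:MM:SS, like casting to INTERVAL."""
    hours, rest = divmod(total, 3600)
    minutes, seconds = divmod(rest, 60)
    return "%02d:%02d:%02d" % (hours, minutes, seconds)

print(hms(3661))  # 01:01:01
```

Note the minutes come from the remainder after the hours (total % 3600), not from total / 60.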

Tim




Re: [SQL] How to split up phone numbers?

2012-02-21 Thread Tim Landscheidt
(anonymous) wrote:

> is there a way to split up phone numbers?
> I know that's a tricky topic and it depends on the national
> phone number format.
> I'm especially interested in a solution for Germany, Switzerland and Austria.

> I've got everything in a phone number column that makes hardly sense like:
> +49432156780
> 0049 4321 5678 0
> 04321/5678-0
> and so on...
> Those 3 samples are actually the same number in different notations.

> Aim would be to get a normalized number split up in 4 separate columns
> nr_nation
> nr_city
> nr_main
> nr_individual

> so I end up with
> 49   4321   5678   0 for central
> 49   4321   5678   42   for Mr. Smith

> Is this doable?

> It would be a start to at least split off nr_nation and nr_city.

I seem to faintly remember that there are some anomalies in
the German area codes, i. e. "+49 xy" may be city A, with
"+49 xyz" being city B.  Obviously, you will also not be
able to differentiate between normal and extension numbers
if they aren't properly marked up in the input.
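A very rough first pass, normalizing only the country code, might look as follows (purely an illustrative sketch; splitting off the city code would additionally need a prefix table, for exactly the reason given above):

```python
import re

def normalize_de(raw):
    """Collapse the three sample notations to one digit string with
    an explicit German country code.  Deliberately stops before the
    city-code split, which needs an area-code prefix table."""
    digits = re.sub(r"\D", "", raw)   # drop '+', '/', '-', spaces
    if digits.startswith("00"):       # 0049... -> 49...
        digits = digits[2:]
    elif digits.startswith("0"):      # 04321... -> 494321...
        digits = "49" + digits[1:]
    return digits

for raw in ("+49432156780", "0049 4321 5678 0", "04321/5678-0"):
    print(normalize_de(raw))  # all three print 49432156780
```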

Tim





Re: [SQL] Can I read the data without commit

2012-03-23 Thread Tim Landscheidt
John Fabiani  wrote:

> Yes I understand - but I want to know is it possible?  Can I read the data I
> just inserted without a commit.
> [...]

Are you talking about a web application?  Then no, you'll
have to code that yourself.

Tim




[SQL] Re: Matching and Scoring with multiple fields

2000-07-10 Thread Tim Johnson

I have a problem. Ok I'll rephrase that, a challenge.

I have a table like this:

a,b,c,d,e,f,g,h
---
2,5,3,4,4,5,2,2
1,1,1,1,1,1,1,1
5,5,5,5,5,5,5,5
3,3,2,4,5,1,1,3
1,1,5,5,5,5,1,4
1,5,5,5,4,4,2,1
5,5,5,5,1,1,1,1
1,1,1,1,5,5,5,5
(rows 8)

a to h are of type int.


I want to take input values which relate to this table say:
how do you feel about a:
how do you feel about b:
how do you feel about c:
...

and the answers will be 1 to 5.

Now I want to take those answers for my incoming a to h and scan down the
table pulling out the closest matches from best to worst. There will be
about 2000 rows in the final table and I will LIMIT the rows in blocks of 10
or so.

I can do the limiting stuff, but not the matching. My first thought was to
sum each row and match by that until I came out of my mental coma and
noticed that the last two lines have the same sum and are complete
opposites.

So, where to from here? I thought I could go through line by line selecting
with a tolerance on each value say +-1 to begin with, then again with +-2
but that will take hours and I'm not entirely sure it'll work or how I'll do
it.

I know general netiquette says that I shouldn't just dump my problem here,
but I am truly stumped by this one - if anybody can give me a pointer in the
right direction I'd greatly appreciate it.


Thanks,
Tim Johnson
---
http://www.theinkfactory.co.uk




RE: [SQL] Re: Matching and Scoring with multiple fields

2000-07-12 Thread Tim Johnson

Thanks to all of you that replied. I think Oliver's idea (which is pretty
close to Stephan's) will probably do the trick.

I will maybe look in the future to add the ability to allow users to weight
fields with more priority. So customers could number the top five most
important fields and then pick how they feel. I still worry about the
results being skewed by extreme data in certain fields but I guess there's
no way around that.

Thanks again.

Tim Johnson,
-- http://www.theinkfactory.co.uk

-Original Message-
From: Oliver Mueschke [mailto:[EMAIL PROTECTED]]
Sent: 10 July 2000 21:15
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: [SQL] Re: Matching and Scoring with multiple fields


I'm not sure, but it seems you could calculate a column like:
SELECT a,b,c,...,
abs(:a - a)+abs(:b - b)+abs(:c - c)+... AS weight
FROM t
ORDER BY weight

This way the closest matches would come first.

On Mon, Jul 10, 2000 at 07:56:08PM +0100, Tim Johnson wrote:
> I have a problem. Ok I'll rephrase that, a challenge.
>
> I have a table like this:
>
> a,b,c,d,e,f,g,h
> ---
> 2,5,3,4,4,5,2,2
> 1,1,1,1,1,1,1,1
> 5,5,5,5,5,5,5,5
> 3,3,2,4,5,1,1,3
> 1,1,5,5,5,5,1,4
> 1,5,5,5,4,4,2,1
> 5,5,5,5,1,1,1,1
> 1,1,1,1,5,5,5,5
> (rows 8)
>
> a to h are of type int.
>
>
> I want to take input values which relate to this table say:
> how do you feel about a:
> how do you feel about b:
> how do you feel about c:
> ...
>
> and the answers will be 1 to 5.
>
> Now I want to take those answers for my incoming a to h and scan down the
> table pulling out the closest matches from best to worst. There will be
> about 2000 rows in the final table and I will LIMIT the rows in blocks of
10
> or so.
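Oliver's sum-of-absolute-differences ordering (L1, "taxicab" distance) can be sketched outside SQL with the example rows from the question:

```python
rows = [
    (2, 5, 3, 4, 4, 5, 2, 2),
    (1, 1, 1, 1, 1, 1, 1, 1),
    (5, 5, 5, 5, 5, 5, 5, 5),
    (3, 3, 2, 4, 5, 1, 1, 3),
    (1, 1, 5, 5, 5, 5, 1, 4),
    (1, 5, 5, 5, 4, 4, 2, 1),
    (5, 5, 5, 5, 1, 1, 1, 1),
    (1, 1, 1, 1, 5, 5, 5, 5),
]

def closest(rows, answers):
    # abs(:a - a) + abs(:b - b) + ... AS weight ORDER BY weight --
    # lower weight means a closer match.
    return sorted(rows,
                  key=lambda r: sum(abs(x - y) for x, y in zip(answers, r)))

answers = (5, 5, 5, 5, 1, 1, 1, 1)
ranked = closest(rows, answers)
print(ranked[0])   # (5, 5, 5, 5, 1, 1, 1, 1) -- exact match, weight 0
print(ranked[-1])  # (1, 1, 1, 1, 5, 5, 5, 5) -- the "complete opposite"
```

Unlike the plain row-sum idea, the two "opposite" rows now end up at opposite ends of the ranking.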




Re: [PERFORM] [SQL] OFFSET impact on Performance???

2005-01-31 Thread Leeuw van der, Tim
Hi,

What you could do is create a table containing all the fields from your SELECT, 
plus a per-session unique ID. Then you can store the query results in there, 
and use SELECT with OFFSET / LIMIT on that table. The WHERE clause for this 
temp-results table only needs to contain the per-session unique id.

This of course gives you a new problem: cleaning stale data out of the 
temp-results table. And another new problem is that users will not see new data 
appear on their screen until somehow the query is re-run (... but that might 
even be desirable, actually, depending on how your users do their work and what 
their work is).

And of course better performance cannot be guaranteed until you try it.


Would such a scheme give you any hope of improved performance, or would it be 
too much of a nightmare?
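The scheme can be sketched end-to-end with an embedded database (sqlite3 here purely for a runnable illustration; table and column names are made up):

```python
import sqlite3

# Materialise the expensive query's results once, tagged with a
# per-session id, then page with LIMIT/OFFSET against the cache only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE temp_results "
            "(session_id TEXT, rownum INTEGER, payload TEXT)")
con.executemany(
    "INSERT INTO temp_results VALUES (?, ?, ?)",
    [("sess-1", n, "row %d" % n) for n in range(100)],
)

def page(session_id, limit, offset):
    cur = con.execute(
        "SELECT payload FROM temp_results WHERE session_id = ? "
        "ORDER BY rownum LIMIT ? OFFSET ?",
        (session_id, limit, offset),
    )
    return [payload for (payload,) in cur]

print(page("sess-1", 10, 20))  # ['row 20', ..., 'row 29']

# The stale-data problem mentioned above would be handled separately,
# e.g. DELETE FROM temp_results WHERE session_id is no longer live.
```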

cheers,

--Tim




-Original Message-
From: [EMAIL PROTECTED] on behalf of Andrei Bintintan
Sent: Wed 1/26/2005 11:11 AM
To: [EMAIL PROTECTED]; Greg Stark
Cc: Richard Huxton; pgsql-sql@postgresql.org; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???
 
The problem still stays open.

The thing is that I have about 20 - 30 clients that are using that SQL query 
where the offset and limit are involved. So, I cannot create a temp table, 
because that means that I'll have to make a temp table for each session... 
which is a very bad ideea. Cursors somehow the same. In my application the 
Where conditions can be very different for each user(session) apart.

The only solution that I see at the moment is to work on the query, or to 
write a more complex WHERE condition to limit the result output. So there is 
no replacement for OFFSET/LIMIT.

Best regards,
Andy.


- Original Message - 
From: "Greg Stark" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Richard Huxton" ; "Andrei Bintintan" 
<[EMAIL PROTECTED]>; ; 

Sent: Tuesday, January 25, 2005 8:28 PM
Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???


>
> Alex Turner <[EMAIL PROTECTED]> writes:
>
>> I am also very interested in this very question. Is there any way to
>> declare a persistant cursor that remains open between pg sessions?
>> This would be better than a temp table because you would not have to
>> do the initial select and insert into a fresh table and incur those IO
>> costs, which are often very heavy, and the reason why one would want
>> to use a cursor.
>
> TANSTAAFL. How would such a persistent cursor be implemented if not by
> building a temporary table somewhere behind the scenes?
>
> There could be some advantage if the data were stored in a temporary table
> marked as not having to be WAL logged. Instead it could be automatically
> cleared on every database start.
>
> -- 
> greg
>
> 

