Re: [GENERAL] Any Plans for cross database queries on the same server?

2007-01-30 Thread Gregory S. Williamson
I actually disagree, mildly.

Our system uses two variants of two types of data.

Client data has a presence in the billing database, but has an incarnation in 
our runtime servers to allow for authentication. Not the same databases, since 
we can't afford the extra time for the hop, which might be scores of miles away 
and not necessarily available. Not exactly the same data, and not all of the 
billing stuff goes to runtime.

Spatial data has a representation in our backroom servers which support 
processing incoming imagery. Runtime has a similar representation (with some 
serious handwaving for speed) of the spatial data. And there's some links 
between content management and billing to allow for royalties. Again, similar 
but not identical data/purposes.

Informix has a capability (a synonym) to make a table in another instance 
appear as a local table; certain operations aren't possible [remote index 
structures aren't visible IIRC and a few data manipulations]. I could use a 
synonym to do joins and updates on the remote tables in native SQL; with 
postgres I need to do a lot more handwaving -- actually pulling logic out of 
the databases and putting it into applications. (Yes, db-link would work, but 
it seemed a clumsier fit.)

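Something like the following is the sort of extra handwaving I mean with 
db-link -- a sketch only, with made-up connection and table names, and it 
assumes contrib/dblink is installed:

    -- join a table in the remote billing database against a local table
    SELECT r.client_id, r.client_name, l.hits
    FROM dblink('dbname=billing host=billhost',
                'SELECT client_id, client_name FROM clients')
           AS r(client_id varchar(10), client_name varchar(60)),
         runtime_usage l
    WHERE l.client_id = r.client_id;
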
Sorry for top-posting but this interface doesn't do graceful quoting, etc.

Greg Williamson
DBA
GlobeXplorer LLC, a DigitalGlobe company

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

(My corporate masters made me say this.)

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Joshua D. Drake
Sent:   Tue 1/30/2007 6:15 PM
To: Peter Eisentraut
Cc: pgsql-general@postgresql.org; Tony Caduto
Subject:Re: [GENERAL] Any Plans for cross database queries on the same 
server?

Peter Eisentraut wrote:
> This has been discussed about ten thousand times, and the answer is 
> still no.

Actually the answer is: check the TODO list. It is listed under "Exotic 
features", so the answer is: no we can't, but yes we would like to.

That being said, I think it is a dumb feature. If you have data in one 
database that requires access to another database within the same cluster, 
you designed your database incorrectly and should be using schemas.

If you have data in one database that requires access to another 
database that is not in the same cluster (which happens a lot), use dbi-link.

Joshua D. Drake






Re: [GENERAL] Spam from EnterpriseDB?

2007-01-18 Thread Gregory S. Williamson
I got one as well ... not a big deal in my mind since it's only happened once 
(weekly would be offensive), and it's not entirely out of the realm of 
possibility that we'd have use for their services (I try to keep an updated list 
of possible resources for my employers in the event that I get hit by a meteor 
or a bus or something equally debilitating).

Still, I think there might be an appropriate mailing list for this sort of 
commercial announcement which would be better than the retail approach.

My $0.02 worth ...

Greg Williamson
DBA
GlobeXplorer LLC, a DigitalGlobe company

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

(my bosses made me say that)



-Original Message-
From:   [EMAIL PROTECTED] on behalf of Steve Atkins
Sent:   Thu 1/18/2007 3:54 PM
To: pgsql general
Cc: 
Subject:[GENERAL] Spam from EnterpriseDB?

Anyone else get spam from EnterpriseDB today, talking about  
Postgresql Support Services?

I got one to an email address most likely harvested from one of the  
Postgresql mailing lists. Ingrid Catlin over at EnterpriseDB  
confirmed that they sent the mail, but that Rich Romanik provided  
them with the list of email addresses so it isn't spam.

Cheers,
   Steve




Re: [GENERAL] Spam from EnterpriseDB?

2007-01-18 Thread Gregory S. Williamson
In 2007 however, the punishment ought to be ... ??
;-)

Greg Williamson
DBA
GlobeXplorer LLC, a DigitalGlobe Company

Confidentiality Notice: This e-mail message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and must be protected in accordance with those 
provisions. Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the sender by 
reply e-mail and destroy all copies of the original message.

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Alan Hodgson
Sent:   Thu 1/18/2007 4:57 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:Re: [GENERAL] Spam from EnterpriseDB?

On Thursday 18 January 2007 16:44, Ron Johnson [EMAIL PROTECTED] wrote:
> Harsh, aren't we?
>
> Rich and Garland weren't peddling pr0n or a pump-and-dump stock
> scam.  The fact that they've lost some (a lot of?) respect from
> potential customers will be pain enough.


Spam is spam.  I don't care what they're selling.  Anyone dumb enough to 
send spam in 2006 should be fired on the spot.  

-- 
"Gun-wielding recluse gunned down by local police" isn't the epitaph
I want. I am hoping for "Witnesses reported the sound up to two hundred
kilometers away" or "Last body part finally located." --- James Nicoll




Re: [GENERAL] Performance with very large tables

2007-01-15 Thread Gregory S. Williamson
Limit is somewhat magical ... at least to a degree. Not sure about cursors 
since I am not currently using them.

select count(*) from bill_rpt_work;
  count
---------
 2317451
(1 row)

Time: 1709.829 ms

billing=# \d bill_rpt_work
  Table reporting.bill_rpt_work
Column | Type  | Modifiers
---+---+---
 report_id | integer   |
 client_id | character varying(10) |
 contract_id   | integer   | not null
 rate  | numeric   | not null
 appid | character varying(10) | not null
 userid| text  | not null
 collection_id | integer   | not null
 client_name   | character varying(60) |
 use_sius  | integer   | not null
 is_subscribed | integer   | not null
 hits  | numeric   | not null
 sius  | numeric   | not null
 total_amnt| numeric   | not null
 royalty_total | numeric   |
Indexes:
    "billrptw_ndx" UNIQUE, btree (report_id, client_id, contract_id, rate, 
appid, userid, collection_id)
    "billrpt_cntrct_ndx" btree (report_id, contract_id, client_id)
    "billrpt_collid_ndx" btree (report_id, collection_id, client_id, 
contract_id)
Foreign-key constraints:
    "$1" FOREIGN KEY (report_id) REFERENCES billing_reports(report_id)
    "$2" FOREIGN KEY (client_id) REFERENCES work.clients(client_id)

billing=# select * from bill_rpt_work order by report_id, client_id, 
contract_id, rate, appid, userid, collection_id;
Cancel request sent

after more than 10 seconds

THEN:

select * from bill_rpt_work order by report_id, client_id, contract_id, rate, 
appid, userid, collection_id limit 1000;
 report_id | client_id | contract_id | rate | appid | userid | collection_id | client_name | use_sius | is_subscribed | hits | sius | total_amnt | royalty_total
-----------+-----------+-------------+------+-------+--------+---------------+-------------+----------+---------------+------+------+------------+---------------
...deleted details...
Time: 52.745 ms

THEN:

billing=#  select * from bill_rpt_work order by report_id, client_id, 
contract_id, rate, appid, userid, collection_id limit 10;
 report_id | client_id | contract_id | rate | appid | userid | collection_id | client_name | use_sius | is_subscribed | hits | sius | total_amnt | royalty_total
-----------+-----------+-------------+------+-------+--------+---------------+-------------+----------+---------------+------+------+------------+---------------
...deleted details...
Time: 1043.582 ms

Noticeably longer but not bad ...

But with no limit it takes quite a while:
select * from bill_rpt_work order by report_id, client_id, contract_id, rate, 
appid, userid, collection_id;
...
Time: 132033.171 ms

but with a limit, even a fairly large one (a little less than half the table), 
it was markedly faster. Maybe try more than half and see if there's a limit to 
what you can do ...
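
For the record, a cursor ought to give the same quick first rows without 
committing to a LIMIT up front -- an untested sketch on the same table:

BEGIN;
DECLARE bill_cur CURSOR FOR
    SELECT * FROM bill_rpt_work
    ORDER BY report_id, client_id, contract_id, rate, appid, userid,
             collection_id;
FETCH 1000 FROM bill_cur;   -- the first rows should come back quickly
CLOSE bill_cur;
COMMIT;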

YMMV, HTH, yadda yadda ...

Greg Williamson
DBA
GlobeXplorer LLC (part of Digital Globe Inc.) 

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Jan van der Weijde
Sent:   Mon 1/15/2007 4:44 AM
To: Alban Hertroys
Cc: pgsql-general@postgresql.org
Subject:Re: [GENERAL] Performance with very large tables

That is exactly the problem I think. However I do not deliberately
retrieve the entire table. I use the default settings of the PostgreSQL
installation and just execute a simple SELECT * FROM table. 
I am using a separate client and server (both XP in the test
environment), but that should not make much difference.
I would expect that the default behavior of PostgreSQL should be such
that without LIMIT, a SELECT returns records immediately. 

Thank you,
Jan

-Original Message-
From: Alban Hertroys [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 15, 2007 12:49
To: Jan van der Weijde
Cc: Richard Huxton; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables

Jan van der Weijde wrote:
> Thank you. 
> It is true he wants to have the first few records quickly and then 
> continue with the next records. However without LIMIT it already takes
> a very long time before the first record is returned.
> I reproduced this with a table with 1.1 million records on an XP 
> machine and in my case it took about 25 seconds before the 

[GENERAL] Confused by misleading error message on SP creation

2007-01-09 Thread Gregory S. Williamson
This is PostgreSQL 8.1.4, and I am utterly lost.

This function started in Informix and worked perfectly. Migrating to postgres 
resulted in about a *10,000*x slowdown as it resorts to doing a sequential 
scan. In the SQL below, the addition of client id and report id to the initial 
select is an attempt to force an index; without them the SQL always results in 
a dog of a plan.

The error message:
psql:e.sql:54: ERROR:  syntax error at or near "$2" at character 15
QUERY:  SELECT   $1   $2  :=  $3
CONTEXT:  SQL statement in PL/PgSQL function "client_year_usage_det" near line 
29
psql:e.sql:54: LINE 1: SELECT   $1   $2  :=  $3
psql:e.sql:54:  


Is of *NO* use. Could someone please explain what the hell it means? There is 
no construct $1 $2 := $3 *anywhere*. There is no SELECT on line 54. Or line 29. 
What the heck is going on here?

This is one of the few things about postgres that I absolutely despise. The SPL 
support is crappy and horribly hard to debug.

The hapless .sql file:
--CREATE TYPE clyud_t AS ( sp_rptdate DATE, sp_appid CHAR(10), sp_is_subs 
INTEGER, sp_use_siu INTEGER, sp_hits DECIMAL, sp_sius DECIMAL, sp_amount 
DECIMAL, sp_contractid INTEGER);
--DROP TYPE clyud_read_t;
--CREATE TYPE clyud_read_t AS (report_id INTEGER, sp_rptdate DATE, client_id 
CHAR(10), contract_id INTEGER, appid CHAR(10), sp_is_subs INTEGER, sp_use_siu 
INTEGER, hits DECIMAL, sius DECIMAL, total_amnt DECIMAL);
CREATE OR REPLACE FUNCTION client_year_usage_det (CHAR(10), INTEGER, INTEGER) 
RETURNS SETOF clyud_t AS '
DECLARE
p_client_id ALIAS FOR $1;
p_year ALIAS FOR $2;
p_showall ALIAS FOR $3;
sp_year INTEGER;
sp_tyr INTEGER;
sp_sdate DATE;
sp_edate DATE;
sp_is_subs INTEGER;
sp_use_siu INTEGER;
clyud_rec clyud_t;
clyu_inrec clyud_read_t;
BEGIN
IF ((p_year IS NULL) OR (p_year = 0)) THEN
sp_year := (SELECT EXTRACT(YEAR FROM CURRENT_DATE));
ELSE
sp_year := p_year;
END IF;
sp_tyr := sp_year + 1;
sp_sdate := sp_year || ''-'' ||''01-01'';
sp_edate := sp_tyr || ''-'' ||''01-01'';
RAISE NOTICE ''showall is %, sdate is % and edate is %'', p_showall, sp_sdate, 
sp_edate;
FOR clyu_inrec IN SELECT 
w.report_id,b.report_s_date,w.client_id,w.contract_id,w.appid,w.is_subscribed,w.use_sius,SUM(w.hits),SUM(w.sius),SUM(w.total_amnt)
 FROM reporting.crs_rpt_work w, reporting.billing_reports b WHERE w.report_id 
IN (SELECT b.report_id FROM reporting.billing_reports WHERE b.report_s_date >= 
sp_sdate AND b.report_s_date < sp_edate) AND w.client_id = p_client_id GROUP BY 
1, 2, 3, 4, 5, 6, 7 ORDER BY 1 DESC, 2 DESC, 2, 3, 4 LOOP
  RAISE NOTICE ''a) date %  appid %   hits %'',clyu_inrec.sp_rptdate, 
clyu_inrec.appid, clyu_inrec.hits;
  clyud_rec.sp_rptdate := clyu_inrec.sp_rptdate;
  clyud_rec.sp_appid := clyu_inrec.appid;
  clyud_rec.sp_is_subs := clyu_inrec.sp_is_subs
  clyud_rec.sp_use_siu := clyu_inrec.sp_use_siu;
  clyud_rec.sp_hits := clyu_inrec.hits;
  IF (clyu_inrec.sp_use_siu <> 1) THEN
  clyud_rec.sius := clyu_inrec.hits;
  ELSE
  clyud_rec.sp_sius := clyu_inrec.sius;
  END IF;
  clyud_rec.sp_contractid := clyu_inrec.contract_id;
  RETURN NEXT clyud_rec;
  END LOOP;
RETURN;
END;
' LANGUAGE 'plpgsql';





Re: [GENERAL] Confused by misleading error message on SP creation

2007-01-09 Thread Gregory S. Williamson
Doh !  Thanks for the swift response ... bet you are right. Getting punch drunk 
on too many electrons ...

G


-Original Message-
From:   Martijn van Oosterhout [mailto:[EMAIL PROTECTED]
Sent:   Tue 1/9/2007 3:55 AM
To: Gregory S. Williamson
Cc: pgsql-general@postgresql.org
Subject:Re: [GENERAL] Confused by misleading error message on SP 
creation

On Tue, Jan 09, 2007 at 03:42:47AM -0800, Gregory S. Williamson wrote:
> This is PostgreSQL 8.1.4, and I am utterly lost.

<snip>

> The error message:
> psql:e.sql:54: ERROR:  syntax error at or near "$2" at character 15
> QUERY:  SELECT   $1   $2  :=  $3
> CONTEXT:  SQL statement in PL/PgSQL function "client_year_usage_det" near 
> line 29
> psql:e.sql:54: LINE 1: SELECT   $1   $2  :=  $3
> psql:e.sql:54:  
> 
> Is of *NO* use. Could someone please explain what the hell it means ?
> There is no construct $1 $2 := $3 *anywhere*. There is no SELECT on
> line 54. Or line 29. What the heck is going on here ?

Anything with a dollar sign is a parameter substituted by pl/pgsql.

Anyway, at line 29 of the function I see this:
   clyud_rec.sp_is_subs := clyu_inrec.sp_is_subs

Missing semicolon at end of line. I think the $1 is the last parameter
there, and $2 and $3 the command on the next line.
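
That is, with the semicolon added, line 29 should read:

   clyud_rec.sp_is_subs := clyu_inrec.sp_is_subs;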

As for the speed decrease, you should try working out which bit is
slow... If it's choosing a seq scan, maybe you haven't ANALYZEd?

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.






Re: [GENERAL] queueing via database table?

2007-01-03 Thread Gregory S. Williamson
Mark --

As others have indicated, there may be some blocking issues with the approach 
you outlined.

A variant I have seen used in the past uses a table with a unique id for the 
job, the work queue it is in, a status flag, priority and at least one time 
stamp (and perhaps space for a process id).

Each client that wants work issues a request (SELECT FOR UPDATE) to get the 
next job in its queue that has a status flag of "Available", ordered by priority 
or initial time of creation, etc.; it then updates that entry with the current 
timestamp (and perhaps the process id of the client) and sets the status flag to 
show the job is now being worked on, all in one transaction.

This releases the row, but with the changed status flag other processes 
pulling work from the same queue won't see the job anymore.

When the job finishes it selects its entry and updates the status flag and 
timestamp (and probably clears its process id). Logic for how to bump a job to 
the next step can be embedded in the client or in another process, depending on 
your needs.

It is useful to have a daemon or some other process to sweep the queue table 
and at least send an alert about stale or frozen jobs.
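
A rough sketch of the sort of thing I mean (table and column names invented, 
untested):

CREATE TABLE work_queue (
    job_id     serial PRIMARY KEY,
    queue_name text NOT NULL,
    status     text NOT NULL DEFAULT 'Available',
    priority   integer NOT NULL DEFAULT 0,
    created_at timestamp NOT NULL DEFAULT now(),
    claimed_at timestamp,
    worker_pid integer
);

-- claim the next available job, all in one transaction
BEGIN;
SELECT job_id FROM work_queue
 WHERE queue_name = 'thumbnails' AND status = 'Available'
 ORDER BY priority, created_at
 LIMIT 1
 FOR UPDATE;
UPDATE work_queue
   SET status = 'InProgress', claimed_at = now(), worker_pid = 4242
 WHERE job_id = 123;   -- the job_id returned by the SELECT above
COMMIT;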

HTH,

Greg Williamson
DBA
GlobeXplorer LLC
-Original Message-
From:   [EMAIL PROTECTED] on behalf of Mark Harrison
Sent:   Tue 1/2/2007 10:34 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] queueing via database table?

I have a cluster of CPUs generating thumbnails for
a render farm.  I would like to place thumbnail
requests on a queue, and have the cluster of client
dequeue the requests and process them.

Of course, each request should be only dequeued once...
if a thumbnail is being processed by one CPU, it
shouldn't be processed by another CPU.

Does the following sound like a reasonable approach?
If not, what's a good way to go?

The processes generating the requests will insert into
a queue table.  They may add a priority and timestamp.

The several processes servicing the requests will do a
SELECT FOR UPDATE where ... limit 1, generate thumbnail,
delete the record and commit.

Comments and suggestions welcome,
Mark

-- 
Mark Harrison
Pixar Animation Studios



Re: [GENERAL] MySQL drops support for most distributions

2006-12-13 Thread Gregory S. Williamson
FWIW, there is a follow-up note on the original posting from a MySQL person:

we are just starting to roll out [Enterprise] binaries... We don't build 
binaries for Debian in part because the Debian community does a good job 
themselves... If you call MySQL and you have support we support you if you are 
running Debian (the same with Suse, RHEL, Fedora, Ubuntu and others)... someone 
in Sales was left with the wrong information

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Scott Marlowe
Sent:   Wed 12/13/2006 10:11 AM
To: Alvaro Herrera
Cc: David Goodenough; pgsql general
Subject:Re: [GENERAL] MySQL drops support for most distributions

On Wed, 2006-12-13 at 12:01, Alvaro Herrera wrote:
> Scott Marlowe wrote:
> > On Wed, 2006-12-13 at 10:50, David Goodenough wrote:
> > > http://developers.slashdot.org/article.pl?sid=06/12/13/1515217&from=rss
> > >
> > > MySQL quietly deprecated support for most Linux distributions on October 16, 
> > > when its 'MySQL Network' support plan was replaced by 'MySQL Enterprise.' 
> > > MySQL now supports only two Linux distributions -- Red Hat Enterprise Linux 
> > > and SUSE Linux Enterprise Server. We learned of this when MySQL declined to 
> > > sell us support for some new Debian-based servers. Our sales rep 'found out 
> > > from engineering that the current Enterprise offering is no longer supported 
> > > on Debian OS.' We were told that 'Generic Linux' in MySQL's list of supported 
> > > platforms means 'generic versions of the implementations listed above'; not 
> > > support for Linux in general.
> >
> > So, in a similar vein, which PostgreSQL support companies support
> > Debian, for instance?
>
> I bet Credativ does.
>
> The good thing is that there are several companies supporting Postgres,
> so whatever one of them does it does not affect the market as a whole.

I was kinda thinking the same thing.  Man, must suck to be tied to the
one true company for your database when they stop supporting your OS
etc...

And what about MySQL windows flavor?  



Re: [GENERAL] a question on SQL

2006-12-13 Thread Gregory S. Williamson
Bruce Momjian wrote:
> Tom Lane wrote:
> > Greg Sabino Mullane [EMAIL PROTECTED] writes:
> > > It's the single most useful non-standard SQL feature postgresql has. It
> > > is thus simultaneously bad (from a portability aspect) and brilliant
> > > (because it's a million times easier and faster than the alternatives).
> >
> > > You mean second-most useful. LIMIT/OFFSET is the champion, hands down. :)
> >
> > Yeah, but that one's only quasi-non-standard ... several other DBMSes
> > have it too.
>
> I know MySQL has it, and SQLite added it.  Which other ones?  Someone
> asked me recently.  I see this chart from Perl documentation:
>
> http://search.cpan.org/~davebaird/SQL-Abstract-Limit-0.12/lib/SQL/Abstract/Limit.pm#DESCRIPTION
>
> Oh, and Rasmus Lerdorf told me he invented LIMIT for mSQL, and MySQL
> then added it, and that MySQL added the limit option.
>
> This was interesting in the MySQL manuals:
>
>   For compatibility with PostgreSQL, MySQL also supports the LIMIT
>   row_count OFFSET offset syntax.
>
> Did we add the OFFSET _keyword_?  I remember we had the comma-ed numbers
> backwards, and we had OFFSET, but I thought that keyword came from
> MySQL.  Obviously, they don't think so.

Informix provides the FIRST syntax to get the leading rows of a set; I think 
you have to use cursors to get further offsets though (been a while since I 
have had to use it), e.g. SELECT FIRST 10 col1, col2, col3 FROM foo WHERE ... 
No LAST either (just tried).

They have had this since at least IDS 8 and I think the 7.x series had it as 
well. No idea where they got it from; I learned on Informix so I actually 
thought it was standard, until reality disabused me of the notion.
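
For the record, the Postgres spelling of that first example would be 
something like:

SELECT col1, col2, col3 FROM foo ORDER BY col1 LIMIT 10;
-- and paging further in:
SELECT col1, col2, col3 FROM foo ORDER BY col1 LIMIT 10 OFFSET 10;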

Greg Williamson
DBA
GlobeXplorer LLC




Re: [GENERAL] pg_dump/pg_Restore 8.1.5 db - 8.2 tsearch2 problem

2006-12-07 Thread Gregory S. Williamson
Henrik --

I have only dabbled in tsearch2 so I can't offer direct advice, but this looks 
like the problem when upgrading the postGIS stuff ... tsearch2 might have an 
upgrade functionality (postGIS does), but you can also do a more laborious 
method that strips out the unwanted tsearch2 definitions from the old version; 
leave the newly compiled tsearch2 in place.

a) dump the DDL for your database
b) dump the data as its own file
c) edit the ddl sql file and break it up into three parts:
1) All definitions *except* tsearch2 related ones; postGIS stuff is always 
clumped together and easy to identify; don't know about tsearch2 though.
2) the tsearch2 related stuff
3) the indexes, constraints and other stuff best applied after data is 
loaded
d) run the ddl with just the table, type, etc. definitions
e) load the data
f) run the ddl sql that defines indexes etc.
g) run vacuum analyze, tweak config settings and start testing!

HTH a little,

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Henrik Zagerholm
Sent:   Thu 12/7/2006 1:35 AM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] pg_dump/pg_Restore 8.1.5 db - 8.2 tsearch2 problem

Hi list,

I've downloaded and compiled the new 8.2 to a new debian box.
I also compile and installed tsearch2 support.

Now I have a db on a 8.1.5 box with tsearch2 support.

How do I dump and restore my database to the new 8.2 box?

I get all kinds of errors when trying to restore the db.
Should I uninstall tsearch2 before dumping or?

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 67; 1255 26918
FUNCTION snb_ru_init(internal) postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  could
not find function "snb_ru_init" in file "/usr/local/pgsql/lib/tsearch2.so"
    Command was: CREATE FUNCTION snb_ru_init(internal) RETURNS internal
    AS '$libdir/tsearch2', 'snb_ru_init'
    LANGUAGE c;
pg_restore: [archiver (db)] could not execute query: ERROR:  function
public.snb_ru_init(internal) does not exist
    Command was: ALTER FUNCTION public.snb_ru_init(internal) OWNER
TO postgres;
WARNING: errors ignored on restore: 2

Regards,
Henrik



Re: [GENERAL] select query not using index

2006-12-02 Thread Gregory S. Williamson
Vivek --

If you could let people know what version of postgres, and which OS, it might 
help.

A guess: the planner sees that there are very few rows and decides that a 
sequential scan is faster (this is because a sequential scan on a table with 
only a few rows is probably done in one operation; retrieving index values and 
the actual data rows involves more trips to disk, potentially). You could test 
this by turning off seq scan as a user option and re-running the query.
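
Something like this, strictly for testing (not in production):

SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM userpref WHERE username = 'vivek';
SET enable_seqscan = on;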

I note that it is casting 'vivek' as text while the underlying column is 
varchar; in earlier versions of postgres this might cause a mismatch and 
confuse the planner; try casting, as in WHERE username = 'vivek'::varchar, and 
see if that is an improvement.

HTH,

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of [EMAIL PROTECTED]
Sent:   Sat 12/2/2006 3:05 AM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] select query not using index

Dear Friends, 
I have a table as
 \d userpref;
              Table "public.userpref"
  Column  |          Type          |                   Modifiers
----------+------------------------+------------------------------------------------
 username | character varying(101) | not null
 email    | character varying(255) | not null
 context  | character varying(32)  | not null default 'from_box'::character varying
Indexes:
    "userpref_user_idx" btree (username)
Foreign-key constraints:
    "userpref_username_fkey" FOREIGN KEY (username, email) REFERENCES 
users(username, email)

The index was created before the table was populated. There are 3 rows in the 
table for 3 different users. Now when I do a 

EXPLAIN  SELECT * from userpref where username = 'vivek';
QUERY PLAN
---
 Seq Scan on userpref  (cost=0.00..1.26 rows=1 width=349)
   Filter: ((username)::text = 'vivek'::text)

EXPLAIN ANALYZE  SELECT * from userpref where username = 'vivek';
 QUERY PLAN

 Seq Scan on userpref  (cost=0.00..1.04 rows=1 width=70) (actual 
time=0.060..0.071 rows=1 loops=1)
   Filter: ((username)::text = 'vivek'::text)
 Total runtime: 0.216 ms
(3 rows)


It shows a seq scan; it is not using the index. But I fail to understand why it 
does not use the index created. I have tried vacuuming the database, 
reindexing the table, and running the ANALYZE command. 
Can anyone tell me what am I doing wrong?






With warm regards.

Vivek J. Joshi.

[EMAIL PROTECTED]
Trikon Electronics Pvt. Ltd.

All science is either physics or stamp collecting.
-- Ernest Rutherford






Re: [GENERAL] postgresql bug

2006-11-27 Thread Gregory S. Williamson
Best to always use the list as a resource, not individuals (since a person such 
as yourself would then be able to find the latest related issues when doing a 
web-search). I've cc-ed the general list on principle.

The error I got had to do with a failing disk in a linux system. I have no 
clues about windows operations, but other posters have indicated that excluding 
postgres from virus scanners is A Good Thing, but as I say I don't do windows, 
really.

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   Wyatt Tellis [mailto:[EMAIL PROTECTED]
Sent:   Mon 11/27/2006 4:27 PM
To: Mark Leet; Gregory S. Williamson; [EMAIL PROTECTED]
Cc: 
Subject:Re: postgresql bug

Hi Mark,

It seems our problem was our Legato/EMC Networker backup process. Due to an 
incorrect exclusion directive, it was locking the data files during the nightly 
backup. I suspect this is similar to what you're experiencing. I haven't tried 
filing a bug report. One of the responses to my post suggested I switch to a 
real OS like Linux, so I wouldn't be surprised if this got rejected as an OS 
bug instead of a PSQL one.

-Wyatt


On Tue, 28 Nov 2006 09:27:27 +1000
 Mark Leet [EMAIL PROTECTED] wrote:
Hi guys,

One of my customers had this problem:
ERROR: could not open relation 1663/16907/2601: Invalid argument.

So I googled it and came across your discussion on the postgres mail archives:
http://archives.postgresql.org/pgsql-general/2006-09/msg01270.php

This error seems to be limited to people running PostgreSQL under Windows, but 
not all of them.

Is there any chance that your anti-virus was the culprit?  Because the error 
doesn't happen on all machines, I wondered out loud what might be different 
between my machine and my customer's.  Anti-virus was one.  In his anti-virus 
program (eTrust EZ Antivirus), he excluded the partition he had PostgreSQL on, 
and he no longer receives the error.

I use Symantec Antivirus and don't receive the error.  Another computer was 
running Computer Associates Antivirus and had the error.  I changed it to 
Symantec Antivirus and the error no longer appears.

Next question, how do we get this theory to the bug fixers?

Thanks,

Mark.









Re: [GENERAL] Eliminating bad characters from a database for upgrading from 7.4 to 8.1

2006-11-17 Thread Gregory S. Williamson
Thanks to you and Russell -- the iconv trick had eluded me for some reason. 
Serious pain on such a large file, but at least it works (tested on the small 22 
gig sample)! A little splitting, a little converting, some diff-ing, reassembly 
and load. Piece o' cake!

Thanks again ... sorry for wasting bandwidth on what seems to have been an RTFM 
question!

G

-Original Message-
From:   mike [mailto:[EMAIL PROTECTED]
Sent:   Thu 11/16/2006 7:49 PM
To: Gregory S. Williamson
Cc: Russell Smith; pgsql-general@postgresql.org
Subject:Re: [GENERAL] Eliminating bad characters from a database for  
upgrading  from 7.4 to 8.1

The manual suggests: iconv -c -f UTF-8 -t UTF-8 -o cleanfile.sql
dumpfile.sql. The -c option removes invalid character sequences. A diff
of the two files will show the sequences that are invalid. iconv reads
the entire input file into memory so it might be necessary to use split
to break up the dump into multiple smaller files for processing.

On Thu, 2006-11-16 at 19:38 -0800, Gregory S. Williamson wrote:
> Thanks for the suggestion ... since the data involved came from different 
> sources, I suspect there may be more than one encoding, but this has great 
> promise.
> 
> Greg
> 
> 
> -Original Message-
> From: Russell Smith [mailto:[EMAIL PROTECTED]
> Sent: Thu 11/16/2006 7:27 PM
> To:   Gregory S. Williamson
> Cc:   pgsql-general@postgresql.org
> Subject:  Re: [GENERAL] Eliminating bad characters from a database for 
> upgrading  from 7.4 to 8.1
> 
> Gregory S. Williamson wrote:
> > Dear list,
> >
> > I have been banging my head against a problem for a few days now, and 
> > although I am making progress it is painfully slow, and I am hoping that 
> > some one out there can steer me in a better way.
> >
> > I've got a medium size database (7.4), about 64 gigs of data, about 1/3rd 
> > of which is in one table, which has ~32 million rows (22 gigs when dumped). 
> > This largish table has about 20 different columns of varchar or text.
> >
> > There are some records that have illegal characters in them, according to 
> > postgres 8.1.5, which imposes stricter standards on UTF encoding.
> >
> > I've been using copy to dump the big table to disk, then try to load it 
> > into my new table. When it fails, I use split to break the file into 
> > manageable chunks and then use vi to find the offending line, then figure 
> > out the column. Then I use something like:
> >
> > create table bad_char_gids as select gid from parcels where position('Ñ' in 
> > s_street) > 0;
> >
> > And so create a table with the ids of the bad records; and then use replace 
> > to either replace or eliminate the offending characters from that column. 
> > This example got 5001 records, but often it is one record in the whole DB 
> > that will have some other offending character. I fix the problem in the 
> > load data as well, and continue.
> >
> > The problem is that there are errors in quite a few of the columns (but 
> > only a few tens of thousands of records), and the offending characters are 
> > all quite different (weird diacritics and characters, upper and lower 
> > case). And so this is a very slow process.
> >
> > Is there any way to get a list of records, even if done repeatedly for each 
> > column, that would let me find the offending records in 7.4 which have any 
> > invalid UTF chars? I am feeling stupid for not seeing one ... I can find 
> > any individual bad character, but I want to find them all at once, if 
> > possible.
> 
> Try converting the dump file's encoding to UTF-8.  Before 8.1 you could 
> insert invalid characters into the DB because it accepted other 
> encodings.  It will also dump other encodings.  For example, converting 
> something with windows characters in it:
> 
> iconv -f WINDOWS-1251 -t UTF-8 dump_file > converted_dump_file
> 
> And import the converted file.  You may need to try a couple of 
> different input encodings if you aren't sure what encoding was used when 
> inserting data into the DB.
> 
> Russell.
 
> > TIA,
> >
> > Greg Williamson
> > DBA
> > GlobeXplorer LLC
 

[GENERAL] Eliminating bad characters from a database for upgrading from 7.4 to 8.1

2006-11-16 Thread Gregory S. Williamson
Dear list,

I have been banging my head against a problem for a few days now, and although 
I am making progress it is painfully slow, and I am hoping that some one out 
there can steer me in a better way.

I've got a medium size database (7.4), about 64 gigs of data, about 1/3rd of 
which is in one table, which has ~32 million rows (22 gigs when dumped). This 
largish table has about 20 different columns of varchar or text.

There are some records that have illegal characters in them, according to 
postgres 8.1.5, which imposes stricter standards on UTF encoding.

I've been using copy to dump the big table to disk, then try to load it into my 
new table. When it fails, I use split to break the file into manageable chunks 
and then use vi to find the offending line, then figure out the column. Then 
I use something like:

create table bad_char_gids as select gid from parcels where position('Ñ' in 
s_street) > 0;

And so create a table with the ids of the bad records; and then use replace to 
either replace or eliminate the offending characters from that column. This 
example got 5001 records, but often it is one record in the whole DB that will 
have some other offending character. I fix the problem in the load data as 
well, and continue.
continue.

The problem is that there are errors in quite a few of the columns (but only a 
few tens of thousands of records), and the offending characters are all quite 
different (weird diacritics and characters, upper and lower case). And so this 
is a very slow process.

Is there any way to get a list of records, even if done repeatedly for each 
column, that would let me find the offending records in 7.4 which have any 
invalid UTF chars? I am feeling stupid for not seeing one ... I can find any 
individual bad character, but I want to find them all at once, if possible.

TIA,

Greg Williamson
DBA
GlobeXplorer LLC




Re: [GENERAL] Eliminating bad characters from a database for upgrading from 7.4 to 8.1

2006-11-16 Thread Gregory S. Williamson
Thanks for the suggestion ... since the data involved came from different 
sources, I suspect there may be more than one encoding, but this has great 
promise.

Greg


-Original Message-
From:   Russell Smith [mailto:[EMAIL PROTECTED]
Sent:   Thu 11/16/2006 7:27 PM
To: Gregory S. Williamson
Cc: pgsql-general@postgresql.org
Subject:Re: [GENERAL] Eliminating bad characters from a database for 
upgrading  from 7.4 to 8.1

Gregory S. Williamson wrote:
> Dear list,
>
> I have been banging my head against a problem for a few days now, and 
> although I am making progress it is painfully slow, and I am hoping that some 
> one out there can steer me in a better way.
>
> I've got a medium size database (7.4), about 64 gigs of data, about 1/3rd of 
> which is in one table, which has ~32 million rows (22 gigs when dumped). This 
> largish table has about 20 different columns of varchar or text.
>
> There are some records that have illegal characters in them, according to 
> postgres 8.1.5, which imposes stricter standards on UTF encoding.
>
> I've been using copy to dump the big table to disk, then try to load it into 
> my new table. When it fails, I use split to break the file into manageable 
> chunks and then use vi to find the offending line, then figure out the 
> column. Then I use something like:
>
> create table bad_char_gids as select gid from parcels where position('Ñ' in 
> s_street) > 0;
>
> And so create a table with the ids of the bad records; and then use replace 
> to either replace or eliminate the offending characters from that column. 
> This example got 5001 records, but often it is one record in the whole DB 
> that will have some other offending character. I fix the problem in the load 
> data as well, and continue.
>
> The problem is that there are errors in quite a few of the columns (but only 
> a few tens of thousands of records), and the offending characters are all 
> quite different (weird diacritics and characters, upper and lower case). And 
> so this is a very slow process.
>
> Is there any way to get a list of records, even if done repeatedly for each 
> column, that would let me find the offending records in 7.4 which have any 
> invalid UTF chars? I am feeling stupid for not seeing one ... I can find any 
> individual bad character, but I want to find them all at once, if possible.

Try converting the dump file's encoding to UTF-8.  Before 8.1 you could 
insert invalid characters into the DB because it accepted other 
encodings.  It will also dump other encodings.  For example, converting 
something with windows characters in it:

iconv -f WINDOWS-1251 -t UTF-8 dump_file > converted_dump_file

And import the converted file.  You may need to try a couple of 
different input encodings if you aren't sure what encoding was used when 
inserting data into the DB.

Russell.

> TIA,
>
> Greg Williamson
> DBA
> GlobeXplorer LLC


Re: [GENERAL] planer picks a bad plan (seq-scan instead of index)

2006-11-09 Thread Gregory S. Williamson
I am admittedly speaking up somewhat late here, and may be completely off base, 
but it seems to me that the LIKE operation is almost always going to be a 
loser, performance-wise, when there is an initial wildcard, e.g. '%superman 
re%' will require a sequential scan, while 'superman re%' would not (assuming 
proper indexes matching case and type).

I'd suggest tsearch2, possibly, which uses GIST indexes and may perhaps be a 
better match for this sort of problem.
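
For instance, an untested sketch using the table from this thread -- for 
left-anchored patterns an expression index with the pattern opclass can help:

CREATE INDEX dvds_lower_name_idx ON shop.dvds (lower(dvd_name) text_pattern_ops);

-- can use the index:
SELECT * FROM shop.dvds WHERE lower(dvd_name) LIKE 'superman re%';
-- leading wildcard still forces a sequential scan:
SELECT * FROM shop.dvds WHERE lower(dvd_name) LIKE '%superman re%';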

HTH,

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Richard Huxton
Sent:   Thu 11/9/2006 1:22 AM
To: Thomas H.
Cc: pgsql-general@postgresql.org
Subject:Re: [GENERAL] planer picks a bad plan (seq-scan instead of 
index)

Thomas H. wrote:
> 
> SELECT * FROM shop.dvds
> LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
> LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
> WHERE (lower(mov_name) LIKE '%superman re%' OR lower(dvd_name) like 
> '%superman re%' OR lower(dvd_edition) LIKE '%superman re%')
> 

Try putting your conditions as part of the join:
SELECT * FROM shop.dvds
LEFT JOIN
   oldtables.movies
ON
   mov_id = dvd_mov_id
   AND (
 lower(mov_name) LIKE '%superman re%'
 OR lower(dvd_name) like '%superman re%'
 OR lower(dvd_edition) LIKE '%superman re%'
   )
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean

I'd also be tempted to look at a tsearch2 setup for the word searches.
-- 
   Richard Huxton
   Archonet Ltd



Re: [GENERAL] converting Informix outer to Postgres

2006-11-08 Thread Gregory S. Williamson
Perhaps a more recent version of postgres (8.1, or maybe even look at 8.2 ...); 
lots of improvements since 7.1, IIRC in the area of joins specifically, but I 
don't know the answer to your particular question.

HTH,

Greg Williamson (a [mostly] former Informix user, but not, alas, with such 
queries)
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of H.J. Sanders
Sent:   Wed 11/8/2006 4:27 AM
To: Martijn van Oosterhout; pgsql-general@postgresql.org
Cc: 
Subject:Re: [GENERAL] converting Informix outer to Postgres

Hi.

From some documentation:

In INFORMIX SQL, outer tables are defined in the FROM clause with the OUTER 
keyword :

SELECT ... FROM a, OUTER(b)
 WHERE a.key = b.akey

SELECT ... FROM a, OUTER(b,OUTER(c))
 WHERE a.key = b.akey
   AND b.key1 = c.bkey1
   AND b.key2 = c.bkey2 

PostgreSQL 7.1 supports the ANSI outer join syntax :

SELECT ... FROM cust LEFT OUTER JOIN order
 ON cust.key = order.custno

SELECT ...
  FROM cust LEFT OUTER JOIN order
 LEFT OUTER JOIN item
 ON order.key = item.ordno
ON cust.key = order.custno
 WHERE order.cdate > current date







Any help?

Henk


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Martijn van Oosterhout
 Sent: Wednesday, 8 November 2006 11:42
 To: [EMAIL PROTECTED]
 CC: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] converting Informix outer to Postgres
 
 
 On Tue, Nov 07, 2006 at 06:35:05PM -0500, [EMAIL PROTECTED] wrote:
  Hi all,
  I have been working on this Informix SQL query which has an outer join.
  I have attached Informix query and my supposedly solution to this query
  but I cannot get the same count. I appreciate for any help.
  Thanks.
 
 I don't know what the Informix outer join is, but is it like the SQL
 FULL OUTER JOIN? Have you tried using that?
 
 Have a nice day,
 -- 
 Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
  From each according to his ability. To each according to his ability to 
  litigate.
 



Re: [HACKERS] [GENERAL] Index greater than 8k

2006-10-31 Thread Gregory S. Williamson
I hesitate to mention it, since it's retrograde, uses OIDS, may not handle your 
locale/encoding correctly, may not scale well for what you need etc., etc.

But we've used fti (in the contrib package) to do fast searches for any bit of 
text in people's names ... we didn't go with tsearch2 because we were a bit 
worried about the need to search for fragments of names, and that names don't 
follow stemming rules and the like very well. Still it might be a way of 
handling some of the uglier data. It was a bit of a pain to set up but seems to 
work well. Of course, users can ask for something commonplace and get back 
gazillions of rows, but apparently that's ok for the application this is part 
of. Caveat: only about 32 million rows in this dataset, partitioned into 
unequal groupings (about 90 total).

HTH (but I doubt it, for reasons that will undoubtedly be made clear ;-)

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of Joshua D. Drake
Sent:   Tue 10/31/2006 7:46 PM
To: Teodor Sigaev
Cc: Darcy Buskermolen; PgSQL General; PostgreSQL-development
Subject:Re: [HACKERS] [GENERAL] Index greater than 8k

Teodor Sigaev wrote:
> > The problem as I remember it is pg_trgm not tsearch2 directly, I've
> > sent a self contained test case directly to Teodor which shows the
> > error.
> > 'ERROR:  index row requires 8792 bytes, maximum size is 8191'
> Uh, I see. But I'm really surprised why you use pg_trgm on big text.
> pg_trgm is designed to find similar words and uses a technique known as
> trigrams. This will work well on small pieces of text such as words or
> set expressions. But all big texts (in the same language) will be similar
> :(. So, I didn't take care to guarantee the index tuple size
> limitation. In principle, it's possible to modify pg_trgm to have such
> a guarantee, but the index becomes lossy - all tuples gotten from the
> index would have to be rechecked against the table's tuples.

We are trying to get something faster than ~ '%foo%';

Which Tsearch2 does not give us :)

Joshua D. Drake



 
> If you want to search similar documents I can recommend having a look
> at the fingerprint technique (http://webglimpse.net/pubs/TR93-33.pdf). It's
> pretty close to trigrams and the metric of similarity is the same, but it
> uses different signature calculations. And, there are some tips and tricks:
> removing HTML markup, removing punctuation, lowercasing text and so on -
> it's an interesting and complex task.


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: [GENERAL] Question with tsearch2 (or it might be a general one too)

2006-10-20 Thread Gregory S. Williamson
Ritesh --

You are correct in thinking that @@ is a special operator defined for 
tsearch2; it uses the GIST indexes to do a search, but more than that I can't 
say, since I am not really familiar with tsearch2. (In the postGIS world there 
is a vaguely equivalent operator, &&, again using postgres' ability to define 
one's own data types and functions.)
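
For what it's worth, the usual tsearch2 setup behind that operator looks 
something like this (untested here, following the tutorial's table names):

CREATE INDEX idxfti_idx ON tblmessages USING gist (idxfti);

SELECT intindex, strTopic FROM tblmessages
 WHERE idxfti @@ to_tsquery('default', 'gettysburg & address');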

HTH,

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of Ritesh Nadhani
Sent:   Thu 10/19/2006 11:38 AM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Question with tsearch2 (or it might be a general one 
too)

Hello

A newbie to PostgreSQL from MySQL and just trying to learn tsearch2. In 
one of the examples at:

http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/tsearch-V2-intro.html

the query given is:

SELECT intindex, strTopic FROM tblmessages
 WHERE idxfti @@ to_tsquery('default', 'gettysburg & address')
 AND strMessage ~* '.*men are created equal.*';

What does the '@@' in the query means?

I did a search at:

http://search.postgresql.org/www.search?ul=http%3A%2F%2Fwww.postgresql.org%2Fdocs%2F8.1%2Finteractive%2F%25&fm=on&cs=utf-8&q=%40%40

and it doesn't return any results.

Is this specific to tsearch2? What does that mean?

Ritesh



Re: [GENERAL] old duplicate emails

2006-10-20 Thread Gregory S. Williamson
I've seen a few ... one just now dated 10/17, on conversion and Oracle ...
G

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Richard Broersma Jr
Sent:   Fri 10/20/2006 8:43 PM
To: General PostgreSQL List
Cc: 
Subject:[GENERAL] old duplicate emails

Is it just me, or is everyone getting duplicated old emails from a couple of 
days ago?

Regards,

Richard Broersma Jr.



Re: [GENERAL] Maximum size of database

2006-10-18 Thread Gregory S. Williamson
Roopa,

I think that by definition a "SELECT * FROM sometable;" will always take 
longer as the table increases in size (but if anyone who is more versed in the 
theory of searches, sorts, etc. contradicts me I'll happily listen!). Note that 
the database could increase enormously with no direct effect on speed; but if 
one necessary table keeps growing in size that will affect speed.

Possible solutions are limited, since such a query *has* to check every row to 
see if it is live and retrieve the data.

Might be helped by:
 (a) reducing the core table to a minimum of columns, with esp. large ones 
placed in another table where you will only get them when you need them -- this 
reduces the total amount of data being moved, though the gain may be minimal if 
you still have lots of data [but if you can reduce each row from say 1k bytes 
to 200 bytes, as long as you don't always need the data in the new table, you 
gain some efficiency]. Heavily dependent on the use of the data and the time 
you have to de-normalize the table. (See the sketch after this list.)

 (b) more RAM -- if everything is in RAM and the server does not have to hit 
disk it's faster, but is obviously limited by $ and capacity. Our runtime 
servers have 4 gigs each and I'm worried about needing 8 sooner rather than 
later (we have lots of connections as well as lots of data).

 (c) faster/different disks -- RAIDed with battery-backed cache as others have 
suggested. We have had poor experience with some Dell disks (I don't recall 
which off hand) as database servers -- I've seen similar posts by other users, 
probably in the performance list archives.

 (d) migrate to *nix (yeah, I know, probably not possible, but Windows has 
issues, and the postgres port is relatively newer on that platform) -- an 
untested assumption which may warrant a flame, but I think in general the *nix 
OS versions may have better disk I/O.
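
A sketch of what I mean in (a), with made-up table and column names:

-- move the big, rarely-used columns off to a side table with the same key
CREATE TABLE big_table_details AS
    SELECT row_id, big_text_col, other_wide_col FROM big_table;
ALTER TABLE big_table DROP COLUMN big_text_col;
ALTER TABLE big_table DROP COLUMN other_wide_col;
-- queries that need the wide columns join back on row_id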

Greg W.


-Original Message-
From:   [EMAIL PROTECTED] on behalf of roopa perumalraja
Sent:   Wed 10/18/2006 12:41 AM
To: pgsql-general@postgresql.org
Cc: Michael Fuhr; louis gonzales
Subject:Re: [GENERAL] Maximum size of database

Hi
   
  Thanks for your reply.
   
  explain select * from tk_20060403;
QUERY PLAN
--
 Seq Scan on tk_20060403  (cost=0.00..95561.30 rows=3609530 width=407)
(1 row)

  will this help?
   
  louis gonzales [EMAIL PROTECTED] wrote:
  also, run
EXPLAIN
on any command, show the results of this. In particular, if you have 
some commands that are taking 'even longer?'



roopa perumalraja wrote:

 Thanks for your reply.
 
 I have answered your questions below.
 
 1 & 2) System: Microsoft Windows XP Professional
 Version 2002
 Computer: Intel Pentium CPU 3.40GHz, 960MB of RAM
 
 3) shared_buffers = 2
 autovacuum = on
 
 4) Yes, I am vacuuming & analyzing the database once every day.
 
 5) No concurrent activities, means I run one command at a time.
 
 6) Nothing else running on the box other than Postgres.
 
 I hope these answers will try to solve my problem. Thanks again.
 
 Roopa
 Michael Fuhr wrote:

 On Tue, Oct 17, 2006 at 07:26:25PM -0700, roopa perumalraja wrote:
  I would like to know that what can be the maximum size of
 database in
  postgres 8.1.4.

 http://www.postgresql.org/docs/faqs.FAQ.html#item4.4

  Currently my database size is 37GB & its pretty slow. I wonder if
  its b'cos of huge amount of data in it.

 37GB isn't all that huge; as the FAQ mentions, much larger databases
 exist. Without more information we'll have to ask some of the
 standard questions:

 What's your hardware configuration?
 What operating system and version are you using?
 What are your non-default postgresql.conf settings?
 Are you vacuuming and analyzing the database regularly?
 How much concurrent activity do you have?
 Does anything other than PostgreSQL run on the box?

 If you have a specific query that's slow then please post the EXPLAIN
 ANALYZE output. Also, you might get more help on the pgsql-performance
 list.

 -- 
 Michael Fuhr










---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [SQL] [GENERAL] How to split a table?

2006-10-17 Thread Gregory S. Williamson
Perhaps something like:

CREATE TABLE foo2 AS SELECT * FROM foo WHERE (random() <= 0.60);

?
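
(One caveat with that sketch: random() is evaluated per row, so you get roughly 
-- not exactly -- 60% of the rows, and re-running the predicate won't give you 
the complement. Assuming foo has a primary key id, the second table could be 
built as:

  CREATE TABLE foo3 AS
      SELECT * FROM foo
       WHERE id NOT IN (SELECT id FROM foo2);
)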

HTH,

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of Andreas Kretschmer
Sent:   Tue 10/17/2006 1:34 AM
To: pgsql-general@postgresql.org; pgsql-sql@postgresql.org; [EMAIL 
PROTECTED]
Cc: 
Subject:Re: [SQL] [GENERAL] How to split a table?

Felix Zhang [EMAIL PROTECTED] schrieb:

 Hi,
  
 I want to split a table to 2 small tables. The 1st one contains 60% records
 which are randomly selected from the source table.
 How to do it?

Why do you want to do this?


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match








---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Postgres Database Slow

2006-10-17 Thread Gregory S. Williamson
Please do not simply repost your obscure and almost meaningless original 
question.

Please respond to the earlier posts asking for more information. People might 
be willing to help, but they can't unless you respond to them.

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of roopa perumalraja
Sent:   Tue 10/17/2006 9:49 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Postgres Database Slow

Hi
   
  Currently my database size is 38GB and it is pretty slow in whatever I do 
with it like take a backing up, vaccuming, reindexing, running all queries. Why 
is that? Is it possible to improve the performance.
   
  Thanks in advance
  Roopa
   






---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] postgres' web site malfunctional ?

2006-10-15 Thread Gregory S. Williamson
When I go to http://www.postgresql.org/ I get a blank page ?!? No bytes ... 
have I lost my marbles, has my browser (FireFox 1.5.0.7) lost its electrons, or ??

Thanks,

Greg Williamson
DBA
GlobeXplorer LLC


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] postgres' web site malfunctional ?

2006-10-15 Thread Gregory S. Williamson
Nope, neither Firefox nor IE 6.0 get anything from this site.

:-(

G


-Original Message-
From:   Chris [mailto:[EMAIL PROTECTED]
Sent:   Sun 10/15/2006 6:53 PM
To: Gregory S. Williamson
Cc: pgsql-general@postgresql.org
Subject:Re: [GENERAL] postgres' web site malfunctional ?

Gregory S. Williamson wrote:
 When I go to http://www.postgresql.org/ I get a blank page ?!? No bytes ... 
 have I lost my marbles, has my browser (FireFox 1.5.0.7) lost its electrons, or ??
 
 Thanks,

I get stuff now.. maybe you got it in the middle of a reboot or something?

-- 
Postgresql  php tutorials
http://www.designmagick.com/








---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] postgres' web site malfunctional ?

2006-10-15 Thread Gregory S. Williamson
Thanks muchly!  Content is back now, formatting is whack, but I can use it 
again.

G

-Original Message-
From:   Joshua D. Drake [mailto:[EMAIL PROTECTED]
Sent:   Sun 10/15/2006 8:27 PM
To: Bill Hawes
Cc: Gregory S. Williamson; pgsql-general@postgresql.org; PostgreSQL WWW
Subject:Re: [GENERAL] postgres' web site malfunctional ?

Bill Hawes wrote:
 Gregory S. Williamson wrote:
 When I go to http://www.postgresql.org/ I get a blank page 
 ?!? No bytes ... have I lost my marbles, has my browser (FireFox 
 1.5.0.7) lost its electrons, or ??

 
 
 http://www.postgresql.org/download/ works, as does /support, /developer,
 etc. but nothing is formatted properly.
 

The web team has been notified. It is being worked on.

Joshua D. Drake



 Bill Hawes
 
 
 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 


-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate









---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Storing images in PostgreSQL databases (again)

2006-10-06 Thread Gregory S. Williamson
Hate to suggest corporate software, but there is an Informix/Illustra blade 
that could do something like what you're after (I remember a demo of 
sunset/sunrise photos being selected on the basis of color values) ...

But I think they used smart blobs and didn't use them as key values.

G

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Jean-Christophe Roux
Sent:   Thu 10/5/2006 4:54 PM
To: Alexander Staubo
Cc: pgsql-general@postgresql.org
Subject:Re: [GENERAL] Storing images in PostgreSQL databases (again)

Why would I set a bytea column (containing pictures) as a primary key? Because I 
want to be sure that the same image is inserted only once (that requirement 
comes from a real project) and using a primary key for that purpose makes sense 
to me. 
Am I going to retrieve an image row by its image data? I would certainly like! 
For instance, I would like to get the pictures whose main color is green 
(requirement from a real project), and a 
select * from images where main_color(image) = 'green' would be nice.
JCR


- Original Message 
From: Alexander Staubo [EMAIL PROTECTED]
To: Jean-Christophe Roux [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Thursday, October 5, 2006 7:35:04 PM
Subject: Re: [GENERAL] Storing images in PostgreSQL databases (again)

On Oct 6, 2006, at 01:29 , Jean-Christophe Roux wrote:

 By the way, is it practical to set a bytea column (containing  
 pictures) as primary key? That would severely slow down many  
 operations I guess.

Why would you? It's possible, but completely impractical, since image  
data typically exceeds the index page size. Moreover, are you really  
going to retrieve an image row by its image data?

Alexander.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly













---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Storing images in PostgreSQL databases (again)

2006-10-05 Thread Gregory S. Williamson
FWIW, the company I work for stores its terabytes of imagery on disk, using a 
database to track them (spatial coordinates, metadata, location, etc.); I have 
worked on projects in which we stored images in a database (blobs in Informix) 
and it worked fine. Both approaches can have their merits.

Personally, I'd do thumbnails on intake and handle them on their own, either on 
disk or in the db. But I have a preference for preprocessing data so runtime 
response is maximized.

Assuming you don't have access to a blade/suite of functions that allow you to 
use the image in the database as a useful data type (Informix at least used to 
have a blade that did this), you can still use information about the image as 
a primary key, to wit, a sufficiently large hash (MD5 for instance). Of course, 
there's time to create the hash which might be an issue in a high volume 
system. Extending a hash with some other data (date ?) can considerably 
decrease the chance of collisions. It's still a longish key, but workable I 
suspect (untested, we used an artificial key, a serial).
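
A hedged sketch of that idea -- all names invented, and it assumes a version 
whose md5() accepts bytea (8.1-ish; pgcrypto's digest() could stand in 
otherwise):

  CREATE TABLE images (
      image_id serial PRIMARY KEY,  -- the artificial key we actually used
      image    bytea NOT NULL
  );
  -- enforce "same image only once" via the hash rather than the raw bytes:
  CREATE UNIQUE INDEX images_md5_uniq ON images ((md5(image)));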

$0.02 worth ...

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of Jean-Christophe Roux
Sent:   Thu 10/5/2006 4:29 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:Re: [GENERAL] Storing images in PostgreSQL databases (again)

Hi,
If the database had built-in functions to manipulate images (make a thumbnail, 
add text on it.., make a montage of two pictures) and I could write something 
like 
select thumbnail(image_field, 100, 100) from images_table
that would be a good reason to go the db route versus the filesystem route. A 
database does more than store data; it makes it convenient to play with it. 
Once my pictures are stored in the database, how do I make thumbnails for 
instance? Maybe the solution already exists; I am curious here. Is there a way 
to integrate ImageMagick into a PostgreSQL workflow?
By the way, is it practical to set a bytea column (containing pictures) as 
primary key? That would severely slow down many operations I guess. 
JCR


- Original Message 
From: Alexander Staubo [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Cc: DEV [EMAIL PROTECTED]
Sent: Thursday, October 5, 2006 6:30:07 PM
Subject: Re: [GENERAL] Storing images in PostgreSQL databases (again)

On Oct 5, 2006, at 19:47 , DEV wrote:

 I have seen several posts pertaining to the overhead difference  
 in storing
 in a db table versus the file system.  What is this difference?

Well, there's not much space overhead to speak of. I tested with a  
bunch of JPEG files:

$ find files | wc -l
 2724
$ du -hs files
213Mfiles

With an empty database and the following schema:

   create table files (id serial, data bytea);
   alter table files alter column data set storage external;

When loaded into the database:

$ du -hs /opt/local/var/db/postgresql/base/16386
223M/opt/local/var/db/postgresql/base/16386

On my MacIntel with PostgreSQL from DarwinPorts -- a configuration/ 
port where PostgreSQL performance does *not* shine, incidentally --  
PostgreSQL can insert the image data at a pretty stable 2.5MB/s. It's  
still around 30 times slower than the file system at reading the  
data. (I would love to run a benchmark to provide detailed timings,  
but that would tie up my laptop for too long.)

Alexander.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq













---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Cause of ERROR: could not open relation 1663/856689/856777: Invalid argument?

2006-09-30 Thread Gregory S. Williamson
Wyatt --

We got a spate of similar errors recently; it turned out that a disk was not 
mounted properly. Once it was reseated all was well. You might also do a RAM 
check just to make sure that something isn't wonky there.

IIRC, I was told (see the archives of the postgres admin mail list) that this 
is an errant index, so you might try reindexing the table and see if you get 
the errors or if they go away.

HTH,

Greg WIlliamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Wyatt Tellis
Sent:   Sat 9/30/2006 9:08 AM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Cause of ERROR: could not open relation 
1663/856689/856777:  Invalid argument?

Hi,

I'm running 8.1.4 on W2K3 R2. I occasionally get errors of the type:

ERROR: could not open relation 1663/856689/856777: Invalid argument

where the last two numbers change. This only seems to happen during 
inserts into the largest table in the database (500,000 rows). What 
does this error message mean?


Thanks.



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster








---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Database design and triggers...

2006-09-07 Thread Gregory S. Williamson
Roman --

You can certainly use a trigger to track changes (as well as creating an audit 
trail and enforcing rules); performance on inserts, updates and deletes will 
suffer accordingly since there are extra operations involved, but it can 
definitely be a winner on the other end in generating reports and quick totals.

As long as data changes are properly rolled into a transaction I can't think of 
any obvious ways this setup would fail -- the trigger changes would also be 
committed or rolled back, but you do need to pay attention to when your trigger 
fires (before or after).

See for instance http://www.postgresql.org/docs/8.1/interactive/triggers.html 
(section 33 of the 8.1.4 documentation) for examples and a discussion of the 
different types.
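
A rough, untested sketch of the sort of trigger I mean -- all table and column 
names are invented, and it assumes the plpgsql language is enabled:

  CREATE TABLE current_inventory (
      item_id integer PRIMARY KEY,
      qty     numeric NOT NULL DEFAULT 0
  );

  CREATE FUNCTION maintain_inventory() RETURNS trigger AS $$
  BEGIN
      UPDATE current_inventory
         SET qty = qty + NEW.qty_delta
       WHERE item_id = NEW.item_id;
      IF NOT FOUND THEN
          INSERT INTO current_inventory (item_id, qty)
          VALUES (NEW.item_id, NEW.qty_delta);
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER trg_maintain_inventory
      AFTER INSERT ON transactions
      FOR EACH ROW EXECUTE PROCEDURE maintain_inventory();

Because the trigger fires inside the same transaction as the insert, a rollback 
undoes both the transaction row and the count update -- which speaks to your 
consistency worry.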

And from the point of view of PostgreSQL, function and procedure are used 
interchangeably; it's not like some languages in which procedures don't return 
values but functions always do. (Someone more knowledgeable please correct me if 
I am wrong on this!).

HTH,

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of [EMAIL PROTECTED]
Sent:   Wed 9/6/2006 11:05 AM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Database design and triggers...

Hi everybody. Hope I'm posting in the correct group.

My question is part design and part implementation.

Since we are creating an inventory system we need to have the clients
pull up current inventory. Also, we need to have the past transactions
stored for reference and billing. In our previous system in MS Access
we accomplished this by adding up all of the transactions stored in two
tables and generating a temporary table with the latest inventory
count. The problem with this approach is that it is slow because the
temporary table has to be created every time a user needs to see a
report or work on a form. Even when instead of creating a temporary
table we use a query it is still slow. With postgreSQL I found out
about triggers and I figure that instead of calculating the current
inventory count and storing it in a table every time a client needs it
I could have a triggers maintain a table with the current count by
incrementing or decreasing the amounts each time a transaction is
stored in the transaction tables. My worry is that if for some reason a
trigger were to somehow fail to execute correctly there would be an
inconsistency between the transactions table and the current inventory
count table and it would have to be calculated from scratch taking in
to account all of the past transactions in the transactions table.

Are trigger a very safe way to use in the way I describe? Or should I
try using views or stick with the temporary table solution we already
have?

My second part of the question is if there is a tutorial for triggers
and stored procedures and what is the difference between Procedures and
Functions?

Thanks Beforehand!


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly








---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] apparent wraparound

2006-07-14 Thread Gregory S. Williamson
Reece --

The "number of page slots needed exceeds" messages are telling you that the 
current FSM (Free Space Map) does not have enough space allocated to track all 
of the old tuples that are to be reused. I suspect that having such a situation 
would affect the wraparound issue, since you'd have dead wood which hasn't been 
recycled.

You need to edit the postgresql.conf file and increase the max_fsm_pages and 
max_fsm_relations parameters and then restart postgres (I think you have to 
actually stop and restart, as opposed to a reload, but I could be wrong). You 
may end up needing to adjust the total amount of RAM allocated to Shared Memory 
to allow for as large an FSM as you'll need. That requires a system reboot.
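
For example, in postgresql.conf (numbers are illustrative only -- size them from 
the "page slots needed" figure the server reports):

  max_fsm_pages = 200000      # must exceed the reported page slots needed
  max_fsm_relations = 1000    # tables + indexes whose free space is tracked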

HTH,

Greg Williamson
DBA
GlobeXplorer LLC 

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Reece Hart
Sent:   Fri 7/14/2006 12:30 PM
To: pgsql-general
Cc: 
Subject:Re: [GENERAL] apparent wraparound

Tom Lane wrote:
 I'd ask you the same question I asked Thomas: do you continue to get those 
 log messages
 during subsequent checkpoints?

No, I don't.  The error did not reappear during ~2h of continuous
inserts since my report, didn't reappear after a forced checkpoint
(i.e., via psql), and did not reappear on a recent stop/start cycle.

There was a period when my cron-driven vacuuming was broken and, in
principle, I might have been susceptible to wraparound.  However, I
don't see how we could have had 1B transactions in that period.

One other tidbit: a colleague inadvertently updated ~10M records.  After
this, I started getting errors like:
number of page slots needed (2952496) exceeds max_fsm_pages (50)
I restored from a backup, but still have: 
'number of page slots needed (183248) exceeds max_fsm_pages (5)'
(I reduced max_fsm_pages after the restore.)

I'm not sure whether the vacuum and fsm info is relevant.


-Reece

-- 
Reece Hart, http://harts.net/reece/, GPG:0x25EC91A0


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] How to access a table from one database to another database

2006-07-14 Thread Gregory S. Williamson
Possibly dblink, in the ./contrib directory, would help? I have never had to 
use it but it seems like it might be what you need.
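
I haven't used it myself, so treat this as a sketch (connection string and 
column list are illustrative), but once contrib/dblink is installed, usage is 
roughly:

  SELECT *
    FROM dblink('dbname=CAS host=mainserver',
                'SELECT id, name FROM cas.activity')
         AS t(id integer, name text);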

HTH,

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of VivekanandaSwamy R.
Sent:   Fri 7/14/2006 9:56 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] How to access a table from one database to another 
database


 

Hai all,
I have 2 databases named PAO and CAS.
PAO contains 3 schemas named Public, pao, sts
CAS contains 4 schemas named Public, cao, sts, reports


Now i am in the PAO database..now i want to access table 'activity' in schema
'cas' in the CAS database.
How is it possible?


2nd thing is...


i have access to 2 servers, i.e. local and mainserver.


How to access a table from one server to another server?



please tell me...because we need this one



_

Vivekananda.R mailto:[EMAIL PROTECTED]  | Software Engineer
| CGDA Program.

Infinite Computer Solutions India Pvt. Ltd.|Exciting Times ... Infinite
Possibilities...

SEI-CMMI level 5 | ISO 9001:2000

IT SERVICES | BPO |

Telecom | Finance | Healthcare | Manufacturing |

Energy  Utilities | Retail  Distribution | Government

Tel +91-80-4133 -2 Ext:3006 | Mobile: 9986463365
Fax +91-80-513-10853 | www.infics.com http://www.infics.com/ 

USA | United Kingdom | India | China | Singapore | Malaysia |Hong Kong

_







Information transmitted by this e-mail is proprietary to Infinite Computer 
Solutions and / or its Customers and is intended for use only by the individual 
or the entity to which it is addressed, and may contain information that is 
privileged, confidential or exempt from disclosure under applicable law. If you 
are not the intended recipient or it appears that this mail has been forwarded 
to you without proper authority, you are notified that any use or dissemination 
of this information in any manner is strictly prohibited. In such cases, please 
notify us immediately at [EMAIL PROTECTED] and delete this email from your 
records.





---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Help with storing spatial (map coordinates) data?

2006-06-12 Thread Gregory S. Williamson
We have had good success with postGIS for storing various spatial data sets 
(polygons, points and lines).

They can be found at http://postgis.refractions.net/.

We store our data in lat/long but postGIS has many different spatial reference 
systems defined and I would suspect that minutes/seconds exists. You may want 
to subscribe to and post your question on the postGIS mailing list.

There are windows-ready compiled versions which seem to work well, although 
I've only played with them for prototypes (our real database servers are all 
linux so I can't be of any help on the Windoze front).

In general support for this extension of postgres is quite helpful, so I would 
suggest asking on their general list.

HTH,

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of John Tregea
Sent:   Sun 6/11/2006 11:18 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Help with storing spatial (map coordinates) data?

Hi,

I have recently switched to PostgreSQL and had no problem bringing our 
existing (my)SQL databases and data into the environment. I am now 
extending the functionality of our databases and want to start storing 
spatial information.

The information is made up of latitude and longitude coordinates that 
define a point or location on the earth's surface. e.g. degrees, minutes 
and seconds north/south and degrees, minutes and seconds east/west.

I have read up on custom data types (with input and output functions) in 
the docs but am not sure if that is the best way to go. Can anyone point 
me to a simple, workable implementation of storing and managing this 
type of data or advise me on how to structure a series of fields that 
could combine to the required string?

I am running postgreSQL 8.1.4 under WinXP Pro and currently evaluating 
an X-Talk front end called Revolution for the GUI development and have 
only some general experience with SQL.

Thanks in advance

John Tregea

---(end of broadcast)---
TIP 6: explain analyze is your friend






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Performance

2006-04-23 Thread Gregory S. Williamson

Bert --

You don't include some necessary details:

What version of postgres ? (8.x is noticeably better at that OR / IN stuff I 
think, at least to some degree) 

Please include some info about your setup -- what are the settings in the 
postgres config file (effective_cache_size, random_page_cost, work_mem for 
postgres 8.x, or sort_mem in 7.x)

Perhaps some info about the server in question (OS, amount of RAM, # of disks 
and their setup)

Please also post the results of several EXPLAIN ANALYZE sql here; this will 
help people see what decisions the planner is making. Perhaps best to show one 
that is small, an intermediate size and one that is painfully slow (the 
explain analyze has to run the actual query in question so I realize that this 
may be too slow, but some examples will help).
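
One thing you could try while gathering that info: a long chain of ORs on a 
single column can be written as an IN list, which the planner tends to handle 
better (column names are from your query; "group" needs double quotes since it 
is a reserved word):

  SELECT geom, "group", production_facs
    FROM south_america
   WHERE municipio IN ('', 'ACRE', 'ADJUNTAS', 'AGUADA');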

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Bert
Sent:   Sat 4/22/2006 4:18 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Performance

Hi List
I have maybe an easy question but i do not find an answer, i have this
SQL query:

SELECT geom,group,production_facs FROM south_america
   WHERE municipio = ''
   OR municipio = 'ACRE'
   OR municipio = 'ADJUNTAS'
   OR municipio = 'AGUADA'

The performance of this query gets worse the longer it is; it's
possible that this query gets over 20 to 30 OR comparisons, but then
the performance is really poor. Is it possible to speed it up?
Thanks
Clemens


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Truncate and Foreign Key Constraint question

2006-04-12 Thread Gregory S. Williamson
This is in postgres 8.1:
 PostgreSQL 8.1.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.2 
20041017 (Red Hat 3.4.2-6.fc3)


 I've got a table in one schema (work.client_collect_rates) which has an FK 
constraint with a table, content.collections_l (definitions shown below). 
There's about 500 entries currently in my collections_l table.

I need to wipe out the contents of the collections_l table nightly and refresh 
it from a remote master source. (Don't ask ... long & sordid history)

As the sequence below shows, I dropped the FK constraint successfully, but when 
I run TRUNCATE collections_l it says:

ERROR:  cannot truncate a table referenced in a foreign key constraint
DETAIL:  Table client_collect_rates references collections_l via foreign 
key constraint $2.
HINT:  Truncate table client_collect_rates at the same time.

This truncation of the second table is *not* an option, but since the 
constraint $2 is clearly gone, I am wondering what in the name of sweet 
apples is going on ? Is this a bug ? Have I developed premature senility ?

Any clues for the clueless would be gratefully accepted!

TIA,

Greg Williamson
DBA
GlobeXplorer LLC


billing=# \d work.client_collect_rates
  Table work.client_collect_rates
Column |  Type   | Modifiers
---+-+---
 contract_id   | integer | not null
 collection_id | integer | not null
 rate  | numeric |
 break_1   | numeric |
 rate_1| numeric |
 break_2   | numeric |
 rate_2| numeric |
 break_3   | numeric |
 rate_3| numeric |
 break_4   | numeric |
 rate_4| numeric |
Indexes:
clnt_colrate_ndx UNIQUE, btree (contract_id, collection_id)
Foreign-key constraints:
$1 FOREIGN KEY (contract_id) REFERENCES client_contracts(contract_id)
$2 FOREIGN KEY (collection_id) REFERENCES 
content.collections_l(collect_id)

billing=# \d content.collections_l
   Table content.collections_l
Column|  Type  | Modifiers
--++
 collect_id   | integer| not null
 owner| integer|
 collection_name  | character(50)  |
 begin_date   | date   |
 end_date | date   |
 pos_accuracy | integer|
 res_accuracy | integer|
 loc_code | character(30)  |
 color| integer| default 0
 category_id  | integer|
 is_mosaic| integer| not null default 0
 detail_metadata_view | character varying(255) |
 jdbc_url | character varying(255) |
 jdbc_driver  | character varying(255) |
Indexes:
collections_l_pkey PRIMARY KEY, btree (collect_id)
collect_own_ndx btree (owner, collect_id)

billing=# alter table work.client_collect_rates drop constraint "$2";
ALTER TABLE

billing=# \d work.client_collect_rates
  Table work.client_collect_rates
Column |  Type   | Modifiers
---+-+---
 contract_id   | integer | not null
 collection_id | integer | not null
 rate  | numeric |
 break_1   | numeric |
 rate_1| numeric |
 break_2   | numeric |
 rate_2| numeric |
 break_3   | numeric |
 rate_3| numeric |
 break_4   | numeric |
 rate_4| numeric |
Indexes:
clnt_colrate_ndx UNIQUE, btree (contract_id, collection_id)
Foreign-key constraints:
$1 FOREIGN KEY (contract_id) REFERENCES client_contracts(contract_id)

(Note that the $2 FK is gone...)

billing=# truncate content.collections_l;
ERROR:  cannot truncate a table referenced in a foreign key constraint
DETAIL:  Table client_collect_rates references collections_l via foreign 
key constraint $2.
HINT:  Truncate table client_collect_rates at the same time.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Truncate and Foreign Key Constraint question

2006-04-12 Thread Gregory S. Williamson
Doh !  A test schema that was a leftover. Thanks for the sanity check ... as 
usual, pilot error!
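
(For the archives: one quick way to spot stray copies of a table across schemas 
is a catalog query along these lines:

  SELECT n.nspname AS schema_name, c.relname AS table_name
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
   WHERE c.relname = 'client_collect_rates';
)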

g

-Original Message-
From:   Tom Lane [mailto:[EMAIL PROTECTED]
Sent:   Wed 4/12/2006 10:02 PM
To: Gregory S. Williamson
Cc: pgsql-general@postgresql.org
Subject:Re: [GENERAL] Truncate and Foreign Key Constraint question 

Gregory S. Williamson [EMAIL PROTECTED] writes:
 As the sequence below shows, I dropped the FK constraint successfully, but 
 when I run TRUNCATE collections_l it says:
 ERROR:  cannot truncate a table referenced in a foreign key constraint
 DETAIL:  Table client_collect_rates references collections_l via foreign 
 key constraint $2.

Hm, works for me.  Is it possible that you've got multiple
client_collect_rates tables in different schemas, and it's complaining
about some other one?  The error message doesn't show the schema of the
table ...

regards, tom lane






---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Guidelines for upgrading from pgsql7.4.xxx server to pgsql8.xxx server

2006-03-29 Thread Gregory S. Williamson
As others have pointed out, OID-dependent tables may need special attention.

We recently upgraded from 7.4 to 8.1.x and found some issues with encoding -- 
8.1 defaulted to a different encoding and we had some data that was invalid; we 
manually fixed the old data before retrying the export.

Make sure you read the manual's section on the configuration parameters as 
well; there are some changes / improvements.

Some SQL may need adjustment -- there are some stricter checks in 8.1 which 
will fail to run SQL that 7.4 would accept (look for the missing FROM clause 
stuff in the manual, for instance.)

Our upgrade went smoothly (both runtime with postGIS data and the billing side 
of things) and I'd recommend upgrading; 8.1 has some substantial improvements.

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Luki Rustianto
Sent:   Tue 3/28/2006 6:53 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Guidelines for upgrading from pgsql7.4.xxx server to 
pgsql8.xxx server

Hi All,

All of our application now runs on postgresql7.4.xxx servers, I never
tried it on version 8.xxx

I wonder if there are any guidelines / step by step / special
considerations whether it applies to database structure or the
application it self if we want to upgrade to version 8.xxx ?

Thanks.

---(end of broadcast)---
TIP 6: explain analyze is your friend






---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Is there a way to list running queries

2006-01-21 Thread Gregory S. Williamson
Frank Church caused electrons to display this:
 
 Is there a way to list running queries in PostgreSQL, especially those that
 appear to have gone away or don't seem to be running at all?

You should always include relevant info such as which version of postgres you 
are using.

If the stats are enabled (stats_start_collector and stats_command_string in 
8.1) you can see running queries in the pg_stat_activity system view.

You can also set the logging to show all commands (in 8.1 set 
log_min_duration_statement to 0 (and perhaps set log_statement to 'all') and 
the designated log will show all queries.
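
For example (these are the 8.1 column names; older versions differ slightly):

  SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
   ORDER BY query_start;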

Queries which have gone away will prove the most elusive, I fear ... not running 
at all could be a symptom of locks or contention for tables/rows, etc. More 
information might help people give you a better answer.

HTH,

Greg Williamson
DBA
GlobeXplorer LLC




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Detaching database

2005-12-28 Thread Gregory S. Williamson

 
   Gregory S. Williamson [EMAIL PROTECTED] wrote:
  Petr,
  
  As long as the new server is the same operating system, and the versions of 
  postgres are the same, 
 ...

 As a clarification, 'versions are the same' needs to be more strict than the
 version number (e.g. 8.1.1) and should include the build options, as some
 build options (notably --enable-integer-datetimes) change the format used
 for data.

Thanks for the clarification on that -- it hadn't occurred to me and I can see 
how it might lead to issues!

G



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Detaching database

2005-12-27 Thread Gregory S. Williamson
Petr,

As long as the new server is the same operating system, and the versions of 
postgres are the same, you can do a binary copy of the data directory and move 
it to the new machine, point the new server's postgres to the copied data and 
start it up. Indexes, statistics, etc. all are intact and ready to go. On 
the negative side I think you need to idle the source database during the 
initial copy.

We've done this to move databases in the 5-10 gigabyte range on postgres 7.4.

I am not familiar with those other databases' capabilities, so if my humble 
suggestion is not what you want, a bit more information about the problem might 
be of help.

HTH,

Greg Williamson


-Original Message-
From:   [EMAIL PROTECTED] on behalf of Petr
Sent:   Tue 12/27/2005 3:39 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Detaching database
Hi.

Is there any way to detach a complete database and attach it to another Postgres
server (the kind of database movability MSSQL, Interbase etc. have)?
Moving a database via SQL export is a crazy way for me (and for my customers).

Thanks and forgive me for my poor english.

Petr


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] new beginner to postgresql. Looking at it for a church

2005-12-27 Thread Gregory S. Williamson
It sounds as if reliability of the data is of some importance (usually is in 
financial applications), and personally, I wouldn't trust mySQL with such data 
-- the inability to have clean transactions, its willingness to insert data 
that doesn't match what the original input was without throwing an error, and a 
few other gotchas come to mind. 

Of course, postgres may have a slightly higher learning curve -- mySQL in its 
native form can outperform postgres in some applications. But if the data is 
important I'd go with postgres.

Of course, this *is* a postgres list so such a feeling might not be reflective 
of everyone's results.

Greg Williamson
DBA
GlobeXplorer LLC


-Original Message-
From:   [EMAIL PROTECTED] on behalf of Pamela
Sent:   Tue 12/27/2005 9:33 PM
To: pgsql-general@postgresql.org
Cc: 
Subject:FW: Re[2]: [GENERAL] new beginner to postgresql.  Looking at it 
for a church

I would prefer using linux.  I have worked in a limited capacity with both
linux and windows.  The front end I was looking at is Ruby/PHP.

Thanks Igor

SQL-ledger.org, provided by Dann Corbit, has a lot of what I would be looking
for, plus more. The financials would require a budgeted amount, plus budgeted
minus actual, plus many computations and configurations so that
they could look at consolidated statements for the churches and separate
financials. This would have to be data-modeled properly and allow for their
particulars.


-Original Message-
From: go KEY952866 [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 27, 2005 9:25 PM
To: [EMAIL PROTECTED]
Subject: Re[2]: [GENERAL] new beginner to postgresql. Looking at it for a
church

Hi Pamela,

 Tell me please what front-end application are u planning for
 your project? and what server(Linux or Win) are u running for
 postgres?
 (if it is not a top-secret, please)

 Have a nice day!
 Igor.
 

TC Pamela wrote:
 Hello:
 
  
 
 I was wondering if anyone has setup a point-of-sale system with 
 postgresql.  Also, I will have to create a database for a church that 
 requires lots of tables and subcategories.  They have 4 different 
 locations and wish to be interconnected amongst each other, 


TC Postgresql can handle such a task, and should make things a little 
TC easier for you if you use schemas to organize your tables etc.

TC You also have the option to use replication and dblink to do cross 
TC database queries etc.

TC Don't know about the point of sale question though.


TC (You mentioned you where new to Postgres)

TC If you are a windows user be sure to check out PG Lightning Admin, it's 
TC much easier to use than PG Admin III (my opinion, no flames please) and 
TC it priced very inexpensively.

TC AM Software Design
TC http://www.amsoftwaredesign.com
TC Home of PG Lightning Admin for Postgresql


TC ---(end of broadcast)---
TC TIP 4: Have you searched our list archives?

TChttp://archives.postgresql.org



-- 
With respect,
 Igor mailto:[EMAIL PROTECTED]


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings






---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] is this a bug or I am blind?

2005-12-15 Thread Gregory S. Williamson


 If you look my 3rd query, you will see that there are no spaces, however:
 
 select *, length(username), length('potyty') from common_logins where 
username like 'potyty';
   uid   | username | password | lastlogin  | status | 
usertype | loginnum | length | length
 +--+--+++--+--++
  155505 | potyty   | board| 2004-08-16 17:45:55.723829 | A  | 
S|1 |  6 |  6
   60067 | potyty   | board| 2004-07-07 20:22:17.68699  | A  | 
S|3 |  6 |  6
  174041 | potyty   | board| 2005-02-17 00:00:13.706144 | A  | 
S|3 |  6 |  6
 (3 rows)
 
Mage

I am puzzled by the lack of a % in the LIKE query. When I try this on 
postgres 7.4 and 8.0 I get no rows when I am missing it; including it works as 
expected.

The names have been changed to protect the guilty ;-} but the core of it is 
true -- no % means weirdness, I think.

gex_runtime=# select gex_clientname from gex_clients where gex_clientname like 
'Home';
 gex_clientname

(0 rows)

gex_runtime=# select gex_clientname from gex_clients where gex_clientname like 
'Home%';
gex_clientname
--
 HomeHappinesses
 HomeMorgageValues, Inc.
(2 rows)

Could you try your query again with the wild card ?

HTH

Greg Williamson
DBA
GlobeXplorer LLC



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] is this a bug or I am blind?

2005-12-15 Thread Gregory S. Williamson
 
 Gregory S. Williamson wrote:
  
  I am puzzled by the lack of a % in the LIKE query. When I try this on
  postgres 7.4 and 8.0 I get no rows when I am missing it; including it
  works as expected.
 
  The names have been changed to protect the guilty ;-} but the core of it
  is true -- no % means wierdnesses, I think.
 
  gex_runtime=# select gex_clientname from gex_clients where gex_clientname 
  like 'Home';
   gex_clientname
  
  (0 rows)
  
  gex_runtime=# select gex_clientname from gex_clients where gex_clientname 
  like 'Home%';
  gex_clientname
  --
   HomeHappinesses
   HomeMorgageValues, Inc.
  (2 rows)
  
  Could you try your query again with the wild card ?
 
 But like without any wildcards should be the same as =, but it isn't 
 in the original post.
  

Well then I have the same behavior -- the database is UNICODE:

gex_runtime=# select gex_clientname from gex_clients where gex_clientname = 
'HomeHappinesses';
gex_clientname
--
 HomeHappinesses
(1 row)

gex_runtime=# select gex_clientname from gex_clients where gex_clientname like 
'HomeHappinesses';
 gex_clientname

(0 rows)

An = is not equivalent to LIKE with no wildcard.

I never really thought of this as a bug, but if it is ... ring one up for Mage 
as a good catch.




---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] outdated (bad) information in pg_stat_activity

2005-12-15 Thread Gregory S. Williamson
Jim Nasby wrote:

  I too have had issues with stuck entries in the pg_stat_activity view -- 
   the real pid is long gone but postgres still thinks the process is there.
  
  It would be real sweet to have a way of cleaning this table without bringing
  postgres off-line.
 
 Rather than trying to clean things up by hand, could the stats system
 instead periodically check to make sure that all the PIDs it knows about
 actually still exist? I think that should be a pretty cheap check to
 perform...

This would certainly work for me, even as a function that a DBA might call 
manually.
...
 On a side note, is GlobeXplorer using PostgreSQL? Would they be willing
 to let us publicize that fact? Better yet, would they be willing to do a
 case study?

We are indeed using postgres (and postGIS) for both runtime data access and 
billing and other misc. data processing requirements. 

We'd be delighted to be publicized as happy users -- the transition from 
Informix was fairly smooth and performance is solid. 

Feel free to contact me off-list if you like for more substantive comments. I 
am gsw @ globexplorer.com

Greg W.


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] outdated (bad) information in pg_stat_activity

2005-12-14 Thread Gregory S. Williamson
I too have had issues with stuck entries in the pg_stat_activity view -- the 
real pid is long gone but postgres still thinks the process is there.

It would be real sweet to have a way of cleaning this table without bringing 
postgres off-line.

My $0.02 worth ...

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Chris Browne
Sent:   Wed 12/14/2005 11:06 AM
To: pgsql-general@postgresql.org
Cc: 
Subject:Re: [GENERAL] outdated (bad) information in pg_stat_activity
[EMAIL PROTECTED] (hubert depesz lubaczewski) writes:
 On 12/14/05, Tom Lane [EMAIL PROTECTED] wrote:


   hubert depesz lubaczewski [EMAIL PROTECTED] writes:
   my pg_stat_activity is raporting process working on a query for 12 
 hours
   straight! (query_start).
   2 problems:
   1. this particular query runs typically between 80 and 200 miliseconds!
   2. procpid of this backend does not exists anymore in system
  It's possible that the backend exited message got dropped by the stats
  mechanism --- the stats code is deliberately designed to lose messages
  rather than delay the rest of the system, so it's not 100% reliable.
  


 is it possible to by hand remove bad entries from this stats?
 depesz

The only answer I have been able to come to is that restarting the
postmaster will clear this all up.

If there is some less intrusive way of accomplishing this, I'd be keen
on hearing about it...
-- 
cbbrowne,@,cbbrowne.com
http://cbbrowne.com/info/nonrdbms.html
Funny, the  only thing that makes me  go Keanu about Microsoft is the
fact  that they are constantly  behind the times  and yet  claim to be
innovating. -- Steve Lamb [EMAIL PROTECTED]

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org







---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [pgsql-advocacy] [GENERAL] Oracle buys Innobase

2005-10-11 Thread Gregory S. Williamson
Jim C. Nasby was quoted as saying:
  Jim C. Nasby wrote:
   Of course one flip-side to all this is that if Oracle does attack us it
   actually lends credibility; it means they see PostgreSQL as a threat. At
   this point that could do more good for us than harm, depending on how
   exactly the attacked.
  
  Well, that was MySQL's reaction to it, but I think the harm far
  outweighs the good for them.  Its more like, Oracle finds MySQL a
  threat, what is MySQL going to do now!  We don't want that kind of
  outcome.  Also, there are ways of attacking that do not show Oracle as
  an agreesor, like hiring PostgreSQL developers.
 
 Well, they effectively took a big chunk of MySQL's commercial technology
 away, something the'd have a harder time doing with PostgreSQL (unless
 we're violating patents).
-- 

Doesn't really matter if the legal issues are ultimately in one's favor, if 
one's erstwhile opponent has enough lawyer time ... even if you can survive the 
lengthy battle, it may well be a pyrrhic victory.

Not having specific assets to be tied up helps, but Oracle could then generate 
enough FUD it would hamper the use (and spread) of PostgreSQL. Oracle would 
probably not do so directly but through some 3rd party (or parties).

OTOH, has PostgreSQL cost Oracle enough, or does it threaten to cost enough, to 
make such a venture worthwhile? In the short run it would generate a lot of 
unsympathetic press and some support for the project.

My $0.03 worth ... and now back to work.

Greg Williamson


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] PostgreSQL Gotchas --- count()

2005-10-07 Thread Gregory S. Williamson


 Gregory S. Williamson wrote:
 [ re COUNT(*) ]
 On Informix however it is blindingly fast, and can also be instantly
 conjured with the dbaccess tool (Info/Table/Status). They might be
 stashing this count somewhere, but it is not available when the table
 is locked, as during a load. However they do it, performance does not
 seem to suffer, and having this rapidly available is certainly nice.
 Especially when people are used to it.

 Informix locks rows during modification so they don't have the MVCC
 visibility problem we have (some rows are visible to only some
 backends).

More to the point: "performance does not seem to suffer" is an opinion
based on no facts.  You have no idea what it's costing Informix to
maintain that count --- ie, how much faster might other things go if
COUNT(*) didn't have to be instant?

Excellent point. But since my standard was "is Postgres as fast as Informix on 
bulk loads / deletes / updates" (which is where this delay would surely manifest 
itself), I had a faint disappointment on first using Postgres and seeing this: 
"The load, etc. speeds are close enough, so why this wild disparity in count?" I 
thought to myself.

I understand well why this is -- been hashed out a lot on various lists -- and 
I am not specifically arguing for changing Postgres. Just emphasizing that this 
needs to be spelled out well in any Gotchas discussion.

We know quite well what it would cost to make this happen in Postgres,
and it's the general judgment that we don't want to pay those costs ---
certainly not to force everyone to pay them.

An option (compile time ?) that let users make that tradeoff *might* be of 
interest to some. But it's not worth distracting core people from more pressing 
issues.

My $0.02 worth ... sorry to waste bandwidth.

G




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL Gotchas --- count()

2005-10-06 Thread Gregory S. Williamson
On Informix however it is blindingly fast, and can also be instantly conjured 
with the dbaccess tool (Info/Table/Status). They might be stashing this count 
somewhere, but it is not available when the table is locked, as during a load. 
However they do it, performance does not seem to suffer, and having this 
rapidly available is certainly nice. Especially when people are used to it.

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Jeffrey Melloy
Sent:   Thu 10/6/2005 3:47 PM
To: Neil Conway
Cc: Aly S.P Dharshi; pgsql-general@postgresql.org
Subject:Re: [GENERAL] PostgreSQL Gotchas
Neil Conway wrote:


COUNT(*) very slow: this is a known issue -- see the -hackers archives
for many prior discussions. MVCC makes this hard to solve effectively
(whether applications should actually be using COUNT(*) on large tables
with no WHERE clause is another matter...)

-Neil
  

And it's not like a count(*) on an Oracle database of any decently-sized 
dataset is blazing fast, or even in blazing's ballpark.

The only thing I could see actually being an issue is the random() one 
and add missing from.  The rest are trivial.  The random() thing is 
interesting, esoteric, and probably has never been a problem in a real 
situation.  (Or has exactly once, when he wrote that gotcha)

Jeff

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings






---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Quotation marks in queries

2005-07-14 Thread Gregory S. Williamson

This is only really necessary if the table or column names are mixed case or 
include spaces or some such ... normally (?) this is not required. PostgreSQL 
relentlessly lower-cases such names unless they are double quoted.
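
A quick illustration:

  CREATE TABLE contacts (name text);      -- identifier folds to lower case
  SELECT Name FROM contacts;              -- works: Name folds to name
  CREATE TABLE contacts2 ("Name" text);   -- mixed case preserved
  SELECT "Name" FROM contacts2;           -- now the quotes are mandatory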

HTH,

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   [EMAIL PROTECTED] on behalf of Martynas Brijunas
Sent:   Thu 7/14/2005 12:31 AM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Quotation marks in queries
Hello,

I am a total newbie to PostgreSql, coming from MS Access background. I have 
a question regarding queries in PostgreSql: why do I need to enclose every 
field name and table name in quotation marks like

SELECT Name From contacts

That is a major inconvenience when composing a query string in a VB program. 
Thank you.



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly






---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Privileges question

2005-02-14 Thread Gregory S. Williamson

James --

I am not sure if anyone answered this yet, but you might try:
GRANT USAGE ON SCHEMA sma_apps TO sma_user;
as the same user that created the schema ...
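Or, since the table privileges were granted to the group, perhaps more 
consistently:
GRANT USAGE ON SCHEMA sma_apps TO GROUP sma_user_group;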

HTH,

Greg Williamson
DBA
GlobeXplorer LLC
-Original Message-
From:   James Moe [mailto:[EMAIL PROTECTED]
Sent:   Mon 2/14/2005 10:59 AM
To: pgsql-general@postgresql.org
Cc: 
Subject:[GENERAL] Privileges question
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hello,
~  I created a schema, database, tables, users and a user group.

create user sma_user password 'x';
create group sma_user_group;
alter  group sma_user_group add user sma_user;

create table sma_apps.ordr_dat ( ... );
grant select,delete,insert,update on sma_apps.ordr_dat to group
sma_user_group;

~  Access privileges for database smadb1
~  Schema  |   Name   | Type  |  Access privileges
- 
--+--+---+-
~ sma_apps | ordr_dat | table | {sma_admin=arwdRxt/sma_admin,group
sma_user_group=arwd/sma_admin}


~  When logged in as sma_user, it is denied access to the schema:
ERROR:  permission denied for schema sma_apps

What am I missing? How do I provide access to a schema?

- --
jimoe at sohnen-moe dot com
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.0 (OS/2)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFCEPUUzTcr8Prq0ZMRAvU/AJ90kwdpxNYGgHdzBA7A2uPFo4FcNwCdFeGz
Q7Dte87qrtET8yoI7uZ9VD0=
=0sLa
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq






---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] test

2004-12-10 Thread Gregory S. Williamson
The list is either down or very quiet ?

G

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] migrating from informix

2004-12-10 Thread Gregory S. Williamson
Jaime --

Sorry that I didn't respond earlier -- been quite busy.

We have migrated our runtime aspect (which is heavy use of the Spatial 
Blade/postGIS in a mostly read-only environment); we were using Informix 9.3 
dynamic server.

I have some notes I wrote up that I'll post tonight -- they are on a machine I 
don't have access to right now. Most of the tables and supporting SQL (perl 
scripts mostly) transferred cleanly; I didn't try to automate conversion of 
stored procedures or triggers (not a lot of these in our runtime).

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From: Jaime Casanova [mailto:[EMAIL PROTECTED]
Sent: Monday, December 06, 2004 9:32 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] migrating from informix


Hi,

someone has successfully migrated a database from
informix to postgresql?

there are any tools that helps or maybe an script?

regards,
Jaime Casanova

_
Do You Yahoo!?
Información de Estados Unidos y América Latina, en Yahoo! Noticias.
Visítanos en http://noticias.espanol.yahoo.com

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] why use SCHEMA? any real-world examples?

2004-11-25 Thread Gregory S. Williamson

As other posters have indicated, there's a convenience factor and an advantage 
to compartmentalizing data. In our case we don't care so much about user rights 
(a very useful aspect in and of itself) as about performance.

We have some applications that use a lot of detailed data about properties in 
different counties. We have a central table that stores their spatial 
attributes and some data about the properties themselves. The table has several 
million rows currently -- selections based on a bounding box are very fast, but 
if we try to get a list of all properties on all streets with names like 'Elm%' 
in a given county, the select is painfully slow because the index (county / street 
in this simplified case) lacks specificity -- any given county yields, say, a 
half million candidate rows, with hundreds of possible street entries, so 
sequential scans are used.

Hence, I broke out some of the property data that needed to be searched by 
county, with each county in its own schema, and each schema has the same tables 
(so the schema called "f10675" has a "name_search" table with the same name as 
the one in the "f01223" schema, but its own contents).

The search tables all refer to the original data by a unique identifier that is 
common between the schema/search tables and the main store. The search in these 
schema based tables is much faster because the specificity of the index is much 
greater, yielding only dozens or hundreds of candidates out of hundreds of 
thousands of rows. 

The extra space taken by redundant data storage is more than compensated for by 
speed in retrieval.
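
A minimal sketch of that layout (column names beyond the shared identifier
are hypothetical):

CREATE SCHEMA f12057;
CREATE TABLE f12057.name_search (
    orig_id  INTEGER,   -- id shared with the central store
    street   TEXT
);
CREATE INDEX name_search_street_idx ON f12057.name_search (street);

-- each county gets the same table name in its own schema:
CREATE SCHEMA f01223;
CREATE TABLE f01223.name_search (
    orig_id  INTEGER,
    street   TEXT
);

-- a per-county search touches only that county's (much smaller) index:
SELECT orig_id FROM f12057.name_search WHERE street LIKE 'ELM%';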

HTH clarify possibilities,

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   Miles Keaton [mailto:[EMAIL PROTECTED]
Sent:   Wed 11/24/2004 9:12 PM
To: [EMAIL PROTECTED]
Cc: 
Subject:[GENERAL] why use SCHEMA? any real-world examples?
I just noticed PostgreSQL's schemas for my first time.
(http://www.postgresql.org/docs/current/static/ddl-schemas.html) 

I Googled around, but couldn't find any articles describing WHY or
WHEN to use schemas in database design.

Since the manual says HOW, could anyone here who has used schemas take
a minute to describe to a newbie like me why you did?   What benefits
did they offer you?   Any drawbacks?

Thanks for your time.

- Miles

---(end of broadcast)---
TIP 8: explain analyze is your friend






Re: [GENERAL] null value of type java.sql.Time

2004-11-22 Thread Gregory S. Williamson
Try:
  SELECT * FROM event WHERE game_clock IS NULL;
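
The underlying rule is SQL's three-valued logic: NULL compared with
anything, even another NULL, yields NULL rather than TRUE, so a
game_clock = null test filters out every row. A quick illustration:

SELECT NULL = NULL;    -- yields NULL, not TRUE
SELECT NULL IS NULL;   -- yields TRUE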

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From: phil campaigne [mailto:[EMAIL PROTECTED]
Sent: Monday, November 22, 2004 2:33 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] null value of type java.sql.Time


Occasionally I want to store a null value in my java.sql.Time (TIME) 
column in PostgreSQL:
update event set game_clock=null where event_id=1;

 I can retrieve the record with the null value (type Time) if I select 
on the primary key,
select game_clock from event where event_id = 1;

but when I try to select on the null column value, I get zero records.
 select * from event where game_clock=null;

How can I retrieve records with null values for a column?
thanks,
Phil



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings



[GENERAL] Mysterious Death of postmaster (-9)

2004-11-13 Thread Gregory S. Williamson
Dear peoples,

We had an oddness today with one of our postgres servers (a Dell 2-CPU box running 
Linux) and postgres 7.4. The server was under heavy load (50+ for a 1-minute 
spike; about 20 for the 15-minute average) with about 250 connections (we still 
don't understand the heavy load itself).

Looking in the logs I see:
2004-11-13 13:30:28 LOG:  unexpected EOF on client connection
2004-11-13 13:30:40 LOG:  unexpected EOF on client connection
2004-11-13 13:38:28 LOG:  could not send data to client: Broken pipe
2004-11-13 13:42:15 LOG:  server process (PID 30272) was terminated by signal 9
2004-11-13 13:42:16 LOG:  terminating any other active server processes
2004-11-13 13:42:16 WARNING:  terminating connection because of crash of another
 server process

The EOFs are almost certainly Proxool closing connections from the client to 
the database.

The sysad who was on call today swears he didn't send a kill signal (or any 
signal at all) -- suddenly the load dropped off and the server was down. It has 
restarted normally and shows no signs of being worse for the wear (this is 
really a read-only db so data corruption chances are minimal, I think).

Just to rule out any internal chances, is there any way this shutdown could 
have been triggered from within postgres itself ? Can anyone construct any 
scenarios in which Linux, postgres or proxool could have done this without 
human intervention ?

I have looked through manuals and some FAQs and newsgroup discussions and my 
gut feeling is that this can't be from postgres, but I thought I'd ask in the 
chance that I am, as is often the case, Unclear On The Concept.

Thanks for any illumination,

Greg Williamson
DBA
GlobeXplorer LLC 

ps if this is not the right list please let know what might be an appropriate 
one. gracias!

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Mysterious Death of postmaster (-9)

2004-11-13 Thread Gregory S. Williamson

Thanks Alvaro and Steven -- this may in fact be what happened, as the monitoring 
showed that memory was definitely taxed and behaving oddly at about that time.

I'll read up on this -- thanks very much for the (promising) clue!

Greg W.


-Original Message-
From:   Alvaro Herrera [mailto:[EMAIL PROTECTED]
Sent:   Sat 11/13/2004 3:06 PM
To: Gregory S. Williamson
Cc: [EMAIL PROTECTED]
Subject:Re: [GENERAL] Mysterious Death of postmaster (-9)
On Sat, Nov 13, 2004 at 02:39:38PM -0800, Gregory S. Williamson wrote:

Gregory,

 We had an oddness today with one of of postgres servers (Dell 2 CPU box
 running linux) and postgres 7.4. The server was under heavy load (50+ for a 1
 minutes spike; about 20 for the 15 minute average) with about 250 connections
 (we still don't understand the heavy load itself).
 
 Looking in the logs I see:
 2004-11-13 13:30:28 LOG:  unexpected EOF on client connection
 2004-11-13 13:30:40 LOG:  unexpected EOF on client connection
 2004-11-13 13:38:28 LOG:  could not send data to client: Broken pipe
 2004-11-13 13:42:15 LOG:  server process (PID 30272) was terminated by signal 
 9

This looks an awful lot like the Linux Out-Of-Memory killer got you.
This happens when the Linux kernel overcommits memory.  There is something
about this in the documentation, and it has been discussed here in the
past.  Please see the archives (www.pgsql.ru; look for "OOM killer" and
"linux overcommit").

Luckily it didn't get your postmaster, as has happened to other
people ...

-- 
Alvaro Herrera ([EMAIL PROTECTED])
"XML!" exclaimed C++. "What are you doing here? You're not a programming
language."
"Tell that to the people who use me," said XML.




---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] Concatenating text with null values

2004-11-05 Thread Gregory S. Williamson
This is postgres 7.4 on a linux box ...

I have driven myself to distraction trying to do what ought to be easy.

I have a table with house number, street direction, street name and street suffix as 4 
columns. I want to paste them together as one text string for use by another 
application.

SELECT s_house,s_post_dir,s_street,s_suffix FROM parcels WHERE s_pin = '1201703303520';
 s_house | s_post_dir |    s_street    | s_suffix
---------+------------+----------------+----------
 34643   |            | FIG TREE WOODS |

So to get 34643 FIG TREE WOODS what do I do ?

SELECT s_house || ' ' || s_post_dir || ' ' || s_street || ' ' || s_suffix FROM parcels 
WHERE s_pin = '1201703303520';
 ?column?
----------
 
(1 row)

I have tried all manner of COALESCE and various trickeries. Nothing works. In Informix 
this works exactly as I think it should. Is Informix totally whack, or what ? 

I guess I will have to code a perl script to do this seemingly straightforward 
operation. Any suggestions as to what I am missing (and I've been back and forth 
through the manual) would be most welcome.

Thanks,

Greg Williamson
DBA (hah!)
GlobeXplorer LLC


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Concatenating text with null values

2004-11-05 Thread Gregory S. Williamson

Thanks to you and Richard for pointing me in the right direction (I had the wrong 
syntax on the coalesce function) ... too late at night here (too early in the 
morning?) and I much appreciate the help.

The mysteries of NULL ...

Greg W.

-Original Message-
From:   Oliver Elphick [mailto:[EMAIL PROTECTED]
Sent:   Fri 11/5/2004 2:15 AM
To: Gregory S. Williamson
Cc: [EMAIL PROTECTED]
Subject:Re: [GENERAL] Concatenating text with null values
On Fri, 2004-11-05 at 09:25, Gregory S. Williamson wrote:
 This is postgres 7.4 on a linux box ...
 
 I have driven myself to distraction trying to what ought to be easy.
 
 I have a table with house number, street direction, street name and
 street suffix as 4 columns. I want to paste them together as one text
 string for use by another application.
 
 SELECT s_house,s_post_dir,s_street,s_suffix FROM parcels WHERE s_pin =
 '1201703303520';
  s_house | s_post_dir |    s_street    | s_suffix
 ---------+------------+----------------+----------
  34643   |            | FIG TREE WOODS |
 
 So to get 34643 FIG TREE WOODS what do I do ?
 
 SELECT s_house || ' ' || s_post_dir || ' ' || s_street || ' ' ||
 s_suffix FROM parcels WHERE s_pin = '1201703303520';
  ?column?
 ----------
 
 (1 row)
 
 I have tried all manner of COALESCE and various trickeries. Nothing
 works. In Informix this works exactly as I think it should. Is
 Informix totally whack, or what ? 
 
 I guess I will have to code a perl script to do this seemingly
 straightforward operation. Any suggestions as to what i am missing
 (and I've been back and forth through the manual) would be most
 welcome.

I presume the empty columns are NULL.  Anything concatenated with NULL
produces NULL.  You should probably define columns that can be blank as
NOT NULL DEFAULT ''
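
A sketch of that change for one of the columns (repeat per column; assumes
nothing in the application relies on NULL being distinct from ''):

ALTER TABLE parcels ALTER COLUMN s_post_dir SET DEFAULT '';
UPDATE parcels SET s_post_dir = '' WHERE s_post_dir IS NULL;
ALTER TABLE parcels ALTER COLUMN s_post_dir SET NOT NULL;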

With your current data you should use:

SELECT TRIM(TRIM(TRIM(COALESCE(s_house,'') || ' ' ||
COALESCE(s_post_dir,'')) || ' ' || COALESCE(s_street ,'')) || ' ' ||
COALESCE(s_suffix,'')) FROM parcels WHERE s_pin = '1201703303520';

The TRIMs are to remove surplus spaces from inside the result string.






---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] Concatenating text with null values

2004-11-05 Thread Gregory S. Williamson

Sweet. I learn something every day. thanks for ideas, one and all!
G 
-Original Message-
From:   Alvaro Herrera [mailto:[EMAIL PROTECTED]
Sent:   Fri 11/5/2004 8:49 AM
To: Csaba Nagy
Cc: [EMAIL PROTECTED]; Gregory S. Williamson; Postgres general mailing list
Subject:Re: [GENERAL] Concatenating text with null values
On Fri, Nov 05, 2004 at 12:51:11PM +0100, Csaba Nagy wrote:
 [snip]
  SELECT TRIM(TRIM(TRIM(COALESCE(s_house,'') || ' ' ||
  COALESCE(s_post_dir,'')) || ' ' || COALESCE(s_street ,'')) || ' ' ||
  COALESCE(s_suffix,'')) FROM parcels WHERE s_pin = '1201703303520';
  
  The TRIMs are to remove surplus spaces from inside the result string.
 Avoiding the inner trims:
 
 SELECT TRIM(
   COALESCE(s_house || ' ','') 
   || COALESCE(s_post_dir || ' ','') 
   || COALESCE(s_street || ' ','')
   || COALESCE(s_suffix,'')
 ) FROM parcels WHERE s_pin = '1201703303520';
 
 Looks a bit more understandable :-)

But it's still too cumbersome.  How about creating a new operator?  With
the example below the query would simply be

SELECT s_house ||~ s_post_dir ||~ s_street ||~ s_suffix
  FROM parcels
 WHERE s_pin = '1201703303520';

alvherre=# CREATE FUNCTION text_concat_nulls_with_an_embedded_space(text, text) 
RETURNS text AS 'SELECT CASE WHEN $1 IS NULL THEN $2 WHEN $2 IS NULL THEN $1 ELSE $1 
|| '' '' || $2 END' LANGUAGE sql;
CREATE FUNCTION
alvherre=# SELECT text_concat_nulls_with_an_embedded_space('foo', NULL);
 text_concat_nulls_with_an_embedded_space 
-------------------------------------------
 foo
(1 fila)

alvherre=# SELECT text_concat_nulls_with_an_embedded_space(NULL, 'foo');
 text_concat_nulls_with_an_embedded_space 
-------------------------------------------
 foo
(1 fila)

alvherre=# SELECT text_concat_nulls_with_an_embedded_space('bar', 'foo');
 text_concat_nulls_with_an_embedded_space 
-------------------------------------------
 bar foo
(1 fila)
alvherre=# CREATE OPERATOR ||~ (PROCEDURE = text_concat_nulls_with_an_embedded_space, 
LEFTARG = text, RIGHTARG = text);
CREATE OPERATOR
alvherre=# SELECT 'hi' ||~ 'foo' ||~ null ||~ null ||~ 'bar' ||~ 'baz' ||~ null ||~ 
'bye'; 
      ?column?      
--------------------
 hi foo bar baz bye
(1 fila)


-- 
Alvaro Herrera (alvherre[a]dcc.uchile.cl)
"The first of April is the day we remember what we are
the other 364 days of the year"  (Mark Twain)





---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] primary key and existing unique fields

2004-10-27 Thread Gregory S. Williamson
-Original Message-
From:   Robby Russell [mailto:[EMAIL PROTECTED]
Sent:   Tue 10/26/2004 9:08 PM
To: Kevin Barnard
Cc: [EMAIL PROTECTED]
Subject:Re: [GENERAL] primary key and existing unique fields
On Tue, 2004-10-26 at 22:03 -0500, Kevin Barnard wrote:
 On Tue, 26 Oct 2004 17:05:27 -0700, Robby Russell [EMAIL PROTECTED] wrote:
  On Tue, 2004-10-26 at 17:26 -0400, Mike Mascari wrote:
   joking
  
   Apparently gamma functions and string theory have little to do with
   understanding the relational model of data.
  
   /joking
  
  m.. string theory. :-)
  
  
 Ya you know the theory that states that the Database is really made up
 of a large amount of strings.  Some are even null terminated strings,
 although most strings really have a quanta that can be found immediate
 before the string. :-)

How do we SELECT the string so that we can observe it then? ;-)



-- 
/***
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON  | www.planetargon.com
* Portland, OR  | [EMAIL PROTECTED]
* 503.351.4730  | blog.planetargon.com
* PHP/PostgreSQL Hosting  Development
/

You can't observe it ... only *infer* it.



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Setting search paths inside a function (plpgsql)

2004-09-29 Thread Gregory S. Williamson
Dear peoples,

I've got a problem which seemed to be neatly solved by the use of schemas, and in fact 
it mostly works, but I have tried to go one step too far, perhaps.

Rather than have the application do

SET search_path TO f12057;
SELECT * FROM parcel_owners WHERE ... ;
SET search_path TO public;

I thought I'd have a single function in the public schema which they call:

select * from fips_name_srch('12057','white');

and in the function I do:
 env_str := ''SET search_path TO f'' || p_fips || '',public'';
 EXECUTE env_str;
and then my search and a LOOP to return values with a final SET command to put us back 
to the public schema ...

In fact it works, once, and returns the expected values, but subsequent calls get 
exactly the same data ... 

gex_vector=# select * from fips_name_srch('12057','white');
NOTICE:  doing name search for fips 12057
NOTICE:  did exec of SET search_path TO f12057,public
 parcel_gid | parcel_fips | parcel_zip |       parcel_ownname
------------+-------------+------------+----------------------------
   11449960 | 12057       | 33548      | DELANOE WHITE
   11437500 | 12057       | 33548      | WHITE DORREN
   11444394 | 12057       | 33548      | WHITE FERD T AND LACY A JR
...

select * from fips_name_srch('12031','white');
NOTICE:  doing name search for fips 12031
NOTICE:  did exec of SET search_path TO f12031,public
 parcel_gid | parcel_fips | parcel_zip |       parcel_ownname
------------+-------------+------------+----------------------------
   11449960 | 12057       | 33548      | DELANOE WHITE
   11437500 | 12057       | 33548      | WHITE DORREN
   11444394 | 12057       | 33548      | WHITE FERD T AND LACY A JR
...

If I exit and run the second one it works:
gex_vector=# select * from fips_name_srch('12031','white');
NOTICE:  doing name search for fips 12031
NOTICE:  did exec of SET search_path TO f12031,public
 parcel_gid | parcel_fips | parcel_zip |      parcel_ownname
------------+-------------+------------+--------------------------
    8830922 | 12031       | 32202      | CARLA WHITE MISSION
    8830925 | 12031       | 32202      | CARLA WHITE MISSION
    8855011 | 12031       | 32202      | CARLA WHITE MISSION
    8824016 | 12031       | 32202      | CARLA WHITE MISSION INC
...
I have tried variations with VOLATILE explicitly defined and some unsuccessful 
gyrations. I am sure the answer is obvious but I am not seeing it. This is postgres 
7.4, the function is below.

Any suggestions or advice would be welcome ... (RTFM acceptable but a page reference 
would be helpful)

thanks,

Greg Williamson
DBA
GlobeXplorer LLC

CREATE TYPE fips_name_results_t AS (parcel_gid INTEGER,
  parcel_fips VARCHAR(10),parcel_zip VARCHAR(10),parcel_ownname TEXT);

BEGIN;
CREATE OR REPLACE FUNCTION fips_name_srch(VARCHAR,VARCHAR)
 RETURNS setof fips_name_results_t AS '
DECLARE p_fips ALIAS FOR $1;
p_srchstr ALIAS FOR $2;
parcel_gid INTEGER;
parcel_zip VARCHAR(10);
parcel_ownname TEXT;
env_str TEXT;
retrec fips_name_results_t%rowtype;
BEGIN
RAISE NOTICE ''doing name search for fips %'',p_fips;
env_str := ''SET search_path TO f'' || p_fips || '',public'';
EXECUTE env_str;
RAISE NOTICE ''did exec of %'',env_str;
FOR retrec IN
SELECT o.gid,o.s_fips_cou,o.s_zip,o.s_ownername
FROM parcel_owners o, parcel_owner_fti f
WHERE f.string = p_srchstr AND f.id = o.orig_id ORDER BY 2,3,4
LOOP
RETURN NEXT retrec;
--SET search_path TO public;
END LOOP;
RETURN;
SET search_path TO public;
END;
' LANGUAGE 'plpgsql' VOLATILE;


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Setting search paths inside a function (plpgsql)

2004-09-29 Thread Gregory S. Williamson
Doh !  Thanks for the insight.

Greg
-Original Message-
From:   Tom Lane [mailto:[EMAIL PROTECTED]
Sent:   Wed 9/29/2004 8:40 AM
To: Gregory S. Williamson
Cc: [EMAIL PROTECTED]
Subject:Re: [GENERAL] Setting search paths inside a function (plpgsql) 
Gregory S. Williamson [EMAIL PROTECTED] writes:
 RAISE NOTICE ''doing name search for fips %'',p_fips;
 env_str := ''SET search_path TO f'' || p_fips || '',public'';
 EXECUTE env_str;
 RAISE NOTICE ''did exec of %'',env_str;
 FOR retrec IN
 SELECT o.gid,o.s_fips_cou,o.s_zip,o.s_ownername
 FROM parcel_owners o, parcel_owner_fti f
 WHERE f.string = p_srchstr AND f.id = o.orig_id ORDER BY 2,3,4
 LOOP

You'd have to use FOR-IN-EXECUTE to make this work the way you are
expecting.  As is, the plan for the SELECT is generated and cached
the first time through, and in the process the table references are
bound to specific tables in specific schemas.
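
A sketch of the loop rewritten that way (shown unescaped, outside the
function's surrounding quotes; quote_literal() guards the parameter):

FOR retrec IN EXECUTE
    'SELECT o.gid, o.s_fips_cou, o.s_zip, o.s_ownername
       FROM parcel_owners o, parcel_owner_fti f
      WHERE f.string = ' || quote_literal(p_srchstr) || '
        AND f.id = o.orig_id ORDER BY 2,3,4'
LOOP
    RETURN NEXT retrec;
END LOOP;

Because EXECUTE re-plans the query text on every call, the table references
are resolved under whatever search_path the preceding EXECUTE installed.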

regards, tom lane




---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] test message -- wondering if the list is down -- please delete

2004-09-03 Thread Gregory S. Williamson
Sorry to waste bandwidth but I've never seen a day with no messages.
G

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Killing a session

2004-08-21 Thread Gregory S. Williamson

kill -15 pid#

will kill a given thread ... you can look in pg_stat_activity; query_start 
shows when the query started, current_query shows at least a portion of the SQL the 
thread is executing, and procpid is the pid of the process to kill with a -15.
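
For example (stats_command_string must be on for current_query to be
populated):

SELECT procpid, query_start, current_query
  FROM pg_stat_activity
 ORDER BY query_start;

-- then, from the shell:  kill -15 <procpid>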

Not sure about Windows variants, though.

HTH

Greg Williamson
DBA
GlobeXplorer LLC

-Original Message-
From:   Vitaly Belman [mailto:[EMAIL PROTECTED]
Sent:   Sat 8/21/2004 1:32 PM
To: [EMAIL PROTECTED]
Cc: 
Subject:[GENERAL] Killing a session
Is there a way to kill a session in PostgreSQL? I ran a bad query by
mistake and I don't want to shut down the whole database just to stop it
using the whole CPU.

-- 
 ICQ: 1912453
 AIM: VitalyB1984
 MSN: [EMAIL PROTECTED]
 Yahoo!: VitalyBe

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])




---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Trying to minimize the impact of checkpoints (resend)

2004-06-11 Thread Gregory S. Williamson
There is something wonky on this mail list. I did not send this.


-Original Message-
From:   Gregory S. Williamson
Sent:   Fri 6/11/2004 2:10 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject:Re: [GENERAL] Trying to minimize the impact of checkpoints (resend)
In-reply-to: [EMAIL PROTECTED]
Date: Fri, 11 Jun 2004 16:42:19 -0400
From: Tom Lane [EMAIL PROTECTED]

[EMAIL PROTECTED] writes:
 I'm using PostgreSQL 7.3.4 on RH9. Data and logs are on separate
 disks. (These are low-end IDE disks. That part of the problem
 is out of my control.)

 When a checkpoint occurs, all operations slow way, way down.

Not too surprising; you haven't got enough I/O bandwidth.

 Does anyone have any experience in modifying the priority of the
 checkpoint process itself, (re-nicing it)?

That would be a waste of time, because your problem is with I/O usage
not CPU usage, and nice doesn't impact I/O scheduling AFAIK.

You might be able to get somewhere by inserting intrapage delays into
the checkpoint write loop, similar to what's been done to VACUUM since
7.4.  (I have a todo item to do this for CVS tip, in fact.)  You'd not
want this to happen during a shutdown checkpoint, but for ordinary
checkpoints I don't believe there's any problem with spacing out the
writes.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]







Re: [GENERAL] Idle in Transaction and hung connections

2004-04-30 Thread Gregory S. Williamson

Sounds like an excellent suggestion ... we'll get a copy of this release.

Do you know of any incompatibilities with postgres 7.4 ?

 We can upgrade a server to 7.5 and the JDBC, but running some servers on 7.4/7.4 JDBC 
might be, uhm, difficult for me to sell to operations (and we have only seen this problem 
in runtime). We might also be able to look at certain portions of the CVS code and see 
what changed and make backward patches ? (forwarding a suggestion from our engineering 
people)

Thanks,

Greg W.

-Original Message-
From:   Kris Jurka [mailto:[EMAIL PROTECTED]
Sent:   Thu 4/29/2004 7:27 PM
To: Gregory S. Williamson
Cc: Tom Lane; [EMAIL PROTECTED]
Subject:Re: [GENERAL] Idle in Transaction and hung connections 


On Thu, 29 Apr 2004, Gregory S. Williamson wrote:

 Tom --
 
 Thanks for the suggestion, and the rapid response on something which may
 not be truely a postgres issue (perhaps more a JDBC thing)!
 

This behavior is fixed in the 7.5 cvs version of the JDBC driver if you'd
like to try it out.

Kris Jurka





---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Idle in Transaction and hung connections

2004-04-29 Thread Gregory S. Williamson
Dear peoples,

Periodically we are getting runaway postgres processes on our Linux servers (kernel 
2.4.21-0.13 on Dell hardware), using 7.4 and postGIS (0.8 USE_GEOS=1 USE_PROJ=1 USE_STATS=1).

All of the queries come in from remote servers using JDBC/proxool; once every 4 hours 
we have a process on the client side that cleans out old connections.

All the processes are doing is single queries -- no inserts or updates.

Very occasionally we will see a thread go wild, taking up a huge amount of processor 
time (the load will climb by 1 for each process -- usual load is around .2; when 
these hit, the load rises to 1.x, all the way up to a load of about 40 once). The 
pg_stat_activity view shows these connections as being old -- much older than any live 
thread. All such connections are in a state of IDLE IN TRANSACTION, which seems odd 
as these are all queries and presumably each query is a complete transaction. My 
tentative theory is that something is killing the client while the server side still 
thinks it has data to send, or some such variant. The client machines don't have a 
corresponding connection to the one on the postgres server.

Killing the runaways with a -15 seems to bring the load back down and all is well, 
until it happens again.

Does anyone have any ideas what might be triggering this ? It is mostly an annoyance 
but on a couple of occasions seems to have brought down a server, or at least rendered 
it non-functional.

Thanks for any advice !

Greg Williamson
DBA
GlobeXplorer LLC

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] FW: Postgres 7.4 runing on BSD

2003-12-05 Thread Gregory S. Williamson
If anyone has any advice on building Postgres 7.4 for a freeBSD box it sure would be 
welcome.

TIA,

Greg Williamson
DBA
GlobeXplorer LLC

--

well, it takes gmake to build Postgres..

having trouble installing 'gmake' on the BSD box; it started to do its BSD autoinstall 
thing and built a whole crapload of stuff when I just wanted gmake, and is now 
crashing looking for a library...

wc-eng-01bsd# cd gmake
wc-eng-01bsd# make
===>  Extracting for gmake-3.80_1
>> make-3.80.tar.bz2 doesn't seem to exist in /usr/ports/distfiles/.
>> Attempting to fetch from ftp://ftp.gnu.org/gnu/make/.
fetch: ftp://ftp.gnu.org/gnu/make/make-3.80.tar.bz2: No route to host
>> Attempting to fetch from http://mirrors.usc.edu/pub/gnu/make/.
fetch: http://mirrors.usc.edu/pub/gnu/make/make-3.80.tar.bz2: Operation timed out
>> Attempting to fetch from ftp://gatekeeper.dec.com/pub/GNU/make/.
Receiving make-3.80.tar.bz2 (920645 bytes): 100%
430149 bytes transferred in 6.4 seconds (65.90 kBps)
>> Checksum OK for make-3.80.tar.bz2.
===>  Patching for gmake-3.80_1
===>  Applying FreeBSD patches for gmake-3.80_1
/bin/rm /usr/home/postgres/postgres/gmake/work/make-3.80/doc/make.info*
===>   gmake-3.80_1 depends on shared library: intl.5 - not found
===>    Verifying install for intl.5 in /usr/ports/devel/gettext
>> gettext-0.11.5.tar.gz doesn't seem to exist in /usr/ports/distfiles/.
>> Attempting to fetch from ftp://ftp.gnu.org/gnu/gettext/.
Receiving gettext-0.11.5.tar.gz (3724099 bytes): 100% (ETA 00:00)
3724099 bytes transferred in 53.4 seconds (68.12 kBps)
===>  Extracting for gettext-0.11.5_1
>> Checksum OK for gettext-0.11.5.tar.gz.
===>  Patching for gettext-0.11.5_1
===>  Applying FreeBSD patches for gettext-0.11.5_1


(much crap later)

 install  -o root -g wheel -m 444 uintmax_t.m4 /usr/local/share/aclocal/uintmax_t.m4
 install  -o root -g wheel -m 444 ulonglong.m4 /usr/local/share/aclocal/ulonglong.m4
Making install in tests
/bin/sh ./mkinstalldirs /usr/local/share/gettext
 install  -o root -g wheel -m 444 ABOUT-NLS /usr/local/share/gettext/ABOUT-NLS
/bin/sh ./mkinstalldirs /usr/local/share/gettext
 install  -o root -g wheel -m 555 config.rpath /usr/local/share/gettext/config.rpath
 install  -o root -g wheel -m 555 mkinstalldirs /usr/local/share/gettext/mkinstalldirs
===>   Generating temporary packing list
/bin/mkdir -p /usr/local/share/emacs/site-lisp
install  -o root -g wheel -m 444 
/usr/ports/devel/gettext/work/gettext-0.11.5/misc/po-compat.el 
/usr/local/share/emacs/site-lisp
install  -o root -g wheel -m 444 
/usr/ports/devel/gettext/work/gettext-0.11.5/misc/po-mode.el 
/usr/local/share/emacs/site-lisp
/bin/mkdir -p /usr/X11R6/share/locale
===>   Compressing manual pages for gettext-0.11.5_1
===>   Running ldconfig
/sbin/ldconfig -m /usr/local/lib
===>   Registering installation for gettext-0.11.5_1
===>   Returning to build of gmake-3.80_1
Error: shared library intl.5 does not exist
*** Error code 1


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] PostgreSQL versus MySQL

2003-09-19 Thread Gregory S. Williamson

Oracle = Saturn IV. ?!???

Perhaps they claim to be. More like a shuttle with pretensions. Oracle was utterly 
unable to support our web site. And then they wanted a truly preposterous sum for 
their wretched software.

Informix, on the other hand, has performed like, well, like a Saturn [which, by the 
way, the US could not build again ... apparently they lost the plans]. But it also 
costs a fair bit o' pocket change.

Now, maybe if we take a couple of Titan IIs and stack them on top of each other ...

Greg Williamson
DBA GlobeXplorer LLC

-Original Message-
From:   Tom Lane [mailto:[EMAIL PROTECTED]
Sent:   Thu 9/18/2003 10:30 PM
To: scott.marlowe
Cc: Steve Crawford; Scott Holmes; PgSQL General ML
Subject:Re: [GENERAL] PostgreSQL versus MySQL 

scott.marlowe [EMAIL PROTECTED] writes:
 ... Being honest and fair will win 
 hearts and minds, and when they need the Saturn 4 instead of the Estes
 rocket, they'll remember who to come to.

I like this analogy, though maybe you've overstretched.  Perhaps:

MySQL = Estes.  Put in InnoDB, and you have a D engine ... but it's
still a model rocket.

Postgres = Titan II.  Can boost LEO missions or small interplanetary
probes.  Never mind its ICBM heritage ;-)

Oracle = Saturn IV.  Can take you to the moon ... if you can afford
the price tag.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly






Re: [GENERAL] setting last_value of sequence

2003-09-03 Thread Gregory S. Williamson
Perhaps:
SELECT SETVAL('resrc_serial', MAX(resource_id)) FROM ia_resources;
--     ^the sequence          ^the column            ^the table
This sets the sequence to the highest number after I have used copy to load a table; 
other values instead of MAX() could be used (e.g. 123456, etc.).
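
Equivalently, with a subquery, plus a quick check that the next value lands
past the loaded rows:

SELECT SETVAL('resrc_serial', (SELECT MAX(resource_id) FROM ia_resources));
SELECT NEXTVAL('resrc_serial');   -- returns max(resource_id) + 1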

HTH,

Greg Williamson

-Original Message-
From: John Harrold [mailto:[EMAIL PROTECTED]
Sent: Tuesday, September 02, 2003 2:01 PM
To: pgsql general list
Subject: [GENERAL] setting last_value of sequence


I've run into a situation where I need to set the last_value of a
sequence. Can someone tell me how this is done?

-- 
--
   | /\
 john harrold  | \ / ASCII ribbon campaign
  jmh at member.fsf.org|  X  against HTML mail
   the most useful idiot   | / \
--
 What difference does it make to the dead, the orphans, and the homeless,
 whether the mad destruction is brought under the name of totalitarianism or
 the holy name of liberty and democracy?
 --Gandhi
--
gpg --keyserver keys.indymedia.org --recv-key F65A739E
--

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] is linux ready for databases ? (Ziff Davis article on Oracle and Linux)

2003-08-26 Thread Gregory S. Williamson
Perhaps a subject line would help ...

-Original Message-
From: Gregory S. Williamson 
Sent: Monday, August 25, 2003 2:28 PM
To: '[EMAIL PROTECTED]'
Subject: 



One of our sysads sent this link ... wondering if there is any comment on it from the 
world of actual users of linux and a database.

http://story.news.yahoo.com/news?tmpl=story&cid=1738&ncid=738&e=9&u=/zd/20030825/tc_zd/55311

Greg Williamson
DBA GlobeXplorer LLC

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match




[GENERAL] High volume use of postGres

2003-08-14 Thread Gregory S. Williamson

I've been charged with investigating postGIS as a replacement for our current Informix 
runtime (9.3), which uses the Spatial blade. Currently we use two Sun Sparc/Ultra 80 
boxes with 3 CPUs dedicated to Informix.

We use our database to determine which image files we have within a given area 
(ST_Union, Intersect and the like).
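
For context, the queries are of roughly this shape (table and column names
hypothetical, and treat the function spellings as illustrative of that
era's PostGIS rather than exact):

SELECT image_id
  FROM image_footprints
 WHERE footprint && GeometryFromText(
           'POLYGON((0 0,0 10,10 10,10 0,0 0))', 4326)
   AND intersects(footprint, GeometryFromText(
           'POLYGON((0 0,0 10,10 10,10 0,0 0))', 4326));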

Originally this company tried Oracle, but, as the Firesign Theater put it, He's no 
fun, he fell right over! ... Informix works well but charges a hefty amount for each 
runtime CPU.

Our databases are not particularly large -- only about 500,000 rows in the larger 
ones, but we're running more than a million hits a day, not evenly spread, natch, so at 
peak we need a lot of capacity.

Are there documents on performance of postGres on various platforms that I might be 
able to get some ideas from ? Such issues as multiple CPU vs single CPU, Operating 
System -- we're leaning towards Linux of some flavor but also have some Sun servers 
(not as hefty as our current runtime database servers) -- would be of interest.

Any suggestions from users that have/are really beating up postGIS/postgres would be 
welcome ! (for instance, cleaning up a busy database, supporting multiple servers, 
etc.)

Thanks,

Greg Williamson
DBA
GlobeXplorer LLC 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] One table in two different databases

2003-08-09 Thread Gregory S. Williamson
Perhaps:
dblink -
Allows remote query execution
by Joe Conway [EMAIL PROTECTED]

This is under the contrib directory in your source distribution. I haven't done much 
with it, but it seems to allow access to tables not in the current database.
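
Roughly like this (names hypothetical; dblink has to be installed into the
database from contrib first):

-- run a query in another database on the same server and treat the
-- result as a local rowset:
SELECT *
  FROM dblink('dbname=other_db',
              'SELECT id, name FROM some_table')
       AS t(id INTEGER, name TEXT);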

HTH,

Greg Williamson

-Original Message-
From: Marcelo Soares [mailto:[EMAIL PROTECTED]
Sent: Friday, August 08, 2003 6:17 PM
To: [EMAIL PROTECTED]
Subject: [GENERAL] One table in two different databases


Hi all,

I would like to know if it's possible to have the SAME TABLE in two
different databases (but on the same server), or to create a view in
database Y of a table in database X.

I tried to find this in the Postgres docs, with no success.

Thanks to all,

Marcelo Soares
Informática - Master Hotéis
(51)3212-5055
ICQ Externo: 19398317
ICQ Interno: 1002

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match



Re: [GENERAL] [ADMIN] Call to undefined function: pg_connect()

2003-08-08 Thread Gregory S. Williamson
I don't have a clue which is which, but I bet providing some dates (ls -l) to help 
figure out which is the most recent vs. the oldest would provide useful 
information to people who know more about compiling this beast.

Greg Williamson


-Original Message-
From:   shreedhar [mailto:[EMAIL PROTECTED]
Sent:   Thu 8/7/2003 12:13 AM
To: Joe Conway
Cc: Postgre Admin; Postgre General
Subject:Re: [GENERAL] [ADMIN] Call to undefined function: pg_connect()

/usr/lib/libpq.so.2
/usr/lib/libpq.so.2.0
/usr/lib/libpq.so.2.2
/usr/lib/libpq.so

I have the above versions of libpq.so on my system. Which one do I have to use?
If I have to change any config files, where should I make the change?

Thanks a lot,
With best regards,
Sreedhar
- Original Message -
From: Joe Conway [EMAIL PROTECTED]
To: shreedhar [EMAIL PROTECTED]
Cc: Postgre Admin [EMAIL PROTECTED]; Postgre General
[EMAIL PROTECTED]
Sent: Thursday, August 07, 2003 10:24 AM
Subject: Re: [GENERAL] [ADMIN] Call to undefined function: pg_connect()


 shreedhar wrote:
  Hi All,
 
 I updated my PostgreSQL server from 7.2.4, which came along with RH 7.3, to
 7.3.2. After updating I was not able to connect to the database through PHP,
 so I recompiled PHP 4.1.2 (the same version I had earlier on my system); even
 then I am not able to connect through PHP. Does any other package have to
 be installed for this connection?

 If I remember correctly, this is because libpq.so changed from
 libpq.so.2 to libpq.so.3. Is there an old copy of libpq.so.2 hanging
 around that is getting linked with instead of libpq.so.3? Did you re-run
 configure and do a `make clean` on PHP before rebuilding it? If you are
 really desperate, try creating a libpq.so.2 symlink from libpq.so.3.

 HTH,

 Joe




 ---(end of broadcast)---
 TIP 7: don't forget to increase your free space map settings







---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]