Re: [GENERAL] tsearch2: more than one index per table?

2005-11-22 Thread Oleg Bartunov

On Wed, 23 Nov 2005, Teodor Sigaev wrote:


ERROR:  could not find tsearch config by locale
UPDATE t SET idxA=to_tsvector('default', a);



Does
select to_tsvector('foo bar');
work?

I suppose not. In that case tsearch can't find a configuration for the current
database locale; update pg_ts_cfg.locale in the desired row to the correct value.


Actually, it's described in the tsearch2 introduction.
Make sure pg_ts_cfg.locale matches the server's locale (see 'show all').
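
For example, a minimal sketch of that check and fix (the locale value shown
here is only an assumption; use whatever the server actually reports):

    SHOW lc_ctype;
    SELECT ts_name, locale FROM pg_ts_cfg;
    UPDATE pg_ts_cfg SET locale = 'en_US.UTF-8' WHERE ts_name = 'default';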



Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



[GENERAL] Strugging with NEW and OLD records.

2005-11-22 Thread Script Head
I am a newbie to the stored proc. game and this is eating my brain.

> CREATE TABLE name(first VARCHAR(32) NULL,last VARCHAR(32) NULL, extra VARCHAR(32) NULL );

> CREATE OR REPLACE FUNCTION update_name() RETURNS opaque AS '
    DECLARE
    BEGIN
    
        NEW.extra:=NEW.first;
        RETURN NEW;

    END;
' LANGUAGE 'plpgsql';

> CREATE TRIGGER update_name_extra BEFORE INSERT 
ON name
EXECUTE PROCEDURE update_name();

> INSERT INTO name(first,last) VALUES('script','head');
ERROR:  record "new" is not assigned yet
DETAIL:  The tuple structure of a not-yet-assigned record is indeterminate.
CONTEXT:  PL/pgSQL function "update_name" line 4 at assignment

What am I doing wrong?

ScriptHead


Re: [GENERAL] tsearch2: more than one index per table?

2005-11-22 Thread Teodor Sigaev

ERROR:  could not find tsearch config by locale
UPDATE t SET idxA=to_tsvector('default', a);



Does
select to_tsvector('foo bar');
work?

I suppose not. In that case tsearch can't find a configuration for the current
database locale; update pg_ts_cfg.locale in the desired row to the correct value.



--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/



Re: [GENERAL] Index Administration: pg_index vs. pg_get_indexdef()

2005-11-22 Thread Thomas F. O'Connell


On Nov 22, 2005, at 10:56 PM, Tom Lane wrote:


"Thomas F. O'Connell" <[EMAIL PROTECTED]> writes:

In an old thread, Tom Lane suggested that it would be "unreasonable" to
use pg_index to reconstruct (expressional) indexes (in 7.4). The
suggested alternative was to use pg_get_indexdef().


IIRC, the point I was trying to make was that making client code try to
interpret the contents of pg_index.indexprs or pg_index.indpred is a
losing proposition.  If you feel that you'd rather read the other fields
of pg_index for yourself, I won't argue with you.


Yeah, I took a look at pg_index.indexprs and have already Perled up a  
parser for the pg_get_indexdef() output... :)


Out of curiosity (without much knowledge of how pg_get_indexdef()  
generates its output), would it be difficult to allow the view to  
have a more useful format? What is the intention of providing an  
expression tree? How could that be used?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)



Re: [GENERAL] Index Administration: pg_index vs. pg_get_indexdef()

2005-11-22 Thread Tom Lane
"Thomas F. O'Connell" <[EMAIL PROTECTED]> writes:
> In an old thread, Tom Lane suggested that it would be "unreasonable" to
> use pg_index to reconstruct (expressional) indexes (in 7.4). The  
> suggested alternative was to use pg_get_indexdef().

IIRC, the point I was trying to make was that making client code try to
interpret the contents of pg_index.indexprs or pg_index.indpred is a
losing proposition.  If you feel that you'd rather read the other fields
of pg_index for yourself, I won't argue with you.

regards, tom lane



[GENERAL] tsearch2: more than one index per table?

2005-11-22 Thread Rick Schumeyer

Is there something in tsearch2 that prevents more than one index per table?

I would like an index on field A, and a separate index on field B.

The index builds fine for A, but gives an error for B. The error text is:

ERROR:  could not find tsearch config by locale

The code below is taken almost verbatim from the tsearch2 documentation.

Any help is appreciated!

\i /home/rick/ftp/postgresql-8.1.0/contrib/tsearch2/tsearch2.sql

CREATE TABLE t (a varchar(20), b varchar(20));

INSERT INTO t (a,b) VALUES ('hello world','quick brown fox');

--
-- A
--

ALTER TABLE t ADD COLUMN idxA tsvector;

UPDATE t SET idxA=to_tsvector('default', a);

VACUUM FULL ANALYZE;

CREATE INDEX idxA_idx ON t USING gist(idxA);

VACUUM FULL ANALYZE;

CREATE TRIGGER ts_A_Update BEFORE UPDATE OR INSERT ON t
FOR EACH ROW EXECUTE PROCEDURE tsearch2(idxA, a);

--
-- B
--

ALTER TABLE t ADD COLUMN idxB tsvector;

--
-- The next line gives: ERROR:  could not find tsearch config by locale
--

UPDATE t SET idxB=to_tsvector('default', b);

VACUUM FULL ANALYZE;

CREATE INDEX idxB_idx ON t USING gist(idxB);

VACUUM FULL ANALYZE;

CREATE TRIGGER ts_B_Update BEFORE UPDATE OR INSERT ON t
FOR EACH ROW EXECUTE PROCEDURE tsearch2(idxB, b);


Re: [GENERAL] Why pgAdmin III guru suggests VACUUM in 8.1

2005-11-22 Thread Tom Lane
"Matthew T. O'Connor"  writes:
>> LOG:  autovacuum: processing database "foo"

> Also this creates a lot of noise in the log files.  I think it would be 
> better to downgrade this message to a NOTICE or even a DEBUG, and 
> replace it with a LOG level message that states when action has taken 
> place against the table.

I agree that the "processing database" message isn't too exciting, but
it seems that forcing per-table messages up to LOG level would create
even more log clutter.  I could support "processing table" at level
DEBUG1 and "processing database" at DEBUG2.  Or maybe we should think
harder about the idea recently mentioned of letting the autovacuum
process have its own log-level setting, separate from ordinary backends'
setting.

regards, tom lane



Re: [GENERAL] Set Returning Function (Pipelining)

2005-11-22 Thread Joe Conway

tschak wrote:

I have a question on set returning functions. In one of the TechDocs on
the postgres website it says:
"Currently SRF returning PL/pgSQL functions must generate the entire
set before the result is returned"

It also says that this might be changed in future releases (later than
7.3).
My question is now whether the result is returned linewise in a newer
version


I'm afraid not. But are you sure you need pipelining?
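
For reference, a minimal PL/pgSQL SRF sketch (a made-up function): each
RETURN NEXT buffers a row, and the complete set is materialized before the
caller sees the first one.

    CREATE OR REPLACE FUNCTION count_to(n integer) RETURNS SETOF integer AS '
    DECLARE
        i integer;
    BEGIN
        FOR i IN 1..n LOOP
            RETURN NEXT i;  -- appended to the result set, not streamed
        END LOOP;
        RETURN;
    END;
    ' LANGUAGE plpgsql;

    SELECT * FROM count_to(5);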

Joe



[GENERAL] Set Returning Function (Pipelining)

2005-11-22 Thread tschak
Hi everyone,

I have a question on set returning functions. In one of the TechDocs on
the postgres website it says:
"Currently SRF returning PL/pgSQL functions must generate the entire
set before the result is returned"

It also says that this might be changed in future releases (later than
7.3).
My question is now whether the result is returned linewise in a newer
version and, if yes, whether a program using the
S(erver) P(rogramming) I(nterface) can utilize this functionality in
order to get a better distribution of the workload?

Thx for your help,

tschak




[GENERAL] Slow pgdump

2005-11-22 Thread Patrick Hatcher

OS - RH3
Pg - 7.4.9
Ram - 8G
Disk-709G  Raid 0+1

We are having a pgdump issue that we can't seem to find an answer for

Background:
Production server contains 11 databases of which 1 database comprises 85%
of the 194G used on the drive.  This one large db contains 12 schemas.
Within the schemas of the large db, there may be 1 or 2 views that span
across 2 schemas.

If we do a backup using pg_dump against the entire database, it takes
upwards of 8 hours for the backup to complete.

If we split the backup up to do a pg_dump for the first 10 dbs and then do a
pg_dump by schema on the 1 large db, the backup takes only 3.5 hrs.

Other than using the schema switch, there is no compression happening
on either dump.

Any ideas why this might be happening or where we can check for issues?

TIA
Patrick Hatcher
Development Manager  Analytics/MIO
Macys.com




[GENERAL] Index Administration: pg_index vs. pg_get_indexdef()

2005-11-22 Thread Thomas F. O'Connell
In an old thread, Tom Lane suggested that it would be "unreasonable" to
use pg_index to reconstruct (expressional) indexes (in 7.4). The  
suggested alternative was to use pg_get_indexdef().


I administer a postgres 8.0.x database with thousands of inherited
tables, each with what is supposed to be a consistently maintained
set of indexes. As the application programmers have changed the DDL
specification over time as it affects the creation of new tables,
however, some indexes have gotten out of sync or duplicated (in terms
of the specified columns).


For the purposes of developing an index administration toolkit whose  
intent is to discover and remove duplicate indexes and to add missing  
indexes to pre-existing tables, it seems like it's easier to write an  
index verification process based on the contents of pg_index (e.g.,  
in order to compare the columns referenced by indexes on a table to  
determine whether any of them seem to be duplicative) than to have to  
parse the output of pg_get_indexdef(). Am I off base in this thinking?
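
For what it's worth, a rough sketch of the duplicate check under that
approach, assuming "duplicate" means two non-expressional, non-partial
indexes on the same table with an identical indkey column list:

    SELECT a.indexrelid::regclass AS index_a,
           b.indexrelid::regclass AS index_b
    FROM pg_index a
    JOIN pg_index b ON a.indrelid = b.indrelid
                   AND a.indkey = b.indkey
                   AND a.indexrelid < b.indexrelid
    WHERE a.indexprs IS NULL AND a.indpred IS NULL
      AND b.indexprs IS NULL AND b.indpred IS NULL;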


P.S. Regardless of the wisdom of using pg_index for such purposes,  
the OP in the old thread raised what I think is a good question: why  
are techniques for accessing int2vector nowhere documented if the  
type itself makes its way into very user-visible documentation and  
catalogs/views?


--
Thomas F. O'Connell
Database Architecture and Programming
Co-Founder
Sitening, LLC

http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-469-5150
615-469-5151 (fax)




Re: [GENERAL] Why pgAdmin III guru suggests VACUUM in 8.1

2005-11-22 Thread Matthew T. O'Connor

Andrus wrote:

Jim,
  

Keep in mind that if analyze has never been run on a table the database
will assume 1000 rows, which is definitely off from 122 rows.



autovacuum processes this table regularly.
I believed that autovacuum can update the row count to be real.


I think this is a poor choice of wording.  Autovacuum frequently creates 
log entries that read something like: 


LOG:  autovacuum: processing database "foo"

This implies that autovacuum has done something to that database, which 
may or may not be the case.  All this message really means is that 
autovacuum took a look at this database to see if there were any VACUUM 
or ANALYZE operations that should be performed.  I think saying 
"processing" implies that some action is being taken.


Also this creates a lot of noise in the log files.  I think it would be 
better to downgrade this message to a NOTICE or even a DEBUG, and 
replace it with a LOG level message that states when action has taken 
place against the table.


Matt





Re: [GENERAL] Why pgAdmin III guru suggests VACUUM in 8.1

2005-11-22 Thread Jim C. Nasby
On Tue, Nov 22, 2005 at 09:33:34PM +0200, Andrus wrote:
> Jim,
> 
> > Upsizes? Are you adding more data? If so then yes, analyze would be
> > good, though autovacuum should handle it for you.
> 
> I create a new Postgres database, upsize a lot of data into it. After that

FWIW, people generally refer to that as 'loading data'; I've never heard
of 'upsizing' before, which is why I was somewhat confused.

> this database goes online and will receive a lot of transactions daily.
> I'm using PG 8.1 default postgres.conf file.

Well, by default autovacuum is turned off. Aside from that, there are other
things you'll want to tune. Take a look at the annotated postgresql.conf
that's in techdocs.

> I read that autovacuum may skip some tables, which can cause wrong guru
> hints.
> 
> So it seems that I should manually run VACUUM ANALYZE on the whole database
> once for initial statistics collection on those tables, shouldn't I?

No real need to vacuum, just running analyze on the entire database
would suffice.
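
For instance, once after the load (ANALYZE with no argument covers every
table in the current database):

    ANALYZE;
    -- or a single table, with progress output:
    ANALYZE VERBOSE firma1.algsa;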
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software   http://pervasive.com   work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] Why pgAdmin III guru suggests VACUUM in 8.1

2005-11-22 Thread Andrus
Jim,

> Upsizes? Are you adding more data? If so then yes, analyze would be
> good, though autovacuum should handle it for you.

I create a new Postgres database, upsize a lot of data into it. After that
this database goes online and will receive a lot of transactions daily.
I'm using PG 8.1 default postgres.conf file.

I read that autovacuum may skip some tables, which can cause wrong guru
hints.

So it seems that I should manually run VACUUM ANALYZE on the whole database
once for initial statistics collection on those tables, shouldn't I?

Andrus.





Re: [GENERAL] Why pgAdmin III guru suggests VACUUM in 8.1

2005-11-22 Thread Jim C. Nasby
On Tue, Nov 22, 2005 at 09:01:25PM +0200, Andrus wrote:
> Jim,
> 
> > Keep in mind that if analyze has never been run on a table the database
> will assume 1000 rows, which is definitely off from 122 rows.
> 
> autovacuum processes this table regularly.
> I believed that autovacuum can update the row count to be real.

It can, but without looking at logs of what autovacuum has actually done
there's no way to know if it actually has analyzed that table or not.

> > You might want to ask on the pgAdmin list. Though I'd recommend against
> > calling the guru 'stupid' over there. :)
> 
> I'm creating an application which upsizes existing data to a new postgres 8.1
> database.
> 
> Will I need to run VACUUM ANALYZE after upsizing?

Upsizes? Are you adding more data? If so then yes, analyze would be
good, though autovacuum should handle it for you.

> Must I enable statistics collection for this database?

Well, autovacuum depends on it; see
http://www.postgresql.org/docs/8.1/interactive/maintenance.html#AUTOVACUUM
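
In postgresql.conf terms that means something like the following for 8.1 (a
sketch from memory; verify the names against your own file):

    # autovacuum is off by default in 8.1 and needs row-level statistics:
    stats_start_collector = on
    stats_row_level = on
    autovacuum = on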
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software   http://pervasive.com   work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] Why pgAdmin III guru suggests VACUUM in 8.1

2005-11-22 Thread Andrus
Jim,

> Keep in mind that if analyze has never been run on a table the database
> will assume 1000 rows, which is definitely off from 122 rows.

autovacuum processes this table regularly.
I believed that autovacuum can update the row count to be real.

> You might want to ask on the pgAdmin list. Though I'd recommend against
> calling the guru 'stupid' over there. :)

I'm creating an application which upsizes existing data to a new postgres 8.1
database.

Will I need to run VACUUM ANALYZE after upsizing?

Must I enable statistics collection for this database?

Andrus.






Re: [GENERAL] Why pgAdmin III guru suggests VACUUM in 8.1

2005-11-22 Thread Jim C. Nasby
Keep in mind that if analyze has never been run on a table the database
will assume 1000 rows, which is definitely off from 122 rows.

You might want to ask on the pgAdmin list. Though I'd recommend against
calling the guru 'stupid' over there. :)

On Sun, Nov 20, 2005 at 09:13:36PM +0200, Andrus Moor wrote:
> I'm using Postgres 8.1 on Windows XP.
> 
> Sometimes when using pgAdmin the following Guru hint appears suddenly:
> 
> "Running VACUUM recommended
> The estimated rowcount on the table "firma1.algsa" deviates significantly 
> from the actual rowcount. You should run VACUUM ANALYZE on this table. 
> Instead of issuing a manual VACUUM ANALYZE command on this table (you can 
> use the pgAdmin III maintenance menu for this), running VACUUM ANALYZE on a 
> regular or automated basis should be considered. This can be achieved using 
> a scheduler. PostgreSQL also supplies the pg_autovacuum daemon, which will 
> track changes made to the database and issue vacuum commands as required 
> automatically. In most cases, pg_autovacuum will be the best choice. "
> 
> I have noticed in the Postgres log that autovacuum processes my cluster
> regularly.
> So to my knowledge, this hint is wrong.
> Please confirm that guru is stupid.
> 
> Also I followed this hint and got the results:
> 
> INFO:  vacuuming "firma1.algsa"
> INFO:  index "algsa_pkey" now contains 122 row versions in 2 pages
> DETAIL:  0 index pages have been deleted, 0 are currently reusable.
> CPU 0.00s/0.00u sec elapsed 0.06 sec.
> INFO:  "algsa": found 0 removable, 122 nonremovable row versions in 4 pages
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 0 unused item pointers.
> 0 pages are entirely empty.
> CPU 0.00s/0.00u sec elapsed 0.06 sec.
> INFO:  analyzing "firma1.algsa"
> INFO:  "algsa": scanned 4 of 4 pages, containing 122 live rows and 0 dead 
> rows; 122 rows in sample, 122 estimated total rows
> 
> So it seems that vacuum didn't do anything.
> 
> Andrus. 
> 
> 
> 

-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software   http://pervasive.com   work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] Best way to represent values.

2005-11-22 Thread Dennis Veatch
On Tuesday 22 November 2005 11:40, Lincoln Yeoh wrote:
> At 01:19 PM 11/21/2005 -0500, Dennis Veatch wrote:
> >I had thought just adding some fields called topsoil_start/topsoil_end,
> >gravel_start/gravel_end, etc. But then I'm left with how to take those
> > values and get the total depth for each layer and the total depth of the
> > well.
> >
> >But I'm not sure that is the best way to handle this.
> >
> >Does anyone have some other suggestions?
>
> I'm no DB guru, so I am probably a bit out of my depth here.
>
> But how about something like:
>
> create table well (
> id serial,
> name text,
> created timestamp default null
> -- more fields probably follow - site, location, status etc
> );
>
> create table layers (
> id serial,
> well_id int,
> layertype_id int,
> end_depth int
> );
>
> create table layertype (
> id serial,
> layername text,
> comment text
> -- probably more fields
> );
>
> (you probably might want to add the foreign key constraints etc etc).
>
> Basically you have table of wells.
>
> And then you have lots of rows in layers that are linked to the same well
> by well_id, and you sort them by the end depth.
>
> And then you have a table of layertypes which each layer links to. So you
> can create types of layers.
>
> e.g.
> select layername, end_depth from well, layers, layertype
> where
> well.name='somewell'
> and
> well_id=well.id
> and
> layertype.id=layertype_id
> order by end_depth asc
>
> I've no experience in wells but you might want an "Unknown" layertype to
> fill in the gaps ;).
>
> You might alternatively want to have "start depth" instead of an "end
> depth". I'd do end depth, since your data probably ends at the deepest
> layer (I assume you never reach the core ;) ).
>
> You may need both start and end depths if there are multiple layers per
> depth per well (nonuniform). In that case the queries could be a bit more
> complex...
>
> I might have overlooked a few pitfalls here and there. Oh well...
>
> Good luck!
>
> Link.
>
> *runs and hides*
>

Thanks everyone for the on-line and off-line suggestions. Now I just need to 
sort through them.

-- 
You can tuna piano but you can't tune a fish.



Re: [GENERAL] Partial foreign keys, check constraints and inheritance

2005-11-22 Thread Jim C. Nasby
On Thu, Nov 17, 2005 at 02:21:33PM -0500, Eric E wrote:
> >maybe you can solve it adding a new col and allow both to contain null 
> >values.
> >
> >if these are not mutually exclusive you can avoid a check; if they are,
> >check that if one has a non-null value the other has null...
> 
> 
> I did think about that, but I disliked the idea of two fields of nulls for
> every one full field... maybe it's not as bad a way of doing it as I
> thought.

What's wrong with multiple NULL fields? It's probably the cleanest,
fastest way to do this...
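
A minimal sketch of that shape (the table and column names are made up),
with the mutual-exclusion check mentioned above:

    CREATE TABLE payment (
        id       serial PRIMARY KEY,
        card_id  integer REFERENCES cards (id),
        check_id integer REFERENCES checks (id),
        -- exactly one of the two references may be set:
        CHECK ((card_id IS NULL) <> (check_id IS NULL))
    );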
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software   http://pervasive.com   work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



Re: [GENERAL] Group By?

2005-11-22 Thread Bob Pawley

Bruno

The table I previously sent came through distorted and probably caused
misunderstanding.

The tables control and auto_control are both permanent tables. I want to
reshuffle how the information is associated from one table to another, with
the link between tables by way of the device_id.

Following is the example, which I stabilized and tested for e-mail.

Bob

 Control

 device_id | type    | association
 serial    | varchar | int4
 ----------+---------+------------
 1         | mon     | 1
 2         | valve   | 2
 3         | valve   | 1
 4         | mon     | 2
 5         | valve   | 1


 Auto_control

 loop_id | mon  | valve_a | valve_b
 serial  | int4 | int4    | int4
 --------+------+---------+--------
 1       | 1    | 3       | 5
 2       | 2    | 4       |

- Original Message - 
From: "Bruno Wolff III" <[EMAIL PROTECTED]>

To: "Bob Pawley" <[EMAIL PROTECTED]>
Cc: "Guy Rouillier" <[EMAIL PROTECTED]>; "Postgre General" 


Sent: Monday, November 21, 2005 10:07 PM
Subject: Re: Group By?



On Mon, Nov 21, 2005 at 21:53:10 -0800,
 Bob Pawley <[EMAIL PROTECTED]> wrote:

Here's what I want to do.

Table control contains values (mon and valves) that are associated by
numbers inserted into the associated column.

I want to transfer the serial _id number of the items associated by the
value '1' into the appropriate columns of the first row of the table
auto_control. All items associated with the value '2' into the second 
row -

etc. etc.


You don't really want to do that. Tables have fixed numbers of columns and
what you want to do doesn't result in a fixed number of columns.

If you want to generate a report with that format, then I think there is
a contrib module (crosstabs?) that will do this kind of thing. You could
also have a report app do it for you. In the report app method, you would
do best to return rows ordered by association and then device_ID and have
the app check for when the association value changes.
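
For instance, a sketch of that report query, using the column names from the
Control table in this thread:

    SELECT association, device_id, type
    FROM control
    ORDER BY association, device_id;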



Is this best accomplished by a 'group by' command or a subselect?

Bob
 Control

 device_ID type association
 serial varchar int4

 1 mon 1
 2 valve 2
 3 valve 1
 4 mon 2
 5 valve 1


 Auto_control

 loop_id mon valve valve
 serial int4 int4 int4
 1 1 3 5
 2 2 4



- Original Message - 
From: "Guy Rouillier" <[EMAIL PROTECTED]>

To: "Postgre General" 
Sent: Monday, November 21, 2005 4:25 PM
Subject: Re: [GENERAL] Group By?


Converted your message to plain text as preferred on most mailing lists.

Bob Pawley wrote:
>I want to take the serial ID of several values in different rows in
>one table and insert them into a single row of another table.
>
>Would the 'group by' command be the best way to do this?

Could you provide an actual example?  The wording of your question is a
little vague and an example might help solicit an answer to the actual
problem.  For example, are these serial ID values all in the same
column in the source table?  Or is each one in a different column?  And
what is the selection criteria that brings these results together?

--
Guy Rouillier








Re: [GENERAL] How to trim Bytea fields

2005-11-22 Thread Howard Cole

Joe Conway wrote:

trim() will remove '\000' bytes from both ends -- would that work for 
you?


Thanks Joe, just what I was looking for.

Howard



Re: [GENERAL] Createlang plpgsql

2005-11-22 Thread Jeremy Sellors
Well, they are probably busy people and I have not got a reply from
them as yet. The reason I post the problem here is in the hope that
someone has come across this problem before and has some experience
that might help. For instance, they might have found an economical
shared hosting site with PostgreSQL that you can install procedural
languages on.


 _Jeremy

On Nov 21, 2005, at 10:50 PM, Tino Wildenhain wrote:


Am Montag, den 21.11.2005, 20:12 -0800 schrieb Jeremy Sellors:

I see the problem now. On a shared host the createuser and createdb
are handled by cPanel scripts but not (as yet on my host) createlang.
The createlang utility only installs procedural languages included in
the default PostgreSQL distribution, which fortunately includes
plpgsql language I need.
If the language is added to template1, then all future databases,
including those of other shared users, would have the language
installed by default, and this (probably for the same reasons that not
all the languages are installed by default) might not be desirable.
So what I need to solve the problem is a cPanel script to install the
required procedural languages, or alternately to ask the support
people to add any required languages when I create a new database.
Unless I have not fully understood the problem, this issue will
clearly have to be solved in order for all of the advanced features of
PostgreSQL to be available on shared servers.
Can the createlang utility be made available to an ordinary user just
to apply the language to his database?

Shouldn't you rather ask your hoster?

++Tino








Re: [GENERAL] Best way to represent values.

2005-11-22 Thread Lincoln Yeoh

At 01:19 PM 11/21/2005 -0500, Dennis Veatch wrote:


I had thought just adding some fields called topsoil_start/topsoil_end,
gravel_start/gravel_end, etc. But then I'm left with how to take those values
and get the total depth for each layer and the total depth of the well.

But I'm not sure that is the best way to handle this.

Does anyone have some other suggestions?


I'm no DB guru, so I am probably a bit out of my depth here.

But how about something like:

create table well (
id serial,
name text,
created timestamp default null
-- more fields probably follow - site, location, status etc
);

create table layers (
id serial,
well_id int,
layertype_id int,
end_depth int
);

create table layertype (
id serial,
layername text,
comment text
-- probably more fields
);

(you probably might want to add the foreign key constraints etc etc).

Basically you have table of wells.

And then you have lots of rows in layers that are linked to the same well 
by well_id, and you sort them by the end depth.


And then you have a table of layertypes which each layer links to. So you 
can create types of layers.


e.g.
select layername, end_depth from well, layers, layertype
where
well.name='somewell'
and
well_id=well.id
and
layertype.id=layertype_id
order by end_depth asc

I've no experience in wells but you might want an "Unknown" layertype to 
fill in the gaps ;).


You might alternatively want to have "start depth" instead of an "end 
depth". I'd do end depth, since your data probably ends at the deepest 
layer (I assume you never reach the core ;) ).


You may need both start and end depths if there are multiple layers per 
depth per well (nonuniform). In that case the queries could be a bit more 
complex...


I might have overlooked a few pitfalls here and there. Oh well...

Good luck!

Link.

*runs and hides*




Re: [GENERAL] problem with GRANT postgres 8.0.4

2005-11-22 Thread Jacek Balcerski

Richard Huxton napisał(a):



Don't forget to cc: the list when replying

Jacek Balcerski wrote:


Richard Huxton napisał(a):


Jacek Balcerski wrote:


ERROR:  permission denied for relation reviewers
KONTEKST:  SQL statement "SELECT 1 FROM ONLY "public"."reviewers" x 
WHERE "person_id" = $1 FOR UPDATE OF x"


In the persons table there is of course a person with id=569.
The user is a superuser and I did GRANT ALL on ALL TABLES:
public | article_reviewers | table | {control=r/control,balcer=arwdRxt/control}
public | articles          | table | {control=r/control,balcer=arwdRxt/control}
public | persons           | table | {control=r/control,balcer=arwdRxt/control}



I don't see table "reviewers" in this list, which is the table the 
error statement mentions.




public | reviewers | table | {control=r/control,balcer=arwdRxt/control}
I'm searching the archives for a clear answer right now, but if any of
you would have time to explain what I did wrong it would be helpful :)



I'm guessing you are running as user "control" with only read 
permissions for table "reviewers". If you look at the error message...


ERROR:  permission denied for relation reviewers
KONTEKST:  SQL statement "SELECT 1 FROM ONLY "public"."reviewers" x 
WHERE "person_id" = $1 FOR UPDATE OF x"


This is the foreign-key check. In versions before 8.1 this was handled 
by taking a SELECT ... FOR UPDATE lock which means you need the 
"UPDATE" permission too. To quote the manuals:


"UPDATE
Allows UPDATE of any column of the specified table. SELECT ... FOR 
UPDATE also requires this privilege (besides the SELECT privilege). 
For sequences, this privilege allows the use of the nextval and setval 
functions."


HTH
--
  Richard Huxton
  Archonet Ltd




Well, the problem is I'm running as user balcer...
Jacek




Re: [GENERAL] problem with GRANT postgres 8.0.4

2005-11-22 Thread Richard Huxton


Don't forget to cc: the list when replying

Jacek Balcerski wrote:

Richard Huxton napisał(a):


Jacek Balcerski wrote:


ERROR:  permission denied for relation reviewers
KONTEKST:  SQL statement "SELECT 1 FROM ONLY "public"."reviewers" x 
WHERE "person_id" = $1 FOR UPDATE OF x"


In the persons table there is of course a person with id=569.
The user is a superuser and I did GRANT ALL on ALL TABLES:
public | article_reviewers | table | {control=r/control,balcer=arwdRxt/control}
public | articles          | table | {control=r/control,balcer=arwdRxt/control}
public | persons           | table | {control=r/control,balcer=arwdRxt/control}


I don't see table "reviewers" in this list, which is the table the 
error statement mentions.



public | reviewers | table | {control=r/control,balcer=arwdRxt/control}
I'm searching the archives for a clear answer right now, but if any of you
would have time to explain what I did wrong it would be helpful :)


I'm guessing you are running as user "control" with only read 
permissions for table "reviewers". If you look at the error message...


ERROR:  permission denied for relation reviewers
KONTEKST:  SQL statement "SELECT 1 FROM ONLY "public"."reviewers" x 
WHERE "person_id" = $1 FOR UPDATE OF x"


This is the foreign-key check. In versions before 8.1 this was handled 
by taking a SELECT ... FOR UPDATE lock which means you need the "UPDATE" 
permission too. To quote the manuals:


"UPDATE
Allows UPDATE of any column of the specified table. SELECT ... FOR 
UPDATE also requires this privilege (besides the SELECT privilege). For 
sequences, this privilege allows the use of the nextval and setval 
functions."


HTH
--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] How to trim Bytea fields

2005-11-22 Thread Joe Conway

Howard Cole wrote:

Hi,

I have an bytea field that contains data with a lot of trailing blank 
space composed of multiple '\000' zero bytes. Does anyone know of a 
quick SQL fix to trim these bytes from the data?


trim() will remove '\000' bytes from both ends -- would that work for you?

select trim('\\000'::bytea from '\\00012\\00034\\000\\000'::bytea);
  btrim
--
 12\00034
(1 row)

Joe



Re: [GENERAL] problem with GRANT postgres 8.0.4

2005-11-22 Thread Richard Huxton

Jacek Balcerski wrote:

ERROR:  permission denied for relation reviewers
KONTEKST:  SQL statement "SELECT 1 FROM ONLY "public"."reviewers" x 
WHERE "person_id" = $1 FOR UPDATE OF x"


In the persons table there is of course a person with id=569.
The user is a superuser and I did GRANT ALL on ALL TABLES:
public | article_reviewers | table | {control=r/control,balcer=arwdRxt/control}
public | articles          | table | {control=r/control,balcer=arwdRxt/control}
public | persons           | table | {control=r/control,balcer=arwdRxt/control}


I don't see table "reviewers" in this list, which is the table the error 
statement mentions.

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Java, postgres and jasper help

2005-11-22 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

hi all,

i am trying to fix this bug within my program. its a java, postgres and
jasper based program which generates charts. now i am generating a
chart which does not show any 0 data points if they exist, only
non-zero ones. obviously i am trying to do everything in just one sql
query, but i am not sure if its possible or not. i also need to
generate a temporary table and then delete it.

I have broken the tasks into,
1. create a temp table
2. create records for 0 data points
3. update with non 0 records
4. delete temp table.

now this is what i have and i would appreciate all help with fixing
this,


Two points:
1. If you quote your column-names when you create them, quote them when
you access them too (see the example below).


2. You don't actually say what the problem is.
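
On point 1, a quick illustration (the names are made up):

    CREATE TABLE "TubProd" ("EditTime" timestamp);
    SELECT "EditTime" FROM "TubProd";   -- works
    SELECT EditTime FROM "TubProd";     -- fails: the unquoted name folds to "edittime"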

--
  Richard Huxton
  Archonet Ltd



[GENERAL] How to trim Bytea fields

2005-11-22 Thread Howard Cole

Hi,

I have an bytea field that contains data with a lot of trailing blank 
space composed of multiple '\000' zero bytes. Does anyone know of a 
quick SQL fix to trim these bytes from the data?


Thanks

Howard Cole
http://www.selestial.com



[GENERAL] problem with GRANT postgres 8.0.4

2005-11-22 Thread Jacek Balcerski

There are two tables

article_reviewers
   Column    |  Type   | Modifiers
-------------+---------+-----------
 article_id  | integer | not null
 reviewer_id | integer | not null
Foreign-key constraints:
    "$1" FOREIGN KEY (reviewer_id) REFERENCES reviewers(person_id)

reviewers
  Column   |  Type   | Modifiers
-----------+---------+-----------
 person_id | integer | not null
 status    | text    |
 keywords  | text    |
Indexes:
    "reviewers_pkey" PRIMARY KEY, btree (person_id)
Foreign-key constraints:
    "$1" FOREIGN KEY (person_id) REFERENCES persons(id)


When I try to execute the SQL statement:

INSERT INTO article_reviewers (article_id, reviewer_id) VALUES (876, 569);

ERROR:  permission denied for relation reviewers
KONTEKST:  SQL statement "SELECT 1 FROM ONLY "public"."reviewers" x 
WHERE "person_id" = $1 FOR UPDATE OF x"


In the persons table there is of course a person with id=569.
The user is a superuser and I did GRANT ALL on ALL TABLES:
public | article_reviewers | table | {control=r/control,balcer=arwdRxt/control}
public | articles          | table | {control=r/control,balcer=arwdRxt/control}
public | persons           | table | {control=r/control,balcer=arwdRxt/control}


What is wrong? Sorry if this is a stupid question.
Jacek



Re: [GENERAL] Best way to represent values.

2005-11-22 Thread Adrian Klaver
The problem is that each well can have a different number and types of
layers. Trying to pre-plan all the combinations could be a big headache. My
first thought is the following layout:

well_number | layer_number | bottom_depth | layer_type
1           | 1            | 10           | topsoil
1           | 2            | 25           | gravel

and so on. The bottom_depth of one layer is the top_depth of the one below.
The final bottom_depth is the depth of the well.
The layer_types can be pulled from another table to maintain consistency and
allow for new types as needed. Come report time you order by
well_number, layer_number to get the desired information.
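
A minimal sketch of that layout (illustrative names; layer_types is the
consistency table mentioned above):

    CREATE TABLE layer_types (
        layer_type text PRIMARY KEY
    );

    CREATE TABLE well_layers (
        well_number  integer,
        layer_number integer,
        bottom_depth integer,
        layer_type   text REFERENCES layer_types,
        PRIMARY KEY (well_number, layer_number)
    );

    -- report: layers top to bottom per well; the last bottom_depth is the well depth
    SELECT well_number, layer_number, bottom_depth, layer_type
    FROM well_layers
    ORDER BY well_number, layer_number;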
 
On Monday 21 November 2005 05:29 pm, Dennis Veatch wrote:
> On Monday 21 November 2005 20:04, Michael Glaesemann wrote:
> > On Nov 22, 2005, at 3:19 , Dennis Veatch wrote:
> > > I had thought just adding some fields called topsoil_start/
> > > topsoil_end,
> > > gravel_start/gravel_end, etc. But them I'm left with how to take
> > > those values
> > > and give to total depth for each layer and total depth of the well.
> > >
> > > But I'm not sure that is the best way to handle this.
> > >
> > > Does anyone have some other suggestions?
> >
> > This is similar in concept to temporal intervals. You might want to
> > look at "Temporal Data and the Relational Model" by Date, Darwen, and
> > Lorentzos for general theory, and "Developing Time-Oriented Database
> > Applications"  by Richard Snodgrass for implementations in SQL. The
> > latter is available as a PDF download (the book itself is out of print):
> > http://www.cs.arizona.edu/people/rts/tdbbook.pdf
> >
> > Hope this helps!
>
> Hee, well that's um, kinda over my head. Hee and I'm not all the way
> through the PostgreSQL book I just bought. There's probably a gap there. :)
>
> Though I will try to glean something out of the link.

-- 
Adrian Klaver   
[EMAIL PROTECTED]



Re: [GENERAL] Rule appears not to fire on insert w/ "except"

2005-11-22 Thread A.j. Langereis
Dear Chris,

Sorry, I forgot the ():

insert into test1
  select id, data from test2
  where (id, data) not in (select id, data from test1);

Given Tom Lane's explanation, your solution would be a before trigger, I guess:

create or replace function trg_test() returns "trigger" as
'
begin
insert into test_que (row_id) values (new.id);
return new;
end;
'
language plpgsql;

create trigger trigger_test
  before insert
  on test1
  for each row
  execute procedure trg_test();

Yours,

Aarjan

- Original Message -
From: "Chris Kratz" <[EMAIL PROTECTED]>
To: 
Cc: "A.j. Langereis" <[EMAIL PROTECTED]>
Sent: Tuesday, November 22, 2005 2:54 PM
Subject: Re: [GENERAL] Rule appears not to fire on insert w/ "except"


> On Tuesday 22 November 2005 08:34 am, A.j. Langereis wrote:
> > Dear Chris,
> >
> > What about this:
> >
> > insert into test1
> > select id, data from test2
> > where id, data not in (select id, data from test1);
> >
> > of which one would expect the same results...
> >
> > Yours,
> >
> > Aarjan
> >
> > Ps. notice that you are inserting data into a serial column (in your
> > examples as well), as far as I know this is not common practice since
your
> > sequence is now not correct anymore.
> > Therefore, better would be:
> >
> > insert into test1 (data)
> > select data from test2
> > where id, data not in (select id, data from test1);
> >
> > or to make the id column in table test1 of the integer type.
>
> Hello Aarjan,
>
> Thanks for the hint, but I get the same behavior with the not in syntax. And
> you are right about the serial issue.  My example was somewhat contrived as I
> was trying to get it down to a minimal set of steps to reproduce.  The real
> table is actually a denormalized table we use for reporting, so the serial
> comes from test2 always.  Anyway, the testcase with the not in clause showing
> the same behavior is at the end of this email.
>
> Also, the id,data not in (select id,data... clause didn't work.  I changed it
> to use only id in my test case below.  Actually if there is a way to do
> multiple column comparisons like this it would be interesting.  We generally
> have done it with a where not exists (select 1 from table where col1=id and
> col2=data).
>
> As Tom Lane pointed out in an earlier email, the problem is happening because
> when the rule is processed, the inserts have already happened.  So, to get
> the new.id value, it reruns the select * from test2 except select * from
> test1; there is no data returned because the except removes everything, so
> the rule never fires.  I actually had a problem recently where a serial was
> incremented 2x because a rule referenced new.id instead of the currval on the
> appropriate sequence.  Same problem.  The rule causes a re-evaluation of the
> original sql statement to get the new.* values.
>
> Thanks,
>
> -Chris
>
>  Test Cases 
>
> -- Not working case, insert w/ except clause
> begin;
>
> create table test1(id serial, data text);
>
> create table test2(id serial, data text);
> insert into test2(data) values('abc');
>
> create table test_que(row_id integer);
>
> CREATE OR REPLACE RULE debug_rule AS
> ON INSERT TO test1
>do INSERT INTO test_que (row_id)
>   VALUES (new.id);
>
> insert into test1 (data)
> select data from test2
>where id not in (select id from test1);
>
> --   We will have 1 row inserted
> select * from test1;
>
> -- But no rows here even though a row was placed in test1
> select * from test_que;
>
> rollback;
>
> -- Working test case
> begin;
>
> create table test1(id serial, data text);
>
> create table test2(id serial, data text);
> insert into test2(data) values('abc');
>
> create table test_que(row_id integer);
>
> CREATE OR REPLACE RULE debug_rule AS
> ON INSERT TO test1
>do INSERT INTO test_que (row_id)
>   VALUES (new.id);
>
> insert into test1 (data)
> select data from test2;
>
> --   We will have 1 row inserted
> select * from test1;
>
> -- But no rows here even though a row was placed in test1
> select * from test_que;
>
> rollback;
> --
> Chris Kratz
>
>





Re: [GENERAL] Rule appears not to fire on insert w/ "except"

2005-11-22 Thread Chris Kratz
On Tuesday 22 November 2005 08:34 am, A.j. Langereis wrote:
> Dear Chris,
>
> What about this:
>
> insert into test1
>     select id, data from test2
>     where id, data not in (select id, data from test1);
>
> of which one would expect the same results...
>
> Yours,
>
> Aarjan
>
> Ps. notice that you are inserting data into a serial column (in your
> examples as well), as far as I know this is not common practice since your
> sequence is now not correct anymore.
> Therefore, better would be:
>
> insert into test1 (data)
>     select data from test2
>     where id, data not in (select id, data from test1);
>
> or to make the id column in table test1 of the integer type.

Hello Aarjan,

Thanks for the hint, but I get the same behavior with the not in syntax.  And 
you are right about the serial issue.  My example was somewhat contrived as I 
was trying to get it down to a minimal set of steps to reproduce.  The real 
table is actually a denormalized table we use for reporting, so the serial 
comes from test2 always.  Anyway, the testcase with the not in clause showing 
the same behavior is at the end of this email.  

Also, the id,data not in (select id,data... clause didn't work.  I changed it 
to use only id in my test case below.  Actually if there is a way to do 
multiple column comparisons like this it would be interesting.  We generally 
have done it with a where not exists (select 1 from table where col1=id and 
col2=data).

As Tom Lane pointed out in an earlier email, the problem is happening because
when the rule is processed, the inserts have already happened.  So, to get
the new.id value, it reruns the select * from test2 except select * from
test1; there is no data returned because the except removes everything, so
the rule never fires.  I actually had a problem recently where a serial was
incremented 2x because a rule referenced new.id instead of the currval on the
appropriate sequence.  Same problem.  The rule causes a re-evaluation of the
original sql statement to get the new.* values.
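
For reference, a sketch of that currval variant ('test1_id_seq' is the
sequence a serial column named id on test1 creates by default):

    CREATE OR REPLACE RULE debug_rule AS
    ON INSERT TO test1
       DO INSERT INTO test_que (row_id)
          VALUES (currval('test1_id_seq'));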

Thanks,

-Chris

 Test Cases 

-- Not working case, insert w/ except clause
begin;

create table test1(id serial, data text);

create table test2(id serial, data text);
insert into test2(data) values('abc');

create table test_que(row_id integer);

CREATE OR REPLACE RULE debug_rule AS
ON INSERT TO test1
   do INSERT INTO test_que (row_id)
  VALUES (new.id);

insert into test1 (data)
select data from test2
   where id not in (select id from test1);

--   We will have 1 row inserted
select * from test1;

-- But no rows here even though a row was placed in test1
select * from test_que;

rollback;

-- Working test case
begin;

create table test1(id serial, data text);

create table test2(id serial, data text);
insert into test2(data) values('abc');

create table test_que(row_id integer);

CREATE OR REPLACE RULE debug_rule AS
ON INSERT TO test1
   do INSERT INTO test_que (row_id)
  VALUES (new.id);

insert into test1 (data)
select data from test2;

--   We will have 1 row inserted
select * from test1;

-- But no rows here even though a row was placed in test1
select * from test_que;

rollback;
-- 
Chris Kratz



Re: [GENERAL] Rule appears not to fire on insert w/ "except"

2005-11-22 Thread A.j. Langereis
Dear Chris,

What about this:

insert into test1
select id, data from test2
where id, data not in (select id, data from test1);

of which one would expect the same results...

Yours,

Aarjan

Ps. notice that you are inserting data into a serial column (in your
examples as well), as far as I know this is not common practice since your
sequence is now not correct anymore.
Therefore, better would be:

insert into test1 (data)
select data from test2
where id, data not in (select id, data from test1);

or to make the id column in table test1 of the integer type.

- Original Message -
From: "Chris Kratz" <[EMAIL PROTECTED]>
To: 
Cc: "Jerry Sievers" <[EMAIL PROTECTED]>
Sent: Tuesday, November 22, 2005 2:12 PM
Subject: Re: [GENERAL] Rule appears not to fire on insert w/ "except"


> On Monday 21 November 2005 08:05 pm, Jerry Sievers wrote:
> > Chris Kratz <[EMAIL PROTECTED]> writes:
> > > Hello All,
> > >
> > > We have finally tracked down a bug in our application to a rewrite
rule
> > > on a table.  In essence, the rewrite rule in question logs any inserts
to
> > > another table.  This works correctly in all cases except where an
> > > "except" clause is used in the insert statement.  In this case, the
rows
> > > are inserted into the primary table as expected, but the rule either
does
> > > not fire, or fires in such a way that nothing is placed in the changes
> > > table.
> >
> > You must be referring to something like;
> >
> > insert into foo
> > select *
> > from sometable
> > except
> > select *
> > from someothertable
> > ;
> >
> > If there's an EXCEPT clause on INSERT, I've never seen it.
> >
> > Perhaps you should post your insert query and your rule declaration.
> >
> > >  As a side note, is there a way to see the final sql after all
"rewrite"
> > > rules have been processed?  It might help us understand what is going
on.
> >
> > Not SQL but see config setting;
> >
> > debug_print_rewritten
>
> Hello Jerry,
>
> The insert statement is included in the test case.  Here it is again.
>
> insert into test1
>   select id,data from test2
>   except select id,data from test1;
>
> The goal of the except was to only insert items from test2 that don't
already
> exist in test1.
>
> Thanks for the hint on debug_print_rewritten.  I'll look into that.
>
> -Chris
>
> --
> Chris Kratz
>
>
>





[GENERAL] Java, postgres and jasper help

2005-11-22 Thread sconeek
hi all,

I am trying to fix this bug within my program. It's a Java, Postgres and
Jasper based program which generates charts. I am generating a
chart which does not show any 0 data points if they exist, only
non-zero ones. Obviously I am trying to do everything in just one SQL
query, but I am not sure if it's possible or not. I also need to
generate a temporary table and then delete it.

I have broken the tasks into,
1. create a temp table
2. create records for 0 data points
3. update with non 0 records
4. delete temp table.

Now this is what I have, and I would appreciate all help with fixing
it:

public void createTempTubProdTable(String filter, String tableName)
throws SQLException {

String sqlStr = "";
String sqlWhereStr = " WHERE ";

Debug.println("FILTER: " + filter);
if (filter != null)
{
sqlWhereStr = sqlWhereStr + filter;
}

else
{
sqlWhereStr = sqlWhereStr + "'?' = '?'";

}

String sqlCreateTableStr = "" +
"CREATE TABLE \"public\".\"tubprod_temp\" ( " +
" \"tubprod_event\" BIGSERIAL, " +
" \"edit_time\" TIMESTAMP(0) WITHOUT TIME ZONE DEFAULT now(), " +
" \"edit_time_count\" BIGINT, " +
" CONSTRAINT \"tubprod_temp_pkey\" PRIMARY KEY(\"tubprod_event\") " +
") WITHOUT OIDS; ";

String sqlInsertDataStr = " " +
" INSERT INTO tubprod_temp (edit_time, edit_time_count) " +
" SELECT to_timestamp(to_char(last_edit_timestamp, 'YYYY-MM-DD HH24:00:00'), 'YYYY-MM-DD HH24:00:00') AS edit_time, " +
" count(to_char(last_edit_timestamp, 'YYYY-MM-DD HH24:00:00')) AS edit_time_count " +
" FROM " + tableName + " " + sqlWhereStr +
" GROUP BY to_char(last_edit_timestamp, 'YYYY-MM-DD HH24:00:00') " +
" ORDER BY to_char(last_edit_timestamp, 'YYYY-MM-DD HH24:00:00'); ";

Debug.println("sqlInsertDataStr: " + sqlInsertDataStr);

Statement stmt = conn.createStatement();
// Create temporary tub production table
sqlStr = sqlCreateTableStr + sqlInsertDataStr;
Debug.println(sqlStr);
stmt.executeUpdate(sqlStr);

}

public void deleteTempTubProdTable() throws SQLException {

Statement stmt = conn.createStatement();

// Drop temporary tub production table
String sql = "DROP TABLE tubprod_temp;";

stmt.executeUpdate(sql);
}




[GENERAL] settings for multi-language unicode DB

2005-11-22 Thread Janet Bagg
Please could somebody give me advice on settings for PGSQL with a database
with UTF-8 strings in a large number of languages? I've had no problems so
far in storing/retrieving UTF-8 strings but can't find clear answers to
other issues. 

What locale would be best for sorting this dataset? Does it have to be the
same as the one set for the OS (these seem very limited for Solaris)?

As well as European accented characters, I need to handle some others used
for transcription. I have to produce simple to use, sorted lists by initial
letter for users and staff.

The database is for a library which takes journals from all over the world
and has to store titles in all European languages (broadly defined) plus
western transcriptions of others. We are in the process of moving to
unicode. The DBMS currently runs on Solaris 9 with a backup system on
Fedora 3 Linux and user/editor access is via servlets (Java).

Thanks for any help.

Janet Bagg, CSAC, University of Kent, UK





Re: [GENERAL] TSearch2 Questions

2005-11-22 Thread Hannes Dorbath

On 21.11.2005 18:24, Bruno Wolff III wrote:

On Mon, Nov 21, 2005 at 16:50:00 +0300, Oleg Bartunov  wrote:

On Mon, 21 Nov 2005, Hannes Dorbath wrote:

I'm playing a bit with it ATM. Indexing one Gigabyte of plain text worked 
well, with 10 GB I yet have some performance problems. I read the TSearch 
Tuning Guide and will start optimizing some things, but is it a realistic 
goal to index ~90GB plain text and get sub-second response times on 
hardware that ~4000 EUR can buy?

What's ATM?  As for the sub-second response times, it'd very much depend on
your data and queries. It'd certainly be possible with our tsearch daemon,
which we postponed because we are inclined to implement inverted indices first
and then build the fts index on top of the inverted index. But this is a
long-term plan.


I believe in this context 'ATM' is an acronym for 'at the moment', which
has little impact on the meaning of the paragraph.


For whatever reason I cannot find Oleg's reply on this server, so I 
reply to this post instead. Thanks for your time Oleg, your answers 
really helped me. I still have two questions about compound words and 
UTF-8, but I'll create a new specific post.


Thanks again.

--
Regards,
Hannes Dorbath



Re: [GENERAL] Rule appears not to fire on insert w/ "except"

2005-11-22 Thread Chris Kratz
On Monday 21 November 2005 08:16 pm, David Fetter wrote:
> On Mon, Nov 21, 2005 at 08:05:19PM -0500, Jerry Sievers wrote:
> > Chris Kratz <[EMAIL PROTECTED]> writes:
> > > Hello All,
> > >
> > > We have finally tracked down a bug in our application to a rewrite rule
> > > on a table.  In essence, the rewrite rule in question logs any inserts
> > > to another table.  This works correctly in all cases except where an
> > > "except" clause is used in the insert statement.  In this case, the
> > > rows are inserted into the primary table as expected, but the rule
> > > either does not fire, or fires in such a way that nothing is placed in
> > > the changes table.
> >
> > You must be referring to something like;
> >
> > insert into foo
> > select *
> > from sometable
> > except
> > select *
> > from someothertable
> > ;
> >
> > If there's an EXCEPT clause on INSERT, I've never seen it.
>
> I suppose you could wrap the SELECT...EXCEPT in parens.
>
> WARNING Untested Code:
>
> INSERT INTO foo
> (SELECT a,b,c FROM bar
> EXCEPT
> SELECT a,b,c FROM baz);
>
> HTH :)
>
> Cheers,
> D

Hello David,

Yes, we actually did try the parentheses, but we still observed the same
behavior.

Thanks though,

-Chris

-- 
Chris Kratz



Re: [GENERAL] Rule appears not to fire on insert w/ "except"

2005-11-22 Thread Chris Kratz
On Monday 21 November 2005 08:05 pm, Jerry Sievers wrote:
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > Hello All,
> >
> > We have finally tracked down a bug in our application to a rewrite rule
> > on a table.  In essence, the rewrite rule in question logs any inserts to
> > another table.  This works correctly in all cases except where an
> > "except" clause is used in the insert statement.  In this case, the rows
> > are inserted into the primary table as expected, but the rule either does
> > not fire, or fires in such a way that nothing is placed in the changes
> > table.
>
> You must be referring to something like;
>
> insert into foo
> select *
> from sometable
> except
> select *
> from someothertable
> ;
>
> If there's an EXCEPT clause on INSERT, I've never seen it.
>
> Perhaps you should post your insert query and your rule declaration.
>
> >  As a side note, is there a way to see the final sql after all "rewrite"
> > rules have been processed?  It might help us understand what is going on.
>
> Not SQL but see config setting;
>
> debug_print_rewritten

Hello Jerry,

The insert statement is included in the test case.  Here it is again.

insert into test1
  select id,data from test2
  except select id,data from test1;

The goal of the except was to only insert items from test2 that don't already 
exist in test1.  

Thanks for the hint on debug_print_rewritten.  I'll look into that.

-Chris

-- 
Chris Kratz



Re: [GENERAL] Timestamp with Timezone

2005-11-22 Thread Norberto Meijome

Matthew Terenzio wrote:


reading the docs . . . let's see if I've got it.

1. Timestamp with timezone accepts a timestamp with the additional 
timezone, converts it and stores it as GMT
2. It returns the value as the timestamp converted to the timezone of 
the local machine?


that's what it says, that's what it seems to do :)
you may also want to read section 9.9.3 of the 8.1 pgsql docs on how to 
return the given timestamp in another TZ.
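
For example (the output depends on the session's TimeZone setting):

    SELECT '2005-11-22 12:00:00+00'::timestamptz;
    -- rendered in the local time zone

    SELECT '2005-11-22 12:00:00+00'::timestamptz AT TIME ZONE 'UTC';
    -- the same instant rendered as a plain timestamp in UTC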


cheers,
Beto



Re: [GENERAL] not null error in trigger on unrelated column

2005-11-22 Thread CSN

I removed the not null constraint on members.admin, and "update items set active
= false where member_id=38" results in the count columns in members getting
updated AND members.admin getting set to NULL. Really bizarre.

I dropped the trigger function and the trigger, then recreated both, and now 
they both appear to
function properly. I don't remember seeing anything in the docs stating that 
this is necessary, so
I guess perhaps it's a bug?

csn


--- CSN <[EMAIL PROTECTED]> wrote:

> I have a members table and an items table. Every time items.member_id or 
> items.active gets
> changed, members.items_submitted and members.items_approved gets updated by a 
> trigger on items.
> I
> added an "admin" column to members, and now this happens:
> 
> => update items set active = false where member_id=38;
> ERROR:  null value in column "admin" violates not-null constraint
> CONTEXT:  SQL statement "update members set items_approved=items_approved-1 
> where id= $1 "
> PL/pgSQL function "update_member_item_counts" line 54 at SQL statement
> 
> The relevant part of the (after) trigger function on items is:
> 
> ELSIF OLD.active is true and NEW.active is false then
>   update members set
>   items_approved=items_approved-1
>   where id=NEW.member_id;
> END IF;
> 
> Is it necessary to drop and recreate triggers and/or corresponding functions 
> after changing a
> table's schema? I don't know how something is trying to set members.admin to 
> null (table members
> has no triggers).
> 
> thanks
> csn
> Postgresql 8.0.x
> 
> 
>   
> __ 
> Yahoo! FareChase: Search multiple travel sites in one click.
> http://farechase.yahoo.com
> 






__ 
Yahoo! Mail - PC Magazine Editors' Choice 2005 
http://mail.yahoo.com



Re: [GENERAL] Difference in indexes

2005-11-22 Thread A.j. Langereis
Ok, I didn't look at it from that point of view. It makes it all clear!

Thanks for the explanation!

Yours,

Aarjan Langereis

- Original Message -
From: "Qingqing Zhou" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, November 22, 2005 2:23 AM
Subject: Re: [GENERAL] Difference in indexes


>
> ""A.j. Langereis"" <[EMAIL PROTECTED]> wrote
> >
> > "Bitmap Heap Scan on hosts  (cost=2.07..11.34 rows=21 width=59) (actual
> > time=0.175..0.287 rows=21 loops=1)"
> > "  Recheck Cond: ((hostname)::text = 'Fabian'::text)"
> > "  ->  Bitmap Index Scan on hosts_hostname  (cost=0.00..2.07 rows=21
> > width=0) (actual time=0.145..0.145 rows=21 loops=1)"
> > "Index Cond: ((hostname)::text = 'Fabian'::text)"
> > "Total runtime: 0.510 ms"
> >
> > This result was achieved by setting enable_seqscan to off
> > (postgresql.conf).
> > Turning off enable_bitmapscan as well resulted in a index scan which was
> > even more faster:
> >
> > "Index Scan using hosts_hostname on hosts  (cost=0.00..37.28 rows=21
> > width=59) (actual time=0.068..0.281 rows=21 loops=1)"
> > "  Index Cond: ((hostname)::text = 'Fabian'::text)"
> > "Total runtime: 0.492 ms"
> >
>
> If you compare the difference among the *estimated* cost ("cost=0.00 .."):
>
> seqscan: cost=0.00..10.25
> Bitmap: cost=2.07..11.34
> indexscan: cost=0.00..37.28
>
> Then you will know why the optimizer prefers sequential scan.  Yes, in
your
> case, the *real* cost("actual time = ...") is quite different from the
> estimated cost. That's because the optimizer can't collect enough
> information of the environment at execution. For example, the optimizer
does
> not know if a data page is in buffer or not(which will incurs IO cost) and
> it always assumes not. There is a long story about the why the optimizer
> does this. In short, since PG uses small buffer pool and the optimizer is
> mainly useful for big tables, so this assumption is reasonable -- but for
> small tables, may not that good.
>
> Regards,
> Qingqing
>
>
>
>
>
>
>
>


