[GENERAL] any way to remove a password?

2004-10-13 Thread Neil Berkman
Is there any way to remove a password, specifically from the postgres
user?  I installed the 8.0 Windows version via the installer, and it
forces you to set a password.  I'd like to reset this so that there is
no password (basically, to match the setup on several other machines -
and no, for a number of reasons it's not feasible to just set
passwords on those other machines).

I know about the alter user command, but I haven't seen a way to use
it to remove a password.  I've tried:

alter user postgres with password '';
alter user postgres with password null;

The first one doesn't work and the second returns an error.

Does anyone know whether this is possible?  Thanks.

Neil
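[Editor's note: a hedged sketch, not from the thread itself. Recent PostgreSQL releases accept PASSWORD NULL in ALTER USER/ALTER ROLE to clear a password outright; on releases whose grammar rejects that (as 8.0 apparently does here), a superuser has historically been able to null out the stored password in pg_shadow directly. Test on a scratch cluster first:]

```sql
-- On newer PostgreSQL versions this removes the password entirely:
ALTER USER postgres PASSWORD NULL;

-- On older versions that reject PASSWORD NULL, a superuser can clear
-- the stored password directly (use with care):
UPDATE pg_shadow SET passwd = NULL WHERE usename = 'postgres';
```

Either way, whether a passwordless login is then permitted depends on the authentication method configured in pg_hba.conf.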

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] PostgreSQL CE started

2004-10-13 Thread Marc G. Fournier
On Wed, 13 Oct 2004, Aaron Glenn wrote:
What is the PostgreSQL project's official stance/view/comment on this?
Should we have one?  And why?

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


[GENERAL] unsubscribe from the mailing list.

2004-10-13 Thread Keow Yeong Huat Joseph

I would like to unsubscribe my address from the mailing list. Please
advise how to go about this.

Regards
Joseph


Re: [GENERAL] PostgreSQL CE started

2004-10-13 Thread Bruce Momjian
Aaron Glenn wrote:
> What is the PostgreSQL project's official stance/view/comment on this?

Uh, I work for SRA, but the community has always stated that the project
cannot certify anyone.  Just as the Linux kernel group doesn't certify
anyone (Red Hat and other companies do), the same is true of PostgreSQL,
so we are fine with SRA doing certification.  Of course it is an
"SRA-certified PostgreSQL engineer", not a community-certified
PostgreSQL engineer, because the latter makes no sense.

---


> 
> 
> aaron.glenn
> 
> 
> On Thu, 14 Oct 2004 10:48:26 +0900 (JST), Tatsuo Ishii
> <[EMAIL PROTECTED]> wrote:
> > Hi all,
> > 
> > We, Software Research Associates, Inc., started "PostgreSQL CE"
> > (PostgreSQL Certificated Engineer) on Oct 1st. PostgreSQL CE is an
> > entry-level PostgreSQL engineer certification program. We expect at
> > least several hundred people to take the exam by April 2005. We also
> > hope that PostgreSQL CE makes PostgreSQL more and more popular.
> > 
> > At present, exams are held in about 100 test centers in Japan, and
> > the exam itself is written in Japanese. However, since we have a
> > partnership with Pearson VUE (http://www.pearsonvue.com/) to operate
> > the exam, it is possible that PostgreSQL CE could be offered in the
> > US and/or the rest of the world.
> > 
> > Software Research Associates, Inc. (http://www.sra.co.jp/index-en.html)
> > is headquartered in Tokyo and does a lot of PostgreSQL business.
> > --
> > Tatsuo Ishii
> > 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



[GENERAL] creating audit tables

2004-10-13 Thread Scott Cain
Hello,

I am trying to create audit tables for all of the tables in my
database.  The function, table and trigger create statements are below. 
Apparently, I am not doing it quite right, because I get these messages
when I try to run the create statements below:

CREATE FUNCTION
CREATE FUNCTION
CREATE TABLE
CREATE TABLE
GRANT
ERROR:  function audit_update() does not exist
ERROR:  function audit_delete() does not exist

Why do I get a message that the functions don't exist when they were
just successfully created?

Thanks much,
Scott

Here's the ddl:
CREATE FUNCTION audit_update(varchar) RETURNS trigger
  AS '
DECLARE
audit_table varchar;
table_name  varchar;
BEGIN
table_name  = $1;
audit_table = ''audit_'' || table_name;
INSERT INTO audit_table VALUES (SELECT OLD.*,now(),''U'' FROM table_name);
return NEW;
END
'
LANGUAGE plpgsql;
   

CREATE FUNCTION audit_delete(varchar) RETURNS trigger
  AS '
DECLARE
audit_table varchar;
table_name  varchar;
BEGIN
table_name  = $1;
audit_table = ''audit_'' || table_name;
INSERT INTO audit_table VALUES (SELECT OLD.*,now(),''D'' FROM table_name);
return OLD;
END
'
LANGUAGE plpgsql;
   

create table tableinfo (
tableinfo_id serial not null,
primary key (tableinfo_id),
name varchar(30) not null,
primary_key_column varchar(30) null,
is_view int not null default 0,
view_on_table_id int null,
superclass_table_id int null,
is_updateable int not null default 1,
modification_date date not null default now(),
constraint tableinfo_c1 unique (name)
);
   

   CREATE TABLE audit_tableinfo (
   tableinfo_id integer,
   name varchar,
   primary_key_column varchar,
   is_view integer,
   view_on_table_id integer,
   superclass_table_id integer,
   is_updateable integer,
   modification_date date,
   transaction_date timestamp not null,
   transaction_type char not null
   );
   GRANT ALL on audit_tableinfo to PUBLIC;
   

   CREATE TRIGGER tableinfo_audit_u
   BEFORE UPDATE ON tableinfo
   FOR EACH ROW
   EXECUTE PROCEDURE audit_update('tableinfo');
   

   CREATE TRIGGER tableinfo_audit_d
   BEFORE DELETE ON tableinfo
   FOR EACH ROW
   EXECUTE PROCEDURE audit_delete('tableinfo');


-- 

Scott Cain, Ph. D. [EMAIL PROTECTED]
GMOD Coordinator (http://www.gmod.org/) 216-392-3087
Cold Spring Harbor Laboratory




[GENERAL] psycopg help

2004-10-13 Thread Scott Frankel
First, apologies in advance for this somewhat OT post ...
I'm looking for a source of information on using the psycopg
interface to postgresql.  A mailing list would be ideal.  I've
poked at their wiki, but not found what I'm looking for.
Also, being new to both PostgreSQL & psycopg, my questions appear
too basic for the doc/examples they provide with their installation,
e.g.:
- What's the appropriate syntax for specifying a primary key?
- What is the data type "text"?  And how does it differ from
  CHAR(len), &c.?
My very simple table creation test (based on their "first.py"
example) is failing ...  Here's what I'm trying.  The non-pythonated
syntax works in psql:
no go:
curs.execute("""CREATE TABLE key_test (
key_col CHAR(9) PRIMARY KEY,
nother_col CHAR(256))""")
pure joy:
cs_test=# CREATE TABLE key_test (
cs_test(# key_col CHAR(9) PRIMARY KEY,
cs_test(# nother_col CHAR(256)
cs_test(# );
Thanks!
Scott
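[Editor's note: the PRIMARY KEY syntax shown above is valid SQL, so the failure is likely elsewhere — psycopg opens a transaction by default, and a missing conn.commit() after curs.execute() is a common culprit. On the type question, text is variable-length and unpadded, unlike char(n); a sketch:]

```sql
-- text holds strings of any length; char(n) is fixed-length and
-- blank-padded to exactly n characters.
CREATE TABLE key_test (
    key_col    char(9) PRIMARY KEY,  -- always 9 chars, space-padded
    nother_col text                  -- no declared length limit
);
```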


Re: [GENERAL] creating audit tables

2004-10-13 Thread Tom Lane
Scott Cain <[EMAIL PROTECTED]> writes:
> I am trying to create audit tables for all of the tables in my
> database.  The function, table and trigger create statements are below. 
> Apparently, I am not doing it quite right, because I get these messages
> when I try to run the create statements below:

Trigger functions don't take any explicit parameters.  Everything they
need they get through specialized mechanisms (in plpgsql, special
variables like TG_ARGV).

regards, tom lane
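[Editor's note: a hedged sketch of what Tom describes, in the modern dollar-quoted form (adapt quoting for older releases). A plpgsql trigger function reads its CREATE TRIGGER arguments through TG_ARGV rather than declared parameters, and inserting into a table whose name is only known at run time requires dynamic SQL via EXECUTE (the USING clause shown needs 8.4 or later):]

```sql
CREATE FUNCTION audit_update() RETURNS trigger AS $$
DECLARE
    -- first argument passed by CREATE TRIGGER, e.g. 'tableinfo'
    audit_table text := 'audit_' || TG_ARGV[0];
BEGIN
    -- a table name cannot be a variable in a plain INSERT;
    -- build the statement dynamically instead
    EXECUTE 'INSERT INTO ' || quote_ident(audit_table)
         || ' SELECT ($1).*, now(), ''U''' USING OLD;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER tableinfo_audit_u
    BEFORE UPDATE ON tableinfo
    FOR EACH ROW EXECUTE PROCEDURE audit_update('tableinfo');
```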



Re: [GENERAL] Level of replication support?

2004-10-13 Thread Christopher Browne
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] ("Joshua D. 
Drake") transmitted:
> Slony replicates data every (10?) transactions.

No, Slony-I replicates each and every transaction that it processes,
identifying it as a transaction independent of others.

In practice, it is usually preferable to group updates together when
_applying_ them into destination systems; how much or how little
grouping is done is configurable.

> Mammoth Replicator replicates every transaction.

Just like Slony-I ;-).

> Mammoth is older than Slony and backed by my company Command Prompt,
> Inc.

> Neither is slated to be "integrated" with PostgreSQL as they are both
> good products that serve different purposes.

An excellent reason NOT to integrate these systems tightly is that it
allows them to be used between _different_ versions of PostgreSQL,
between different platforms, and such.

One of the common "use cases" people have been finding for Slony-I
hasn't got to do with maintaining replicas, but rather with doing
quick upgrades to a new version of PostgreSQL.

Rather than doing a pg_dump, and having to sit in downtime from the
time the dump starts until when it is applied, you set up a
replication target on the newer version of PostgreSQL.  If it takes 3
days to bring the target "online" and up to date, that doesn't
"matter" because it isn't downtime for the live system.  

Once the target is up to date, it can take seconds to minutes to
merely switch over to the new PG database, rather than the hours
needed by less sophisticated methods.  No doubt the same can be done
with Mammoth Replicator.

Tight integration with the database discourages that sort of thing.
-- 
(reverse (concatenate 'string "gro.mca" "@" "enworbbc"))
http://www3.sympatico.ca/cbbrowne/spreadsheets.html
Signs of   a Klingon Programmer -   1.  "Defensive  programming? Never!
Klingon programs are always on the offense. Yes, offensive programming
is what we do best."



Re: [GENERAL] PostgreSQL CE started

2004-10-13 Thread Aaron Glenn
What is the PostgreSQL project's official stance/view/comment on this?


aaron.glenn


On Thu, 14 Oct 2004 10:48:26 +0900 (JST), Tatsuo Ishii
<[EMAIL PROTECTED]> wrote:
> Hi all,
> 
> We, Software Research Associates, Inc., started "PostgreSQL CE"
> (PostgreSQL Certificated Engineer) on Oct 1st. PostgreSQL CE is an
> entry-level PostgreSQL engineer certification program. We expect at
> least several hundred people to take the exam by April 2005. We also
> hope that PostgreSQL CE makes PostgreSQL more and more popular.
> 
> At present, exams are held in about 100 test centers in Japan, and
> the exam itself is written in Japanese. However, since we have a
> partnership with Pearson VUE (http://www.pearsonvue.com/) to operate
> the exam, it is possible that PostgreSQL CE could be offered in the
> US and/or the rest of the world.
> 
> Software Research Associates, Inc. (http://www.sra.co.jp/index-en.html)
> is headquartered in Tokyo and does a lot of PostgreSQL business.
> --
> Tatsuo Ishii
> 


[GENERAL] PostgreSQL CE started

2004-10-13 Thread Tatsuo Ishii
Hi all,

We, Software Research Associates, Inc., started "PostgreSQL CE"
(PostgreSQL Certificated Engineer) on Oct 1st. PostgreSQL CE is an
entry-level PostgreSQL engineer certification program. We expect at
least several hundred people to take the exam by April 2005. We also
hope that PostgreSQL CE makes PostgreSQL more and more popular.

At present, exams are held in about 100 test centers in Japan, and
the exam itself is written in Japanese. However, since we have a
partnership with Pearson VUE (http://www.pearsonvue.com/) to operate
the exam, it is possible that PostgreSQL CE could be offered in the
US and/or the rest of the world.

Software Research Associates, Inc. (http://www.sra.co.jp/index-en.html)
is headquartered in Tokyo and does a lot of PostgreSQL business.
--
Tatsuo Ishii



Re: [GENERAL] Time at end of transaction

2004-10-13 Thread Phil Endecott
Thanks to Patrick and Richard for pointing out timeofday(); I was 
imagining that I'd have to do some server-side-code magic to get that.

But I'm still hoping that someone will have a better solution - using 
this I still need a near-global lock between setting the timestamps and 
committing the transaction.  Surely last-modified timestamps and 
cache-refreshing are being used all over the place...

Regards,
--Phil.
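[Editor's note: for readers finding this thread later — now() and current_timestamp are frozen at the start of the transaction, while timeofday() returns the actual wall-clock time (as text in releases of this era). A quick illustration:]

```sql
BEGIN;
SELECT now();        -- transaction start time; identical on every call
                     -- within this transaction
SELECT timeofday();  -- actual wall-clock time; advances between calls
COMMIT;
```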


Re: [GENERAL] Level of replication support?

2004-10-13 Thread Marc G. Fournier
On Wed, 13 Oct 2004, Joshua D. Drake wrote:
Hello,
There are two heavily supported and active replication projects.
1. Slony - http://www.slony.info
2. Mammoth Replicator - http://www.commandprompt.com/
Three, actually ...
  3. eRServer - http://www.pgsql.com
We're currently working on the next version ...

Marc G. Fournier   Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy  ICQ: 7615664


Re: [GENERAL] error opening pg_clog file

2004-10-13 Thread Tom Lane
Steve Wolfe <[EMAIL PROTECTED]> writes:
>So, I did "dd if=/dev/zero of=/usr/local/pgsql/data/pg_clog/0089 
> bs=8k count=1".  I did an ls to verify that the file existed.  I started 
> the postmaster back up, tried a VACUUM, and got:

> vacuumdb: vacuuming of database "hyperseek" failed: ERROR:  could not 
> access status of transaction 144565028
> DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0089": No 
> such file or directory

>I looked, and the "0089" file was gone again.  Is there anything I 
> can do to save the situation?  (The PG version is 7.4.2)

Try vacuuming the damaged database *first*.  vacuumdb is probably
starting off with something that hasn't got a problem.  CLOG will only
get truncated at the end of a successful database-wide vacuum ... but
in this case that's too soon.

regards, tom lane



[GENERAL] Has anyone tried Date/Darwen/Lorentzos's model for temporal data?

2004-10-13 Thread Eric D. Nielsen
I'm in the process of adding more historic information to one of my 
databases.  I've liked the theoretical treatment of the concept in 
"Temporal Data and the Relational Model", by Date, Darwen, & Lorentzos. 
 A lot of it is not realizable without a lot of user defined 
types/functions/etc.  I was wondering if anyone else has tried to use 
their approach as a base for their historical databases in PostGreSQL 
and has any "lessons learned" to share.

Thank you.
Eric Nielsen


Re: [GENERAL] Recovering data from corrupted table. Urgent Help!!

2004-10-13 Thread ruben
Hi Gaetano:
This procedure to recover data from a corrupted table should be
documented somewhere... If it is, I could not find it!

Now I wonder if I have lost any data, because after creating the
pg_clog/0004 file and running VACUUM everything seems OK.

Thanks a lot for your help.
Ruben.
Gaetano Mendola wrote:
[EMAIL PROTECTED] wrote:

Hi:
Is there any way to recover data from a corrupted table? I can only
run SELECTs on certain WHERE conditions.
I cannot vacuum, pg_dump, I've deleted the indexes and try to reindex,
always get error:
ERROR:  could not access status of transaction 4244329
DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No
such file or directory  ["No existe el fichero o el directorio" in the
original Spanish locale]
Thanks a lot.

Again:
create an empty file:
# touch /usr/local/pgsql/data/pg_clog/0004
At this point postgres will complain that an offset is missing;
fill your file with zeros (blocks of 8K) until the offset it
complains about is covered.

I forgot to suggest how to do that:
dd bs=8k count=1 < /dev/zero >> /usr/local/pgsql/data/pg_clog/0004
Repeat this command until the offset is covered.
Regards
Gaetano Mendola




[GENERAL] correct representation of timestamp difference

2004-10-13 Thread phil campaigne
Hi
I'm trying to create a select statement that will return all rows that
are older than 30 milliseconds.  Is either of these correct?
select event_id from event where (current_timestamp-timestamp)>.030
or
select event_id from event where 
(current_timestamp-timestamp)>00030.00
thanks,
Phil
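[Editor's note: subtracting two timestamps yields an interval, so the natural comparison is against an interval literal rather than a bare number. A sketch, assuming the column is named "timestamp" as in the post (it needs quoting, being a reserved word):]

```sql
-- rows whose timestamp is more than 30 milliseconds in the past
SELECT event_id
FROM event
WHERE current_timestamp - "timestamp" > interval '30 milliseconds';
```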



Re: [GENERAL] Date format for bulk copy

2004-10-13 Thread David Rysdam
Michael Fuhr wrote:
On Wed, Oct 13, 2004 at 01:32:01PM -0400, David Rysdam wrote:
 

Michael Fuhr wrote:
   

You could filter the data through a script that reformats certain
fields, then feed the reformatted data to PostgreSQL.  This is
usually a trivial task for Perl, awk, sed, or the like.
 

Right, I *can* do this.  But then I have to build knowledge into that 
script so it can find each of these date fields (there's like 20 of them 
across 10 different files) and then update that knowledge each time it 
changes.
   

In your case that's a reasonable argument against filtering the
data with a script.  Using a regular expression in the script might
reduce or eliminate the need for some of the logic, but then you'd
run the risk of reformatting data that shouldn't have been touched.
 

I'm still leaning towards just making postgres accept a ':' 
delimiter for milliseconds.
   

Based on your requirements, that might indeed be a better solution.
I'd probably choose to extend PostgreSQL rather than hack what
already exists, though.  Doing the latter might break something
else and you have to remember to add the hack every time you upgrade
the server software.  That can cause headaches for whoever inherits
the system from you unless it's well-documented.
 

By "extend PostgreSQL" do you mean create a custom input_function for 
timestamp?  Are there docs that give hints for replacing the input 
function of an existing type?  Someone else replied similarly, but I'm 
afraid I'm not familiar enough with PG to decipher it all.

Why not the user-defined type with associated user-defined input function?
   

If filtering the data is awkward, then that might be a better way
to go.
 

I think I will, when I get to that point.


Re: [GENERAL] Date format for bulk copy

2004-10-13 Thread Pierre-Frédéric Caillaud

Right, I *can* do this.  But then I have to build knowledge into that
script so it can find each of these date fields (there's like 20 of them
across 10 different files) and then update that knowledge each time it
changes.
In your case that's a reasonable argument against filtering the
data with a script.  Using a regular expression in the script might
reduce or eliminate the need for some of the logic, but then you'd
run the risk of reformatting data that shouldn't have been touched.
Yes, but:
	You can have your script query the database to fetch the data types
of the fields, and thereby know which ones are to be transformed and
how. The script would take as arguments a dump file and a database,
schema, and table; it would read the file and pipe the transformed
data into psql with a COPY FROM stdin command... could save you a lot
of work, no?

	A bonus is that your script can complain if it detects
incompatibilities, and be more fool-proof.




Re: [GENERAL] Level of replication support?

2004-10-13 Thread Ted Shab
Here is a good overview.  

http://jeremy.zawodny.com/blog/archives/000846.html

Note that http://pgreplicator.sourceforge.net/ is a
multi-master solution using TCL.  Despite the dire
home page, it is "actively" supported.  However, our
experience was that it was not industrial
strength.  We aren't TCL experts, though, so maybe
that would have helped.

--Ted
--- "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> There are two heavily supported and active
> replication projects.
> 
> 1. Slony - http://www.slony.info
> 2. Mammoth Replicator -
> http://www.commandprompt.com/
> 
> Slony is Open Source and uses triggers and a
> replication schema to 
> replicate data.
> 
> Mammoth Replicator is commercial and uses a
> transaction log and a master 
> control process to replicate data.
> 
> Slony replicates data every (10?) transactions.
> Mammoth Replicator replicates every transaction.
> 
> Mammoth is older than Slony and backed by my company
> Command Prompt, Inc.
> 
> Neither is slated to be "integrated" with PostgreSQL
> as they are both
> good products that serve different purposes.
> 
> Sincerely,
> 
> Joshua D. Drake
> 
> 
> [EMAIL PROTECTED] wrote:
> > Hello
> > 
> > I am going to do a comparison between the PgSQL and
> > MySQL replication systems.
> > 
> > I hear there are some replication projects
> available for PgSQL.  Which are
> > still active and serious, because I hear that some
> are not active or
> > incomplete?
> > 
> > Will any of these projects be merged with PgSQL
> soon?
> > 
> > I appreciate all information.
> > 
> > Thank you.
> > 
> > Tim
> > 
> > 
> > 
> 
> 
> -- 
> Command Prompt, Inc., home of PostgreSQL
> Replication, and plPHP.
> Postgresql support, programming shared hosting and
> dedicated hosting.
> +1-503-667-4564 - [EMAIL PROTECTED] -
> http://www.commandprompt.com
> Mammoth PostgreSQL Replicator. Integrated
> Replication for PostgreSQL
> > 







Re: [GENERAL] Date format for bulk copy

2004-10-13 Thread Michael Fuhr
On Wed, Oct 13, 2004 at 01:32:01PM -0400, David Rysdam wrote:
> Michael Fuhr wrote:
> >You could filter the data through a script that reformats certain
> >fields, then feed the reformatted data to PostgreSQL.  This is
> >usually a trivial task for Perl, awk, sed, or the like.
> >
> Right, I *can* do this.  But then I have to build knowledge into that 
> script so it can find each of these date fields (there's like 20 of them 
> across 10 different files) and then update that knowledge each time it 
> changes.

In your case that's a reasonable argument against filtering the
data with a script.  Using a regular expression in the script might
reduce or eliminate the need for some of the logic, but then you'd
run the risk of reformatting data that shouldn't have been touched.

> I'm still leaning towards just making postgres accept a ':' 
> delimiter for milliseconds.

Based on your requirements, that might indeed be a better solution.
I'd probably choose to extend PostgreSQL rather than hack what
already exists, though.  Doing the latter might break something
else and you have to remember to add the hack every time you upgrade
the server software.  That can cause headaches for whoever inherits
the system from you unless it's well-documented.

> Also, how much would a secondary script slow down the bulk copy,
> if any?

Probably some, but perhaps not enough to be significant.  I'd expect
the database to be the bottleneck, but I'd have to run tests to say
for certain.

> >Sounds like Sybase is dumping in hex, whereas PostgreSQL expects
> >octal.  If you can't change the dump format, then again, filtering
> >the data through a script might work.
> >
> Oh, so I can load binary data into PG if it's ASCII-encoded octal?

Yes -- see the "Binary Data Types" documentation:

http://www.postgresql.org/docs/7.4/static/datatype-binary.html

> Why not the user-defined type with associated user-defined input function?

If filtering the data is awkward, then that might be a better way
to go.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



Re: [GENERAL] Level of replication support?

2004-10-13 Thread Joshua D. Drake
Hello,
There are two heavily supported and active replication projects.
1. Slony - http://www.slony.info
2. Mammoth Replicator - http://www.commandprompt.com/
Slony is Open Source and uses triggers and a replication schema to 
replicate data.

Mammoth Replicator is commercial and uses a transaction log and a master 
control process to replicate data.

Slony replicates data every (10?) transactions.
Mammoth Replicator replicates every transaction.
Mammoth is older than Slony and backed by my company Command Prompt, Inc.
Neither is slated to be "integrated" with PostgreSQL as they are both
good products that serve different purposes.
Sincerely,
Joshua D. Drake
[EMAIL PROTECTED] wrote:
Hello
I am going to do a comparison between the PgSQL and MySQL replication systems.
I hear there are some replication projects available for PgSQL.  Which are
still active and serious, because I hear that some are not active or
incomplete?
Will any of these projects be merged with PgSQL soon?
I appreciate all information.
Thank you.
Tim


--
Command Prompt, Inc., home of PostgreSQL Replication, and plPHP.
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL




Re: [GENERAL] Level of replication support?

2004-10-13 Thread Robby Russell
On Wed, 2004-10-13 at 10:59 -0700, Robby Russell wrote:
> On Wed, 2004-10-13 at 20:02 +0200, [EMAIL PROTECTED] wrote:
> > Hello
> > 
> > I am going to do a comparison between the PgSQL and MySQL replication systems.
> > 
> > I hear there are some replication projects available for PgSQL.  Which are
> > still active and serious, because I hear that some are not active or
> > incomplete?
> > 
> 
> Slony-I is the most active *community* project currently. 
> 

forgot the url

http://gborg.postgresql.org/project/slony1/projdisplay.php


> 
> > Will any of these projects be merged with PgSQL soon?
> > 
> 

It's something you would install along with pgsql. Not likely to ever be
bundled together.

-Robby

-- 
/***
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON  | www.planetargon.com
* Portland, OR  | [EMAIL PROTECTED]
* 503.351.4730  | blog.planetargon.com
* PHP/PostgreSQL Hosting & Development
/





Re: [GENERAL] Level of replication support?

2004-10-13 Thread Robby Russell
On Wed, 2004-10-13 at 20:02 +0200, [EMAIL PROTECTED] wrote:
> Hello
> 
> I am going to do a comparison between the PgSQL and MySQL replication systems.
> 
> I hear there are some replication projects available for PgSQL.  Which are
> still active and serious, because I hear that some are not active or
> incomplete?
> 

Slony-I is the most active *community* project currently. 


> Will any of these projects be merged with PgSQL soon?
> 

-- 
/***
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON  | www.planetargon.com
* Portland, OR  | [EMAIL PROTECTED]
* 503.351.4730  | blog.planetargon.com
* PHP/PostgreSQL Hosting & Development
/





[GENERAL] Level of replication support?

2004-10-13 Thread nd02tsk
Hello

I am going to do a comparison between the PgSQL and MySQL replication systems.

I hear there are some replication projects available for PgSQL.  Which are
still active and serious, because I hear that some are not active or
incomplete?

Will any of these projects be merged with PgSQL soon?

I appreciate all information.

Thank you.

Tim





Re: [GENERAL] Date format for bulk copy

2004-10-13 Thread David Rysdam
Greg Stark wrote:

> David Rysdam <[EMAIL PROTECTED]> writes:
>
>> In my brute force port, I just bulk copied the date
>> fields into temporary tables and then did a to_timestamp(field, 'Mon DD
>> HH:MI:SS:MSAM').
>
>> Again, I created a temporary table and did a decode(field, 'hex') to the
>> real table.
>
> This is the standard approach. You're rather lucky these are the only
> data representation changes you've had to do so far. I fear you'll run into
> more and more complex changes over time and trying to avoid the temporary
> table will get harder and harder.

No, I think I'm OK there.  These are programmatically-generated values 
and I've already been through them all once.  Just the millisecond issue 
and the hex binary issue AFAIK.

> If it were me I would consider processing the files in perl. It should be
> pretty easy to do both of these modifications very quickly.

Very quick and easy to do one time.  A little trickier to handle in an 
elegant, maintainable way for the dozens of data reloads I do every 
month for GBs of data onto two different server types.

> If you really want to go with a custom C code then you might be able to just
> grab the byteain/byteaout functions from src/backend/util/adt/varlena into a
> separate module and create new functions with modified names. Load it with
> CREATE FUNCTION byteain ... AS 'my_bytea_funcs.so' 'my_byteain';
> Or maybe create the function as my_byteain in postgres and then update the
> catalog entries somehow. I'm not sure how to do that but it shouldn't be too
> hard. And it might make it easier to do the substitution for the data load and
> then undo the change afterwards.

Why not create a type and then define the load function to be the 
equivalent of "decode('hex')"?

> Doing the same for timestamp is a bit trickier but you could copy
> ParseDateTime from datetime.c as a static function for your module.
> Be careful though, test this out thoroughly on a test database. I'm not sure
> of all the impacts of altering the in/out functions for data types. I expect
> it would break pg_dump, for example. And I would worry about the statistics
> tables too.

This is kind of a hybrid of my suggestions and the problems are a hybrid 
as well.  :)

1) Just change the timestamp type so that it allows a ':' delimiter for 
milliseconds.  Potential problems: Other parts of the code won't expect 
it.  People don't want that.

2) Create a new type.  Potential problem: Things like date ranges 
probably wouldn't work anymore, since the server wouldn't know it's a 
date now.


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Date format for bulk copy

2004-10-13 Thread David Rysdam
Michael Fuhr wrote:

> On Wed, Oct 13, 2004 at 10:06:58AM -0400, David Rysdam wrote:
>
>> Sybase bulk copies the date fields out in this format:
>>
>> Mar  4 1973 10:28:00:000AM
>>
>> Postgresql's COPY (or psql \copy) doesn't like that format.
>
> You could filter the data through a script that reformats certain
> fields, then feed the reformatted data to PostgreSQL.  This is
> usually a trivial task for Perl, awk, sed, or the like.

Right, I *can* do this.  But then I have to build knowledge into that 
script so it can find each of these date fields (there's like 20 of them 
across 10 different files) and then update that knowledge each time it 
changes.  I'm still leaning towards just making postgres accept a ':' 
delimiter for milliseconds.  Also, how much would a secondary script 
slow down the bulk copy, if any?

>> I have a similarish problem with another field type.  In Sybase it's a
>> binary format.  In postgres it is a binary format (bytea).  But Sybase
>> bcps the data out in ASCII.  Sybase recognizes that when it is a binary
>> field and auto-converts the ASCII back to binary.  Postgres doesn't.
>> Again, I created a temporary table and did a decode(field, 'hex') to the
>> real table.
>
> Sounds like Sybase is dumping in hex, whereas PostgreSQL expects
> octal.  If you can't change the dump format, then again, filtering
> the data through a script might work.

Oh, so I can load binary data into PG if it's ASCII-encoded octal?  Why 
not the user-defined type with associated user-defined input function?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
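To make Michael's hex-versus-octal point concrete: in PostgreSQL 7.x/8.0, COPY text format accepts bytea values as backslash-escaped octal, so a Sybase hex dump field can be rewritten before loading. A minimal sketch of such a filter (the function name and the assumption that the whole field is plain hex are mine, not from the thread):

```python
def hex_to_pg_octal(hex_field: str) -> str:
    """Convert an ASCII hex dump of binary data (e.g. 'DEADBEEF') into
    the backslash-escaped octal form that COPY accepts for bytea."""
    raw = bytes.fromhex(hex_field)
    out = []
    for b in raw:
        # Printable ASCII (except backslash) can pass through as-is;
        # everything else becomes \\nnn octal. The backslash is doubled
        # because COPY itself consumes one level of escaping.
        if 32 <= b <= 126 and b != 0x5C:
            out.append(chr(b))
        else:
            out.append("\\\\%03o" % b)
    return "".join(out)
```

You would apply this to just the bytea columns of each bcp output line before feeding the file to COPY; the other columns pass through untouched.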


Re: [GENERAL] Date format for bulk copy

2004-10-13 Thread Michael Fuhr
On Wed, Oct 13, 2004 at 10:06:58AM -0400, David Rysdam wrote:

> Sybase bulk copies the date fields out in this format:
> 
> Mar  4 1973 10:28:00:000AM
> 
> Postgresql's COPY (or psql \copy) doesn't like that format.

You could filter the data through a script that reformats certain
fields, then feed the reformatted data to PostgreSQL.  This is
usually a trivial task for Perl, awk, sed, or the like.

> I have a similarish problem with another field type.  In Sybase it's a 
> binary format.  In postgres it is a binary format (bytea).  But Sybase 
> bcps the data out in ASCII.  Sybase recognizes that when it is a binary 
> field and auto-converts the ASCII back to binary.  Postgres doesn't.  
> Again, I created a temporary table and did a decode(field, 'hex') to the 
> real table.

Sounds like Sybase is dumping in hex, whereas PostgreSQL expects
octal.  If you can't change the dump format, then again, filtering
the data through a script might work.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly
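The filtering approach Michael suggests is only a few lines in practice. A sketch in Python rather than Perl, which parses the Sybase format and emits an ISO timestamp that COPY parses directly (locating the date columns in each file is still up to the caller):

```python
from datetime import datetime

# e.g. "Mar  4 1973 10:28:00:000AM" -- note the ':' before milliseconds
SYBASE_FMT = "%b %d %Y %I:%M:%S:%f%p"

def reformat_date(field: str) -> str:
    """Rewrite a Sybase bcp date into ISO format with a '.' before the
    milliseconds, which PostgreSQL's COPY accepts without complaint."""
    dt = datetime.strptime(field.strip(), SYBASE_FMT)
    # %f yields microseconds; trim to milliseconds to match the input.
    return dt.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]
```

As for the speed concern raised above: a streaming filter like this adds one pass over the file, which is normally negligible next to the load itself.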


Re: [GENERAL] ODBC

2004-10-13 Thread Robby Russell
On Wed, 2004-10-13 at 11:17 -0400, Alexander Cohen wrote:
> Hi,
> 
> Im looking for instructions on installing the postgresql ODBC driver on 
> mac osx and windows. Is there any reference for that. Any help is 
> apprecitaed. BTW, i need to compile it and install it wihtout using the 
> windows installer.
> 
> Alex

Try here:

http://gborg.postgresql.org/project/psqlodbc/projdisplay.php


-- 
/***
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON  | www.planetargon.com
* Portland, OR  | [EMAIL PROTECTED]
* 503.351.4730  | blog.planetargon.com
* PHP/PostgreSQL Hosting & Development
/





Re: [GENERAL] Date format for bulk copy

2004-10-13 Thread Greg Stark

David Rysdam <[EMAIL PROTECTED]> writes:

> In my brute force port, I just bulk copied the date
> fields into temporary tables and then did a to_timestamp(field, 'Mon DD 
> HH:MI:SS:MSAM'). 

> Again, I created a temporary table and did a decode(field, 'hex') to the
> real table.

This is the standard approach. You're rather lucky these are the only
data representation changes you've had to do so far. I fear you'll run into
more and more complex changes over time and trying to avoid the temporary
table will get harder and harder.

If it were me I would consider processing the files in perl. It should be
pretty easy to do both of these modifications very quickly.


If you really want to go with a custom C code then you might be able to just
grab the byteain/byteaout functions from src/backend/util/adt/varlena into a
separate module and create new functions with modified names. Load it with
CREATE FUNCTION byteain ... AS 'my_bytea_funcs.so' 'my_byteain';

Or maybe create the function as my_byteain in postgres and then update the
catalog entries somehow. I'm not sure how to do that but it shouldn't be too
hard. And it might make it easier to do the substitution for the data load and
then undo the change afterwards.

Doing the same for timestamp is a bit trickier but you could copy
ParseDateTime from datetime.c as a static function for your module.

Be careful though, test this out thoroughly on a test database. I'm not sure
of all the impacts of altering the in/out functions for data types. I expect
it would break pg_dump, for example. And I would worry about the statistics
tables too.

-- 
greg


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Still more pg_clog errors

2004-10-13 Thread Steve Wolfe
  In reference to the pg_clog errors I'm having, I am still looking for 
tips or help.  Here's the info again:

 "ERROR:  could not access status of transaction 143934068
DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0089": No 
such file or directory

  Now, despite creating an 8k file of zeros (or a 256k file of zeros to 
match the others in the directory), when I start the database and try a 
vacuum, I get the same message, and upon inspection, the file is gone - 
as if Postgres is removing the file.

   Any tips or help on what I can do?
steve
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html
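For reference, the standard workaround discussed elsewhere in this thread (a zero-filled dummy clog segment, so VACUUM treats the affected transactions as aborted) can be scripted. A sketch, with the caveat Steve reports above that the server may still remove the file, so this alone may not resolve his case; stop the postmaster before touching pg_clog:

```python
CLOG_SEGMENT_SIZE = 256 * 1024  # 256 KB, matching the other pg_clog files

def create_zero_segment(path: str) -> None:
    """Write a zero-filled dummy clog segment (e.g. .../pg_clog/0089).
    All-zero status bytes mean 'transaction aborted' to the server."""
    with open(path, "wb") as f:
        f.write(b"\x00" * CLOG_SEGMENT_SIZE)
```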


Re: [GENERAL] Proposal: GRANT cascade to implicit sequences

2004-10-13 Thread Tom Lane
Greg Stark <[EMAIL PROTECTED]> writes:
> I can't think of a good approach for migration of old pg_dumps though, so
> perhaps this is more trouble than it's worth.

That would probably be the major objection to any redefinition of the
meanings of the individual sequence permissions.  We could possibly
invent a couple of brand new permission bits though, and stipulate that
"UPDATE" incorporates them both.

> Implicit sequences on the other hand can be migrated easily by ignoring all
> explicit grants and just looking at the grants on the table.

It's not really that easy.  Before we hack up the permissions system like
this I'd want to see a complete solution, which this is not, because it
doesn't work in the context of rules.  Consider

CREATE TABLE t (id SERIAL ...);

CREATE VIEW v AS SELECT * FROM t;

CREATE RULE r AS ON INSERT TO v DO INSTEAD INSERT INTO t ...

GRANT INSERT ON v TO joeuser;

joeuser will be able to invoke the insertion rule, but nextval() will
still fail because it doesn't know about the rule context --- it'll
see joeuser as the current user, not the owner of the rule.

Eventually I'd like to replace the nextval('foo') notation with a parsed
construct foo.nextval, which is (a) Oracle compatible, (b) able to
withstand renamings of the foo sequence, and (c) amenable to having the
permissions check done during rangetable scanning, which would fix the
rule problem.  There is some discussion of this in the pghackers archives.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Proposal: GRANT cascade to implicit sequences

2004-10-13 Thread Greg Stark

Bruno Wolff III <[EMAIL PROTECTED]> writes:

> I also think there is some merit in splitting the access rights for nextval
> and setval, so that insert access grants access to nextval and update access
> grants access to setval (or perhaps both nextval and setval). That way people
> who can just insert in the table can't set the sequence number backwards.

That might be a useful thing to do to sequences in general. Being able to
grant INSERT on a sequence to allow nextval without allowing setval could be
useful even for explicit sequences.

I can't think of a good approach for migration of old pg_dumps though, so
perhaps this is more trouble than it's worth.

Implicit sequences on the other hand can be migrated easily by ignoring all
explicit grants and just looking at the grants on the table.

-- 
greg


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] update query confusion

2004-10-13 Thread Ian Harding
Leave assembliesBatch out of the FROM and just put the condition in the
WHERE.  Something like 

UPDATE assembliesBatch
FROM assemblies 
JOIN .
WHERE assembliesBatch.AssemblyID = assemblies.assemblyID
AND assembliesBatch.batchID = 5

Ian Harding
Programmer/Analyst II
Tacoma-Pierce County Health Department
[EMAIL PROTECTED]
Phone: (253) 798-3549
Pager: (253) 754-0002

>>> Sim Zacks <[EMAIL PROTECTED]> 10/12/04 7:22 AM >>>
Ok. I got it working by adding
"and assembliesBatch.AssembliesBatchID=a.AssembliesBatchID"
to the where clause. This seems a bit awkward sytactically. Is there a
cleaner way of doing it?

Thank You
Sim Zacks
IT Manager
CompuLab
04-829-0145 - Office
04-832-5251 - Fax



The following query updated all the rows in the
AssembliesBatch table, not just where batchID=5.

There are 2 rows in the AssembliesBatch table with batch ID of
5 and I wanted to update both of them with their price, based
on the data in the from clause. One row has 105 units and the
other row has 2006 units. the active price in both rows is 6.6
and the pricedifferential is 0. My expectation is that the
first row would be updated to 693 and the second to be updated
to 13239.6. Instead every row in the table was updated to 693.

This syntax works in MS SQL Server to update exactly as I
expected, with the difference that you have to use the
aliasname after the update keyword and postgresql does not
allow that.
If anyone can help, I would greatly appreciate it.

update AssembliesBatch set
BuildPrice=a.units*(coalesce(ActivePrice,0) +
coalesce(PriceDifferential,0))
from AssembliesBatch a join assemblies b on
a.AssemblyID=b.assemblyID
left join qry_AssemblyPrices c on c.AssemblyID=b.assemblyID
left join ProductQuantityPrice d on d.ProductID=b.ProductID
inner join qry_TotalBatchProductCards e on
e.ProductID=b.ProductID and e.BatchID=a.BatchID 
and e.TotalCards between minquantity and maxquantity
where a.BatchID=5;

Thank You
Sim Zacks
IT Manager
CompuLab
04-829-0145 - Office
04-832-5251 - Fax


---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if
your
  joining column's datatypes do not match


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] [PERFORM] query problem

2004-10-13 Thread Robin Ericsson
Sorry, this should have been going to performance.



Regards,
Robin


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] [PERFORM] query problem

2004-10-13 Thread Robin Ericsson
On Wed, 2004-10-13 at 18:01 +0200, Robin Ericsson wrote:
> Using exact timestamp makes the query go back as it should in speed (see
> explain below). However I still have the problem using a stored
> procedure or even using the "ago"-example from above.

Well, after changing ago() to use timestamp without time zone, the
query runs fine.  This query now takes ~2ms.

SELECT
data.entered,
data.machine_id,
datatemplate_intervals.template_id,
data_values.value
FROM
data, data_values, datatemplate_intervals
WHERE
datatemplate_intervals.id = data_values.template_id AND
data_values.data_id = data.id AND
data.machine_id IN (SELECT machine_id FROM machine_group_xref
WHERE group_id = 1) AND
ago('60 seconds') < data.entered

Using it in this procedure.
select * from get_current_machine_status('60 seconds', 1);
takes ~100s.  Maybe I'm doing something obviously wrong?

CREATE TYPE public.mstatus_holder AS
   (entered timestamp,
machine_id int4,
template_id int4,
value varchar);
CREATE OR REPLACE FUNCTION public.get_current_machine_status(interval,
int4)
  RETURNS SETOF mstatus_holder AS
'
SELECT
data.entered,
data.machine_id,
datatemplate_intervals.template_id,
data_values.value
FROM
data, data_values, datatemplate_intervals
WHERE
datatemplate_intervals.id = data_values.template_id AND
data_values.data_id = data.id AND
data.machine_id IN (SELECT machine_id FROM machine_group_xref WHERE
group_id = $2) AND
ago($1) < data.entered
'
  LANGUAGE 'sql' VOLATILE;


Regards,
Robin



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Proper Sizing of Shared Buffer Cache

2004-10-13 Thread Chris Browne
"Don Kelloway" <[EMAIL PROTECTED]> writes:
> I'm a first-time user with PostgreSQL so please forgive my ignorance.
>
> I've purchased (and read) Practical PostgreSQL (O'Reilly) and
> PostgreSQL Essential Reference (New Riders).  So far, so good.  I
> think learning PostgreSQL will not be as difficult as I thought it
> would be.  I've also been googling for the last few days, but I have
> a question in regards to determining the proper size of the buffer
> cache parameter.
>
> http://www.postgresql.org/docs/aw_pgsql_book/hw_performance/node6.html
>
> The above webpage states that ideally, the POSTGRESQL shared buffer cache 
> will be:
>
> - Large enough to hold most commonly-accessed tables
> - Small enough to avoid swap pagein activity
>
> My question is how do you determine how large the most
> commonly-accessed table(s) are?  I thought maybe I could view the
> pg_stat_database, but I don't think that provides the answer I'm
> seeking.  Can someone point me in the right direction?  It would be
> very much appreciated.

Alas, the slickest book in this regard is Douglas & Douglas (New
Riders), which has a section that can guide you through how PostgreSQL
arranges its filesystem usage, which is kind of what you _really_ need
for this.

Although that may be a bit of a red herring.

The "rule of thumb" is that you should devote about 10% of available
memory (on a dedicated DBMS server, that would presumably be 10% of
the memory on the machine; on a machine doing other things, scale it
down...) to shared buffer cache.

If 10% is much more than 82MB, then you can pretty safely limit
yourself to about 10000-15000 as the # of 8K blocks.  There isn't
evidence available to establish that having much more buffer cache
than that is particularly helpful.  

The problem with having a larger buffer cache is twofold:

 1.  It will compete with the OS file cache.  Data loaded into the
 buffer cache firstly has to be read by the OS, which is therefore
 in the OS file cache already.  The bigger the buffer cache, the
 more redundant cacheing takes place.

 2.  Backends need to scan through the buffer cache to look for data;
 the bigger the cache, the more that scan costs.
-- 
let name="cbbrowne" and tld="cbbrowne.com" in String.concat "@" [name;tld];;
http://www.ntlug.org/~cbbrowne/linuxxian.html
A VAX is virtually a computer, but not quite.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
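The 10%-of-RAM rule of thumb above translates into a shared_buffers value with a little arithmetic. A sketch that computes the number of 8 KB blocks for a given amount of memory (the 15000-block ceiling is the list's informal suggestion, not an official figure):

```python
BLOCK_SIZE = 8192      # PostgreSQL page size in bytes
SUGGESTED_CAP = 15000  # informal upper bound on useful buffers, per the advice above

def shared_buffers_blocks(ram_mb: int, fraction_pct: int = 10) -> int:
    """Return a shared_buffers setting (in 8 KB blocks) as a
    percentage of RAM, capped at the suggested ceiling."""
    ram_bytes = ram_mb * 1024 * 1024
    blocks = ram_bytes * fraction_pct // 100 // BLOCK_SIZE
    return min(blocks, SUGGESTED_CAP)
```

For example, a dedicated 1 GB server lands at roughly 13000 blocks (about 102 MB), while anything with 4 GB or more simply hits the cap.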


Re: [GENERAL] Proposal: GRANT cascade to implicit sequences

2004-10-13 Thread Bruno Wolff III
On Wed, Oct 13, 2004 at 00:37:35 -0600,
  Michael Fuhr <[EMAIL PROTECTED]> wrote:
> Comments?  Can anybody think of why cascading GRANT and REVOKE to
> implicit sequences might be A Bad Idea?

Since you can do odd things using explicit sequences, limiting implicit
sequences to make things convenient in the common case seems like a
reasonable goal.

If you go that route it may be a good idea to not allow direct grants and
revokes on implicit sequences and just have their access rights derived from
the tables.

I also think there is some merit in splitting the access rights for nextval
and setval, so that insert access grants access to nextval and update access
grants access to setval (or perhaps both nextval and setval). That way people
who can just insert in the table can't set the sequence number backwards.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Commit / Rollback in PL/pgSQL ?

2004-10-13 Thread Tino Wildenhain
Am Mi, den 13.10.2004 schrieb Michael Kleiser um 17:44:
> I found on
> http://www.physiol.ox.ac.uk/Computing/Online_Documentation/postgresql/plpgsql-porting.html
> that it is not possible to start or end a transaction in PL/pgSQL.
> 
> I tried to create a PL/pgSQL function on PostgreSQL 8.0 beta 3.
> I can compile
> 
> CREATE OR REPLACE FUNCTION insert_many_commit( integer ) RETURNS void AS '
> DECLARE
>counter INTEGER := $1;
> BEGIN
>WHILE counter > 0 LOOP
>  INSERT INTO testtab (id, modification_date, description )
>  VALUES ( NEXTVAL(''seq_testtab''),now(), ''Eintrag von insert_many() '' || 
> counter );
>  COMMIT;
>  counter := counter-1;
>END LOOP;
>RETURN;
> END;
> ' LANGUAGE 'plpgsql';
> 
> So I think it's possible to have COMMIT / ROLLBACK in PL/pgSQL

No, you can't. The whole execution is part of one statement, which is 
then automatically encapsulated in one transaction. Maybe the 
savepoint features of the upcoming 8.0 release will help you.

OTOH, why do you want to commit here anyway?

Regards
Tino


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] LISTEN/NOTIFY for lightweight replication

2004-10-13 Thread Ted Shab
Thanks.  I was thinking iirc was the transport
protocol :-)

Looks like dblink is the best bet here.

--Ted
--- Richard Huxton <[EMAIL PROTECTED]> wrote:

> Ted Shab wrote:
> > Richard,
> > 
> > Thanks for the response.
> > 
> > I'll look into both the dblink and iirc.  
> > 
> > Do you know of any extended examples of either?
> 
> dblink is in the contrib/ folder of the source
> distribution and possibly 
> your packaged version if you use such a thing. Never
> needed it myself, 
> but the documentation looks clear enough.
> 
> As for listen/notify possibly dropping duplicate
> notifications... Ah! 
> it's in the "SQL COMMANDS" reference part of the
> manuals
> 
> NOTIFY behaves like Unix signals in one important
> respect: if the same 
> condition name is signaled multiple times in quick
> succession, 
> recipients may get only one notify event for several
> executions of 
> NOTIFY. So it is a bad idea to depend on the number
> of notifies 
> received. Instead, use NOTIFY to wake up
> applications that need to pay 
> attention to something, and use a database object
> (such as a sequence) 
> to keep track of what happened or how many times it
> happened.
> 
> --
>Richard Huxton
>Archonet Ltd
> 




___
Do you Yahoo!?
Declare Yourself - Register online to vote today!
http://vote.yahoo.com

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] LISTEN/NOTIFY for lightweight replication

2004-10-13 Thread Richard Huxton
Ted Shab wrote:
Richard,
Thanks for the response.
I'll look into both the dblink and iirc.  

Do you know of any extended examples of either?
dblink is in the contrib/ folder of the source distribution and possibly 
your packaged version if you use such a thing. Never needed it myself, 
but the documentation looks clear enough.

As for listen/notify possibly dropping duplicate notifications... Ah! 
it's in the "SQL COMMANDS" reference part of the manuals

NOTIFY behaves like Unix signals in one important respect: if the same 
condition name is signaled multiple times in quick succession, 
recipients may get only one notify event for several executions of 
NOTIFY. So it is a bad idea to depend on the number of notifies 
received. Instead, use NOTIFY to wake up applications that need to pay 
attention to something, and use a database object (such as a sequence) 
to keep track of what happened or how many times it happened.

--
  Richard Huxton
  Archonet Ltd
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match
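The coalescing behaviour Richard quotes can be illustrated with a toy model: pending notifications behave like a set of condition names, so repeated signals raised before a listener wakes collapse into one delivery. This pure-Python analogy (my illustration, not the server's actual implementation) shows why you count events in a table or sequence rather than counting notifies:

```python
class NotifyModel:
    """Toy model of NOTIFY semantics: duplicate condition names
    raised before delivery are coalesced into a single event."""

    def __init__(self):
        self._pending = set()

    def notify(self, condition: str) -> None:
        self._pending.add(condition)  # a set, so duplicates vanish

    def deliver(self) -> list:
        """Wake the listener: hand over each distinct condition once."""
        events = sorted(self._pending)
        self._pending.clear()
        return events

model = NotifyModel()
for _ in range(5):                 # five rapid NOTIFYs on one condition...
    model.notify("orders_changed")
assert model.deliver() == ["orders_changed"]   # ...arrive as one event
```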


Re: [GENERAL] LISTEN/NOTIFY for lightweight replication

2004-10-13 Thread Alvaro Herrera
On Wed, Oct 13, 2004 at 08:32:04AM -0700, Ted Shab wrote:

> Thanks for the response.
> 
> I'll look into both the dblink and iirc.  

That's actually only dblink.  IIRC is an acronym, meaning "if I recall
correctly", IIRC.

-- 
Alvaro Herrera ()
"The West won the world not by the superiority of its ideas or values
or religion but rather by its superiority in applying organized violence.
Westerners often forget this fact, non-Westerners never do."
(Samuel P. Huntington)


---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] Commit / Rollback in PL/pgSQL ?

2004-10-13 Thread Michael Kleiser
I found on
http://www.physiol.ox.ac.uk/Computing/Online_Documentation/postgresql/plpgsql-porting.html
that it is not possible to start or end a transaction in PL/pgSQL.
I tried to create a PL/pgSQL function on PostgreSQL 8.0 beta 3.
I can compile
CREATE OR REPLACE FUNCTION insert_many_commit( integer ) RETURNS void AS '
DECLARE
  counter INTEGER := $1;
BEGIN
  WHILE counter > 0 LOOP
INSERT INTO testtab (id, modification_date, description )
VALUES ( NEXTVAL(''seq_testtab''),now(), ''Eintrag von insert_many() '' || 
counter );
COMMIT;
counter := counter-1;
  END LOOP;
  RETURN;
END;
' LANGUAGE 'plpgsql';
So I think it's possible to have COMMIT / ROLLBACK in PL/pgSQL.
But I can't execute this function this way:
# select insert_many_commit(1000);
ERROR:  SPI_execute_plan failed executing query "COMMIT": SPI_ERROR_TRANSACTION
Is there another way to execute this function?
If not, is it possible in other languages like PL/Python or PL/Perl?
regards
Michael Kleiser
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] could not access status of transaction 4244329

2004-10-13 Thread Tom Lane
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
>ERROR:  could not access status of transaction 4244329
>DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No 
> existe el fichero o el directorio

What files actually appear in /usr/local/pgsql/data/pg_clog/ ?

The standard advice for working around this sort of thing is to create a
dummy pg_clog/0004 file and fill it with 256K of zeroes, so that the
VACUUM will decide that the affected row is dead.  However it would be
a good idea to first try to understand what's gone wrong.  Is this an
isolated dropped-bit in a transaction status field, or a symptom of more
general corruption in the table?  You could try to determine which page
of the table contains the corrupted row, and then dump out that page
with pg_filedump for visual analysis.  (See past discussions of
corrupted-data recovery in the list archives for details.)

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] LISTEN/NOTIFY for lightweight replication

2004-10-13 Thread Ted Shab
Richard,

Thanks for the response.

I'll look into both the dblink and iirc.  

Do you know of any extended examples of either?

--Ted
--- Richard Huxton <[EMAIL PROTECTED]> wrote:

> Ted Shab wrote:
> > Hi,
> > 
> > I'm trying to come up with a relatively simple
> > multi-master replication solution.  This is for
> > multiple databases that need to be discreet, and
> > change relatively infrequently (10-30 updates an
> > hour), and almost never update each others data
> (less
> > than once a day).  
> > 
> > The TCL-based replication project for multi-master
> is
> > troublesome to configure and seems to really
> impact
> > performance.  It can be assumed that the
> master-slave
> > setup will not work for me, nor do we want to
> purchase
> > a commercial soluton, nor can we run this all from
> one
> > central database.
> 
> > e.  If there is a field level conflict, raise an
> > exception (TBD).
> 
> Exception handling and failure recovery are what
> makes for all the work 
> in replication.
> 
> I don't think a pure listen/notify setup will be
> enough because iirc the 
> system doesn't guarantee delivery of multiple
> notifications if >1 are 
> queued.
> 
> Have you looked into the possibility of using dblink
> to handle updates 
> of each others' data? That would mean your problem
> reverting to one of 
> single-master replication.
> 
> -- 
>Richard Huxton
>Archonet Ltd
> 
> ---(end of
> broadcast)---
> TIP 9: the planner will ignore your desire to choose
> an index scan if your
>   joining column's datatypes do not match
> 





---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Recovering data from corrupted table. Urgent Help!!

2004-10-13 Thread Martijn van Oosterhout
Create a file with that name filled with zeros with the same length as
the other files in that directory. That should get you far enough to
dump the data. Then run a complete set of memory and disk checks on
your system...

On Wed, Oct 13, 2004 at 02:56:37PM +0100, [EMAIL PROTECTED] wrote:
> Hi:
> 
> Is there any way to recover data from a corrupted table? I can only run 
> SELECTs on certain WHERE conditions.
> 
> I cannot vacuum, pg_dump, I've deleted the indexes and try to reindex, 
> always get error:
> 
> ERROR:  could not access status of transaction 4244329
> DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No 
> existe el fichero o el directorio
> 
> Thanks a lot.
> 
> 
> 
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
> 
>   http://www.postgresql.org/docs/faqs/FAQ.html

-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] Recovering data from corrupted table. Urgent Help!!

2004-10-13 Thread Gaetano Mendola
Gaetano Mendola wrote:
[EMAIL PROTECTED] wrote:
Hi:
Is there any way to recover data from a corrupted table? I can only 
run SELECTs on certain WHERE conditions.

I cannot vacuum, pg_dump, I've deleted the indexes and try to reindex, 
always get error:

ERROR:  could not access status of transaction 4244329
DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No 
existe el fichero o el directorio

Thanks a lot.

Again:
create an empty file:
# touch /usr/local/pgsql/data/pg_clog/0004
at this point postgres will complain about the fact that an offset is 
missing,

at this point fill with 0 your file ( blocks of 8K ) till reach that 
offset reclaimed.
I forgot to suggest that you do:
dd bs=8k count=1 < /dev/zero >> /usr/local/pgsql/data/pg_clog/0004
you have to repeat this command till the offset is covered.

Regards
Gaetano Mendola



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faqs/FAQ.html


Re: [GENERAL] Recovering data from corrupted table. Urgent Help!!

2004-10-13 Thread Gaetano Mendola
[EMAIL PROTECTED] wrote:
Hi:
Is there any way to recover data from a corrupted table? I can only run 
SELECTs on certain WHERE conditions.

I cannot vacuum, pg_dump, I've deleted the indexes and try to reindex, 
always get error:

ERROR:  could not access status of transaction 4244329
DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No 
existe el fichero o el directorio

Thanks a lot.
Again:
create an empty file:
# touch /usr/local/pgsql/data/pg_clog/0004
at this point postgres will complain about the fact that an offset is missing,
at this point fill with 0 your file ( blocks of 8K ) till reach that offset reclaimed.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [GENERAL] psql : how to make it more silent....

2004-10-13 Thread Gaetano Mendola
Patrick Fiche wrote:
Hi,
 
When I execute a function, I would like psql to show me only RAISE 
NOTICE messages but not all function calls
Indeed, I currently get some messages that I don't care about :
 

* PL/pgSQL function "adm_user" line 321..
* CONTEXT: SQL query "SELECT."
Is there a way to get rid of these messages?
 
Modify your log_error_verbosity setting to "terse".

Regards
Gaetano Mendola
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


[GENERAL] ODBC

2004-10-13 Thread Alexander Cohen
Hi,
I'm looking for instructions on installing the PostgreSQL ODBC driver on 
Mac OS X and Windows. Is there any reference for that? Any help is 
appreciated. BTW, I need to compile it and install it without using the 
Windows installer.

Alex
---(end of broadcast)---
TIP 8: explain analyze is your friend


[GENERAL] converting database to unicode

2004-10-13 Thread Jason Tesser
I have a database in sql_ascii that I need to convert to Unicode.  I tried using
pg_dump -Fc ..  but it fails on certain characters.  like this one "è"
How can I get the data transferred?  


---(end of broadcast)---
TIP 8: explain analyze is your friend
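A common way around this is to re-encode the dump file before restoring it into a UNICODE database. Since SQL_ASCII stored the bytes verbatim, you have to know (or guess) the real client encoding that produced them; this sketch assumes Latin-1, which covers "è":

```python
def convert_dump(src_path: str, dst_path: str, src_encoding: str = "latin-1") -> None:
    """Re-encode a plain-text pg_dump file to UTF-8 so it loads
    cleanly into a UNICODE-encoded database."""
    with open(src_path, "r", encoding=src_encoding) as fin, \
         open(dst_path, "w", encoding="utf-8") as fout:
        for line in fin:
            fout.write(line)
```

If the data is a mix of encodings, no single conversion will be right; rows that fail to decode would need individual attention.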


Re: [GENERAL] could not access status of transaction 4244329

2004-10-13 Thread Gaetano Mendola
[EMAIL PROTECTED] wrote:
Hi:
I've migrated a couple of weeks ago from 7.4.2 to 7.4.5 and I am getting 
this error after executing a query:

  Warning: pg_exec() query failed: ERROR: could not access status of 
transaction 4244329 in /home/wisconsin/www/_proc/bbdd/_c_bbdd.php on 
line 160
  ERROR ACCESO BASE DE DATOSERROR: could not access status of 
transaction 4244329

[SNIP]
I tried reindexing:
  DROP INDEX movimientos_c_c_i01;
  CREATE INDEX movimientos_c_c_i01 ON movimientos_c_c (cod_empresa, 
cod_per_emp, cod_movimiento, fecha_movimiento);

  ERROR:  could not access status of transaction 4244329
  DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No 
existe el fichero o el directorio

create an empty file:
# touch /usr/local/pgsql/data/pg_clog/0004
at this point postgres will complain about the fact that an offset is missing,
at this point fill with 0 your file ( blocks of 8K ) till reach that offset reclaimed.

Regards
Gaetano Mendola



[GENERAL] Date format for bulk copy

2004-10-13 Thread David Rysdam
I have a large amount of data that I copy in and out of Sybase very 
often.  Now I also want to copy this data in and out of postgres.  I 
have an existing script that creates the entire database(s) from scratch 
in Sybase and then uses the Sybase bulk copy tool "bcp" to copy the data 
in. 

I already did a brute force port of this script to postgres once, but 
I'm trying to do it more elegantly now that I know what issues I'm going 
to run into.  One of them is date formats in the bcp files.  Sybase bulk 
copies the date fields out in this format:

Mar  4 1973 10:28:00:000AM
Postgresql's COPY (or psql \copy) doesn't like that format.  In 
particular, it doesn't like the millisecond field at the end.  If I 
understand the docs correctly, postgres wants the millisecond field to 
be preceded by a decimal point instead of a colon.  In my brute force 
port, I just bulk copied the date fields into temporary tables and then 
did a to_timestamp(field, 'Mon DD  HH:MI:SS:MSAM'). 

That worked, but required a lot of additional logic in my script to 
handle the temp tables and conversions.  I'd hate to have to keep all 
that overhead in there to basically handle a conversion of a colon to a 
decimal point. 

So my questions are these:
0) I thought of creating a user-defined data type for this, but it seems 
like overkill, especially if I'd have to provide all kinds of helper 
functions for things like date incrementation or comparison or 
whatever.  Am I off track?
1) Are there any tools out there that allow for specifying the field 
format of a COPY?
2) If not, is it reasonable or unreasonable to modify the postgresql 
source (I'm running Beta 3) to handle a colon as a millisecond 
delimiter?  (If so, where do I look?)
3) If I did create such a patch, would the postgresql accept it into the 
tree?

I have a similarish problem with another field type.  In Sybase it's a 
binary format.  In postgres it is a binary format (bytea).  But Sybase 
bcps the data out in ASCII.  Sybase recognizes that when it is a binary 
field and auto-converts the ASCII back to binary.  Postgres doesn't.  
Again, I created a temporary table and did a decode(field, 'hex') to the 
real table.  It seems reasonable to expect to be able to bulk copy 
ASCII-encoded binary values into binary fields.  Probably this field is 
best described by a user-defined type?
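As an interim measure, the colon-to-decimal-point fix can also be done by preprocessing the bcp file before COPY, avoiding the temp tables entirely. A sketch (it assumes the `:NNN` + AM/PM pattern only occurs in the timestamp field):

```shell
# Rewrite Sybase's "HH:MI:SS:MSAM" into "HH:MI:SS.MSAM" so COPY accepts it:
printf 'Mar  4 1973 10:28:00:000AM\n' |
  sed -E 's/:([0-9]{3})([AP]M)/.\1\2/'
# -> Mar  4 1973 10:28:00.000AM
```

The same sed run over the whole bcp file produces input that psql's \copy should take directly.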



[GENERAL] Recovering data from corrupted table. Urgent Help!!

2004-10-13 Thread [EMAIL PROTECTED]
Hi:
Is there any way to recover data from a corrupted table? I can only run 
SELECTs on certain WHERE conditions.

I cannot VACUUM or pg_dump; I've deleted the indexes and tried to reindex, 
and always get this error:

ERROR:  could not access status of transaction 4244329
DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No 
existe el fichero o el directorio

Thanks a lot.



Re: [GENERAL] Proposal: GRANT cascade to implicit sequences

2004-10-13 Thread Alvaro Herrera
On Wed, Oct 13, 2004 at 12:37:35AM -0600, Michael Fuhr wrote:

> Comments?  Can anybody think of why cascading GRANT and REVOKE to
> implicit sequences might be A Bad Idea?

In current devel sources, ALTER OWNER cascades to implicit sequences.
It may be a precedent for making GRANT and REVOKE do so too.

-- 
Alvaro Herrera ()
"Having your biases confirmed independently is how scientific progress is
made, and hence made our great society what it is today" (Mary Gardiner)




Re: [GENERAL] memory leak of PQmakeEmptyPGresult??

2004-10-13 Thread Gaetano Mendola
Ann wrote:
> I found the reason of this question and fixed the bug :))
> 

Then why don't you share it?



Regards
Gaetano Mendola




Re: [GENERAL] Management software for Postgresql

2004-10-13 Thread Frank Kurzawa
I have just started exploring this product:
http://www.hyperic.net/products/manager/product-specs/postgresql-management.htm

Hope it helps.

On Wed, 2004-10-13 at 07:25, Bob Powell wrote:
> Hello everyone,
> 
> I would like to know if anyone has found or developed any monitoring
> software for Postgres.  I would like to be able to gather statistics
> about the database. 
> 
> How often certain tables get hit, whats the current status of things
> etc.  I couldn't find much in the lists about this.
> 
> Thanks.
> 
> Bob Powell
> Database Administrator
> -- 
> Frank Kurzawa <[EMAIL PROTECTED]>
> Topaz Software, Inc.




Re: [GENERAL] Management software for Postgresql

2004-10-13 Thread Richard Huxton
Bob Powell wrote:
Hello everyone,
I would like to know if anyone has found or developed any monitoring
software for Postgres.  I would like to be able to gather statistics
about the database. 

How often certain tables get hit, whats the current status of things
etc.  I couldn't find much in the lists about this.
You'll be wanting the stats monitoring section of the manuals:
  http://www.postgresql.org/docs/7.4/static/monitoring.html
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Time at end of transaction

2004-10-13 Thread Richard Huxton
Phil Endecott wrote:
Dear All,
Within a transaction, now() and current_timestamp are constant and give 
the time that the transaction started.  This is normally what you want. 
 But I have a case where I actually need the time that the transaction 
is committed, or something similar.  Is there a way to get it?  Here is 
the problem:
You want timeofday() - see the "date/time functions" section of the 
manuals. Note that it returns text not timestamptz.
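A minimal sketch of this suggestion (table and column names are illustrative; timeofday() returns text, so it is cast back to a timestamp):

```sql
-- At the end of the long-running transaction W:
UPDATE documents
   SET last_modified_time = timeofday()::timestamp
 WHERE id = 42;
COMMIT;
```

Unlike now(), timeofday() is evaluated at call time rather than at transaction start, so this approximates the commit time; a small gap between the UPDATE and the actual COMMIT remains, which may still matter for the race described in the original post.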

--
  Richard Huxton
  Archonet Ltd


[GENERAL] Management software for Postgresql

2004-10-13 Thread Bob Powell
Hello everyone,

I would like to know if anyone has found or developed any monitoring
software for Postgres.  I would like to be able to gather statistics
about the database. 

How often certain tables get hit, whats the current status of things
etc.  I couldn't find much in the lists about this.

Thanks.

Bob Powell
Database Administrator



Re: [GENERAL] Time at end of transaction

2004-10-13 Thread Patrick Fiche
Hi,

I think that timeofday() should solve your issue.
Just take care that this function returns text instead of timestamp... You
will have to cast it.

Patrick

> ---
> Patrick Fiche
> email : [EMAIL PROTECTED]
> tél : 01 69 29 36 18
> ---


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Phil Endecott
Sent: mercredi 13 octobre 2004 14:11
To: [EMAIL PROTECTED]
Subject: [GENERAL] Time at end of transaction


Dear All,

Within a transaction, now() and current_timestamp are constant and give
the time that the transaction started.  This is normally what you want.
  But I have a case where I actually need the time that the transaction
is committed, or something similar.  Is there a way to get it?  Here is
the problem:

The timestamps that I am recording are "last modified" times.  The
client may have kept a local copy of something, and asks the server to
"send a new copy if it has been modified since time X".  This is
actually HTTP's If-Modified-Since behaviour.

Consider three transactions W, R1 and R2.  W writes to the tables and is
long-running.  R1 and R2 only read the tables and are short-running.
They are interleaved as follows:

W starts
R1 starts
R1 finishes
W finishes
R2 starts
R2 finishes

R1 and R2 come from the same client.  In R2 the client asks "send me
everything that has changed since (time of R1)".  It needs to get the
changes made by W, since R1 saw the state of the database before W started.

W currently finishes with a statement like this:

update . set last_modified_time = current_timestamp where ;

This doesn't do what I want - I need to record the time when W will
finish and its changes become visible to other transactions, not the
time that it started.

Of course it is impossible to know when a transaction that is still in
progress will finish so some sort of trick is needed.  The best that I
can think of is:

begin;
...main work of transaction, no or few locks held...
LOCK some important lock that blocks reads
update set last_modified_time = really_now
end;

Any ideas anyone?

(What happens, or should happen, to current_timestamp inside nested
transactions?)


Regards,

--Phil.








[GENERAL] Time at end of transaction

2004-10-13 Thread Phil Endecott
Dear All,
Within a transaction, now() and current_timestamp are constant and give 
the time that the transaction started.  This is normally what you want. 
 But I have a case where I actually need the time that the transaction 
is committed, or something similar.  Is there a way to get it?  Here is 
the problem:

The timestamps that I am recording are "last modified" times.  The 
client may have kept a local copy of something, and asks the server to 
"send a new copy if it has been modified since time X".  This is 
actually HTTP's If-Modified-Since behaviour.

Consider three transactions W, R1 and R2.  W writes to the tables and is 
long-running.  R1 and R2 only read the tables and are short-running. 
They are interleaved as follows:

W starts
R1 starts
R1 finishes
W finishes
R2 starts
R2 finishes
R1 and R2 come from the same client.  In R2 the client asks "send me 
everything that has changed since (time of R1)".  It needs to get the 
changes made by W, since R1 saw the state of the database before W started.

W currently finishes with a statement like this:
update . set last_modified_time = current_timestamp where ;
This doesn't do what I want - I need to record the time when W will 
finish and its changes become visible to other transactions, not the 
time that it started.

Of course it is impossible to know when a transaction that is still in 
progress will finish so some sort of trick is needed.  The best that I 
can think of is:

begin;
...main work of transaction, no or few locks held...
LOCK some important lock that blocks reads
update set last_modified_time = really_now
end;
Any ideas anyone?
(What happens, or should happen, to current_timestamp inside nested 
transactions?)

Regards,
--Phil.




[GENERAL] PostgreSQLv8 native Windows NAMEDATALEN = 128

2004-10-13 Thread DEHAINSALA Hondjack
Hi,
I want to test the native Windows PostgreSQL v8 that allows creating a table or
column name of 128 characters!

Where can I get it?
Thanks for your help!

Hondjack
- Original Message -
From: "DEHAINSALA Hondjack" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, October 13, 2004 1:45 PM
Subject: Bad system call postmaster


> Hi,
> I have just installed PostgreSQL 7.5 on Windows 2003,
> and the system sends me this error:
> > Bad system call  postmaster
> Can someone help me?
>
> thanks U
>
>
>
>
>
>




[GENERAL] Bad system call postmaster

2004-10-13 Thread DEHAINSALA Hondjack
Hi,
I have just installed PostgreSQL 7.5 on Windows 2003,
and the system sends me this error:
> Bad system call  postmaster
Can someone help me?

Thank you









[GENERAL] could not access status of transaction 4244329

2004-10-13 Thread [EMAIL PROTECTED]
Hi:
I've migrated a couple of weeks ago from 7.4.2 to 7.4.5 and I am getting 
this error after executing a query:

  Warning: pg_exec() query failed: ERROR: could not access status of 
transaction 4244329 in /home/wisconsin/www/_proc/bbdd/_c_bbdd.php on 
line 160
  ERROR ACCESO BASE DE DATOSERROR: could not access status of 
transaction 4244329

I tried VACUUM:
  wisconsin=# VACUUM ANALYZE verbose movimientos_c_c;
  INFO:  vacuuming "public.movimientos_c_c"
  INFO:  index "movimientos_c_cod_movimient_key" now contains 3658193 
row versions in 13316 pages
  DETAIL:  1397781 index row versions were removed.
  0 index pages have been deleted, 0 are currently reusable.
  CPU 0.95s/7.85u sec elapsed 90.56 sec.
  INFO:  index "movimientos_c_c_i01" now contains 3658193 row versions 
in 24737 pages
  DETAIL:  1397781 index row versions were removed.
  0 index pages have been deleted, 0 are currently reusable.
  CPU 1.57s/7.28u sec elapsed 126.50 sec.
  INFO:  "movimientos_c_c": removed 1397781 row versions in 56621 pages
  DETAIL:  CPU 2.93s/5.94u sec elapsed 60.67 sec.
  INFO:  index "movimientos_c_cod_movimient_key" now contains 2260414 
row versions in 13316 pages
  DETAIL:  1397780 index row versions were removed.
  0 index pages have been deleted, 0 are currently reusable.
  CPU 1.07s/6.40u sec elapsed 140.73 sec.

  INFO:  index "movimientos_c_c_i01" now contains 2260414 row versions 
in 24737 pages
  DETAIL:  1397780 index row versions were removed.
  0 index pages have been deleted, 0 are currently reusable.
  CPU 1.62s/6.27u sec elapsed 199.39 sec.
  INFO:  "movimientos_c_c": removed 1397780 row versions in 56642 pages
  DETAIL:  CPU 2.85s/5.98u sec elapsed 66.71 sec.
  INFO:  index "movimientos_c_cod_movimient_key" now contains 862646 
row versions in 13316 pages
  DETAIL:  1397769 index row versions were removed.
  0 index pages have been deleted, 0 are currently reusable.
  CPU 1.00s/5.21u sec elapsed 136.48 sec.
  INFO:  index "movimientos_c_c_i01" now contains 862646 row versions 
in 24737 pages
  DETAIL:  1397769 index row versions were removed.
  0 index pages have been deleted, 0 are currently reusable.
  CPU 1.72s/5.13u sec elapsed 212.03 sec.
  INFO:  "movimientos_c_c": removed 1397769 row versions in 56650 pages
  DETAIL:  CPU 2.76s/5.86u sec elapsed 45.32 sec.
  ERROR:  could not access status of transaction 4244329
  DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No 
existe el fichero o el directorio

I tried reindexing:
  DROP INDEX movimientos_c_c_i01;
  CREATE INDEX movimientos_c_c_i01 ON movimientos_c_c (cod_empresa, 
cod_per_emp, cod_movimiento, fecha_movimiento);

  ERROR:  could not access status of transaction 4244329
  DETAIL:  could not open file "/usr/local/pgsql/data/pg_clog/0004": No 
existe el fichero o el directorio


Any help will be much appreciated.
Ruben.



[GENERAL] psql : how to make it more silent....

2004-10-13 Thread Patrick Fiche



Hi,

When I execute a function, I would like psql to show me only RAISE NOTICE
messages but not all function calls. Indeed, I currently get some messages
that I don't care about:

  PL/pgSQL function "adm_user" line 321..
  CONTEXT: SQL query "SELECT."

Is there a way to get rid of these messages?

Thanks

---
Patrick Fiche
email : [EMAIL PROTECTED]
tél : 01 69 29 36 18
---
 







Re: [GENERAL] about permissions...

2004-10-13 Thread Richard Huxton
Henriksen, Jonas F wrote:
Hi,
how come, if you create a user with no permissions at all, having
been granted nothing, he can still log into any database, list
available tables, create new here, and then delete them again. Seems
odd...:

Is this right, or is there something wrong with my settings in some
way?
Schema public has default access to group public, which your new user 
has access to...

richardh=# GRANT ALL ON SCHEMA public TO richardh;
GRANT
richardh=# SELECT * FROM pg_namespace ;
   nspname   | nspowner |      nspacl
-------------+----------+-------------------
 public      |        1 | {=UC,richardh=UC}
...
richardh=# REVOKE ALL ON SCHEMA public FROM GROUP public;
REVOKE
richardh=# SELECT * FROM pg_namespace ;
   nspname   | nspowner |     nspacl
-------------+----------+-----------------
 public      |        1 | {=,richardh=UC}
...
*DO* make sure that one user has explicit access before revoking all on 
public though.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] adding two tables

2004-10-13 Thread Tino Wildenhain
Hi again,

On Wed, 2004-10-13 at 10:55, fx gamoy wrote:
> hello everybody,
> i ve got two big tables with the same structure.
>  
> i would like to add the second one to the first one directly without
> generating a sql file. (each table is about 1 Million line)
> Is it a way with an sql command? 
> INSERT INTO TAB1... (select * from TAB2) ???

without ... and ( ) of course.

 




Re: [GENERAL] adding two tables

2004-10-13 Thread Tino Wildenhain
Hi,

On Wed, 2004-10-13 at 10:55, fx gamoy wrote:
> hello everybody,
> i ve got two big tables with the same structure.
>  
> i would like to add the second one to the first one directly without
> generating a sql file. (each table is about 1 Million line)
> Is it a way with an sql command? 
> INSERT INTO TAB1... (select * from TAB2) ???

without the ... your statement is complete.
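Spelled out with the placeholder names from the question, the complete statement would be:

```sql
-- Append every row of TAB2 to TAB1 (identical column layout assumed):
INSERT INTO TAB1 SELECT * FROM TAB2;
```

If the column orders differ between the two tables, list the columns explicitly on both sides instead of relying on `*`.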

Have fun
Tino




[GENERAL] adding two tables

2004-10-13 Thread fx gamoy



hello everybody,
i've got two big tables with the same structure.

i would like to add the second one to the first one directly without
generating a sql file. (each table is about 1 million lines)
Is there a way with an sql command?
INSERT INTO TAB1... (select * from TAB2) ???
thanks
fx
 


[GENERAL] about permissions...

2004-10-13 Thread Henriksen, Jonas F
Hi, 

how come, if you create a user with no permissions at all, having been granted 
nothing, he can still log into any database, list available tables, create new 
tables there, and then delete them again? Seems odd...:

medusa:~% createuser odd
Shall the new user be allowed to create databases? (y/n) n
Shall the new user be allowed to create more new users? (y/n) n
CREATE USER
medusa:~% psql -U odd cnv
Welcome to psql 7.3.7, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help on internal slash commands
   \g or terminate with semicolon to execute query
   \q to quit

cnv=> \dt
List of relations
 Schema | Name  | Type  |  Owner
+---+---+-
 public | theaders  | table | jonasfh
 public | theadervalues | table | jonasfh
(2 rows)

cnv=> create table oddtable();
CREATE TABLE
cnv=> \dt
List of relations
 Schema | Name  | Type  |  Owner
+---+---+-
 public | oddtable  | table | odd
 public | theaders  | table | jonasfh
 public | theadervalues | table | jonasfh
 
(3 rows)

cnv=> drop table oddtable;
DROP TABLE

Is this right, or is there something wrong with my settings in some way?

regards Jonas:))

--
Jonas F Henriksen
Institute of Marine Research
Norsk Marint Datasenter
PO Box 1870 Nordnes
5817 Bergen
Norway
 
Phone: +47 55238441




Re: [GENERAL] LISTEN/NOTIFY for lightweight replication

2004-10-13 Thread Richard Huxton
Ted Shab wrote:
Hi,
I'm trying to come up with a relatively simple
multi-master replication solution.  This is for
multiple databases that need to be discrete, and
change relatively infrequently (10-30 updates an
hour), and almost never update each others data (less
than once a day).  

The TCL-based replication project for multi-master is
troublesome to configure and seems to really impact
performance.  It can be assumed that the master-slave
setup will not work for me, nor do we want to purchase
a commercial solution, nor can we run this all from one
central database.

e.  If there is a field level conflict, raise an
exception (TBD).
Exception handling and failure recovery are what makes for all the work 
in replication.

I don't think a pure listen/notify setup will be enough because iirc the 
system doesn't guarantee delivery of multiple notifications if >1 are 
queued.

Have you looked into the possibility of using dblink to handle updates 
of each others' data? That would mean your problem reverting to one of 
single-master replication.

--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] how to open stat mode in db

2004-10-13 Thread Neil Conway
On Wed, 2004-10-13 at 17:26, postgres2008 wrote:
> and how to monitor its concurrent connection number as well as the
> current sql(s) execution. thanks!

http://www.postgresql.org/docs/7.4/static/monitoring-stats.html

-Neil
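As a hedged illustration of what the stats views described in those docs provide (assuming the collector is enabled in postgresql.conf, e.g. stats_start_collector and stats_command_string):

```sql
-- Number of open connections per database:
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;

-- Currently executing statements (needs stats_command_string = true):
SELECT procpid, current_query FROM pg_stat_activity;
```

Column names here match the 7.4-era view; check the linked manual page for the exact set in your release.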





[GENERAL] how to open stat mode in db

2004-10-13 Thread postgres2008
hi,
could anyone tell me how to turn on stats logging in Postgres? and how to monitor its
concurrent connection count as well as the currently executing SQL statement(s)? thanks!
 

