[GENERAL] SSL connection option from client side?

2010-02-03 Thread dipti shah
I found the information about how to configure the server to accept only SSL
connections at http://www.postgresql.org/docs/8.4/static/ssl-tcp.html.
Please see below and let me know if I am missing anything.

   1. I need to turn on the "ssl" parameter in postgresql.conf and restart the
server (the server must have been built with SSL support).
   2. In pg_hba.conf, I have to use "hostssl" for all client connections.
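A minimal sketch of those two settings (assuming the stock file locations; the exact pg_hba.conf line depends on your network and authentication method):

```
# postgresql.conf -- the server must have been built with SSL support;
# a restart is needed after changing this:
ssl = on

# pg_hba.conf -- require SSL for all TCP/IP client connections:
# TYPE     DATABASE   USER   CIDR-ADDRESS    METHOD
hostssl    all        all    0.0.0.0/0       md5
```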

Also, after setting the above configuration, if I connect to my database using
"psql -d Techdb -U postgres", will it automatically be an SSL connection, or do
I need to specify an option on the client side?
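For the client side, libpq's sslmode connection option is the relevant knob (a sketch; note that "psql -d Techdb" without -h usually goes over a Unix-domain socket, which hostssl lines do not match):

```
# With hostssl in pg_hba.conf, a TCP/IP connection will negotiate SSL
# automatically (the default sslmode is "prefer").  To insist on SSL:
psql "host=localhost dbname=Techdb user=postgres sslmode=require"

# or equivalently via the environment:
PGSSLMODE=require psql -h localhost -d Techdb -U postgres
```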

Thanks,
Dipti


Re: [GENERAL] Shall I apply normalization in the following case?

2010-02-03 Thread Sim Zacks

> For example, for the following table, 
>
>
> measurement (without normalization)
> ===
> id | value | measurement_unit | measurement_type
> -------------------------------------------------
> 1  | 0.23  | mm | width
> 2  | 0.38  | mm | width
> 2  | 0.72  | mm | width
>
>
> If I normalize to the following format, I will encounter several problem 
> compared to table without normalization
>
>
>
> measurement (normalization)
> ===
> id | value | measurement_unit_id | measurement_type_id
> -----------------------------------------------------
> 1  | 0.23  | 1 | 1
> 2  | 0.38  | 1 | 1
> 2  | 0.72  | 1 | 1
>
>
> measurement_unit_id
> ===
> id | value
> --
> 1  | mm
>
>
> measurement_type_id
> ===
> id | value
> --
> 1  | width
>   
1) foreign key constraints are important, so you don't have things
misspelled or spelled differently and to define the "official" value.
2) querying on an int is quicker than querying on a string, so if you
query on the values without the join you will have better performance.
3) You might want to have more information in the other tables one day,
such as unit conversion information or descriptions, etc..
4) depending on the size of the string, it might take less space for an
int. Though a varchar with mm only takes 3 bytes, width takes 6 bytes,
while a regular int takes 4.
5) As Jorge mentioned you can make the value your pk instead of a serial
int and then you have it normalized and readable.

For the specific design that you are showing, there is no real benefit
to normalization, other than that it would make it more scalable.
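Sim's points can be sketched as DDL (assumed names and types, not from the original thread):

```sql
CREATE TABLE measurement_unit (
    id serial PRIMARY KEY,
    value text NOT NULL UNIQUE      -- point 1: one "official" spelling of 'mm', 'cm', ...
);

CREATE TABLE measurement_type (
    id serial PRIMARY KEY,
    value text NOT NULL UNIQUE      -- 'width', 'depth', ...
);

CREATE TABLE measurement (
    id serial PRIMARY KEY,
    value numeric NOT NULL,
    measurement_unit_id int NOT NULL REFERENCES measurement_unit(id),
    measurement_type_id int NOT NULL REFERENCES measurement_type(id)
);

-- Point 3: extra columns such as a conversion factor can later be added
-- to measurement_unit without touching the measurement table.
```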


Sim

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it necessary to have index for child table in following case?

2010-02-03 Thread Joe Conway
On 02/03/2010 06:59 PM, Yan Cheng Cheok wrote:
> PQexec(Database::instance().getConnection(), "copy unit_1 from
> stdin"); // | serial | int | int /* But I just do not want to put
>  as serial. I want it to be auto-increment. However, I have no
> idea how to make serial auto-increment, without using INSERT. */ 
> PQputline(Database::instance().getConnection(),"\t1\t888\n"); 
> PQputline(Database::instance().getConnection(),"\\.\n"); 
> PQendcopy(Database::instance().getConnection());

You really need to get up close and personal with the fine manual.

See:
---
http://developer.postgresql.org/pgdocs/postgres/sql-copy.html

Specifically:
---
Synopsis

COPY table_name [ ( column [, ...] ) ]
FROM { 'filename' | STDIN }
[ [ WITH ] ( option [, ...] ) ]

Example:
---
regression=# create table foo(f1 serial, f2 text);
NOTICE:  CREATE TABLE will create implicit sequence "foo_f1_seq" for
serial column "foo.f1"
CREATE TABLE
regression=# copy foo (f2) from stdin;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.
>> hello
>> world
>> \.

regression=# select * from foo;
 f1 |  f2
----+-------
  1 | hello
  2 | world
(2 rows)

HTH,

Joe




signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Is it necessary to have index for child table in following case?

2010-02-03 Thread Yan Cheng Cheok
Sorry, I did read your previous post. However, I ignored it and soon forgot
about it, as I did not know how to use it when the column contains a serial.

PQexec(Database::instance().getConnection(), "copy unit_1 from stdin");
// | serial | int | int 
/* But I just do not want to put  as serial. I want it to be 
auto-increment. However, I have no idea how to make serial auto-increment, 
without using INSERT.
*/
PQputline(Database::instance().getConnection(),"\t1\t888\n");
PQputline(Database::instance().getConnection(),"\\.\n");
PQendcopy(Database::instance().getConnection());

Thanks and Regards
Yan Cheng CHEOK


--- On Thu, 2/4/10, Joe Conway  wrote:

> From: Joe Conway 
> Subject: Re: [GENERAL] Is it necessary to have index for child table in 
> following case?
> To: "Yan Cheng Cheok" 
> Cc: "Alban Hertroys" , 
> pgsql-general@postgresql.org
> Date: Thursday, February 4, 2010, 9:08 AM
> On 02/03/2010 05:02 PM, Yan Cheng Cheok wrote:
> >> Are you using INSERT or COPY to insert your data? COPY tends to be
> >> a lot faster than separate INSERTs, especially if you don't wrap
> >> the INSERTs in a transaction block and COMMIT them in batches.
> >
> > But I do not use File or Stdio. The data is coming in row by row
> > real-time. I need to insert the data programmatic real-time into the
> > database. That's why I use INSERT. But maybe I miss out something on
> > the usage of COPY, please advice :)
>
> I already answered that about a month ago but you ignored it:
> http://archives.postgresql.org/pgsql-general/2010-01/msg00287.php
>
> Joe


  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why lots of temp schemas are being created

2010-02-03 Thread Tom Lane
Alvaro Herrera  writes:
> Temp schemas are not destroyed on session shutdown; they are rather
> destroyed the next time the backend ID is reused.  Normally that's not a
> problem, because a backend ID is reused pretty soon.  It's only a
> problem when you use so high a backend ID due to high load, that a very
> long time passes before it's reused.  Those temp tables linger and can
> cause Xid wraparound problems.

Not correct --- ordinarily temp tables are removed at backend shutdown.
The only time that wouldn't happen is in the event of a backend crash, in
which case cleanup would happen at next use, as you describe.

The schemas are indeed left around, but they're empty in the normal case.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Shall I apply normalization in the following case?

2010-02-03 Thread Jorge Godoy
You can always normalize and not use an artificial key.

You'd end up with:


measurement (normalization)
===
id | value | measurement_unit_id | measurement_type_id
------------------------------------------------------
1  | 0.23  | mm | width
2  | 0.38  | mm | width
2  | 0.72  | mm | width


measurement_unit_id
===
value
-----
mm
m
cm
in
cm²
m³


measurement_type_id
===
value
-----
width
area
depth
volume


And so on.  You'd benefit from a normalized structure, you'd have
constraints checking for valid units and types, and you wouldn't need a
join to get the resulting information.
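As DDL, Jorge's suggestion might look like this (a sketch with assumed types; the lookup tables use the value itself as primary key):

```sql
CREATE TABLE measurement_unit (
    value text PRIMARY KEY          -- 'mm', 'cm', 'in', ...
);

CREATE TABLE measurement_type (
    value text PRIMARY KEY          -- 'width', 'area', ...
);

CREATE TABLE measurement (
    id serial PRIMARY KEY,
    value numeric NOT NULL,
    measurement_unit text NOT NULL REFERENCES measurement_unit(value),
    measurement_type text NOT NULL REFERENCES measurement_type(value)
);

-- Readable without a join, yet constrained to valid units and types:
-- SELECT value, measurement_unit, measurement_type FROM measurement;
```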



--
Jorge Godoy 


On Wed, Feb 3, 2010 at 23:20, Yan Cheng Cheok  wrote:

>
> For example, for the following table,
>
>
> measurement (without normalization)
> ===
> id | value | measurement_unit | measurement_type
> -------------------------------------------------
> 1  | 0.23  | mm | width
> 2  | 0.38  | mm | width
> 2  | 0.72  | mm | width
>
>
> If I normalize to the following format, I will encounter several problem
> compared to table without normalization
>
>
>
> measurement (normalization)
> ===
> id | value | measurement_unit_id | measurement_type_id
> -----------------------------------------------------
> 1  | 0.23  | 1 | 1
> 2  | 0.38  | 1 | 1
> 2  | 0.72  | 1 | 1
>
>
> measurement_unit_id
> ===
> id | value
> --
> 1  | mm
>
>
> measurement_type_id
> ===
> id | value
> --
> 1  | width
>
>
> (1) When rows grow to few millions in table measurement, the join operation
> on normalization table, is *much* slower compared to non-normalization
> table.
>
> One of the most frequent used query, To retrieve "value",
> "measurement_unit" and "measurement_type",  I need to join measurement +
> measurement_unit_id + measurement_type_id.
>
> For non-normalization table, I need NOT to join.
>
> Right now, I cannot justify why I need to use normalization. I afraid I
> miss out several important points when I turn into un-normalize solution.
> Please advice if you realize I miss out several important points.
>
> Thanks
> Yan Cheng CHEOK
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] varchar(n) and text

2010-02-03 Thread Bruce Momjian
Yan Cheng Cheok wrote:
> According to 
> http://www.postgresql.org/docs/8.0/interactive/datatype-character.html
> 
> I always use TEXT, for column which store text so that I need not to worry on 
> text length.
> 
> However, in certain situation, I know that my text length will always < 10 
> characters
> 
> I was wondering, if I using TEXT, will there have any overhead on
> 
> (1) storage space?
> (2) query performance?
> (3) write performance?
> 
> Shall I just use varchar(10) ?

This is an FAQ, and is mentioned in the manuals.  The short answer is
that there is no difference among them in any of those areas, except for
documented behavior, e.g. char(10) is always output as 10 characters.
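If a hard limit is still wanted, it can also be kept on a text column via a constraint (a sketch, not from Bruce's reply):

```sql
-- varchar(10) and a constrained text column behave the same for storage
-- and performance; the CHECK merely documents and enforces the limit:
CREATE TABLE example (
    code text CHECK (char_length(code) <= 10)
);
```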

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] varchar(n) and text

2010-02-03 Thread Yan Cheng Cheok
According to 
http://www.postgresql.org/docs/8.0/interactive/datatype-character.html

I always use TEXT for columns that store text, so that I need not worry about
text length.

However, in certain situations, I know that my text length will always be < 10
characters.

I was wondering, if I use TEXT, will there be any overhead in

(1) storage space?
(2) query performance?
(3) write performance?

Shall I just use varchar(10)?

Thanks!

Thanks and Regards
Yan Cheng CHEOK


  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Shall I apply normalization in the following case?

2010-02-03 Thread Yan Cheng Cheok

For example, for the following table, 


measurement (without normalization)
===
id | value | measurement_unit | measurement_type
-------------------------------------------------
1  | 0.23  | mm | width
2  | 0.38  | mm | width
2  | 0.72  | mm | width


If I normalize to the following format, I will encounter several problems
compared to the table without normalization.



measurement (normalization)
===
id | value | measurement_unit_id | measurement_type_id
------------------------------------------------------
1  | 0.23  | 1 | 1
2  | 0.38  | 1 | 1
2  | 0.72  | 1 | 1


measurement_unit_id
===
id | value
--
1  | mm


measurement_type_id
===
id | value
--
1  | width


(1) When rows grow to a few million in the measurement table, the join on the
normalized tables is *much* slower than querying the non-normalized table.

For one of the most frequently used queries, retrieving "value",
"measurement_unit" and "measurement_type", I need to join measurement +
measurement_unit_id + measurement_type_id.

With the non-normalized table, I do not need a join.

Right now, I cannot justify why I need to use normalization. I am afraid I am
missing several important points by turning to the un-normalized solution.
Please advise if you realize I have missed something important.
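The frequently used query described above would look roughly like this under the normalized design (a sketch using the table and column names from the post):

```sql
SELECT m.value,
       u.value AS measurement_unit,
       t.value AS measurement_type
FROM measurement m
JOIN measurement_unit_id u ON u.id = m.measurement_unit_id
JOIN measurement_type_id t ON t.id = m.measurement_type_id;
```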

Thanks
Yan Cheng CHEOK


  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it necessary to have index for child table in following case?

2010-02-03 Thread Joe Conway
On 02/03/2010 05:02 PM, Yan Cheng Cheok wrote:
>> 
>> Are you using INSERT or COPY to insert your data? COPY tends to be
>> a lot faster than separate INSERTs, especially if you don't wrap
>> the INSERTs in a transaction block and COMMIT them in batches.
> 
> But I do not use File or Stdio. The data is coming in row by row
> real-time. I need to insert the data programmatic real-time into the
> database. That's why I use INSERT. But maybe I miss out something on
> the usage of COPY, please advice :)

I already answered that about a month ago but you ignored it:
http://archives.postgresql.org/pgsql-general/2010-01/msg00287.php

Joe



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Need to Remove Constraint, but Don't Know How - Previous attempts failed

2010-02-03 Thread John R Pierce

Wang, Mary Y wrote:

Hi,

I'm sorry but I didn't create the table.  I fiddled with the table for a while 
and didn't get what I expected.  So I'm going to ask the community.

Can someone tell me which constraint would cause the "duplicate key into unique
index users_pkey" error?  I'd like to remove that constraint.
  


I only see one constraint on that table, the primary key.

Your table's primary key, named users_pkey, is on user_id.  Do you want to
have duplicate user_id values in your table?










--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need to Remove Constraint, but Don't Know How - Previous attempts failed

2010-02-03 Thread Adrian Klaver
On Wednesday 03 February 2010 4:57:09 pm Wang, Mary Y wrote:
> Hi,
>
> I'm sorry but I didn't create the table.  I fiddled with the table for a
> while and didn't get what I expected.  So I'm going to ask the community.
>
> Can someone tell me which constraint would cause the "duplicate key into
> unique index users_pkey" error?  I'd like to remove that constraint.
>
> Here is the create:
> CREATE TABLE "users" (
> "user_id" integer DEFAULT nextval('users_pk_seq'::text) NOT NULL,
> "user_name" text DEFAULT '' NOT NULL,
> "email" text DEFAULT '' NOT NULL,
> "user_pw" character varying(32) DEFAULT '' NOT NULL,
> "realname" character varying(32) DEFAULT '' NOT NULL,
> "status" character(1) DEFAULT 'A' NOT NULL,
> "shell" character varying(20) DEFAULT '/bin/bash' NOT NULL,
> "unix_pw" character varying(40) DEFAULT '' NOT NULL,
> "unix_status" character(1) DEFAULT 'N' NOT NULL,
> "unix_uid" integer DEFAULT '0' NOT NULL,
> "unix_box" character varying(10) DEFAULT 'shell1' NOT NULL,
> "add_date" integer DEFAULT '0' NOT NULL,
> "confirm_hash" character varying(32),
> "mail_siteupdates" integer DEFAULT '0' NOT NULL,
> "mail_va" integer DEFAULT '0' NOT NULL,
> "authorized_keys" text,
> "email_new" text,
> "people_view_skills" integer DEFAULT '0' NOT NULL,
> "people_resume" text DEFAULT '' NOT NULL,
> "timezone" character varying(64) DEFAULT 'GMT',
> "language" integer DEFAULT '1' NOT NULL,
> "third_party" integer DEFAULT 1 NOT NULL,
> "personal_status" character(32),
> "bemsid" integer,
> "sensitive_info" character(64),
> "reason_access" text,
> "organization" text,
> Constraint "users_pkey" Primary Key ("user_id")
  ^^^



> );
>

>
> 
> Mary Y Wang
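If the goal really is to remove the highlighted constraint, on reasonably modern PostgreSQL releases that would be (a sketch; dropping a primary key is usually unwise, since duplicate user_id values would then be allowed):

```sql
-- Drop the primary key named in the error (assumes a release with
-- ALTER TABLE ... DROP CONSTRAINT; the old 7.x series discussed
-- elsewhere in this thread may not support it):
ALTER TABLE users DROP CONSTRAINT users_pkey;

-- Usually the better fix: find the duplicate that triggered the error.
SELECT user_id, count(*)
FROM users
GROUP BY user_id
HAVING count(*) > 1;
```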



-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is it necessary to have index for child table in following case?

2010-02-03 Thread Yan Cheng Cheok
> 
> Are you using INSERT or COPY to insert your data? COPY
> tends to be a lot faster than separate INSERTs, especially
> if you don't wrap the INSERTs in a transaction block and
> COMMIT them in batches.
> 

But I do not use a file or stdio. The data is coming in row by row in real
time. I need to insert the data programmatically, in real time, into the
database. That's why I use INSERT. But maybe I am missing something about the
usage of COPY, please advise :)

> As an aside, I hope you do realise that your primary key
> isn't guaranteed to be unique across your child tables? The
> reason is the same one that you already quoted for indexes
> spanning multiple tables - a primary key is implemented
> using a unique index after all.
> 
> If that matters, what you can do is make your primary key a
> foreign key to a new table with just the primary key column
> in it. Make sure you always insert a record in the primary
> key table along with the one referencing it, so that you
> will get a unique violation when you try to insert a record
> for which the primary key already exists. This will of
> course slow things down some, but if it's necessary that's
> the price to pay.
> 
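The scheme described above might look like this (a sketch with assumed table names, not from the original thread):

```sql
-- Parent table holding only the key; it enforces uniqueness across all
-- unit_N child tables (assumed names).
CREATE TABLE unit_ids (id int PRIMARY KEY);

CREATE TABLE unit_1 (
    id int PRIMARY KEY REFERENCES unit_ids(id),
    v1 int,
    v2 int
);

-- Always insert the key into unit_ids first; a duplicate id fails there
-- even if the existing row lives in a different child table.
INSERT INTO unit_ids VALUES (1);
INSERT INTO unit_1 VALUES (1, 1, 888);
```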

Oh, I didn't notice that. Thanks for pointing it out. Luckily, thank God, it
doesn't matter much at this moment :)


  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need to Remove Constraint, but Don't Know How - Previous attempts failed

2010-02-03 Thread Wang, Mary Y
Hi,

I'm sorry but I didn't create the table.  I fiddled with the table for a while 
and didn't get what I expected.  So I'm going to ask the community.

Can someone tell me which constraint would cause the "duplicate key into
unique index users_pkey" error?  I'd like to remove that constraint.

Here is the create:
CREATE TABLE "users" (
"user_id" integer DEFAULT nextval('users_pk_seq'::text) NOT NULL,
"user_name" text DEFAULT '' NOT NULL,
"email" text DEFAULT '' NOT NULL,
"user_pw" character varying(32) DEFAULT '' NOT NULL,
"realname" character varying(32) DEFAULT '' NOT NULL,
"status" character(1) DEFAULT 'A' NOT NULL,
"shell" character varying(20) DEFAULT '/bin/bash' NOT NULL,
"unix_pw" character varying(40) DEFAULT '' NOT NULL,
"unix_status" character(1) DEFAULT 'N' NOT NULL,
"unix_uid" integer DEFAULT '0' NOT NULL,
"unix_box" character varying(10) DEFAULT 'shell1' NOT NULL,
"add_date" integer DEFAULT '0' NOT NULL,
"confirm_hash" character varying(32),
"mail_siteupdates" integer DEFAULT '0' NOT NULL,
"mail_va" integer DEFAULT '0' NOT NULL,
"authorized_keys" text,
"email_new" text,
"people_view_skills" integer DEFAULT '0' NOT NULL,
"people_resume" text DEFAULT '' NOT NULL,
"timezone" character varying(64) DEFAULT 'GMT',
"language" integer DEFAULT '1' NOT NULL,
"third_party" integer DEFAULT 1 NOT NULL,
"personal_status" character(32),
"bemsid" integer,
"sensitive_info" character(64),
"reason_access" text,
"organization" text,
Constraint "users_pkey" Primary Key ("user_id")
);

Here is my table:

 Table "users"
  Attribute   | Type  |Modifier

----------------------+-----------------------+------------------------------------------------
 user_id              | integer               | not null default nextval('users_pk_seq'::text)
 user_name| text  | not null default ''
 email| text  | not null default ''
 user_pw  | character varying(32) | not null default ''
 realname | character varying(32) | not null default ''
 status   | character(1)  | not null default 'A'
 shell| character varying(20) | not null default '/bin/bash'
 unix_pw  | character varying(40) | not null default ''
 unix_status  | character(1)  | not null default 'N'
 unix_uid | integer   | not null default '0'
 unix_box | character varying(10) | not null default 'shell1'
 add_date | integer   | not null default '0'
 confirm_hash | character varying(32) |
 mail_siteupdates | integer   | not null default '0'
 mail_va  | integer   | not null default '0'
 authorized_keys  | text  |
 email_new| text  |
 people_view_skills   | integer   | not null default '0'
 people_resume| text  | not null default ''
 timezone | character varying(64) | default 'GMT'
 language | integer   | not null default '1'
 third_party  | integer   | not null default 1
 personal_status  | character(32) |
 bemsid   | integer   |
 sensitive_info   | character(64) |
 reason_access| text  |
 organization | text  |
 brass_first_time | character(1)  | default '0'
 mail_sitenews_update | integer   | default '0'
 doclinks_sort_order  | character(1)  | default 'A'
Indices: idx_users_username,
 user_user,
 users_pkey,
 users_user_pw



Mary Y Wang




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why lots of temp schemas are being created

2010-02-03 Thread Walter Coole
Thanks for the pointer!

In case anyone else has the same problem, here's what I did:

I used

SELECT MAX(backendid)
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;

to get the highest backend ID that is running.  I then deleted all the
pg*_temp_* schemas with numbers higher than that.
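That cleanup might be scripted roughly like this (a sketch; the schema names in the DROP statements are hypothetical):

```sql
-- List temp and toast-temp schemas so their trailing numbers can be
-- compared against the running MAX(backendid):
SELECT nspname
FROM pg_namespace
WHERE nspname ~ '^pg(_toast)?_temp_[0-9]+$'
ORDER BY nspname;

-- For each schema whose number exceeds the running maximum:
DROP SCHEMA pg_temp_57 CASCADE;        -- hypothetical stale schema
DROP SCHEMA pg_toast_temp_57 CASCADE;  -- hypothetical stale schema
```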

This didn't seem quite thorough enough, as I found that when a process would 
end (MAX(backendid) went down), the corresponding pg*_temp_ schema would not go 
away.  I think these were schemas created by a previous backend, so would not 
be cleaned up by a backend that hadn't created it.

I restarted the database, forcing it to have just one backend.  Then I repeated 
the above procedure.  I'm fairly sure that pg_toast_temp_1 and pg_temp_1 are 
not actually in use, but I decided to quit while I'm ahead.

I guess these schemas are fairly harmless, but it seems kind of messy to have 
them sloshing around.  It seems like when a new backend starts up, it would be 
better to clear out the temp schemas to avoid accidentally using stale data, 
but this doesn't seem to be happening.  One could also imagine hooking a 
cleanup in the database startup, but I don't see that either.

Walter


-Original Message-
From: Alvaro Herrera [mailto:alvhe...@commandprompt.com] 
Sent: Wednesday, February 03, 2010 3:36 PM
To: Walter Coole
Cc: Merlin Moncure; Grzegorz Jaśkiewicz; Anirban Pal; 
pgsql-nov...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Why lots of temp schemas are being created

Walter Coole escribió:

> I would like to drop them, since there are so many of them, they make
> it tedious to look through my databases in pgAdmin.  Is there a
> reliable way to distinguish between temp schemas that exist because
> they are supposed to be there and those that are not?

Run pg_get_backend_idset() (or something like that, maybe there's "stat"
in the name), which returns a list of backend IDs that are running.
Then see which temp schemas have numbers beyond what's listed there;
those shouldn't be there and could cause problems if the numbers are too
high.

> Or even better, a way to tell the database to clean them up itself?

It does, unless one of them gets a very high backend ID that's not
reused.

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why lots of temp schemas are being created

2010-02-03 Thread Alvaro Herrera
Walter Coole escribió:

> This didn't seem quite thorough enough, as I found that when a process
> would end (MAX(backendid) went down), the corresponding pg*_temp_
> schema would not go away.  I think these were schemas created by a
> previous backend, so would not be cleaned up by a backend that hadn't
> created it.

Temp schemas are not destroyed on session shutdown; they are rather
destroyed the next time the backend ID is reused.  Normally that's not a
problem, because a backend ID is reused pretty soon.  It's only a
problem when you use so high a backend ID due to high load, that a very
long time passes before it's reused.  Those temp tables linger and can
cause Xid wraparound problems.

> I guess these schemas are fairly harmless, but it seems kind of messy
> to have them sloshing around.  It seems like when a new backend starts
> up, it would be better to clear out the temp schemas to avoid
> accidentally using stale data, but this doesn't seem to be happening.
> One could also imagine hooking a cleanup in the database startup, but
> I don't see that either.

IIRC the time when the previous temp schema is destroyed is when the
first temp table is created in the new backend.

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why lots of temp schemas are being created

2010-02-03 Thread Tom Lane
Walter Coole  writes:
> I've gotten a lot of similar schemas, like:
> CREATE SCHEMA pg_toast_temp_1
>   AUTHORIZATION postgres;

These schemas are automatically created to hold temporary tables.  There
is not a lot of point in removing them, because they'll just reappear
next time you have a lot of sessions creating temp tables.

You might want to pester the pgAdmin folk to add an option to ignore
them in pgAdmin's displays.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why lots of temp schemas are being created

2010-02-03 Thread Alvaro Herrera
Walter Coole escribió:

> I would like to drop them, since there are so many of them, they make
> it tedious to look through my databases in pgAdmin.  Is there a
> reliable way to distinguish between temp schemas that exist because
> they are supposed to be there and those that are not?

Run pg_get_backend_idset() (or something like that, maybe there's "stat"
in the name), which returns a list of backend IDs that are running.
Then see which temp schemas have numbers beyond what's listed there;
those shouldn't be there and could cause problems if the numbers are too
high.

> Or even better, a way to tell the database to clean them up itself?

It does, unless one of them gets a very high backend ID that's not
reused.

-- 
Alvaro Herrera                 http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why lots of temp schemas are being created

2010-02-03 Thread Walter Coole
I've gotten a lot of similar schemas, like:

CREATE SCHEMA pg_toast_temp_1
  AUTHORIZATION postgres;

and

CREATE SCHEMA pg_temp_1
  AUTHORIZATION postgres;

I don't see anything of substance inside those schemas, like tables or 
sequences, so I THINK they are the legacy of some previous problem (OS crash, 
ham-fisted tinkering, bug, etc.).

I would like to drop them: there are so many of them that they make it
tedious to look through my databases in pgAdmin.  Is there a reliable way to 
distinguish between temp schemas that exist because they are supposed
to be there and those that are not?

Or even better, a way to tell the database to clean them up itself?

Or way better, a way to put a stop to the process that creates them?

Thanks!
Walter


-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com] 
Sent: Tuesday, June 09, 2009 7:34 AM
To: Grzegorz Jaśkiewicz
Cc: Anirban Pal; pgsql-nov...@postgresql.org; pgsql-general@postgresql.org
Subject: Re: Why lots of temp schemas are being created

2009/6/8 Grzegorz Jaśkiewicz :
> On Mon, Jun 8, 2009 at 6:57 AM, Anirban Pal wrote:
>> Dear all,
>>
>> Our software use postgres as backend database. It works fine, strange thing
>> is that, it creates so many temporary schemas under schema tab, names like
>> pg_toast, pg_toast_temp_1 like this upto pg_toast_temp40. This schemas also
>> reflected in "pg_namespace" table. Can anybody tell me, is there anything
>> wrong in it?  Should I get rid of these temporary schemas?  Any suggestion
>> on this, please.
>
> you are a purist, bad habit.
> Anything beginning with pg_* is an internal schema, don't touch it,
> and no - you can't get rid of it.


actually, you can get dangling temp schemas/tables if the database
crashes or the computer goes through a hard cycle.  This can be a
major disaster on 8.2 and below if autovacuum is running because it
prevents it from doing the database wide vacuum to prevent xid
wraparound.  This is highly mitigated now because of the way xid works
and better warning/notification from the backend.  So we should
distinguish between temp schemas that exist because they are supposed
to be there and those that are not.

merlin



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert a duplicate key into unique index

2010-02-03 Thread Tom Lane
"Wang, Mary Y"  writes:
> What would be the newer version of pgsql (I mean a bit higher version of my 
> current version) that provides that capability of telling me what the 
> duplicated key was ?

Hmm ... [ experiments... ]  For the specific case of COPY into a table
with a pre-existing unique index, it looks like only CVS HEAD will give
you an error message with the exact index value.  However, everything
back to 7.4 will include the text of the whole current line of COPY
data, which probably would be enough to figure out the problem.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert a duplicate key into unique index

2010-02-03 Thread Scott Marlowe
On Wed, Feb 3, 2010 at 3:31 PM, Wang, Mary Y  wrote:
> Thanks Scott and Tom.
> Yes.  I know, I know that I need to upgrade :-)
> What would be the newer version of pgsql (I mean a bit higher version of my 
> current version) that provides that capability of telling me what the 
> duplicated key was ?

I'd recommend going to 8.3.x if possible, or 8.4.2, the latest version.

Not sure how far back it would tell you which value caused the
failure, but more importantly, 8.3 will run circles around 7.1 and
is much easier to keep happy from a maintenance perspective.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert a duplicate key into unique index

2010-02-03 Thread Wang, Mary Y
Thanks Scott and Tom.
Yes.  I know, I know that I need to upgrade :-)
What would be the newer version of pgsql (I mean a bit higher version of my 
current version) that provides that capability of telling me what the 
duplicated key was ?

Migration is always a challenge, especially since I'm not sure whether I will
have any unexpected hiccups when I dump my whole DB. I also need to upgrade the
OS.  My first priority is to get this problem fixed so that my users will stop
IMing, emailing, or calling me. 

Mary



-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Wednesday, February 03, 2010 2:21 PM
To: Scott Marlowe
Cc: Wang, Mary Y; pgsql-general@postgresql.org
Subject: Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert 
a duplicate key into unique index 

Scott Marlowe  writes:
> On Wed, Feb 3, 2010 at 2:16 PM, Wang, Mary Y  wrote:
>> I still couldn't find that particular line that caused that problem :-(.  
>> Counting was very pain.
>> Is there anyway that I can tell psql just to "ignore" (I mean don't insert 
>> it duplicate key into unique index users_pkey) and just keep going without 
>> doing the PQendcopy:resetting connection?

> Not really directly.  What I'd do is remove the unique constraint, 
> insert, then use something like

> select max(row_id) from table t1 join table t2 on 
> t1.somefield=t2.somefield and t1.row_id<>t2.row_id;

> to find dupes and remove them.

> Then I'd dump the whole db and migrate to a more modern version of pgsql.

If you were using a more modern version of pgsql, it would tell you what the 
duplicated key was ;-).  So maybe you could try loading the dump file into 
something newer as a means of debugging the problem.

regards, tom lane



Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert a duplicate key into unique index

2010-02-03 Thread Tom Lane
Scott Marlowe  writes:
> On Wed, Feb 3, 2010 at 2:16 PM, Wang, Mary Y  wrote:
>> I still couldn't find that particular line that caused that problem :-(.
>> Counting was very painful.
>> Is there any way I can tell psql just to "ignore" the duplicate key
>> (I mean, not insert it into unique index users_pkey) and keep going without
>> doing the PQendcopy: resetting connection?

> Not really directly.  What I'd do is remove the unique constraint,
> insert, then use something like

> select max(t1.row_id) from table t1 join table t2 on
> t1.somefield = t2.somefield and t1.row_id <> t2.row_id;

> to find dupes and remove them.

> Then I'd dump the whole db and migrate to a more modern version of pgsql.

If you were using a more modern version of pgsql, it would tell you what
the duplicated key was ;-).  So maybe you could try loading the dump
file into something newer as a means of debugging the problem.

regards, tom lane



Re: [GENERAL] JOIN Record returning Function

2010-02-03 Thread Tim Landscheidt
Daniel Schuchardt  wrote:

> i have a function that returns a record.

> myfunc(IN id INTEGER) RETURNS RECORD.

> in that function the record is built from some subqueries,
> depending on the data.

> Now I need to join that function to its corresponding main
> table that holds the id.

> SELECT myfunc.* FROM maintable JOIN myfunc(maintable.pk) ON
> true WHERE maintable.field=statisticdata;

> ERROR:  invalid reference to FROM-clause entry for table "maintable"
> TIP:  There is an entry for table "maintable", but it cannot
> be referenced from this part of the query.

> so far so good, that's clear. But does anyone know a tricky
> solution for this problem?

"SELECT myfunc(pk) FROM maintable WHERE field =
statisticdata;"?

Tim
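
A hedged sketch of the options (using myfunc, maintable, and field from the question); the column names in the definition list are hypothetical, and LATERAL only exists in PostgreSQL 9.3 and later:

```sql
-- Tim's form: the whole record comes back as one composite column.
SELECT myfunc(pk) FROM maintable WHERE field = statisticdata;

-- If myfunc is declared with OUT parameters (or RETURNS a named composite
-- type) instead of a bare RECORD, the result can be expanded into columns:
SELECT (myfunc(pk)).* FROM maintable WHERE field = statisticdata;

-- PostgreSQL 9.3+ only: LATERAL makes the original join formulation legal.
-- A bare RECORD return type needs a column definition list such as f(a, b).
SELECT f.*
FROM maintable, LATERAL myfunc(maintable.pk) AS f(a integer, b text)
WHERE maintable.field = statisticdata;
```

Note that `(myfunc(pk)).*` may evaluate the function once per output column, which can matter for expensive functions.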




Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert a duplicate key into unique index

2010-02-03 Thread Scott Marlowe
On Wed, Feb 3, 2010 at 2:16 PM, Wang, Mary Y  wrote:
> Thanks Tom.
> I still couldn't find that particular line that caused that problem :-(.
> Counting was very painful.
> Is there any way I can tell psql just to "ignore" the duplicate key
> (I mean, not insert it into unique index users_pkey) and keep going without
> doing the PQendcopy: resetting connection?

Not really directly.  What I'd do is remove the unique constraint,
insert, then use something like

select max(t1.row_id) from table t1 join table t2 on
t1.somefield = t2.somefield and t1.row_id <> t2.row_id;

to find dupes and remove them.

Then I'd dump the whole db and migrate to a more modern version of pgsql.
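
Spelled out, that recipe might look like the following; `users` and `user_id` are placeholders for the real table and key, and the ctid trick needs a reasonably modern server (syntax on 7.1 will differ):

```sql
-- 1. Drop the unique constraint so the COPY can complete.
ALTER TABLE users DROP CONSTRAINT users_pkey;

-- 2. After loading, list the duplicated keys.
SELECT user_id, count(*)
FROM users GROUP BY user_id HAVING count(*) > 1;

-- 3. Keep one physical row per key (ctid identifies a row version).
DELETE FROM users
WHERE ctid NOT IN (SELECT min(ctid) FROM users GROUP BY user_id);

-- 4. Put the constraint back; this fails if any duplicates remain.
ALTER TABLE users ADD CONSTRAINT users_pkey PRIMARY KEY (user_id);
```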



Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert a duplicate key into unique index

2010-02-03 Thread Wang, Mary Y
Thanks Tom.
I still couldn't find that particular line that caused that problem :-(.
Counting was very painful.
Is there any way I can tell psql just to "ignore" the duplicate key (I mean, 
not insert it into unique index users_pkey) and keep going without doing the 
PQendcopy: resetting connection?

I have a dev server that has some portion of the database (I believe the 
duplicate problem also occurred in that database), and that user_table is 
queryable with data.  Would I be able to find any more detailed information 
about which exact statement caused the problem?

Mary




-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: Wednesday, February 03, 2010 7:43 AM
To: Wang, Mary Y
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert 
a duplicate key into unique index 

"Wang, Mary Y"  writes:
> I managed to restore my database. However, one table is not restored.
> The error message that I was received was "copy: line 3057, Cannot insert a 
> duplicate key into unique index users_pkey..." and then 
> "...PQendcopy:resetting connection"

> Then I went to the log file (my debug file was set to level 5),
> ProcessUtility: COPY "users"  FROM stdin;
> ERROR:  copy: line 3057, Cannot insert a duplicate key into unique index 
> users_pkey"

> My question is that line #3057.  Do I count it from the line where it did 
> "COPY "users" FROM stdin" as the first line and count up to 3057 lines (that 
> line is the problem)?

IIRC, even as far back as 7.1, that should be read as "the 3057'th row of COPY 
data for this table".  So you can count forward 3057 lines from the COPY 
command in the dump file ... unless you have any embedded newlines in your 
data, in which case it could get a bit painful to count correctly.

regards, tom lane



Re: [GENERAL] add column specify position

2010-02-03 Thread Scott Frankel


Excellent!  Thanks for providing the link.  I think the 'add columns  
and move data' option would best fit my needs.


Thanks!
Scott




On Feb 2, 2010, at 11:44 PM, Shoaib Mir wrote:

On Wed, Feb 3, 2010 at 4:14 PM, Scott Frankel  
 wrote:


Hi all,

Is it possible to specify a position when adding a column to a table?


Not possible, but have a read of http://wiki.postgresql.org/wiki/Alter_column_position 
 and look at the alternative options.


--
Shoaib Mir
http://shoaibmir.wordpress.com/


Re: [GENERAL]

2010-02-03 Thread Andrej
On 4 February 2010 02:26, vijayalakshmi thiruvengadam
 wrote:
> Good Morning,
> I am a developer trying to use postgresql 8.3.2 when
> it was available. Now it says fatal error ie when executing
> postgresql-8.3-int.msi that has two msi files pgadmin3.msi and psqlodbc.msi,
> it shows
May I ask why you're trying to install an ancient version?
The latest release in the 8.3.x branch is 8.3.9.

> I would be grateful if you would let me know about it.
>
> Many thanks,
>
> Vijayalakshmi Vijaya sankar


Cheers,
Andrej


-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm



[GENERAL] statically linking libpq to psycopg2

2010-02-03 Thread john blair
I am trying to statically link psycopg2-2.0.13 against libpq.a on Mac OS X 10.5.5.
The build goes fine and libpq is statically linked:
otool -L build/lib.macosx-10.3-i386-2.6/psycopg2/_psycopg.so 
build/lib.macosx-10.3-i386-2.6/psycopg2/_psycopg.so:
/usr/lib/libssl.0.9.7.dylib (compatibility version 0.9.7, current 
version 0.9.7)
/usr/lib/libcrypto.0.9.7.dylib (compatibility version 0.9.7, current 
version 0.9.7)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current 
version 111.1.2)

But when I try to import psycopg2 I get:

Python 2.6.1 (r261:67515, Dec  6 2008, 10:32:39) 
[GCC 4.0.1 (Apple Computer, Inc. build 5363)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import psycopg2
Traceback (most recent call last):
  File "", line 1, in 
  File "/tools/python-2.6.1/lib/python2.6/site-packages/psycopg2/__init__.py", 
line 60, in 
from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
ImportError: 
dlopen(/tools/python-2.6.1/lib/python2.6/site-packages/psycopg2/_psycopg.so, 
2): Symbol not found: _error_message
  Referenced from: 
/tools/python-2.6.1/lib/python2.6/site-packages/psycopg2/_psycopg.so
  Expected in: dynamic lookup



  



Re: [GENERAL] PQendcopy:resetting connection Problem and Cannot insert a duplicate key into unique index

2010-02-03 Thread Tom Lane
"Wang, Mary Y"  writes:
> I managed to restore my database. However, one table is not restored.
> The error message that I was received was "copy: line 3057, Cannot insert a 
> duplicate key into unique index users_pkey..." and then 
> "...PQendcopy:resetting connection"

> Then I went to the log file (my debug file was set to level 5),
> ProcessUtility: COPY "users"  FROM stdin;
> ERROR:  copy: line 3057, Cannot insert a duplicate key into unique index 
> users_pkey"

> My question is that line #3057.  Do I count it from the line where it did 
> "COPY "users" FROM stdin" as the first line and count up to 3057 lines (that 
> line is the problem)?

IIRC, even as far back as 7.1, that should be read as "the 3057'th row
of COPY data for this table".  So you can count forward 3057 lines from
the COPY command in the dump file ... unless you have any embedded
newlines in your data, in which case it could get a bit painful to
count correctly.

regards, tom lane
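
If counting by hand is error-prone, the row can be pulled out mechanically. A sketch (the file name, table name, and the tiny fabricated dump below are illustrative; against the real dump you would use n=3057), assuming no embedded newlines in the data:

```shell
# Build a miniature dump file to demonstrate on.
cat > /tmp/dump_example.sql <<'EOF'
COPY "users"  FROM stdin;
1	alice
2	bob
3	carol
\.
EOF

# Print the n'th data row after the COPY command, then stop.
awk -v n=2 '/^COPY "users"/ {c = NR} c && NR == c + n {print; exit}' \
    /tmp/dump_example.sql
# prints: 2	bob
```

The awk program remembers the line number of the COPY command and prints the line exactly n lines below it.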



Re: [GENERAL] stat collector is not using

2010-02-03 Thread Tom Lane
AI Rumman  writes:
> I am using Postgresql 8.1.
> I set stats_start_collector = on

> But I found the statistics collector is not collecting;

The collector won't actually collect anything unless you enable at least
one of these settings:

#stats_command_string = off
#stats_block_level = off
#stats_row_level = off

All of them default to off in 8.1 ...

regards, tom lane
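
In postgresql.conf that would look like the fragment below (8.1-era parameter names; from 8.3 onward these were folded into the track_* settings), followed by a reload:

```
stats_start_collector = on
stats_command_string = on   # current query shown in pg_stat_activity
stats_block_level = on      # block-level I/O statistics
stats_row_level = on        # row counts; also required for autovacuum
```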



[GENERAL] Deleted Records & file format

2010-02-03 Thread Jonathan Bond-Caron
Is there any documentation of the binary format of the 'base' tables?

 

I found that I can locate where a table's data is stored with:

 

SELECT oid FROM pg_database WHERE datname = current_database();

SELECT relfilenode, reltoastrelid FROM pg_class WHERE relname = 'table';

 

/usr/local/pgsql/data/base/$oid/$relfilenode

 

From what I understand, deleted records should still be there unless vacuum
removes them.

Is that right?
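
The two lookups can be combined into one query that prints the relative path; 'table' is the placeholder relation name from above:

```sql
-- Relative path (under the data directory) of the table's main data file.
SELECT 'base/' || d.oid || '/' || c.relfilenode AS relative_path
FROM pg_class c, pg_database d
WHERE c.relname = 'table'
  AND d.datname = current_database();
```

And yes: deleted row versions stay in that file until VACUUM reclaims the space.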

 



[GENERAL] Grant on table without access to other tables structure

2010-02-03 Thread Pedro Zorzenon Neto
Hi,

  I will explain what I need with a simple example.

  I have three tables:
 jobs, people, cities

  I have to grant select on people to john.

GRANT SELECT ON PEOPLE TO JOHN;

  Now, when john opens "psql", he can run the selects. That is OK.

  But when john does
\d
\d cities

  He can see the other tables' names and structures...

  How can I revoke these rights?

  How can I create a new user and give him select rights on some tables,
without disclosing the full set of database tables and fields?

  PG is 8.3

  Thanks,
  Pedro



[GENERAL]

2010-02-03 Thread vijayalakshmi thiruvengadam

Good Morning,

I am a developer who was using postgresql 8.3.2 when it 
was available. Now I get a fatal error: when executing postgresql-8.3-int.msi, 
which contains two msi files, pgadmin3.msi and psqlodbc.msi, it shows 

"please use the main msi file to install postgresql" 

on clicking postgresql-8.3-int.msi. 

When I click OK it shows 

"Installation ended prematurely because of an error".

When installing pgadmin3.msi and psqlodbc.msi individually, both install fine. 

I would be grateful if you would let me know what is going on.

 

Many thanks,

 

Vijayalakshmi Vijaya sankar

developer
  
_
We want to hear all your funny, exciting and crazy Hotmail stories. Tell us now
http://clk.atdmt.com/UKM/go/195013117/direct/01/

Re: [GENERAL] Attribute a value to a record

2010-02-03 Thread Florent THOMAS
Thanks a lot for this clarification.

Unfortunately, the cursor doesn't accept "complicated" queries, whereas the
record type remains more powerful in that respect.
I found a solution, and by the way it has considerably simplified my code!
A clue can make you think better!

Le mercredi 03 février 2010 à 14:33 +0100, Wappler, Robert a écrit :

> On 2010-02-03, Florent THOMAS wrote:
>  
> > Dear Laurenz Albe,
> > 
> > Thank you for answering so fast. For me, the variable ventilation_local
> > is defined as a record type. So, as I wrote in the other mail, I ran some
> > additional tests, because the doc states that the syntax above is allowed:
> > http://www.postgresql.org/docs/8.4/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-ASSIGNMENT
> > (last line)
> > 
> > I understood that in the loop you can change the values of a
> > variable! Exactly what I needed.
> > But unfortunately all of this seems to be temporary.
> > Consequently, the record in the table won't be updated by the
> > changes we made to the local variable, even if it points to a
> > record in the table.
> > I forgot that the cursor's view of the data is temporary.
> > 
> > In any case, it could be a great improvement to let
> > the syntax modify the table directly.
> > 
> > I think I will find another way to do it, with EXECUTE!!
> > 
> > Best regards
> > 
> > Le mercredi 03 février 2010 à 10:05 +0100, Albe Laurenz a écrit :
> > 
> > 
> > Florent THOMAS wrote:
> > > I'm currently running on pg8.4 and I have a trigger
> > with a loop :
> > >
> > > FOR ventilation_local IN (SELECT * FROM XXX) LOOP
> > > IF (mytest) THEN
> > > ventilation_local.myfield:=mynewvalue;
> > > END IF;
> > > END LOOP;
> > >
> > > my problem is that the record doesn't accept the new value.
> > > I've checked before that the value is not null.
> > > Is this functionality supported in pg8.4 on the record type?
> > 
> > What do you mean by "the record doesn't accept the new value"?
> > 
> > Can you show us some SQL statements that exhibit the problem?
> > 
> > Yours,
> > Laurenz Albe
> > 
> >
>  
> A record variable is not a physical record. It is a type consisting of some 
> fields.
> 
> DECLARE
>   ventilation_local CURSOR FOR SELECT * FROM XXX;
> BEGIN
>   OPEN ventilation_local;
>   MOVE ventilation_local;
>   WHILE FOUND LOOP
> UPDATE XXX SET myfield = mynewvalue WHERE CURRENT OF ventilation_local;
> MOVE ventilation_local;
>   END LOOP;
> END;
> 
> This way, ventilation_local is not a record variable, but a cursor, which is 
> indeed updatable.
> 


Re: [GENERAL] Attribute a value to a record

2010-02-03 Thread Florent THOMAS


Le mercredi 03 février 2010 à 14:38 +0100, Florent THOMAS a écrit :

> Thank you,
> 
> As I posted on a French list, when I start to develop triggers bigger
> than 10 lines, I always fall back on developer habits and forget
> database aspects.
> So I WILL PRINT IT ON MY WALL: with records, everything is temporary.
> 
> Best regards, and sorry for the English!
> 
> 
> 
> Le mercredi 03 février 2010 à 14:13 +0100, Albe Laurenz a écrit : 
> 
> > Florent THOMAS wrote:
> > > I understood that in the Loop you can change the values of a 
> > > variable! Exactly what I needed.
> > > but unfortunately all of this seems to be temporary. 
> > > Consequently, the record in the table won't be updated by the 
> > > changes we made on the local variable even if it points to a 
> > > record in the table.
> > > I forgot the aspect of the cursor that is temporary.
> > 
> > I get you now - you expected that the underlying table would
> > be updated if you change a variable in PL/pgSQL.
> > 
> > I don't think that you need dynamic SQL for that -
> > all it takes is an UPDATE statement in your loop, like
> > 
> > UPDATE XXX SET myfield = mynewvalue
> >    WHERE XXX.pkey = ventilation_local.pkey;
> > 
> > (using the names from your sample)
> > 
> > Yours,
> > Laurenz Albe
> > 


Re: [GENERAL] Attribute a value to a record

2010-02-03 Thread Wappler, Robert
On 2010-02-03, Florent THOMAS wrote:
 
> Dear Laurenz Albe,
> 
> Thank you for answering so fast. For me, the variable ventilation_local
> is defined as a record type. So, as I wrote in the other mail, I ran some
> additional tests, because the doc states that the syntax above is allowed:
> http://www.postgresql.org/docs/8.4/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-ASSIGNMENT
> (last line)
> 
> I understood that in the loop you can change the values of a
> variable! Exactly what I needed.
> But unfortunately all of this seems to be temporary.
> Consequently, the record in the table won't be updated by the
> changes we made to the local variable, even if it points to a
> record in the table.
> I forgot that the cursor's view of the data is temporary.
> 
> In any case, it could be a great improvement to let
> the syntax modify the table directly.
> 
> I think I will find another way to do it, with EXECUTE!!
> 
> Best regards
> 
> Le mercredi 03 février 2010 à 10:05 +0100, Albe Laurenz a écrit :
> 
>   
>   Florent THOMAS wrote:
>   > I'm currently running on pg8.4 and I have a trigger
> with a loop :
>   >
>   > FOR ventilation_local IN (SELECT * FROM XXX) LOOP
>   > IF (mytest) THEN
>   > ventilation_local.myfield:=mynewvalue;
>   > END IF;
>   > END LOOP;
>   >
>   > my problem is that the record doesn't accept the new value.
>   > I've checked before that the value is not null.
>   > Is this functionality supported in pg8.4 on the record type?
>   
>   What do you mean by "the record doesn't accept the new value"?
>   
>   Can you show us some SQL statements that exhibit the problem?
>   
>   Yours,
>   Laurenz Albe
> 
>
 
A record variable is not a physical record. It is a type consisting of some 
fields.

DECLARE
  ventilation_local CURSOR FOR SELECT * FROM XXX;
BEGIN
  OPEN ventilation_local;
  MOVE ventilation_local;
  WHILE FOUND LOOP
UPDATE XXX SET myfield = mynewvalue WHERE CURRENT OF ventilation_local;
MOVE ventilation_local;
  END LOOP;
END;

This way, ventilation_local is not a record variable, but a cursor, which is 
indeed updatable.

-- 
Robert...
 




Re: [GENERAL] Attribute a value to a record

2010-02-03 Thread Albe Laurenz
Florent THOMAS wrote:
> I understood that in the Loop you can change the values of a 
> variable! Exactly what I needed.
> but unfortunately all of this seems to be temporary. 
> Consequently, the record in the table won't be updated by the 
> changes we made on the local variable even if it points to a 
> record in the table.
> I forgot the aspect of the cursor that is temporary.

I get you now - you expected that the underlying table would
be updated if you change a variable in PL/pgSQL.

I don't think that you need dynamic SQL for that -
all it takes is an UPDATE statement in your loop, like

UPDATE XXX SET myfield = mynewvalue
   WHERE XXX.pkey = ventilation_local.pkey;

(using the names from your sample)

Yours,
Laurenz Albe
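
Put together, the advice above can be sketched as a loop body (mytest and mynewvalue stand for the real condition and value from the sample):

```sql
FOR ventilation_local IN SELECT * FROM XXX LOOP
    IF mytest THEN
        -- the record variable is only read; the write goes
        -- through a real UPDATE against the table
        UPDATE XXX SET myfield = mynewvalue
        WHERE XXX.pkey = ventilation_local.pkey;
    END IF;
END LOOP;
```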



Re: [GENERAL] Is it necessary to have index for child table in following case?

2010-02-03 Thread Alban Hertroys
On 3 Feb 2010, at 3:58, Yan Cheng Cheok wrote:

> Due to the fact
> 
> "A serious limitation of the inheritance feature is that indexes (including 
> unique constraints) and foreign key constraints only apply to single tables, 
> not to their inheritance children. This is true on both the referencing and 
> referenced sides of a foreign key constraint. Thus, in the terms of the above 
> example: "
> 
> When I use table inheritance to implement table partition :
> 
> measurement {
>primary_key
>foreign_key1
>foreign_key2
> 
>create foreign_key1_index
>create foreign_key2_index
> }
> 
> measurement_1 inherit from measurement {
>primary_key
>foreign_key1
>foreign_key2
> 
>create foreign_key1_index
>create foreign_key2_index
> }
> 
> However, I realize having foreign_key1_index and foreign_key2_index for child 
> table measurement_1, make up my data insert speed slow down by factor of 2~3

You probably want to check the output of EXPLAIN ANALYZE for your INSERT 
queries. That should point to what's slowing things down and can often tell why 
it does so. If it's not obvious, post the output here ;)

Are you using INSERT or COPY to insert your data? COPY tends to be a lot faster 
than separate INSERTs, especially if you don't wrap the INSERTs in a 
transaction block and COMMIT them in batches.

> I was wondering whether is it necessary for me to create index 
> foreign_key1_index for child table measurement_1?
> 
> (1) All my SELECT query is performed on parent table measurement.
> (2) All my data INSERT code is performed based on child table measurement_1.


That depends on whether your SELECT query actually uses those indexes. Again, 
you can see that by using EXPLAIN on your SELECT query. I think you'll see that 
the indexes on the child tables do get used, but the ones on the parent table 
probably don't.

If you never insert any data into your parent table (or if it never contains 
more than a few rows if you do) then there's no point in having an index on it. 
As you already quoted, indexes can't span multiple tables. The only data in the 
index on the parent table is about the records that live there, which probably 
are none at all.


As an aside, I hope you do realise that your primary key isn't guaranteed to be 
unique across your child tables? The reason is the same one that you already 
quoted for indexes spanning multiple tables - a primary key is implemented 
using a unique index after all.

If that matters, what you can do is make your primary key a foreign key to a 
new table with just the primary key column in it. Make sure you always insert a 
record in the primary key table along with the one referencing it, so that you 
will get a unique violation when you try to insert a record for which the 
primary key already exists. This will of course slow things down some, but if 
it's necessary that's the price to pay.

If your primary key is a serial (meaning it's generated by a sequence) you 
probably don't need to worry about it, serials generate unique numbers (unless 
they wrap around when they run out of numbers, but you control whether they're 
allowed to and you can design them large enough that it won't happen).
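
The central-key-table idea above can be sketched as follows; the names are illustrative, not the poster's actual schema:

```sql
-- One row per key; this table's unique index is what spans all partitions.
CREATE TABLE measurement_ids (id bigserial PRIMARY KEY);

CREATE TABLE measurement (
    id    bigint NOT NULL,
    value numeric
);

-- Each child must re-declare its own primary key and the foreign key,
-- since neither is inherited.
CREATE TABLE measurement_1 () INHERITS (measurement);
ALTER TABLE measurement_1 ADD PRIMARY KEY (id);
ALTER TABLE measurement_1
    ADD FOREIGN KEY (id) REFERENCES measurement_ids (id);

-- Claim the key first: if the id already exists in any partition, this
-- INSERT fails with a unique violation, enforcing global uniqueness.
INSERT INTO measurement_ids VALUES (1);
INSERT INTO measurement_1 (id, value) VALUES (1, 0.23);
```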

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] table size is not changed

2010-02-03 Thread Scott Marlowe
On Wed, Feb 3, 2010 at 2:43 AM, AI Rumman  wrote:
> I deleted 7 rows from a table and then execute
> vacuum analyze .
>
> But the table size has not changed.
>
> I am using Postgresql 8.1.
>
> Could anyone please tell me what the problem is?


http://www.postgresql.org/docs/8.1/static/maintenance.html#VACUUM-FOR-SPACE-RECOVERY

explains it pretty well.
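
In short ('mytable' is a placeholder): plain VACUUM marks dead rows as reusable but normally doesn't shrink the file; only VACUUM FULL (or a table rewrite) returns the space to the operating system:

```sql
SELECT pg_relation_size('mytable');  -- on-disk size, available since 8.1

VACUUM ANALYZE mytable;  -- dead rows become reusable; file size usually unchanged
VACUUM FULL mytable;     -- compacts the table and truncates the file

SELECT pg_relation_size('mytable');  -- should be smaller now
```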



Re: [GENERAL] table size is not changed

2010-02-03 Thread Grzegorz Jaśkiewicz
Try reindexing the table.



[GENERAL] table size is not changed

2010-02-03 Thread AI Rumman
I deleted 7 rows from a table and then execute
vacuum analyze .

But the table size has not changed.

I am using Postgresql 8.1.

Could anyone please tell me what the problem is?


[GENERAL] PQendcopy:resetting connection Problem and Cannot insert a duplicate key into unique index

2010-02-03 Thread Wang, Mary Y
Hi,

I managed to restore my database. However, one table is not restored.
The error message that I received was "copy: line 3057, Cannot insert a 
duplicate key into unique index users_pkey..." and then "...PQendcopy:resetting 
connection"

Then I went to the log file (my debug file was set to level 5),
ProcessUtility: COPY "users"  FROM stdin;
ERROR:  copy: line 3057, Cannot insert a duplicate key into unique index 
users_pkey"

My question is about line #3057.  Do I count from the line where it did "COPY 
"users" FROM stdin" as the first line, up to 3057 lines (with that line being 
the problem)?


Any help is appreciated.
My pgverions is postgresql-7.1.3-2.

Thanks
Mary


Mary Y Wang






Re: [GENERAL] Attribute a value to a record

2010-02-03 Thread Florent THOMAS
Dear laurenz Albe,

Thank you for answering so fast.
For me, the variable ventilation_local is defined as a record type.
So, as I wrote in the other mail, I ran some additional tests, because
the doc states that the syntax above is allowed:
http://www.postgresql.org/docs/8.4/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-ASSIGNMENT
(last line)

I understood that in the loop you can change the values of a variable!
Exactly what I needed.
But unfortunately all of this seems to be temporary. Consequently, the
record in the table won't be updated by the changes we made to the local
variable, even if it points to a record in the table.
I forgot that the cursor's view of the data is temporary.

In any case, it could be a great improvement to let this syntax
modify the table directly.

I think I will find another way to do it, with EXECUTE!!

Best regards

Le mercredi 03 février 2010 à 10:05 +0100, Albe Laurenz a écrit :

> Florent THOMAS wrote:
> > I'm currently running on pg8.4 and I have a trigger with a loop :
> > 
> > FOR ventilation_local IN (SELECT * FROM XXX) LOOP
> > IF (mytest) THEN
> > ventilation_local.myfield:=mynewvalue;
> > END IF;
> > END LOOP;
> > 
> > my problem is that the record doesn't accept the new value.
> > I've checked before that the value is not null.
> > Is this functionality supported in pg8.4 on the record type?
> 
> What do you mean by "the record doesn't accept the new value"?
> 
> Can you show us some SQL statements that exhibit the problem?
> 
> Yours,
> Laurenz Albe


Re: [GENERAL] Attribute a value to a record

2010-02-03 Thread Albe Laurenz
Florent THOMAS wrote:
> I'm currently running on pg8.4 and I have a trigger with a loop :
> 
> FOR ventilation_local IN (SELECT * FROM XXX) LOOP
> IF (mytest) THEN
> ventilation_local.myfield:=mynewvalue;
> END IF;
> END LOOP;
> 
> my problem is that the record doesn't accept the new value.
> I've checked before that the value is not null.
> Is this functionality supported in pg8.4 on the record type?

What do you mean by "the record doesn't accept the new value"?

Can you show us some SQL statements that exhibit the problem?

Yours,
Laurenz Albe



Re: [GENERAL] surprised by non-strict array_append

2010-02-03 Thread Greg Stark
If it were strict wouldn't it return NULL?

greg

On 3 Feb 2010 07:16, "J. Greg Davidson"  wrote:

I was caught out today by the non-strict behavior of array_append
causing me to get an undesired result for a COALESCE.  My subsequent
attempt to create a STRICT VARIADIC generalization of array_append
led to another surprise.  The problem was easily solved, but might
be of interest to others.  Perhaps someone will enlighten me as to
why the standard array functions are non-strict and why the STRICT
VARIADIC function fails to be strict.  I've boiled it down to make
it clear:

psql (8.4.2)

SELECT COALESCE( ARRAY[1] || NULL::integer, ARRAY[42] );
 coalesce
--
 {1,NULL}
(1 row)

SELECT COALESCE( array_append( ARRAY[1], NULL), ARRAY[42] );
 coalesce
--
 {1,NULL}
(1 row)

CREATE OR REPLACE
FUNCTION array_add(ANYARRAY, VARIADIC ANYARRAY) RETURNS ANYARRAY AS $$
 SELECT $1 || $2
$$ LANGUAGE sql STRICT;
COMMENT ON FUNCTION array_add(ANYARRAY, ANYARRAY)
IS 'a strict generalized version of array_append';

SELECT COALESCE( array_add( ARRAY[1], NULL, NULL), ARRAY[42] );
   coalesce
---
 {1,NULL,NULL}
(1 row)

-- hmm - not very strict!

CREATE OR REPLACE
FUNCTION array_add1(ANYARRAY, ANYELEMENT) RETURNS ANYARRAY AS $$
 SELECT $1 || $2
$$ LANGUAGE sql STRICT;
COMMENT ON FUNCTION array_add1(ANYARRAY, ANYELEMENT)
IS 'a strict version of array_append';

SELECT COALESCE( array_add1( ARRAY[1], NULL), ARRAY[42] );

coalesce
--
 {42}
(1 row)

-- ah, finally!

_Greg



Re: [GENERAL] Unusual table size and very slow inserts

2010-02-03 Thread Greg Smith

Ivano Luberti wrote:

This is what I was thinking, but today I was able to look at the
processes running while a client was doing a bunch of inserts. There was
no autovacuum running, and every insert was taking many seconds to be
executed.
  


Have you done any basic tuning of the database parameters?  From your 
earlier message, it sounds like you might have checkpoint_segments at 
its default, which can contribute to these inserts taking so long.  See 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server for a 
guide to where to start.


Also, if you want to get a better idea what's actually going on with 
your data, you should be looking at "select * from pg_stat_user_tables" 
; that will give you information about things like how many dead rows 
there are in the table, when autovacuum last did some work, etc.  Should 
be possible to figure out what's different about the use pattern of this 
table compared to the ones you suggest work as expected by analyzing 
that data.


P.S. To clean up from one of these messes you might try CLUSTER instead 
of VACUUM FULL. 
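
A hedged example of that check ('mytable' is a placeholder; the exact column set varies by version, and some of these columns only appear in releases newer than the poster's):

```sql
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```

A large n_dead_tup combined with an old (or null) last_autovacuum is a strong hint that the table is bloating.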


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us

