[GENERAL] How to access tables using a superuser

2011-08-18 Thread Siva Palanisamy
Hi All,

I have a few tables that were created using an ordinary user account. When I
log in to PostgreSQL as the superuser, I can't access my tables!
It says: ERROR:  relation "tablename" does not exist.

As I need to perform some operations as the superuser, I want the tables to be
accessible from the superuser account. Any clue on this?

Thanks and Regards,
Siva.





Re: [GENERAL] How to access tables using a superuser

2011-08-18 Thread John R Pierce

On 08/17/11 11:58 PM, Siva Palanisamy wrote:
> Hi All,
>
> I have few tables being created using an ordinary user account. When I
> login into the postgresql using the super-user, I can't access my tables!
>
> It says, ERROR: relation "tablename" does not exist.
>
> As I need to perform some operations using super-user, I want the
> tables to be accessed/mapped to the super-user account. Any clue on
> this part?

Are you logged into the same database? Are these tables in a schema
other than public?



--
john r pierce                            N 37, W 122
santa cruz ca                            mid-left coast





Re: [GENERAL] How to access tables using a superuser

2011-08-18 Thread Siva Palanisamy
Hi John,

I logged into the same database. Let me list the commands, and you can correct me
if I'm wrong.

As the ordinary user: psql -h localhost -d db -U ordinaryusername
SELECT * FROM contacts;
Now I can access the tables. I also have the .sql file that defines the table
schema as follows:
CREATE USER sa;
GRANT ALL ON DATABASE db TO sa;
\c db sa
CREATE SCHEMA AUTHORIZATION sa;
ALTER USER sa SET search_path TO sa,public;
CREATE TABLE sa.contacts (
    contact_id   SERIAL PRIMARY KEY,
    contact_type INTEGER DEFAULT 0,
    display_name TEXT NOT NULL DEFAULT '',
    UNIQUE(display_name)
) WITHOUT OIDS;

I logged into the database as the superuser: psql -h localhost -d db -U postgres
SELECT * FROM contacts;
ERROR: relation "contacts" does not exist

Could you please guide me here? I want to access the table as the superuser.

Thanks and Regards,
Siva.


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Thursday, August 18, 2011 12:40 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] How to access tables using a superuser

On 08/17/11 11:58 PM, Siva Palanisamy wrote:
>
> Hi All,
>
> I have few tables being created using an ordinary user account. When I
> login into the postgresql using the super-user, I can't access my tables!
>
> It says, ERROR: relation "tablename" does not exist.
>
> As I need to perform some operations using super-user, I want the
> tables to be accessed/mapped to the super-user account. Any clue on
> this part?
>
>


are you logged into the same database ? are these tables in a schema
other than public?


--
john r pierce                            N 37, W 122
santa cruz ca                            mid-left coast







Re: [GENERAL] How to access tables using a superuser

2011-08-18 Thread Guillaume Lelarge
On Thu, 2011-08-18 at 12:48 +0530, Siva Palanisamy wrote:
> Hi John,
> 
> I logged into the same database. I can say the commands, and you can correct 
> me if I'm wrong.
> 
> Ordinary User: psql -h localhost -d db -U ordinaryusername
> Select * from contacts
> Now, I can access the tables. I also do have the .SQL file where it states 
> the table schema as follows:
> CREATE USER sa;
> GRANT ALL ON DATABASE db TO sa;
> \c db sa
> CREATE SCHEMA AUTHORIZATION sa;
> ALTER USER sa SET search_path TO sa,public;

This statement changed the search_path of user sa.

> CREATE TABLE sa.contacts (
> contact_id  SERIAL PRIMARY KEY,
> contact_typeINTEGER DEFAULT 0,
> display_nameTEXT NOT NULL DEFAULT '',
> UNIQUE(display_name)
> ) WITHOUT OIDS;
> 

Here you created the table contacts in the schema sa.

> I logged into the database as a super-user: psql -h localhost -d db -U 
> postgres
> Select * from contacts;
> ERROR: relation "contacts" does not exist.
> 

SELECT * FROM sa.contacts;
would work.

Or

SET search_path TO sa, public;
SELECT * FROM contacts;
would work too.
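
For example (a minimal sketch, using the database and schema names from your
script):

-- connected as: psql -h localhost -d db -U postgres
SHOW search_path;               -- typically "$user",public, so sa.contacts is not visible
SELECT * FROM sa.contacts;      -- schema-qualified, works regardless of search_path
SET search_path TO sa, public;  -- or adjust the path for this session
SELECT * FROM contacts;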

And, please, don't top-post.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] Failover architecture

2011-08-18 Thread Reuven M. Lerner

  
  
Thanks, Tatsuo, and others who commented so helpfully.  It's the
best of all worlds when I get confirmation that my feelings were
right, *and* I learn a lot of new things that I had never
considered, thanks to the generosity of this great community.

Reuven
  



[GENERAL] max_stack_depth error, need suggestion

2011-08-18 Thread AI Rumman
I am using PostgreSQL 9.0.1 on CentOS 5.

Yesterday, I got this error in the log:

2011-08-16 10:57:34 EDT [3868]: [1-1] user=vcrm,db=vtigercrm504 ERROR:
stack depth limit exceeded
2011-08-16 10:57:34 EDT [3868]: [2-1] user=vcrm,db=vtigercrm504 HINT:
Increase the configuration parameter "max_stack_depth", after ensuring the
platform's stack depth limit is adequate.


I found that I need to increase max_stack_depth, but the docs say that
increasing it is a bit risky.

Could anyone please suggest the maximum safe value I can set in my
environment?

My server has 32 GB of RAM.


Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Dimitri Fontaine
Chris Travers  writes:
> I want an email to go out to the ordering manager when the quantity I
> have of an item drops below the re-order point.  I also want this
> email NOT to go out if the transaction rolls back.  (Wait, the order
> of 5 widgets I just processed rolled back because it isn't to a
> valid customer!  We normally only sell 5 per year anyway.  No need
> for the email.)

Just use PGQ and be done with it.  You have transactional and
asynchronous behavior.  Typically, a trigger would produce events in the
queue, and a separate daemon will consume the queue and send emails.
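
A minimal sketch of the trigger side (the queue, table and column names are
made up; it assumes PGQ from Skytools is installed and that the queue was
created beforehand with pgq.create_queue()):

-- the event only becomes visible to consumers if the surrounding transaction commits
CREATE OR REPLACE FUNCTION low_stock_event() RETURNS trigger AS $$
BEGIN
    IF NEW.quantity < NEW.reorder_point THEN
        PERFORM pgq.insert_event('low_stock', 'reorder', NEW.item_id::text);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parts_low_stock AFTER UPDATE ON parts
    FOR EACH ROW EXECUTE PROCEDURE low_stock_event();

A separate daemon then polls the queue (pgq.next_batch / pgq.get_batch_events)
outside the original transaction and sends the mail.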

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Dimitri Fontaine
c k  writes:
> Many users are using it and found it stable and scalable. Important is that
> web server is external to the database and a mod_pgsql like mod_plsql is
> used to connect web server to database. Each page is considered as a stored
> procedure in the oracle database. I am not thinking of implementing as it is

It's been around for a long time already:

  http://asmith.id.au/mod_libpq.html

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] max_stack_depth error, need suggestion

2011-08-18 Thread Leif Biberg Kristensen
On Thursday 18. August 2011 12.39.31 AI Rumman wrote:
> I am using Postgresql 9.0.1 in Centos 5.
> 
> Yesterday, I got the error inlog:
> 
> 2011-08-16 10:57:34 EDT [3868]: [1-1] user=vcrm,db=vtigercrm504 ERROR:
> stack depth limit exceeded
> 2011-08-16 10:57:34 EDT [3868]: [2-1] user=vcrm,db=vtigercrm504 HINT:
> Increase the configuration parameter "max_stack_depth", after ensuring the
> platform's stack depth limit is adequate.
> 
> 
> I found that I need to increase max_stack_depth. But doc says that it is a
> bit risky increasing it.
> 
> Could any one please suggest me what the maximum safe value I may set in my
> environment?
> 
> My Server RAM is 32 GB.

That error message is usually caused by an infinite recursion.
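
If the statement is legitimate rather than endlessly recursive, the limit can
be raised a little; the bound is the operating system stack limit (ulimit -s),
not the amount of RAM. A sketch, where 6MB is only an example value kept
safely below a typical 8 MB ulimit:

# shell: check the platform limit first (reported in kilobytes, e.g. 8192)
ulimit -s

-- then, as a superuser (or in postgresql.conf), raise the PostgreSQL setting
SHOW max_stack_depth;            -- default is 2MB
SET max_stack_depth = '6MB';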

regards, Leif



Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Sim Zacks


  
  
On 08/18/2011 07:57 AM, Chris Travers wrote:
> On Wed, Aug 17, 2011 at 9:38 PM, Sim Zacks  wrote:
>> The point was not whether I have a bug in an external application, the point
>> is that I need an external application which creates more overhead and
>> another point of failure in the application stack.
>
> 1)  Not sure how an external python script is different from a
> PL/Python sproc except that the former exists external to transaction
> control.

There are many differences.
1) If I have a database function and I copy my database to another
server, the function still works.
If I have an external daemon application, I not only have to copy my
database, I also have to copy the daemon application. Then I have to
build an init script and make sure it runs at startup. My
LISTEN/NOTIFY daemon is a C application, so when I move my database
to a server on a different platform, I have to recompile it.

> 2) there is absolutely no reason you can't build redundancy into this system.

It's not a question of whether I can or cannot build redundancy; it
is a question of whether I have to build an entire system in order
to call a database function from another database function. The only
reason this is complicated is that it needs to be in its own
session. That simple issue shouldn't force me to build a) a daemon
application, b) redundancy to ensure that it is running, and c) something
that is not included in my database backup/restore.
Remember, I don't want to build a _system_; I basically want an
asynchronous trigger: on a specific event, call a database function in
its own transaction space and allow the existing transaction to end.

> 3)  The overhead really shouldn't be bad, and if your parts are
> well-modularized, and carefully designed overhead really should be
> minimal.

Any overhead that is not necessary should not be added in. It is the
minor level of frustration that something didn't work when I migrated
servers until the "Oh yeah" kicked in. Then looking through all my
notes to find the compilation instructions for my daemon because we
moved from a 32-bit server to a 64-bit one. Then trying to figure out
the syntax for the init script, because we moved from Gentoo to Debian
and it is slightly different. It isn't a lot of overhead, but it is
completely unnecessary in our situation.
I will agree that this is entirely necessary if your application
actually uses an external system and the database communicates through
LISTEN/NOTIFY. You have two systems to deal with in any case, but for
me the only external component is having the daemon listen so it can
call another function in the database. IOW, I don't generally deal
with anything else on the server.

> Best Wishes,
> Chris Travers

Sim
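
For reference, a minimal sketch of that asynchronous-trigger pattern (the
channel and table names are made up; the external daemon supplies the second
transaction):

-- in the database: the notification is only delivered if the transaction commits
CREATE OR REPLACE FUNCTION queue_async_work() RETURNS trigger AS $$
BEGIN
    NOTIFY async_work;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_async_work AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE queue_async_work();

The daemon just issues LISTEN async_work; on its own connection and, when it
wakes up, calls the follow-up database function in its own transaction.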

  



[GENERAL] Syncing Data to Production DB Server

2011-08-18 Thread Adarsh Sharma

Dear All,

I want some views on the requirements below:

1. I have a Postgres DB server with a 25 GB database. It has more than 110
   tables. I am using PostgreSQL 8.3 on CentOS.
2. I have a laptop that contains the same database, but it is for testing
   purposes.

What I want: if someone inserts some data on the laptop (same database and
tables as on the Postgres DB server), that data should sync to my production
database server.

I thought replication would help, but the laptop is not always connected to
the LAN, and if someone issues a DROP command by mistake, all goes in vain.

Please suggest some ways or solutions.


Thanks



Re: [GENERAL] idle in transaction process

2011-08-18 Thread tamanna madaan
Hi

Yes, restarting the slon resolves the issue. But there are other processes in
my application which connect to Postgres, and those processes can also cause
"idle in transaction" Postgres connections. So I was wondering if I can use the
tcp_keepalives_idle, tcp_keepalives_interval and tcp_keepalives_count settings
in postgresql.conf to get rid of such an "idle in transaction" process after a
certain amount of time. Will this help or not?
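
For reference, the keepalive settings live in postgresql.conf (the values below
are only illustrative), and idle-in-transaction sessions can be listed from
pg_stat_activity. Note that TCP keepalives only detect peers that have vanished
from the network; they do not end transactions that a live client simply leaves
open.

# postgresql.conf (0 means "use the operating system default")
tcp_keepalives_idle = 60         # seconds of inactivity before the first probe
tcp_keepalives_interval = 10     # seconds between probes
tcp_keepalives_count = 5         # lost probes before the connection is dropped

-- 8.4: list sessions sitting idle in an open transaction
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';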



Thanks...
Tamanna


On Tue, Aug 16, 2011 at 11:56 AM, Abbas  wrote:

>
> Best Regards,
> Abbas
>
>
> On Mon, Aug 15, 2011 at 11:14 PM, tamanna madaan <
> tamanna.mad...@globallogic.com> wrote:
>
>> Hi All
>>
>> I am using postgres-8.4.0 on a cluster setup with slony-2.0.4 being used
>> for replication.
>> Recently , I saw a "idle in transaction" postgres process as below.
>>
>> postgres 13052 14742 0 May13 ? 00:00:00 postgres: slon abc
>> 172.16.1.1(49017) idle in transaction
>> I wonder what could have lead to that hung postgres process . I googled
>> about it a lot and they say that it could be
>> because of abrupt netwotk issue between slony and postgres . But in my
>> case slon was connected
>> to its local postgres database. So, network wont be an issue in this
>> case . What else could be the reason for
>> this hung process ? What should I do to come over this kind of issue in
>> future. I think this hung process would have
>> taken locks on various tables. I wonder if killing the "idle in
>> transaction" process would cause the locks on the tables
>> to be released or not. Can anyone please  help me on that.
>>
>
> Of course it is a slon process if it is not due to a network issue, then
> might be any of your scripts, if not you can try by restarting the slon
> process on origin.
>
> Abbas.
>
>>
>> Thanks in Advance .
>>
>> Tamanna
>>
>>
>>
>>
>>
>>
>
>


-- 
Tamanna Madaan | Associate Consultant | GlobalLogic Inc.
Leaders in Software R&D Services
ARGENTINA | CHILE | CHINA | GERMANY | INDIA | ISRAEL | UKRAINE | UK | USA

Office: +0-120-406-2000 x 2971

www.globallogic.com


Re: [GENERAL] max_stack_depth error, need suggestion

2011-08-18 Thread Nicholson, Brad (Toronto, ON, CA)
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Leif Biberg Kristensen
> Sent: Thursday, August 18, 2011 6:49 AM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] max_stack_depth error, need suggestion
> 
> On Thursday 18. August 2011 12.39.31 AI Rumman wrote:
> > I am using Postgresql 9.0.1 in Centos 5.
> >
> > Yesterday, I got the error inlog:
> >
> > 2011-08-16 10:57:34 EDT [3868]: [1-1] user=vcrm,db=vtigercrm504
> ERROR:
> > stack depth limit exceeded
> > 2011-08-16 10:57:34 EDT [3868]: [2-1] user=vcrm,db=vtigercrm504 HINT:
> > Increase the configuration parameter "max_stack_depth", after
> ensuring the
> > platform's stack depth limit is adequate.
> >
> >
> > I found that I need to increase max_stack_depth. But doc says that it
> is a
> > bit risky increasing it.
> >
> > Could any one please suggest me what the maximum safe value I may set
> in my
> > environment?
> >
> > My Server RAM is 32 GB.
> 
> That error message is usually caused by an infinite recursion.

Slony can also cause this to happen (at least it could - I'm not sure if it 
still does) - it wasn't from infinite recursion though.  I used to have to set 
that higher for some of my clusters.  They may have fixed the query that was 
causing that to happen though.

Brad.



[GENERAL] Can we use dblink for insert and update of dynamic values

2011-08-18 Thread Jenish Vyas
Hi All,

Can we use dblink for inserts and updates with dynamic values?


Sample function :

CREATE OR REPLACE FUNCTION dblink_test()
  RETURNS boolean AS
$BODY$
DECLARE
v1 numeric;
v2 character varying(50);
BEGIN
 v1 := 123;  v2 := 'asdasdasd';
 select dblink_exec('dbname=testing_db_link_1'::text, 'insert into abc
values(v1,v2);'::text);
 select dblink_exec('dbname=testing_db_link_1'::text, 'update abc set b = ''
testing '' where a = v1;'::text);
RETURN FALSE;
END;
$BODY$
  LANGUAGE plpgsql STABLE
  COST 100;

When I run this function, it gives me the following error:

ERROR: column "v1" does not exist
SQL state: 42703
Context: Error occurred on dblink connection named "unnamed": could not
execute command.
SQL statement "select dblink_exec('dbname=testing_db_link_1'::text, 'insert
into abc values(v1,v2);'::text)"
PL/pgSQL function "aaa" line 9 at SQL statement

Please guide me.

Thanks & regards,
JENISH VYAS


Re: [GENERAL] Can we use dblink for insert and update of dynamic values

2011-08-18 Thread Ireneusz Pluta



W dniu 2011-08-18 16:26, Jenish Vyas pisze:

Hi All,

Can we user dblink for insert and update of dynamic values??


Sample function :

CREATE OR REPLACE FUNCTION dblink_test()
  RETURNS boolean AS
$BODY$
DECLARE
v1 numeric;
v2 character varying(50);
BEGIN
 v1 := 123;  v2 := 'asdasdasd';
 select dblink_exec('dbname=testing_db_link_1'::text, 'insert into abc 
values(v1,v2);'::text);
 select dblink_exec('dbname=testing_db_link_1'::text, 'update abc set b = '' testing '' where a = 
v1;'::text);

RETURN FALSE;
END;
$BODY$
  LANGUAGE plpgsql STABLE
  COST 100;

When I am running this function it is giving me following error..

ERROR: column "v1" does not exist
SQL state: 42703
Context: Error occurred on dblink connection named "unnamed": could not execute 
command.
SQL statement "select dblink_exec('dbname=testing_db_link_1'::text, 'insert into abc 
values(v1,v2);'::text)"

PL/pgSQL function "aaa" line 9 at SQL statement


What is really dynamic in your example is the query argument to
dblink_exec(connstr, query).

So first construct query := '...' so that it contains the actual literal values
you need to insert into the remote table, and then use the result in the
dblink_exec call.

Queries are interpreted exactly as they are written at the remote end, while v1
and v2 are only variables defined locally in your plpgsql function. That's why
the remote end does not know anything about such "columns".




[GENERAL] Pgadmin plugins

2011-08-18 Thread salah jubeh
Hello,

I find pgAdmin a nice piece of software, but I think it lacks some fundamental
functionality, mainly repository support, including git and CVS. Also, some
enhancements to the editor would be nice, such as auto-completion, code
ordering, coloring, etc. I have seen that pgAdmin has a menu called Plugins;
what is it for? Also, can someone recommend an open source tool like pgAdmin
that supports repositories? The community link
http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
contains many tools, but there is no cross comparison, which makes it difficult
to pick one. Also, if someone can recommend a plugin for NetBeans or Eclipse
for the same purpose, I will be grateful.


Thanks in advance 

Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Chris Travers
On Thu, Aug 18, 2011 at 3:40 AM, Dimitri Fontaine
 wrote:
> Chris Travers  writes:
>> I want an email to go out to the ordering manager when the quantity I
>> have of an item drops below the re-order point.  I also want this
>> email NOT to go out if the transaction rolls back.  (Wait, the order
>> of 5 widgets I just processed rolled back because it isn't to a
>> valid customer!  We normally only sell 5 per year anyway.  No need
>> for the email.)
>
> Just use PGQ and be done with it.  You have transactional and
> asynchronous behavior.  Typically, a trigger would produce events in the
> queue, and a separate daemon will consume the queue and send emails.
>
That actually looks quite helpful.  Thanks.

Best Wishes,
Chris Travers



Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Merlin Moncure
On Thu, Aug 18, 2011 at 5:48 AM, Dimitri Fontaine
 wrote:
> c k  writes:
>> Many users are using it and found it stable and scalable. Important is that
>> web server is external to the database and a mod_pgsql like mod_plsql is
>> used to connect web server to database. Each page is considered as a stored
>> procedure in the oracle database. I am not thinking of implementing as it is
>
> It's been around for a long time already:
>
>  http://asmith.id.au/mod_libpq.html

mod_libpq looks like it hasn't been updated in quite a while (apache
1.3 only) -- I think a node.js http server is superior in just about
every way for this case.  I 100% agree with the comments on the page
though.

merlin



Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread s...@bestmx.ru

Merlin Moncure writes:
> On Thu, Aug 18, 2011 at 5:48 AM, Dimitri Fontaine wrote:
>> c k writes:
>>> Many users are using it and found it stable and scalable. Important is that
>>> web server is external to the database and a mod_pgsql like mod_plsql is
>>> used to connect web server to database. Each page is considered as a stored
>>> procedure in the oracle database. I am not thinking of implementing as it is
>>
>> It's been around for a long time already:
>>
>>   http://asmith.id.au/mod_libpq.html
>
> mod_libpq looks like it hasn't been updated in quite a while (apache
> 1.3 only) -- I think a node.js http server is superior in just about
> every way for this case.  I 100% agree with the comments on the page
> though.
>
> merlin

i still recommend nginx




Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Dmitriy Igrishin
2011/8/18 s...@bestmx.ru:

> Merlin Moncure writes:
>> mod_libpq looks like it hasn't been updated in quite a while (apache
>> 1.3 only) -- I think a node.js http server is superior in just about
>> every way for this case.  I 100% agree with the comments on the page
>> though.
>
> i still recommend nginx

I recommend Wt:
http://www.webtoolkit.eu/
:-)


-- 
// Dmitriy.


Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread s...@bestmx.ru

Dmitriy Igrishin writes:
> 2011/8/18 s...@bestmx.ru:
>
>> i still recommend nginx
>
> I recommend Wt:
> http://www.webtoolkit.eu/
> :-)

it looks like feces
"and uses well-tested patterns of desktop GUI development"




Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Dmitriy Igrishin
2011/8/18 s...@bestmx.ru:

> Dmitriy Igrishin writes:
>> I recommend Wt:
>> http://www.webtoolkit.eu/
>> :-)
>
> it looks like feces
> "and uses well-tested patterns of desktop GUI development"

Oh oh. Such an unprofessional comment!
Well, have fun coding a Web 2.0 application with nginx + PostgreSQL :-)

-- 
// Dmitriy.


Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread s...@bestmx.ru

Dmitriy Igrishin writes:
> 2011/8/18 s...@bestmx.ru:
>
>> it looks like feces
>> "and uses well-tested patterns of desktop GUI development"
>
> Oh oh. Such an unprofessional comment!
> Well, have fun coding a Web 2.0 application with nginx + PostgreSQL :-)

who said "web 2.0"?
i've never used religious idioms in a technical talk.



[GENERAL] Suspicious Bill

2011-08-18 Thread shreeseva.it

Hi, I just signed a petition asking Prime Minister Manmohan Singh to stop the
Biotechnology Regulatory Authority of India (BRAI) Bill from being passed.
This bill will create a body which will single-handedly clear all genetically
modified (GM) crops. Public opposition made the Karnataka government ban field
trials of GM crops in the state recently. Now we need to get the national
government to stop this bill.

You should ask Prime Minister Manmohan Singh to stop the dangerous BRAI bill too:
http://www.greenpeace.in/take-action/save-your-food/stop-the-brai-bill.php

Our government is working on a disastrous recipe. It's a bill that will place
genetically modified (GM) food on our plates, without our knowledge.

The Biotechnology Regulatory Authority of India (BRAI) bill can be tabled
anytime in this monsoon session of Parliament. The bill, which is not even
public, is being pushed in a hasty and secretive fashion. According to an
earlier version leaked by the media, the bill will create a centralised,
non-transparent body which will become the sole approver for GM crops. [1]

By being secretive, the government is only creating more suspicion about the
intention of the bill. Public opposition helped stop Bt Brinjal last year. [2]
Now we need to get the national government to stop this bill and save our food.

Opposition from people against GM crops in Karnataka made the government
declare a ban on the field trials of GM crops in the state. [3] This can be
achieved again; support from lakhs of people will help put pressure on the PM
to stop this bill.

The last known draft of the bill had a clause which allowed BRAI to escape the
purview of the Right to Information. [4] If the bill is passed as is, we will
not be able to stop the genetic modification of 56 other crops in the pipeline.
Ask the PM to stop this bill now:
http://www.greenpeace.in/take-action/save-your-food/stop-the-brai-bill.php

Thanks a billion!

Kapil Mishra
Sustainable Agriculture Campaigner
Greenpeace India

Sources:
[1] A law unto itself, www.outlookindia.com, March 8, 2010
    http://www.outlookindia.com/article.aspx?264454
[2] India says no to Bt brinjal, for now, www.rediff.com, February 9, 2010
    http://business.rediff.com/report/2010/feb/09/india-says-no-to-bt-brinjal-for-now.htm
[3] No GM trials in State: Katti, Deccan Herald, July 20, 2011
    http://www.deccanherald.com/content/177877/no-gm-trials-state-katti.html
[4] Biotech Bill: Sweeping powers, glaring omissions, www.rediff.com, March 11, 2010
    http://business.rediff.com/column/2010/mar/11/guest-biotech-bill-glaring-omissions.htm


Re: [GENERAL] altering foreign key without a table scan

2011-08-18 Thread Jerry Sievers
Vincent de Phily  writes:

> Hi list,
>
> as part of a db schema update, I'd like to alter the "on update" property of 
> a 
> fkey, for example going from :
>> ALTER TABLE ONLY foo ADD CONSTRAINT foo_barid_fkey FOREIGN KEY (barid)
>> REFERENCES bar(id) ON UPDATE CASCADE ON DELETE CASCADE;
> to :
>> ALTER TABLE ONLY foo ADD CONSTRAINT foo_barid_fkey FOREIGN KEY (barid)
>> REFERENCES bar(id) ON UPDATE RESTRICT ON DELETE CASCADE;
>
> I understand I can create the new fkey and drop the old one, but this 
> requires 
> a scan of the table (to check that no existing data violates the new fkey) 
> which, on this large, heavily-updated, no-downtime table I can't really aford.
>
> The thing is, I know there is no violation by existing data, because of the 
> existing fkey. So locking and scaning the table to add the "duplicate" fkey 
> is 
> not necessary. In a sense, I'm looking for :
>> ALTER TABLE foo ALTER CONSTRAINT foo_barid_fkey ON UPDATE RESTRICT;
> I'm guessing/wishfull-thinking that some hackery with the system catalog 
> could 
> emulate that ?
>
> I'm currently using postgres 8.3 (we want to upgrade, but it's hard to 
> schedule).

Two things first...

1. I assume this is the same for 8.3.
2. Someone from -hackers is best placed to answer whether this is safe on a
   live system or might require at least a restart.

Your two catalog fields of interest are pg_constraint.confupdtype and
pg_constraint.confdeltype.

Changing those for the relevant FKs should satisfy your needs. I am not aware
of those field values being duplicated anywhere.

I strongly suggest you approach this with caution, as is standard advice
regarding any manual catalog fiddling.
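
For what it's worth, a sketch of what that would look like (untested; in
pg_constraint.confupdtype, 'c' is CASCADE and 'r' is RESTRICT, and whether this
is safe on a live 8.3 system is exactly the open question above):

BEGIN;
UPDATE pg_constraint
   SET confupdtype = 'r'              -- was 'c' (ON UPDATE CASCADE)
 WHERE conname  = 'foo_barid_fkey'
   AND conrelid = 'foo'::regclass;
COMMIT;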

HTH

>
> Thanks in advance.
>
> -- 
> Vincent de Phily
>
>
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 305.321.1144



Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Chris Travers
On Thu, Aug 18, 2011 at 4:32 AM, Sim Zacks  wrote:

> There are many differences.
> 1) If I have a database function and I copy my database to another server,
> the function still works.
> If I have an external daemon application, I not only have to copy my
> database, I also have to copy the daemon application. Then I have to build
> an init script and make sure it runs at startup. My LISTEN/NOTIFY daemon is
> a c application, so when I move my database to a server on a different
> platform, I have to recompile it.


Ok, so you have made a decision to favor performance well ahead of
flexibility.  I guess the question is what the performance cost of
writing it in Python actually is and what the flexibility cost of
writing it in C actually is.  Presumably you have already answered
this for yourself, but this strikes me as coming out of that tradeoff
rather than being inherent in the idea.

>
> 2) there is absolutely no reason you can't build redundancy into this
> system.
>
> Its not a question of whether I can or cannot build redundancy, it is a
> question of whether I have to build an entire system in order to call a
> database function from another database function. The only reason this is
> complicated is because it needs to be in its own session. That simple issue
> shouldn't force me to build: a) a daemon application, b) include redundancy
> to ensure that it is running, c) not be included in my database
> backup/restore.

Emailing IMHO isn't a database function.

> Remember, I don't want to build a _system_, I basically want an asynchronous
> trigger. On specific event call a database function in its own transaction
> space and allow the existing transaction to end.
>
> 3)  The overhead really shouldn't be bad, and if your parts are
> well-modularized, and carefully designed overhead really should be
> minimal.
>
> Any overhead that is not necessary should not be added in. It is the minor
> level of frustration that something didn't work when I migrated servers
> until the "Oh Yeah" kicked in. Then looking through all my notes to find the
> compilation instructions for my daemon because we moved from a 32 bit server
> to a 64 bit. Then trying to figure out the syntax for the init script,
> because we moved from Gentoo to Debian and it is slightly different. It
> isn't a lot of overhead but it is completely unneccessary in our situation.
> I will agree that this is entirely necessary if your application actually
> uses an external system and the database communicates through Listen/Notify.
> You have 2 systems to deal with in any case, but for me the only external
> component is having the daemon listen so it can call another function in the
> database. IOW, I don't generally deal with anything else on the server.

In general I would be opposed to allowing functions to exist outside
of transactional control.  While it is true you save some conceptual
complexity in moving everything into the database, allowing stored
proc functions to commit/start transactions would add a tremendous
amount of conceptual complexity in the database itself.  At the moment I
don't think this is generally worth it.  The beauty of the current
approach is that the transactional control works in very well-defined
ways.  This significantly saves testing and QA effort.  I would be
concerned that a capability like this would be sufficiently disruptive
to the assumptions of testing, that the costs would always be far
higher than the benefits.

Best Wishes,
Chris Travers



Re: [GENERAL] Suspicious Bill

2011-08-18 Thread Scott Ribe
Mods: FYI, this is not a one-off thing. I've seen this email on 4 other lists 
so far this morning. So some turd is spamming every list he can subscribe to.

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice







[GENERAL] call initdb as regular user

2011-08-18 Thread alexondi
Hi!
Can I call initdb with some parameters as a regular user (not root or postgres)?
Might I run into problems with replication, backup or some other subsystem?
Thank you!



Re: [GENERAL] call initdb as regular user

2011-08-18 Thread Rodrigo Gonzalez

On 08/18/2011 03:20 PM, alexondi wrote:

Hi!
Can I call initdb with some params as regular user (not root or postgres)?

postgres is a normal user, so no problem at all.
initdb cannot be run as root.

May I have some problem with replication, backup or with some other
subsystem?
Thank you!



Re: [GENERAL] call initdb as regular user

2011-08-18 Thread Peter Eisentraut
On tor, 2011-08-18 at 11:20 -0700, alexondi wrote:
> Can I call initdb with some params as regular user (not root or postgres)?

Sure.





Re: [GENERAL] call initdb as regular user

2011-08-18 Thread John Cheng
Sure you can. The initdb command just sets up the directory you
specify, and that's all it does. The files in the directory will be
created with that user's permissions, so the directory you specify must
be writable by that "regular user".

man page - http://linux.die.net/man/1/initdb

"Creating a database cluster consists of creating the directories in
which the database data will live..."

Be warned: the files are created with that user's permissions, which
means you need to start the postgres process (i.e., pg_ctl or
postmaster) as the SAME user. If you run initdb as a regular user and
then try to start the database as "postgres", the attempt might fail
with permission-denied errors.

So you can do it, but it might not be what you are trying to do. Any
reason why you want to use a user that is not "postgres" (assuming
postgres is a system user you created specifically for running the
PostgreSQL database)?
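
A sketch of the whole sequence run under a single ordinary account (the paths
here are only examples):

# all run as the same unprivileged user that will own the data files
initdb -D /home/appuser/pgdata
pg_ctl -D /home/appuser/pgdata -l /home/appuser/pgdata/server.log start
psql -d postgres -c 'SELECT version();'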

On Thu, Aug 18, 2011 at 11:20 AM, alexondi  wrote:
> Hi!
> Can I call initdb with some params as regular user (not root or postgres)?
> May I have some problem with replication, backup or with some other
> subsystem?
> Thank you!
>
> --
> View this message in context: 
> http://postgresql.1045698.n5.nabble.com/call-initdb-as-regular-user-tp4712980p4712980.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 
---
John L Cheng



Re: [GENERAL] Syncing Data to Production DB Server

2011-08-18 Thread Ben Chobot
On Aug 18, 2011, at 5:36 AM, Adarsh Sharma wrote:

> Dear All,
> 
> I want some views on the below requirements :
> 
> 1. I have a Postgres DB server with 25 GB database. It has more than 110 
> tables.
>   I am using Postgresql 8.3 on a CentOs.
> 2. I have another system laptop that contains the same database but it is for 
> testing purposes.
> 
> What I want ? If someone inserts some data in the laptop ( same database and 
> tables as Postgres DB server ) , that data would synk to my Production 
> Database server.
> 
> I thought the replication would help but it is not necessary that the laptop 
> is connected to LAN always and if by mistake issue drop command, all goes in 
> vain .
> 
> PLease guide me some ways or solutions .

So to rephrase (and simplify):

- you have a main database
- you have another database which is a superset of the main database
- you want both databases to be able to accept inserts, deletes, etc.
- you want to replicate inserts (only?) on the overlapping tables of the second 
database back to the main database (or do you want bi-directional replication?)
- these databases will often not be able to talk to each other


It sounds like Bucardo *might* be a solution for you. It allows for
multi-master operation, which is what you seem to be looking for.

But if you want, say, inserts only to be replicated, and not deletes, you 
probably need to look into writing your own replication system in your 
application. If you keep the requirements strict enough it's really not that 
hard. 


Re: [GENERAL] call initdb as regular user

2011-08-18 Thread alexondi
I have a single-purpose system where the user can interact only with special
software (the computer runs only this software (a daemon and a GUI), PostgreSQL
and other system daemons). And I don't want to change the user when I call
psql, pg_ctl, rsync and other tools.

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/call-initdb-as-regular-user-tp4712980p4713090.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using Postgresql as application server

2011-08-18 Thread Dmitriy Igrishin
2011/8/18 s...@bestmx.ru:

> Dmitriy Igrishin writes:
>> Oh oh. Such an unprofessional comment!
>> Well, have fun coding a Web 2.0 application with nginx + PostgreSQL :-)
>
> who said "web 2.0"?
> i've never used religious idioms in a technical talk.

I see. You're using only "nginx" :-)



-- 
// Dmitriy.


[GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Dmitry Koterov
Hello.

Is there any way (or hack) to dump the whole database but exclude the DATA of
one table from the dump? (The DDL of the table should not be excluded: after
restoring, the excluded table should simply look "empty".)

I see the -T switch of pg_dump, but it seems -T excludes the data AND the DDL
of the table (and possibly all objects which depend on this table?), so after
restoration the database structure sometimes ends up broken.


Re: [GENERAL] call initdb as regular user

2011-08-18 Thread John Cheng
On Thu, Aug 18, 2011 at 11:59 AM, alexondi  wrote:
> I have some single-purpose system and user can interact only with special
> software (on computer would start only this software{daemon and gui},
> postgresql and  other system daemons). And I don't wont change user when I
> call psql, pg_ctl, rsync and other stuff.
>

In that case, as long as all the processes (your daemon, gui, and
PostgreSQL, rsync) are started under the same user account, you should
be fine.

-- 
---
John L Cheng



Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Adrian Klaver
On Thursday, August 18, 2011 12:49:45 pm Dmitry Koterov wrote:
> Hello.
> 
> Is there any way (or hack) to dump the whole database, but to exclude the
> DATA from a table within this dump? (DDL of the table should not be
> excluded: after restoring the data the excluded table should look "empty".)
> 
> I see -T switch of pg_dump, but seems -T excludes the data AND the DDL of a
> table (and possibly all objects which depend on this table?), so after
> restoration the database structure becomes broken sometimes.

One way I know you can do it is to exclude the data when restoring. This
requires that you use the pg_dump custom format. For full details see here:

http://www.postgresql.org/docs/9.0/interactive/app-pgrestore.html

Short version: use pg_restore -l to generate a listing from the dump file,
comment out the line that copies the data into that table, and then use
pg_restore ... -L to restore only the items that are not commented out.

Another way is to do it using the -T switch for the 'complete' db dump, then do
a separate dump using -s (schema only) and -t some_table and restore that on
its own.
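
A sketch of the first approach (database, file and table names are only
examples):

pg_dump -Fc -f db.dump mydb          # custom-format dump of everything
pg_restore -l db.dump > db.list      # text listing of the archive contents
# edit db.list and comment out the "TABLE DATA ... big_table" entry, then:
pg_restore -L db.list -d newdb db.dump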

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Dmitry Koterov
1. I need to shorten the pg_dump output (for backup purposes), so pg_restore is
too late for that...

2. If I use "pg_dump -s" separately, the data may not load (or may load too
slowly) after that, because all the indexes/foreign keys are already there. Is
there a way to split "pg_dump -s" into two parts: the first part dumps
everything excluding indexes, checks and foreign keys, and the second part only
those? I'm not sure it is possible at all, because I think pg_dump may not dump
the data between these two blocks of DDL...



On Fri, Aug 19, 2011 at 12:04 AM, Adrian Klaver wrote:

> On Thursday, August 18, 2011 12:49:45 pm Dmitry Koterov wrote:
> > Hello.
> >
> > Is there any way (or hack) to dump the whole database, but to exclude the
> > DATA from a table within this dump? (DDL of the table should not be
> > excluded: after restoring the data the excluded table should look
> "empty".)
> >
> > I see -T switch of pg_dump, but seems -T excludes the data AND the DDL of
> a
> > table (and possibly all objects which depend on this table?), so after
> > restoration the database structure becomes broken sometimes.
>
> One way I know you can do it, is exclude the data from restoring. This
> requires
> you use the pg_dump custom format. For full details see here:
>
> http://www.postgresql.org/docs/9.0/interactive/app-pgrestore.html
>
> Short version use pg_restore -l to generate a listing from the dump file.
> Comment out the line that copys the data into that table.
> Use pg_restore ... -L to have pg_restore those items not commented out.
>
> Another way is do it using the -T switch for the 'complete' db dump. Then
> do a
> separate dump using -s (schema only) and -t some_table and then restore it
> on
> its own.
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Retrieve number of rows from COPY command inside a plpgsql function

2011-08-18 Thread Erwin Brandstetter
Aloha!

I am trying to retrieve the number of rows copied by a COPY command
inside a plpgsql function.

The docs tell me that the "command tag" holds this information. But
how do I access the command tag from within a plpgsql function?
http://www.postgresql.org/docs/9.0/interactive/sql-copy.html

I am COPYing the huge result of a complex SQL query to a file and
would like to output the number of rows. I could write to a temp table,
use
GET DIAGNOSTICS integer_var = ROW_COUNT;
and COPY from there, but that takes twice as long. GET DIAGNOSTICS
does not currently work for COPY.

Am I missing something?

TIA
Erwin Brandstetter



Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Adrian Klaver
On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote:
> 1. I need to shorten pg_dump results (for backup purposes), so pg_restore
> is too late for that..

> 
> 2. If I use "pg_dump -s" separately, the data may not load (or load to
> slow) after that, because all indices/foreign keys are already there. Is
> there a way to split "pg_dump -s" into 2 parts: the first part dumps
> everything excluding indices, checks and foreign keys, and the second part
> - only them? Not sure it is possible at all, because I think pg_dump may
> dump data not between these two blocks of DDLs...
> 

I am not sure I follow. Are you saying you eventually restore the data for that 
table as a separate step? If so, from the previous link, this might help:

"
--disable-triggers

This option is only relevant when performing a data-only restore. It 
instructs pg_restore to execute commands to temporarily disable triggers on the 
target tables while the data is reloaded. Use this if you have referential 
integrity checks or other triggers on the tables that you do not want to invoke 
during data reload.

Presently, the commands emitted for --disable-triggers must be done as 
superuser. So, you should also specify a superuser name with -S, or preferably 
run pg_restore as a PostgreSQL superuser. 
"



-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Dmitry Koterov
Thanks, "pg_dump --data-only --disable-triggers" is the king.

(Unfortunately it is not supported by pg_dumpall, but it is entirely another
story. :-)


On Fri, Aug 19, 2011 at 12:36 AM, Adrian Klaver wrote:

> On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote:
> > 1. I need to shorten pg_dump results (for backup purposes), so pg_restore
> > is too late for that..
>
> >
> > 2. If I use "pg_dump -s" separately, the data may not load (or load to
> > slow) after that, because all indices/foreign keys are already there. Is
> > there a way to split "pg_dump -s" into 2 parts: the first part dumps
> > everything excluding indices, checks and foreign keys, and the second
> part
> > - only them? Not sure it is possible at all, because I think pg_dump may
> > dump data not between these two blocks of DDLs...
> >
>
> I am not sure I follow. Are you saying you eventually restore the data for
> that
> table as a separate step? If so, from the previous link, this might help:
>
> "
> --disable-triggers
>
>This option is only relevant when performing a data-only restore. It
> instructs pg_restore to execute commands to temporarily disable triggers on
> the
> target tables while the data is reloaded. Use this if you have referential
> integrity checks or other triggers on the tables that you do not want to
> invoke
> during data reload.
>
>Presently, the commands emitted for --disable-triggers must be done as
> superuser. So, you should also specify a superuser name with -S, or
> preferably
> run pg_restore as a PostgreSQL superuser.
> "
>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Pgadmin plugins

2011-08-18 Thread Guillaume Lelarge
On Thu, 2011-08-18 at 08:22 -0700, salah jubeh wrote:
> Hello,
> 
> I find  pgadmin  a nice software,  but I think it lacks some fundamental
> functionalities mainly  repositories support including git and cvs.

I also do think pgAdmin lacks some nice features, but repository
support, I don't even understand what you want :)

>  Also, some enhancement on the editor would be nice such as auto completion

Already in it.

> , code ordering,

Which means?

>  coloring

Already has it.

> , etc. I have seen that pgadmin have a menu called plugins, what is it for ?

It is for launching external tools. The most obvious ones are psql or pg_dump,
but it can launch any tool you can imagine. IIRC, the PostGIS guys did a nice
plugin (a shapefile loader, I guess).


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




[GENERAL] Problem creating GIN index on multiple weighted columns

2011-08-18 Thread Ryan Fugger
I'm using 8.4.8, attempting to run the following command:

=> create index profile_search_index on profile_profile using
gin(setweight(to_tsvector('english', name), 'A') ||
setweight(to_tsvector('english', description), 'B'));

I get this error:

ERROR:  syntax error at or near "||"
LINE 1: ... gin(setweight(to_tsvector('english', name), 'A') || setweig...
 ^

Is this a bug, or a known limitation, or is my syntax just wrong?  The
following works fine:

select setweight(to_tsvector('english', name), 'A') ||
setweight(to_tsvector('english', description), 'B') from
profile_profile;

I would prefer not to store the precomputed tsvector in a separate
column if I can avoid it, although I'll do that if I can't get this to
work.  Thanks for any help.

Ryan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Suspicious Bill

2011-08-18 Thread John R Pierce

On 08/18/11 10:16 AM, Scott Ribe wrote:

Mods: FYI, this is not a one-off thing. I've seen this email on 4 other lists 
so far this morning. So some turd is spamming every list he can subscribe to.


my guess is, he handed access to his address book over to the 
organization running that campaign, and they spammed everyone on the 
list for him, much the same way as linkedin does. same net effect, 
albeit different intent.






--
john r pierce                          N 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem creating GIN index on multiple weighted columns

2011-08-18 Thread Tom Lane
Ryan Fugger  writes:
> I'm using 8.4.8, attempting to run the following command:
> => create index profile_search_index on profile_profile using
> gin(setweight(to_tsvector('english', name), 'A') ||
> setweight(to_tsvector('english', description), 'B'));

> I get this error:

> ERROR:  syntax error at or near "||"
> LINE 1: ... gin(setweight(to_tsvector('english', name), 'A') || setweig...
>  ^

> Is this a bug, or a known limitation, or is my syntax just wrong?

The latter.  You need an extra pair of parentheses around any index
expression that's more complicated than a single function call.
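
For example, the statement from the original post should work once wrapped in an
extra pair of parentheses:

    create index profile_search_index on profile_profile using
    gin((setweight(to_tsvector('english', name), 'A') ||
         setweight(to_tsvector('english', description), 'B')));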

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem with 8.3.14 Windows binaries

2011-08-18 Thread Pete Wall
Hello,
I am upgrading the version of PostgreSQL used in our product from 8.3.9 to 
8.3.15 to close some of the vulnerabilities.  After upgrading, I found that it 
wouldn't work on our Windows 2003 boxes, but it was fine on the 2008 ones.  I 
downloaded every binary release from 8.3.9 to 8.3.15 (from here: 
http://www.postgresql.org/ftp/binary/) and found that the problem started with 
8.3.14.

What happens is when I try to launch any of the binaries, I get this message on 
the CLI:
C:\tmp\postgresql-8.3.14-1-binaries-no-installer\pgsql\bin>pg_ctl.exe
The system cannot execute the specified program.

When I double-click it in Explorer, I get a dialog box with this message:
"This application has failed to start because the application configuration 
is incorrect.  Reinstalling the application may fix this problem."

I then opened it up in Dependency Walker and got this message:
Error: The Side-by-Side configuration information for 
"c:\tmp\postgresql-8.3.14-1-binaries-no-installer\pgsql\bin\PG_CTL.EXE" 
contains errors. This application has failed to start because the application 
configuration is incorrect. Reinstalling the application may fix this problem 
(14001).
Error: The Side-by-Side configuration information for 
"c:\tmp\postgresql-8.3.14-1-binaries-no-installer\pgsql\bin\LIBPQ.DLL" contains 
errors. This application has failed to start because the application 
configuration is incorrect. Reinstalling the application may fix this problem 
(14001).
It was also not detecting the MSVCR80.DLL, while 8.3.13 found it automatically.

Here's a screenshot of Dependency Walker comparing the new with the old: 
http://i.imgur.com/FxNkG.jpg

Can someone help me figure out what's missing?  I found "Improve build support 
for Windows version" here 
http://archives.postgresql.org/pgsql-announce/2011-02/msg0.php.  Maybe that 
had something to do with it.

Thanks,
-Pete


Re: [GENERAL] Pgadmin plugins

2011-08-18 Thread salah jubeh


Hello,

I need a repository plug-in in order to version-control my DDL; currently I am using text 
files to create schemas, procedures, etc., and it would be great if I could keep 
track of how the schema changes and even protect against errors such 
as accidental file deletion. 

Regarding code ordering, I mean indenting and reordering the SQL code to make it 
more readable. This can be found in all integrated development environments for 
C++ and Java.  

Regarding the plug-ins, you are right; I am using an old version, so the 
psql console is not there. But is there a way to enable a repository plug-in and 
get the text files into the pgAdmin editor?

Kind regards 



From: Guillaume Lelarge 
To: salah jubeh 
Cc: pgsql 
Sent: Thursday, August 18, 2011 10:48 PM
Subject: Re: [GENERAL] Pgadmin plugins

On Thu, 2011-08-18 at 08:22 -0700, salah jubeh wrote:
> Hello,
> 
> I find  pgadmin  a nice software,  but I think it lacks some fundamental
> functionalities mainly  repositories support including git and cvs.

I also do think pgAdmin lacks some nice features, but repository
support, I don't even understand what you want :)

>  Also, some enhancement on the editor would be nice such as auto completion

Already in it.

> , code ordering,

Which means?

>  coloring

Already has it.

> , etc. I have seen that pgadmin have a menu called plugins, what is it for ?

To launch tools. The most obvious ones are psql or pg_dump, but every
tool you could imagine. IIRC, the PostGIS guys did a nice plugin (a
shapefile loader, I guess).


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com

Re: [GENERAL] Dump a database excluding one table DATA?

2011-08-18 Thread Dmitry Koterov
Mmm, --disable-triggers is surely not enough - we also have RULEs and (much
worse) INDEXes.

If we create all the indices and then restore all the data, it is MUCH SLOWER than
restoring the data first and then creating all the indices.
So I think there is really no work-around...

I propose including an option in pg_dump to skip the data of several tables. :-)
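
For what it is worth, one workaround on the restore side (it does not make the
dump itself any smaller) is to dump in custom format and edit the TOC so that a
particular table's data is skipped; a custom-format restore also loads data
before it builds indexes. A sketch, again with placeholder names:

    pg_dump -Fc -U postgres mydb > mydb.dump        # custom-format dump
    pg_restore -l mydb.dump > toc.list              # write out the table of contents
    # delete or comment out the "TABLE DATA ... big_table" line in toc.list
    pg_restore -L toc.list -d mydb_copy mydb.dump   # restore everything else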



On Fri, Aug 19, 2011 at 12:44 AM, Dmitry Koterov  wrote:

> Thanks, "pg_dump --data-only --disable-triggers" is the king.
>
> (Unfortunately it is not supported by pg_dumpall, but it is entirely
> another story. :-)
>
>
> On Fri, Aug 19, 2011 at 12:36 AM, Adrian Klaver 
> wrote:
>
>> On Thursday, August 18, 2011 1:23:25 pm Dmitry Koterov wrote:
>> > 1. I need to shorten pg_dump results (for backup purposes), so
>> pg_restore
>> > is too late for that..
>>
>> >
>> > 2. If I use "pg_dump -s" separately, the data may not load (or load to
>> > slow) after that, because all indices/foreign keys are already there. Is
>> > there a way to split "pg_dump -s" into 2 parts: the first part dumps
>> > everything excluding indices, checks and foreign keys, and the second
>> part
>> > - only them? Not sure it is possible at all, because I think pg_dump may
>> > dump data not between these two blocks of DDLs...
>> >
>>
>> I am not sure I follow. Are you saying you eventually restore the data for
>> that
>> table as a separate step? If so, from the previous link, this might help:
>>
>> "
>> --disable-triggers
>>
>>This option is only relevant when performing a data-only restore. It
>> instructs pg_restore to execute commands to temporarily disable triggers
>> on the
>> target tables while the data is reloaded. Use this if you have referential
>> integrity checks or other triggers on the tables that you do not want to
>> invoke
>> during data reload.
>>
>>Presently, the commands emitted for --disable-triggers must be done as
>> superuser. So, you should also specify a superuser name with -S, or
>> preferably
>> run pg_restore as a PostgreSQL superuser.
>> "
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@gmail.com
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
>>
>
>


Re: [GENERAL] Syncing Data to Production DB Server

2011-08-18 Thread Adarsh Sharma

Ben Chobot wrote:

On Aug 18, 2011, at 5:36 AM, Adarsh Sharma wrote:

  

Dear All,

I want some views on the below requirements :

1. I have a Postgres DB server with 25 GB database. It has more than 110 tables.
  I am using Postgresql 8.3 on a CentOs.
2. I have another system laptop that contains the same database but it is for 
testing purposes.

What I want ? If someone inserts some data in the laptop ( same database and 
tables as Postgres DB server ) , that data would synk to my Production Database 
server.

I thought the replication would help but it is not necessary that the laptop is 
connected to LAN always and if by mistake issue drop command, all goes in vain .

PLease guide me some ways or solutions .


Thanks Ben,
  



So to rephrase (and simplify):

- you have a main database
- you have another database which is a superset of the main database
  

No, both databases are the same, but on different systems.


- you want both databases to be able to accept inserts, deletes, etc.
  
For example: one is the production server and the other is a simple demo machine. 
If someone inserts some data on the demo machine, I want that data to be 
synced to my production server.
Right now, I take a complete backup of the database from the demo machine and restore 
it on the production server, which is a very unusual way of doing it.



- you want to replicate inserts (only?) on the overlapping tables of the second 
database back to the main database (or do you want bi-directional replication?)
- these databases will often not be able to talk to each other


It sounds like Bucardo *might* be a solution for you. It allows for 
multi-master operation, which is what you seem to be looking for.

But if you want, say, inserts only to be replicated, and not deletes, you probably need to look into writing your own replication system in your application. If you keep the requirements strict enough it's really not that hard. 
I want a simple technique through which I update my production server 
easily.



Thanks
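
As a rough illustration of the "write your own" route Ben mentions, and only for
inserts: a trigger on the demo machine can log new rows into a queue table, and a
small job can periodically replay that queue against the production server. All
names below are placeholders, and the sketch assumes the tracked table has an
integer id column:

    -- on the demo machine
    CREATE TABLE sync_queue (
        id         BIGSERIAL   PRIMARY KEY,
        tbl        TEXT        NOT NULL,
        pk         INTEGER     NOT NULL,
        created_at TIMESTAMPTZ NOT NULL DEFAULT now()
    );

    CREATE OR REPLACE FUNCTION log_insert() RETURNS trigger AS $$
    BEGIN
        -- remember which row was inserted and in which table
        INSERT INTO sync_queue (tbl, pk) VALUES (TG_TABLE_NAME, NEW.id);
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER some_table_sync_log
        AFTER INSERT ON some_table
        FOR EACH ROW EXECUTE PROCEDURE log_insert();

A periodic script would then read sync_queue, copy the referenced rows to the
production server, and delete the queue entries it has shipped.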


Re: [GENERAL] Syncing Data to Production DB Server

2011-08-18 Thread John R Pierce

On 08/18/11 9:03 PM, Adarsh Sharma wrote:
I want a simple technique through which I update my production server 
easily.


what if there have been data changes on the production server and different 
changes on the demo laptop?  how do you plan on reconciling those 
differences?




--
john r pierce                          N 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Syncing Data to Production DB Server

2011-08-18 Thread Adarsh Sharma

Hi Michael,

I think you misunderstood my problem.
I have a demo system, and the data is inserted into this system.

I simply want this newly inserted data to be synced to my production server.
Taking a pg_dump daily is cumbersome.

I researched and found some proprietary solutions, but I think there may be 
other solutions too.



Thanks
Michael Nolan wrote:



On Thu, Aug 18, 2011 at 11:03 PM, Adarsh Sharma <adarsh.sha...@orkash.com> wrote:


I want a simple technique through which I update my production
server easily.


What I do with a similar sized database is do a pg_dumpall on the 
production server and
restore it to the laptop.  Because the production server is around 950 
miles from my office, it usually takes me longer to copy the dumpall 
file across the Internet than it does to restore it on the laptop.


I do this about twice a month.

I find having a test database that is a week or two out of date 
doesn't affect most development work.  In fact, being able to restore 
the test database to a known state repeatedly has come in handy for 
testing some scenarios.  Your situation may be different.  
--

Mike Nolan
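
The refresh workflow Mike Nolan describes above amounts to something like this
sketch, going from production to the test machine (names are placeholders):

    # on the production server
    pg_dumpall -U postgres > cluster.sql

    # copy cluster.sql to the laptop, then restore it there
    psql -U postgres -d postgres -f cluster.sql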




Re: [GENERAL] Syncing Data to Production DB Server

2011-08-18 Thread c k
You can use Talend or Navicat for syncing the data as per your needs without
much complexity in writing a data sync application.
You have to purchase a license for Navicat, but you can use Talend for free, and it
also supports many other database systems.

Chaitanya Kulkarni

On Fri, Aug 19, 2011 at 10:18 AM, Adarsh Sharma wrote:

> **
> Hi Michael,
>
> I think you misunderstood my problem.
> I have a demo system, and the data is inserted into this system.
>
> I simply want this newly inserted data to be synced to my production server.
> Taking a pg_dump daily is cumbersome.
>
> I researched and found some proprietary solutions, but I think there may be
> other solutions too.
>
>
> Thanks
> Michael Nolan wrote:
>
>
>
> On Thu, Aug 18, 2011 at 11:03 PM, Adarsh Sharma 
> wrote:
>
>>   I want a simple technique through which I update my production server
>> easily.
>>
>
> What I do with a similar sized database is do a pg_dumpall on the
> production server and
> restore it to the laptop.  Because the production server is around 950
> miles from my office, it usually takes me longer to copy the dumpall file
> across the Internet than it does to restore it on the laptop.
>
> I do this about twice a month.
>
> I find having a test database that is a week or two out of date doesn't
> affect most development work.  In fact, being able to restore the test
> database to a known state repeatedly has come in handy for testing some
> scenarios.  Your situation may be different.
> --
> Mike Nolan
>
>
>


Re: [GENERAL] Syncing Data to Production DB Server

2011-08-18 Thread Adarsh Sharma



I have used the free version of Navicat many times. As you rightly said, we have to 
purchase a license for data synchronization. Also, I am not able to find 
Talend for Linux.

Does it work only on Windows? I found one component, Talend MDM, for Linux.
Can it satisfy my requirements?


Thanks


c k wrote:
You can use Talend or Navicat for syncing the data as per your needs 
without much complexity in writing a data sync application.
You have to purchase a license for Navicat, but you can use Talend for free, 
and it also supports many other database systems.


Chaitanya Kulkarni

On Fri, Aug 19, 2011 at 10:18 AM, Adarsh Sharma <adarsh.sha...@orkash.com> wrote:


Hi Michael,

I think you misunderstood my problem.
I have a demo system, and the data is inserted into this system.

I simply want this newly inserted data to be synced to my production
server.
Taking a pg_dump daily is cumbersome.

I researched and found some proprietary solutions, but I think there
may be other solutions too.


Thanks
Michael Nolan wrote:



On Thu, Aug 18, 2011 at 11:03 PM, Adarsh Sharma <adarsh.sha...@orkash.com> wrote:

I want a simple technique through which I update my
production server easily.


What I do with a similar sized database is do a pg_dumpall on the
production server and
restore it to the laptop.  Because the production server is
around 950 miles from my office, it usually takes me longer to
copy the dumpall file across the Internet than it does to restore
it on the laptop.

I do this about twice a month.

I find having a test database that is a week or two out of date
doesn't affect most development work.  In fact, being able to
restore the test database to a known state repeatedly has come in
handy for testing some scenarios.  Your situation may be
different.  
--

Mike Nolan







Re: [GENERAL] Pgadmin plugins

2011-08-18 Thread Guillaume Lelarge
On Thu, 2011-08-18 at 15:24 -0700, salah jubeh wrote:
> 
> Hello,
> 
> I need a repository plug-in in order to version-control my DDL; currently I am
> using text files to create schemas, procedures, etc., and it would be great if
> I could keep track of how the schema changes and even protect against errors
> such as accidental file deletion.
> 

I don't think it belongs in pgAdmin. You can use any of the usual tools
to do it for you, like TortoiseCVS and TortoiseGit on Windows.

> Regarding code ordering, I mean indenting and reordering the SQL code to make
> it more readable. This can be found in all integrated development environments
> for C++ and Java.
> 

This is something I've been asked for before. It's so hard that I didn't even try :)

> Regarding the plug-ins, you are right; I am using an old version, so the psql
> console is not there. But is there a way to enable a repository plug-in and get
> the text files into the pgAdmin editor?
> 

I don't think so. The only thing you can do is create some plugins
with a specific action in each: for example, one plugin that goes into your
repository and launches "git pull", and another that goes into your repository
and launches "git commit". I think that will be harder to configure and use
than one of the Tortoise tools.
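
For the version-control part itself, plain git on the directory of DDL scripts is
usually enough; a minimal sketch, with placeholder file names:

    cd /path/to/ddl-scripts
    git init
    git add schemas.sql procedures.sql
    git commit -m "initial DDL snapshot"

    # after editing the files (in pgAdmin's query tool or anywhere else)
    git diff                        # review what changed
    git commit -am "alter a table"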

And, please, don't top-post.


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general