Re: [GENERAL] conditional IF statements in postgresql

2014-07-06 Thread David G Johnston
madhu_d wrote
> Hi,
> 
> I am writing a C program to access a PostgreSQL database, where
> 
> I add a column if it doesn't exist in the table,
> 
> or update the column, if the column already exists.
> 
> Please suggest how to work with the conditional statements.
> 
> Thanks!
> 
> N.B. I wrote the following:
> 
> res = PQexec(conn, "IF COL_LENGTH('protein_sequence','comment') IS NULL");
> PQclear(res);
> if(res)
>  {
>  res = PQexec(conn, "ALTER TABLE protein_sequence ADD comment
> VARCHAR(500)");
>  PQclear(res);
>  }
>  else
>  {
>  res = PQexec(conn, "UPDATE TABLE protein_sequence ADD comment
> VARCHAR(500)");
>   PQclear(res);
>  }
> 
> Is the code logically correct??

Not by any logic that I find recognizable.  It is also absolutely not
syntactically correct.

I have no clue why you think the updating of the column is conditional.  I
can understand needing to add a missing column before you can effect an
update but that can and should be independent of the need to update the
column.

Neither "IF" nor "UPDATE TABLE ... ADD" are valid commands that you can
issue directly via PQExec.

The only valid commands are listed here:

http://www.postgresql.org/docs/9.3/interactive/sql-commands.html

Anything else, conditionals included, must be executed within the context
of a DO command or a user-defined function.  In particular you should see
whether pl/pgsql can accomplish what you need.
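
For example -- a minimal sketch only, reusing the table and column names
from your snippet -- a DO block can check the catalog and add the column
only when it is missing (PostgreSQL has no COL_LENGTH(); that is SQL
Server):

  DO $$
  BEGIN
      -- Add the column only if it is not already present.
      IF NOT EXISTS (
          SELECT 1
          FROM information_schema.columns
          WHERE table_name = 'protein_sequence'
            AND column_name = 'comment'
      ) THEN
          ALTER TABLE protein_sequence ADD COLUMN comment VARCHAR(500);
      END IF;
  END
  $$;

The whole DO statement can be sent as a single string through PQexec; once
the column is guaranteed to exist, a plain UPDATE (not "UPDATE TABLE ...
ADD") fills in its values.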

David J.








Re: [GENERAL] pg_dump slower than pg_restore

2014-07-06 Thread David Wall


On 7/6/2014 9:06 AM, Tom Lane wrote:

> David Wall  writes:
>
> There's one row in pg_largeobject_metadata per large object.  The rows in
> pg_largeobject represent 2KB "pages" of large objects (so it looks like
> your large objects are averaging only 8KB-10KB apiece).  The "metadata"
> table was added in 9.0 to carry ownership and access permission data for
> each large object.


Thanks for that insight.

That metadata table is what first got me when I upgraded from 8.3 to 9.2 
and hit all sorts of LO permission errors.  I have found that I get the 
same sort of issue when I migrate from one system to another, presumably 
because the id of the owner has changed, even though I use the same name 
each time.  I've taken to doing the large object permission assignment 
after every restore "just to be safe," but it has the drawback that I 
often have to set max_locks_per_transaction to a very high number (4) for 
the restore, and then I comment it back out and restart once it's done.  
That number is still far below the number of LOs, so I'm not sure what an 
optimal number is, but I think at 2 I ran out during the re-permissioning 
of LOs.


It could be that when I restore, the objects take on the permission of 
the DB admin user (i.e. postgres) since it has full permissions for 
creating everything.  But I'd prefer that the objects all take on the 
ownership of the DB app user, which of course has more limited 
permissions, but otherwise is the user that does all of the 
inserts/updates/deletes/selects.  I'm not sure whether a user created in a 
new database with the same name ends up with the same id or not.
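
Roughly, that reassignment can be done with something like the sketch 
below (app_user is just a placeholder for the application role).  Note 
that every ALTER inside the single DO transaction takes a lock-table 
entry, which is exactly what pushes max_locks_per_transaction up:

  DO $$
  DECLARE
      lo_oid oid;
  BEGIN
      -- Hand every large object to the application role after a restore.
      FOR lo_oid IN SELECT oid FROM pg_largeobject_metadata LOOP
          EXECUTE format('ALTER LARGE OBJECT %s OWNER TO app_user', lo_oid);
      END LOOP;
  END
  $$;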



> I think this report confirms something we'd worried about during 9.0
> development, which was whether pg_dump would have issues with
> sufficiently many large objects.  At the time we'd taught it to handle LOs
> as if they were full-fledged database objects, since that was the easiest
> way to piggyback on its existing machinery for handling ownership and
> permissions; but that's rather expensive for objects that don't really
> need all the trappings of, eg, dependency tracking.  We'd done some
> measurements that seemed to indicate that the overhead wasn't awful for
> medium-size numbers of large objects, but I'm not sure we tried it for
> millions of 'em.
>
> I guess the good news is that it's only being a bit slow for you and not
> falling over completely.  Still, it seems like some more work is indicated
> in this area.

Yes, it takes 3 hours to do the backup, which is generally okay.  It was 
just surprising that I could restore in 2 hours.


David





[GENERAL] conditional IF statements in postgresql

2014-07-06 Thread Madhurima Das
Hi,

I am writing a C program to access a PostgreSQL database, where

I add a column if it doesn't exist in the table,

or update the column, if the column already exists.

Please suggest how to work with the conditional statements.

Thanks!

N.B. I wrote the following:

res = PQexec(conn, "IF COL_LENGTH('protein_sequence','comment') IS NULL");
PQclear(res);
if(res)
 {
 res = PQexec(conn, "ALTER TABLE protein_sequence ADD comment
VARCHAR(500)");
 PQclear(res);
 }
 else
 {
 res = PQexec(conn, "UPDATE TABLE protein_sequence ADD comment
VARCHAR(500)");
  PQclear(res);
 }

Is the code logically correct??


Re: [GENERAL] pg_dump slower than pg_restore

2014-07-06 Thread Tom Lane
David Wall  writes:
>>> There are only 32 tables, no functions, but mostly large objects. Not
>>> sure how to know about the LOs, but a quick check from the table sizes I
>>> estimate at only 2GB, so 16GB could be LOs.  There are 7,528,803 entries
>>> in pg_catalog.pg_largeobject.

>> Hmm ... how many rows in pg_largeobject_metadata?

> pg_largeobject_metadata reports 1,656,417 rows.

> By the way, what is pg_largeobject_metadata vs. pg_largeobject since the 
> counts are so different?

There's one row in pg_largeobject_metadata per large object.  The rows in
pg_largeobject represent 2KB "pages" of large objects (so it looks like
your large objects are averaging only 8KB-10KB apiece).  The "metadata"
table was added in 9.0 to carry ownership and access permission data for
each large object.
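
(A rough way to check that arithmetic on a live database: the sketch below 
counts metadata rows and 2KB pages.  pg_largeobject is not publicly 
readable since 9.0, so it generally has to be run as a superuser, and 
because pages can be partially filled the average is really an upper 
bound.)

  SELECT
      (SELECT count(*) FROM pg_largeobject_metadata) AS lo_count,
      (SELECT count(*) FROM pg_largeobject)          AS lo_pages,
      round(2048.0 * (SELECT count(*) FROM pg_largeobject)
                   / NULLIF((SELECT count(*) FROM pg_largeobject_metadata), 0))
          AS approx_avg_bytes;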

I think this report confirms something we'd worried about during 9.0
development, which was whether pg_dump would have issues with
sufficiently many large objects.  At the time we'd taught it to handle LOs
as if they were full-fledged database objects, since that was the easiest
way to piggyback on its existing machinery for handling ownership and
permissions; but that's rather expensive for objects that don't really
need all the trappings of, eg, dependency tracking.  We'd done some
measurements that seemed to indicate that the overhead wasn't awful for
medium-size numbers of large objects, but I'm not sure we tried it for
millions of 'em.

I guess the good news is that it's only being a bit slow for you and not
falling over completely.  Still, it seems like some more work is indicated
in this area.

regards, tom lane




Re: [GENERAL] breakpoints in eclipse using postgresql

2014-07-06 Thread Adrian Klaver

On 07/06/2014 05:44 AM, Ravi Kiran wrote:

> hi,
>
> I am using Helios Eclipse for debugging my code in PostgreSQL.
>
> My aim is to understand how PostgreSQL uses join algorithms during a join
> query, so I started to debug *nodenestloop.c*, which is in the Executor
> folder.
>
> I set breakpoints in that file, but whenever I try to debug it, control
> goes to main.c and never comes back.  How do I constrain the control only
> to that particular file (nodenestloop.c)?
>
> Below are the fields I set in the Debug configurations of Helios Eclipse.
>
> *C/C++ Application - src/backend/postgres* and *project - pgsql*
>
> I followed the steps given in the following link for running the program.
>
> https://wiki.postgresql.org/wiki/Working_with_Eclipse#
>
> I even unchecked the field "*Start on Start up=main*", but when I do
> that, the *Step In* and *Step Over* buttons are not activated and the
> following problem popped up.
>
> /Could not save master table to file
> '/home/ravi/workspace/.metadata/.plugins/org.eclipse.core.resources/.safetable/org.eclipse.core.resources'.
> /home/ravi/workspace/.metadata/.plugins/org.eclipse.core.resources/.safetable/org.eclipse.core.resources
> (Permission denied)/


So who are you running Eclipse as and do they have permissions on the 
above directory?




> So I started Eclipse using sudo, but this time the following error came
> up in the Eclipse console.
>
> /"root" execution of the PostgreSQL server is not permitted. The server
> must be started under an unprivileged user ID to prevent possible system
> security compromise. See the documentation for more information on how
> to properly start the server./


As the message states, Postgres will not run as root.



> Could anyone help me with this?
>
> Thank you




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] breakpoints in eclipse using postgresql

2014-07-06 Thread Ravi Kiran
hi,

I am using Helios Eclipse for debugging my code in PostgreSQL.

My aim is to understand how PostgreSQL uses join algorithms during a join
query, so I started to debug *nodenestloop.c*, which is in the Executor
folder.

I set breakpoints in that file, but whenever I try to debug it, control
goes to main.c and never comes back.  How do I constrain the control only
to that particular file (nodenestloop.c)?

Below are the fields I set in the Debug configurations of Helios Eclipse.

*C/C++ Application - src/backend/postgres* and *project - pgsql*

I followed the steps given in the following link for running the program.

https://wiki.postgresql.org/wiki/Working_with_Eclipse#

I even unchecked the field "*Start on Start up=main*", but when I do that,
the *Step In* and *Step Over* buttons are not activated and the following
problem popped up.

*Could not save master table to file
'/home/ravi/workspace/.metadata/.plugins/org.eclipse.core.resources/.safetable/org.eclipse.core.resources'.
/home/ravi/workspace/.metadata/.plugins/org.eclipse.core.resources/.safetable/org.eclipse.core.resources
(Permission denied)*

So I started eclipse using sudo, but this time the following error has come
in the console of eclipse.

*"root" execution of the PostgreSQL server is not permitted. The server
must be started under an unprivileged user ID to prevent possible system
security compromise. See the documentation for more information on how to
properly start the server.*

Could anyone help me with this?

Thank you