[GENERAL] Installation Issue of PostgresPlus-9.0

2011-02-02 Thread Adarsh Sharma

Dear all,

I am trying to install Postgres Plus on an Ubuntu 10.04 based system.
I got a problem after installation when I issued the command below:

root@S10-MySQL:/opt/PostgresPlus/9.0SS# bin/psql -Upostgres
Password for user postgres:
psql (9.0.2)
Type "help" for help.

Cannot read termcap database;
using dumb terminal settings.
Aborted
root@S10-MySQL:/opt/PostgresPlus/9.0SS# ls -ls

I followed the same steps as in PostgresPlus-8.4 but don't know why this
occurs.

Please help me to find the reason.
My startup.log says :

2011-02-02 09:02:17 GMT LOG:  could not recognize system time zone
2011-02-02 09:02:17 GMT DETAIL:  The PostgreSQL time zone will be set to 
Etc/GMT-5.
2011-02-02 09:02:17 GMT HINT:  You can specify the correct timezone in 
postgresql.conf.

2011-02-02 09:05:47 GMT LOG:  could not recognize system time zone
2011-02-02 09:05:47 GMT DETAIL:  The PostgreSQL time zone will be set to 
Etc/GMT-5.
2011-02-02 09:05:47 GMT HINT:  You can specify the correct timezone in 
postgresql.conf.






Thanks & Regards
Adarsh Sharma

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with encode () and hmac() in pgcrypto

2011-02-02 Thread Marko Kreen
On Wed, Feb 2, 2011 at 1:19 AM, hlcborg h.luis.card...@gmail.com wrote:
 These two operations are not equivalent.

 But...
 Can I have this operation done in the Stored Procedure inside the Database?
 Plain SHA1, which is signed with RSA signature. and in the end encoded to
 base64?

 I was looking in the pgcrypto functions, and I haven't found any that I
 could use, I think... Maybe I am not looking for the right one.
 I need to use Plain SHA1 signed with a RSA signature and encoded into base64
 due to a new law related to digital signature in invoices in my country.

pgcrypto does not do signing, sorry.

But you can do it with PL/Python or PL/Perl and their wrapper libraries
around OpenSSL (or any other crypto library).
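For what it's worth, a PL/Python sketch of that approach might look like the following. This is not pgcrypto: it assumes the plpythonu language and the M2Crypto OpenSSL wrapper are installed on the server, and the key path is invented.

```sql
-- Sketch only (assumptions: plpythonu and M2Crypto available; key path is made up)
CREATE OR REPLACE FUNCTION sign_invoice(msg text) RETURNS text AS $$
import base64
from M2Crypto import EVP
key = EVP.load_key('/etc/postgresql/invoice_key.pem')  # RSA private key
key.sign_init()          # M2Crypto's default message digest is SHA1
key.sign_update(msg)
return base64.b64encode(key.sign_final())
$$ LANGUAGE plpythonu;

-- usage: SELECT sign_invoice('invoice contents');
```

The same could presumably be done in PL/Perl with one of the Crypt::OpenSSL modules.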

-- 
marko



Re: [GENERAL] Installation Issue of PostgresPlus-9.0

2011-02-02 Thread Adarsh Sharma
One more thing: after some research I found that this is a known bug, and
the suggested solution is to install the libtermcap library.


I followed the below steps :
sudo apt-get install gcc
sudo apt-get install libgcc1
sudo apt-get install g++
sudo apt-get install cpp
sudo apt-get install ncurses-base
sudo apt-get install ncurses-bin
sudo apt-get install ncurses-term
sudo apt-get install libncurses5

(and most importantly)
sudo apt-get install libncurses5-dev


But I still face the same issue.

Any comments?





Adarsh Sharma wrote:

Dear all,

I am trying to install postgresplus on a Ubuntu10.4 based system.
I  got a problem after installation when i issued the below command :

root@S10-MySQL:/opt/PostgresPlus/9.0SS# bin/psql -Upostgres
Password for user postgres:
psql (9.0.2)
Type help for help.

Cannot read termcap database;
using dumb terminal settings.
Aborted
root@S10-MySQL:/opt/PostgresPlus/9.0SS# ls -ls

I followed the same steps as in PostgrePlus-8.4 but don't know how it 
occurs.

Please help me to find the reason.
My startup.log says :

2011-02-02 09:02:17 GMT LOG:  could not recognize system time zone
2011-02-02 09:02:17 GMT DETAIL:  The PostgreSQL time zone will be set 
to Etc/GMT-5.
2011-02-02 09:02:17 GMT HINT:  You can specify the correct timezone in 
postgresql.conf.

2011-02-02 09:05:47 GMT LOG:  could not recognize system time zone
2011-02-02 09:05:47 GMT DETAIL:  The PostgreSQL time zone will be set 
to Etc/GMT-5.
2011-02-02 09:05:47 GMT HINT:  You can specify the correct timezone in 
postgresql.conf.






Thanks  Regards
Adarsh Sharma






Re: [GENERAL] Importing/Appending to Existing Table

2011-02-02 Thread Greg Williamson
Rich --
 
 

   I have an existing table with 15,492 rows and want to add additional rows
 from a .csv file. If I use 'COPY tablename from filename with delimiter
 as : csv quote as ' ' will this overwrite existing rows in the table or
 append rows?

It will not overwrite any existing data; if the table has constraints that will 
prevent duplicates then the entire load will fail if any item fails (it is a 
single transaction).

If the table doesn't have such constraints then duplicate data in the copy file 
will result in duplicate rows.
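To illustrate the all-or-nothing behavior, a sketch (table, column, and file names are invented):

```sql
CREATE TABLE readings (id integer PRIMARY KEY, value numeric);
-- If any row in the file duplicates an existing id, the whole COPY
-- fails and no rows from the file are kept; otherwise all rows append.
COPY readings FROM '/tmp/new_readings.csv' WITH CSV;
```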

HTH,

Greg Williamson


  



[GENERAL] Hot-Standby and sequences

2011-02-02 Thread Wouter D'Haeseleer
Hi All,

I have a question about sequences in combination with streaming
replication.
It seems something strange is happening with sequences which are
streamed to the slave.

When updating the sequence, the last_value on the slave jumps ahead by 32
and halts at this value until the master has also increased the value by
32.
Is this normal behavior?

Please see the example below:

Using Setval
--

# psql -t -h master -c "select setval('foo', 10)"
  10

# psql -t -h master -c "select last_value from foo"
  10

# psql -t -h slave -c "select last_value from foo"
  10

Using Nextval
--

# psql -t -h master -c "select nextval('foo')"
   11

# psql -t -h master -c "select last_value from foo"
  11

# psql -t -h slave -c "select last_value from foo"
 43


-- 
Wouter D'Haeseleer
Linux System Engineer

VASCO Data Security
Dellingstraat 28b
2800 Mechelen
Belgium

phone: +32 15 50 44 00
email: w...@vasco.com 
web: www.vasco.com


Re: [GENERAL] Installation Issue of PostgresPlus-9.0

2011-02-02 Thread Sachin Srivastava
Install termcap (http://ftp.gnu.org/gnu/termcap/)

Add --enable-install-termcap configure option while running configure.


On Feb 2, 2011, at 3:29 PM, Adarsh Sharma wrote:

 I add one more thing after researching i find that this a bug and the 
 solution is to install libtermcap library solves this problem.
 
 I followed the below steps :
 sudo apt-get install gcc
 sudo apt-get install libgcc1
 sudo apt-get install g++
 sudo apt-get install cpp
 sudo apt-get install ncurses-base
 sudo apt-get install ncurses-bin
 sudo apt-get install ncurses-term
 sudo apt-get install libncurses5
 
 (and most importantly)
 sudo apt-get install libncurses5-dev
 
 
 But still I face the same issue.
 
 Any comments.
 
 
 
 
 
 Adarsh Sharma wrote:
 Dear all,
 
 I am trying to install postgresplus on a Ubuntu10.4 based system.
 I  got a problem after installation when i issued the below command :
 
 root@S10-MySQL:/opt/PostgresPlus/9.0SS# bin/psql -Upostgres
 Password for user postgres:
 psql (9.0.2)
 Type help for help.
 
 Cannot read termcap database;
 using dumb terminal settings.
 Aborted
 root@S10-MySQL:/opt/PostgresPlus/9.0SS# ls -ls
 
 I followed the same steps as in PostgrePlus-8.4 but don't know how it occurs.
 Please help me to find the reason.
 My startup.log says :
 
 2011-02-02 09:02:17 GMT LOG:  could not recognize system time zone
 2011-02-02 09:02:17 GMT DETAIL:  The PostgreSQL time zone will be set to 
 Etc/GMT-5.
 2011-02-02 09:02:17 GMT HINT:  You can specify the correct timezone in 
 postgresql.conf.
 2011-02-02 09:05:47 GMT LOG:  could not recognize system time zone
 2011-02-02 09:05:47 GMT DETAIL:  The PostgreSQL time zone will be set to 
 Etc/GMT-5.
 2011-02-02 09:05:47 GMT HINT:  You can specify the correct timezone in 
 postgresql.conf.
 
 
 
 
 
 Thanks  Regards
 Adarsh Sharma
 
 
 

--
Regards,
Sachin Srivastava
EnterpriseDB, the Enterprise PostgreSQL company.



Re: [GENERAL] Select for update with offset interferes with concurrent transactions

2011-02-02 Thread Yngve N. Pettersen (Developer Opera Software ASA)

Hello David,


On Wed, 02 Feb 2011 01:36:15 +0100, David Johnston pol...@yahoo.com  
wrote:


If random sampling is desirable, would the following construct limit locking
only to the sampled rows?

SELECT id
FROM tasktable
WHERE id IN (SELECT random_id_sample())
FOR UPDATE

The random_id_sample would supply a configurable group of IDs off of
tasktable which the FOR UPDATE would then lock

I guess the issue remains that random_id_sample() would still end up
blocking if any of the rows it wants to return are already locked.


My immediate guess is that this would work, and I might explore it once I
get my new full-scale test DB up and running.



I too am using this basic protocol of maintaining state info within the
database and sending every query against it.  As I ponder this more it
really seems as if moving some of this logic into the application layer
would possibly make more sense in Yngve's situation (or at least something
to consider).  Continue to use the database as a persistence mechanism but
code the dispatching of tasks in the application layer, and then as each
task is dispatched you simply do an UPDATE table SET state = 'dispatch'
WHERE id = 'ID' and a similar UPDATE when the task is returned completed.
This somewhat presumes you still only ever hand off one task at a time.  If
you are indeed handing off tasks in batches then it would make sense to have
a batch table and operate at the batch level instead of individual tasks -
assigning tasks to a given batch via some standard mechanism.


If I read you correctly that is what my system does (dispatch = started,  
marked by the node that is to do the task).


The reason I am allocating tasks in batches is that there are so many  
processes involved that if they pick one at a time they would block each  
other. With the block allocation they only need to fetch the tasks once,  
meaning that there are not as many requests to the queue at a time, on  
average.



Either way if you truly want true parallel processing then you need to
create the parallel paths that can operate without clobbering each other and
thus each path needs to have its own pool of tasks since as soon as you have

That is what the offset part of the query was supposed to achieve.

At the moment I have worked around the problem by breaking the task list
into 2000 subgroups, and each process picks one at random. That limits the
number of processes that get in each other's way, and the measured speed is
now 4-5 times what I saw on Monday, and back in the old range of
performance. However, it is a hack I had hoped to avoid (and I might get
rid of it with the above suggestion)


a shared resource the only true way to make sure it is only allocated once
is to serialize access to it.  An alternative method would be to allow
multiple dispatches but have a write-once method that is called and sets
an immutable handler_id, and then when the processing begins only the handler
with the matching id would be allowed to perform the actual processing.


This requires the handlers to have a unique ID, which my system has not  
needed so far.


I say the above with certainty but at the moment I am using and fairly happy
with my limited serialization - especially since I have specific
sub-properties that I can use to limit how many records are locked AND also
because the locking time is very short (I cap around 20 or so active tasks
to dispatch - and only infrequently at that) so my experience and insight
into high-demand situations is limited.

Dave


-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Tuesday, February 01, 2011 12:18 PM
To: Yngve Nysaeter Pettersen
Cc: pgsql-general@postgresql.org
Subject: Re: Select for update with offset interferes with concurrent
transactions

Yngve Nysaeter Pettersen yn...@opera.com writes:

To avoid having the processes trample each other's queries (the first
attempt was to select the first matching entries of the table, which
caused one to block all other transactions), one of the steps I took
was to select a set of idle rows at a random offset into the table
from the project, mark them for update, then update each record's state  
as

started.


   SELECT record_id FROM queue WHERE project_id = my_project AND state
= 'idle' LIMIT n OFFSET i FOR UPDATE



At present n is 100-150, i is a random value in the range 0-1.



There is, intentionally, no ordering specified, since that would just
slow down the query, and is not necessary.


This seems like a pretty bad design.  There are recognized ways to solve
this problem with more predictability and much less chance of different
processes blocking each other.  In particular, this query seems be based  
on

some untenable assumptions about the physical row order being stable.


What I've discovered when using Postgres 9.0 is that the processes are
now blocking every other query into this table,


In 9.0, LIMIT/OFFSET processing is 

[GENERAL] Changing SHMMAX

2011-02-02 Thread Adarsh Sharma

Dear all,

Today I am trying to do some performance tuning for a PostgreSQL database.

I want to change my shared memory setting permanently.
I have 16 GB RAM.

Please guide me on how to change it permanently and what the correct
value for it is.

I am going for 8 GB.

Thanks & Regards
Adarsh Sharma





Re: [GENERAL] Changing SHMMAX

2011-02-02 Thread Florian Weimer
* Adarsh Sharma:

 Please guide  me how to change it  permanently and what is the correct
 value for it.
 I am going for 8GB .

Usually, you can put these lines

kernel.shmall = 90
kernel.shmmax = 90

into /etc/sysctl.conf.  Run sysctl -p to activate them.  However,
this is a bit distribution-specific.
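For the 8 GB case in question, the entries might look like this (a sketch: the values assume Linux with a 4096-byte page size, since shmall is counted in pages):

```
# /etc/sysctl.conf
kernel.shmmax = 8589934592   # 8 GB in bytes
kernel.shmall = 2097152      # 8589934592 / 4096 pages
```

Remember that shared_buffers in postgresql.conf also has to fit under this limit.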

-- 
Florian Weimer                        fwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



[GENERAL] Privileges for read-only tables with sequence and foreign keys

2011-02-02 Thread gvim

To allow read-only access, ie. SELECT, to a user on a table which has a SERIAL 
column as well as foreign key references do I need to GRANT the user anything 
other than SELECT on the table and its corresponding sequence? As I understand, 
USAGE is only required to modify the table or use the currval function while 
REFERENCES is only required to create a foreign key, not to make use of it.
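For reference, the grants being described might look like this (table, sequence, and role names are invented):

```sql
GRANT SELECT ON invoices TO report_reader;
-- SELECT on the sequence lets the role read last_value;
-- USAGE/UPDATE would only be needed for currval()/nextval().
GRANT SELECT ON invoices_id_seq TO report_reader;
```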

gvim



Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-02 Thread Bill Moran
In response to Aleksey Tsalolikhin atsaloli.t...@gmail.com:

 On Mon, Jan 31, 2011 at 7:52 PM, Scott Marlowe scott.marl...@gmail.com 
 wrote:
  On Mon, Jan 31, 2011 at 5:54 PM, Aleksey Tsalolikhin
  atsaloli.t...@gmail.com wrote:
  Situation:  Disk usage on production server root filesystem is at 68%
  utilization (80 GB used), on DR is at 51% (56 GB used).   We use
  Slony-I 1.2.x to keep the DR up to date.  I would like to account for
  the 24 GB difference.
 
  This is likely free space in your database.  Some of it is completely
  normal and actually improves performance.  Too much and your db is
  bloated and things starting taking too long.
 
 Thanks, Scott!
 
 Bucardo's check_postgres.pl --action bloat complains about one table,
 1 GB wasted.  So the other tables must be OK.
 
 So what about my DR, which doesn't have this same 20+ GB of free space.
 Will it acquire it once it goes into production?  Will performance be impacted
 as it acquires the free space?  Should I even be concerned about the
 difference in disk usage or is it normal and expected?

Difference in free space from master to slaves is typical.  Transactions
run on the slaves differently than on the master.  For example, if you
rollback transactions on the master, that can bloat tables, but those
activities are never communicated to the slaves because the rollback
doesn't alter any data.

It's also possible that you have different autovacuum configs on the two
different machines (have you checked) or that the hardware isn't the
same, thus one is able to vacuum more successfully than the other,
or that simply the fates have caused vacuum to start at times that it
gets more done on one server than the other.

Do not be afraid of vacuum full.  It's not that it's an evil command or
should never be used, etc.  It's just something that has consequences
that you need to be aware of, such as:
*) It can take a long time
*) It locks tables while it works on them, thus it blocks other processes
   from accessing those tables
*) It can cause index bloat

However, there are mitigating factors:
*) You can tell it which tables to vacuum, thus you can vacuum full one
   table at a time to reduce the overall impact
*) It can be interrupted, so if it's taking longer than you're able to
   wait, you can cancel it.
*) You can use the REINDEX command to clean up index bloat.

Based on personal experience, and the fact that you have a slony slave to
work with, I recommend the following:

1) On the Slony slave, do the following, timing each step so you have an
   estimate of how long they will take on the master
1a) VACUUM the table.  This is non-locking and will do some preliminary
work so that VACUUM FULL takes less time.
1b) VACUUM FULL just that table.  Slony will be unable to replicate to
the table while the FULL is running, but that's OK, it will catch
up after it's done and the master won't be interrupted.
1c) REINDEX just that table.  This will have no effect on the master.
2) Now that you have time estimates for all those steps, add the times
   for 1b and 1c together.  This is an estimate of how long the master
   database will be interrupted while you do maintenance (step 1a does
   not interrupt other work going on).  Schedule downtime for about 2x
   that time, just in case things run a little longer.
3) Run steps 1a - 1c on the master.  Start 1a before your maintenance
   window starts, with enough time that it should be finished before
   your maintenance window.
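The steps above might look like the following in psql, with \timing collecting the estimates (the table name is a placeholder):

```sql
\timing on
VACUUM VERBOSE bloated_table;   -- 1a: non-locking preliminary pass
VACUUM FULL bloated_table;      -- 1b: exclusive lock, reclaims space
REINDEX TABLE bloated_table;    -- 1c: rebuilds indexes bloated by 1b
```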

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



[GENERAL] Query plan optimization: sorting vs. partitioning

2011-02-02 Thread Sergey Zaharchenko
Hello list,

I have a large time-indexed table (states) partitioned into several
tables based on the date. The smaller tables are clustered by their
time indices. The main table is empty.

I need to select some data in time order. When I query a separate
smaller table, the index is used and no sorting is needed. However,
when I query the main table, this occurs:
...
  ->  Sort ...
        Sort Key: ...
        Sort Method: ...
        ->  Result ...
              ->  Append ...
                    ->  Seq Scan on states
                          Filter: ...
                    ->  Seq Scan on states_20101206
                          Filter: ...
...

I see the database doesn't understand that there are no entries in the
main table, so it has to assume the Append data is not ordered. Is
there a way to avoid sorting?

Please CC me as I'm not on the list. Thanks in advance,

-- 
DoubleF




Re: [GENERAL] Importing/Appending to Existing Table

2011-02-02 Thread Rich Shepard

On Wed, 2 Feb 2011, Sergey Konoplev wrote:


No it wont overwrite, it will append rows.


  Thank you, Sergey.

Rich



Re: [GENERAL] Importing/Appending to Existing Table

2011-02-02 Thread Rich Shepard

On Wed, 2 Feb 2011, Greg Williamson wrote:


It will not overwrite any existing data; if the table has constraints that
will prevent duplicates then the entire load will fail if any item fails
(it is a single transaction).

If the table doesn't have such constraints then duplicate data in the copy file
will result in duplicate rows.


Greg,

  These data originate in highly undisciplined Excel spreadsheets over the
past 10+ years. They need a lot of preprocessing in emacs and python scripts
I've written but there are no primary keys or other constraints until all
the data are entered then moved to the actual application tables. I will not
be surprised to discover duplicate data because of data-entry errors, but
there should not be any duplicates in these permit compliance monitoring
results.

Thanks,

Rich



Re: [GENERAL] Importing/Appending to Existing Table

2011-02-02 Thread David Johnston
You also don't have to import the source files directly into the live table.
Instead you could create a staging table that has no constraints where you
can import everything, do some review and updates, then merge that table
over to the live one.  Depending on how many files you are dealing with and
whether you want or need to pre-combine them a more forgiving staging table
may come in quite handy.  Especially since you likely will not know the
maximum length used for various text fields, you can make the staging
table fields varchar(255) (or even text) and then do some length checks
before importing the data into the accurately sized live table.
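A sketch of that staging pattern (all names and the varchar(40) length are invented):

```sql
CREATE TABLE staging_permits (site text, result text, sampled_on text);
COPY staging_permits FROM '/tmp/permits.csv' WITH CSV;

-- review: find values too long for the live varchar(40) column
SELECT site FROM staging_permits WHERE length(site) > 40;

-- merge into the live table once the data looks sane
INSERT INTO permits (site, result, sampled_on)
SELECT site, result, sampled_on::date FROM staging_permits;
```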

Dave


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Rich Shepard
Sent: Wednesday, February 02, 2011 9:30 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Importing/Appending to Existing Table

On Wed, 2 Feb 2011, Greg Williamson wrote:

 It will not overwrite any existing data; if the table has constraints 
 that will prevent duplicates then the entire load will fail if any 
 item fails (it is a single transaction).

 If the table doesn't have such constraints then duplicate data in the 
 copy file will result in duplicate rows.

Greg,

   These data originate in highly undisciplined Excel spreadsheets over the
past 10+ years. They need a lot of preprossing in emacs and python scripts
I've written but there are no primary keys or other constraints until all
the data are entered then moved to the actual application tables. I will not
be surprised to discover duplicate data because of data-entry errors, but
there should not be any duplicates in these permit compliance monitoring
results.

Thanks,

Rich



Re: [GENERAL] Query plan optimization: sorting vs. partitioning

2011-02-02 Thread Tom Lane
Sergey Zaharchenko doublef.mob...@gmail.com writes:
 I need to select some data in time order. When I query a separate
 smaller table, the index is used and no sorting is needed. However,
 when I query the main table, this occurs:
 ...
   ->  Sort ...
         Sort Key: ...
         Sort Method: ...
         ->  Result ...
               ->  Append ...
                     ->  Seq Scan on states
                           Filter: ...
                     ->  Seq Scan on states_20101206
                           Filter: ...
 ...

 I see the database doesn't understand that there are no entries in the
 main table, so it has to assume the Append data is not ordered. Is
 there a way to avoid sorting?

No.  In existing releases there is no plan type that can produce
presorted output from an append relation (ie, an inheritance tree).
9.1 will be able to do that, but it wasn't exactly a small fix:
http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=11cad29c91524aac1d0b61e0ea0357398ab79bf8
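Until then, one workaround (not from this thread; the timestamp column name is assumed) is to query the populated child table directly, so its clustered index can supply the ordering:

```sql
-- hit the partition itself instead of the inheritance parent
SELECT *
FROM states_20101206
WHERE ts >= '2010-12-06' AND ts < '2010-12-07'
ORDER BY ts;
```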

regards, tom lane



Re: [GENERAL] could not accept SSPI security context

2011-02-02 Thread Ahmed

The issue has been addressed and a patch has been submitted. Refer to the Npgsql
mailing thread:
http://lists.pgfoundry.org/pipermail/npgsql-devel/2011-February/001116.html

-- 
View this message in context: 
http://postgresql.1045698.n5.nabble.com/could-not-accept-SSPI-security-context-tp3275102p3367884.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



[GENERAL] redirecting query statement and output to a marked up file, using psql

2011-02-02 Thread Wim Bertels
Hallo,

goal:
to write the input and query results to different files in 1 script

context example:
1 sql script demo.sql

--init messaging
\set client_min_messages warning
\set log_error_verbosity terse

--user1 
SET SESSION AUTHORIZATION user1;
\pset format html
\o report/test_user1.html
\i structure/test_user1.sql

--user2
SET SESSION AUTHORIZATION user2;
\pset format latex
\echo ECHO queries
\o report/test_user2.tex
\i structure/test_user2.sql


This doesn't seem to work,
as the echoed queries' output isn't written to the file (test_user2.tex).

The only workaround I have found so far
is using shell redirection,
starting for example
 psql -L out.txt  # only txt markup
or
 psql > out.txt
But then I have to postprocess the out.txt file, dividing it into
several different files, and the format/markup doesn't work so nicely
any more.

Suggestions?

tnx,
Wim




Re: [GENERAL] Hot-Standby and sequences

2011-02-02 Thread Tom Lane
Wouter D'Haeseleer w...@vasco.com writes:
 I have a question about sequences in combination with streaming
 replication.
 It seems something strange is happening with sequences which are
 streamed to the slave.

 When updating the sequence the last_value on the slave shifts with 32
 and halts at this value until the master increased the value also with
 32.
 Is this normal behavior ?

Yes, this is expected because of the hacks that are used to minimize
the number of WAL records emitted during nextval() operations.  The
slave is seeing the state that would prevail on the master, too, if the
master were to crash and restart.
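The effect is visible on the master as well: the sequence's log_cnt column shows how many nextval() calls remain before another WAL record is written (32 by default), e.g.:

```sql
-- on the master, right after the nextval('foo') call above
SELECT last_value, log_cnt FROM foo;
-- expect something like last_value = 11, log_cnt = 32:
-- the WAL (and hence the standby) already reflects 11 + 32 = 43
```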

regards, tom lane



Re: [GENERAL] Hot-Standby and sequences

2011-02-02 Thread Wouter D'Haeseleer
Okay, Tom,

Thanks for this clarification



Re: [GENERAL] Changing SHMMAX

2011-02-02 Thread Torsten Zühlsdorff

Florian Weimer wrote:


Please guide  me how to change it  permanently and what is the correct
value for it.
I am going for 8GB .


Usually, you can put these lines

kernel.shmall = 90
kernel.shmmax = 90

into /etc/sysctl.conf.  Run sysctl -p to activate them.  However,
this is a bit distribution-specific.


If you're using FreeBSD you even have to restart the machine.

Greetings,
Torsten
--
http://www.dddbl.de - a database layer that abstracts work with 8
different database systems, separates queries from applications, and can
automatically evaluate query results.




[GENERAL] Streaming Rep 101 questions

2011-02-02 Thread Gauthier, Dave
Hi:

I'm going to be experimenting with streaming replication using v9.0.1.  Here 
are a few questions I have at the onset...
1) Is it possible to replicate one database out of many that exist in the DB 
instance?  Or do you have to replicate them all?
2) Is replication transaction based?  That is to say, does the act of 
replicating a transaction wait until the commit on the server side?
3) I will be replicating to a DB server 2 timezones away initially, but perhaps 
from the US to India at some future point.  Is the PG replication solution meant 
to do something like this (given sufficient bandwidth of course)?
4) The slave DB instance will also be v9.0.1 on linux, but it will be a virtual 
machine.  Do you see any problem with that?
5) Is there a step-by-step how to document for this?

Thanks in Advance !



Re: [GENERAL] could not accept SSPI security context

2011-02-02 Thread Francisco Figueiredo Jr.
Thank you very much for your patch!

I'm going to review and apply it.

As soon as it is done, I'll let you know.


On Wed, Feb 2, 2011 at 12:52, Ahmed ahmed.shinw...@gmail.com wrote:

 The issue has been addressed and a patch has been submitted. Refer to the Npgsql
 mailing thread:
 http://lists.pgfoundry.org/pipermail/npgsql-devel/2011-February/001116.html

 --
 View this message in context: 
 http://postgresql.1045698.n5.nabble.com/could-not-accept-SSPI-security-context-tp3275102p3367884.html
 Sent from the PostgreSQL - general mailing list archive at Nabble.com.





-- 
Regards,

Francisco Figueiredo Jr.
Npgsql Lead Developer
http://www.npgsql.org
http://fxjr.blogspot.com
http://twitter.com/franciscojunior



Re: [GENERAL] Streaming Rep 101 questions

2011-02-02 Thread Ray Stell
On Wed, Feb 02, 2011 at 08:52:02AM -0700, Gauthier, Dave wrote:
 I'm going to be experimenting with streaming replication using v9.0.1.  Here 
 are a few questions I have at the onset...

why not 9.0.2?


 5) Is there a step-by-step how to document for this?

http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial



Re: [GENERAL] Streaming Rep 101 questions

2011-02-02 Thread Adrian Klaver

On 02/02/2011 07:52 AM, Gauthier, Dave wrote:

Hi:

I'm going to be experimenting with streaming replication using v9.0.1.
Here are a few questions I have at the onset...

1) Is it possible to replicate one database out of many that exist in
the DB instance? Or do you have to replicate them all?


It replicates the cluster.



2) Is replication transaction based? That is to say, does the act of
replicating a transaction wait until the commit on the server side?


It is shipping WAL records, so when they are recorded they ship, 
assuming streaming replication.




3) I will be replicating to a DB server 2 timezones away initially, but
perhaps from the US- India at some future point. Is the PG replication
solution meant to do something like this (given sufficient bandwidth of
course)?


Not sure.



4) The slave DB instance will also be v9.0.1 on linux, but it will be a
virtual machine. Do you see any problem with that?


FYI 9.0.3 was just released. I have tried it on an EC2 instance as a 
test and did not see any problems.




5) Is there a step-by-step how to document for this?


http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial



Thanks in Advance !




--
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] Why copy ... from stdio does not return immediately when reading invalid data?

2011-02-02 Thread Nicolas Grilly
Hello,

I am importing gigabytes of data into PostgreSQL, and I don't want to wait
10 minutes just to discover an error in the 10th line of my input file.

I tried the command \copy ... from stdio in psql and it looks like psql
has to read the entire input before returning a potential error, even if the
invalid value is in one of the first rows.

Is it a limitation of PostgreSQL protocol, of the library lipq, or of the
tool psql?

Is the copy protocol (aka PQputCopyData and PQputCopyEnd) designed to send
gigabytes of data with just one copy ... from stdio query, and is there a
way to be notified of a potential error before calling PQputCopyEnd? Or do I
have to send my data in small chunks (for example batch of 1
rows), issue a PQputCopyEnd, check for errors, and continue with the next
chunk?

Thanks for your help and advice.

Regards,

Nicolas Grilly


Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-02 Thread Chris Browne
atsaloli.t...@gmail.com (Aleksey Tsalolikhin) writes:
 Situation:  Disk usage on production server root filesystem is at 68%
 utilization (80 GB used), on DR is at 51% (56 GB used).   We use
 SlonyII-1.2.x to keep the DR up to date.  I would like to account for
 the 24 GB difference.

It's more than likely a result of transactions failing on the origin,
leaving dead space around, where replication doesn't bother trying to do
any work for the failed stuff, with the consequence that there's no
corresponding clutter on the replica.

I'm talking here about cases of failures that are expected.

Look to what activities you have that tend to lead to transactions that
ROLLBACK.  Slony-I makes no attempt to replicate activity that is
terminated by ROLLBACK (explicit or implicit), so all that activity
won't be processed on replicas.

For instance, in our applications, operating domain registries,
intentionally failed database transactions occur *commonly*
whenever customers are 'fighting' over domain names - one and only one
customer can win the name, while all others lose, and each losing
request leaves a certain amount of mess in its wake.  Common patterns of
this sort include transactions that fail because:

 - Customer has insufficient funds on account to pay for the transaction

 - Inventory request fails because there are insufficient items in stock

 - Attempt to insert a second instance of an object that is required to
   be unique

 - Rejection of partially processed transaction due to violation of some
   business policy (which is mighty open-ended!)

It's likely, as well, that there is some set of tables that you are not
vacuuming heavily enough.  Probably a table or three needs to have
CLUSTER run on it to bring them down to size, and you may need to fiddle
with autovacuum parameters to vacuum more frequently.
-- 
(reverse (concatenate 'string moc.liamg @ enworbbc))
http://linuxdatabases.info/info/lisp.html
Microsoft has world class quality control -- Arthur Norman



Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-02 Thread Chris Browne
peter.geoghega...@gmail.com (Peter Geoghegan) writes:
 On 1 February 2011 03:52, Scott Marlowe scott.marl...@gmail.com wrote:
 You can reclaim that space by doing a cluster or vacuum full on the
 subject table.

 Yes, but this is a fairly bad idea, particularly prior to PG 9.0. 9.0
 has a new vacuum full implementation that makes it not so bad - it
 just rewrites the entire table.

 VACUUM FULL will take exclusive locks on tables being vacuumed. It
 also causes index bloat. You should be very careful about using it on
 a production system.

 I'm not sure why you'd advocate CLUSTER as a way to reclaim disk space.

Because it works pretty well; it reorganizes the table on the basis of
the order indicated by one index, and simultaneously:
 a) Shortens the table, removing all dead space;
 b) Regenerates all indices, so they too have no dead space.

Traditional VACUUM FULL tends to worsen the dead space problem on
indices, so adds the insult to injury problem that after running
VACUUM FULL, you might need to reindex, and that aftermath is nearly as
expensive as CLUSTER.

CLUSTER is likely to be quicker than VACUUM FULL, and it gives nice,
squeaky-tight indexes.

The new form of VACUUM FULL in 9.0 changes things, but it wasn't obvious
that the original poster was on 9.0.

 I wouldn't increase index fill factor as an optimisation, unless you
 had the unusual situation of having very static data in the table.

-- 
output = reverse(gro.mca @ enworbbc)
http://linuxfinances.info/info/wp.html
The world needs more people like us and fewer like them.  -- Unknown



[GENERAL] Database Design Question

2011-02-02 Thread Carlos Mennens
I was sitting down thinking the other day about when is it good to
generate a new database or just use an existing one. For example, let's
say my company name is called 'databasedummy.org' and I have a
database called 'dbdummy'. Now I need PostgreSQL to manage several
applications for my company:

- webmail
- software
- mediawiki
- phpbb forum

Now what I've been doing is just creating multiple tables in the
'dbdummy' database but each table is owned by different users
depending on their role. Is this bad? Should I be creating new
databases for each application above rather than one single company
database?

Just trying to understand good DBA design practice. This is obviously
a very general question but any feedback on what good or bad issues
would come from me dumping all my tables for applications in one
database or spread out across multiple databases on PostgreSQL.

Thank you!



Re: [GENERAL] Why copy ... from stdio does not return immediately when reading invalid data?

2011-02-02 Thread John R Pierce

On 02/02/11 10:20 AM, Nicolas Grilly wrote:
Is the copy protocol (aka PQputCopyData and PQputCopyEnd) designed to 
send gigabytes of data with just one copy ... from stdio query, and 
is there a way to be notified of a potential error before calling 
PQputCopyEnd? Or do I have to send my data in small chunks (for 
example batch of 1 rows), issue a PQputCopyEnd, check for errors, 
and continue with the next chunk?


I would batch the data, maybe 1000 lines or even 100 lines at a time if 
these errors are at all frequent.  put the errored batches in an 
exception list or something so you can sort them out later.
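The batch-and-quarantine loop described above can be sketched generically. In the sketch below, `load_batch` is a placeholder for one COPY round-trip (e.g. a series of PQputCopyData calls followed by PQputCopyEnd and an error check); the names are illustrative, not a real driver API:

```python
from typing import Callable, Iterable, List, Sequence, Tuple


def copy_in_batches(rows: Iterable[str],
                    load_batch: Callable[[Sequence[str]], None],
                    batch_size: int = 1000) -> Tuple[int, List[List[str]]]:
    """Feed rows to COPY in batches; quarantine batches that fail.

    `load_batch` stands in for one complete COPY operation (send the
    rows, end the copy, check the result for an error).  Returns the
    number of rows loaded and the list of failed batches, which can be
    sorted out later.
    """
    loaded = 0
    failed: List[List[str]] = []
    batch: List[str] = []

    def flush():
        nonlocal loaded
        if not batch:
            return
        try:
            load_batch(batch)
            loaded += len(batch)
        except Exception:
            failed.append(list(batch))  # keep the bad batch for later triage
        batch.clear()

    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            flush()
    flush()  # final partial batch
    return loaded, failed
```

A bad row then costs you one batch rather than the whole multi-gigabyte load, and shrinking `batch_size` narrows down where the bad data is.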






Re: [GENERAL] Database Design Question

2011-02-02 Thread John R Pierce

On 02/02/11 10:32 AM, Carlos Mennens wrote:

I was sitting down thinking the other day about when is it good to
generate a new database or just use an existing one. For example, let's
say my company name is called 'databasedummy.org' and I have a
database called 'dbdummy'. Now I need PostgreSQL to manage several
applications for my company:

- webmail
- software
- mediawiki
- phpbb forum

Now what I've been doing is just creating multiple tables in the
'dbdummy' database but each table is owned by different users
depending on their role. Is this bad? Should I be creating new
databases for each application above rather than one single company
database?

Just trying to understand good DBA design practice. This is obviously
a very general question but any feedback on what good or bad issues
would come from me dumping all my tables for applications in one
database or spread out across multiple databases on PostgreSQL.


I would create a separate database for each thing that has nothing to do 
with the other things.  I doubt mediawiki and phpbb will ever share 
any data; they are totally different applications, each is a 
self-contained world.  Ditto your webmail.  The other item there, 
'software', well, I have no idea what that means specifically.





Re: [GENERAL] Query plan optimization: sorting vs. partitioning

2011-02-02 Thread Sergey Zaharchenko
2011/2/2, Tom Lane t...@sss.pgh.pa.us:

 I see the database doesn't understand that there are no entries in the
 main table, so it has to assume the Append data is not ordered. Is
 there a way to avoid sorting?

 No.  In existing releases there is no plan type that can produce
 presorted output from an append relation (ie, an inheritance tree).
 9.1 will be able to do that, but it wasn't exactly a small fix:
 http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=11cad29c91524aac1d0b61e0ea0357398ab79bf8

OK, I hope I'll be able to come up with a stored procedure to query
the tables directly, then. Thanks!

-- 
DoubleF



Re: [GENERAL] Database Design Question

2011-02-02 Thread David Johnston
The main concern to consider is whether there are any shared relationships
that the different projects all have (e.g., common logon users).  Since you
cannot query across different databases if there is shared information then
a single database would be preferred.  I think the concept you want to
consider further is Schemas.  You can get the same kind of separation that
you would want with multiple databases with the possibility to have a
global schema that holds data common to multiple projects.

Also, I would suggest managing permissions by group roles and strictly
assigning user/logon roles to those group roles.

If, from an application standpoint, the structure does not matter then
consider the maintenance aspects of such a design.  The advantage of
multiple databases is that you can easily put each database onto its own
machine and individual applications can be brought offline without bringing
down all the applications.  Your admin tool will also have references to
each of the separate databases instead of a single database with multiple
schemas.  If you end up using maintenance functions and/or views they will
probably need to be installed and configured in each database.  At the same
time it becomes easier to look at the maintenance logs when each application
is independent (of course this depends on the tool and how schemas are
handled).

Dave


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Wednesday, February 02, 2011 2:09 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Database Design Question

On 02/02/11 10:32 AM, Carlos Mennens wrote:
 I was sitting down thinking the other day about when is it good to 
 generate a new database or just use an existing one. For example, let's 
 say my company name is called 'databasedummy.org' and I have a 
 database called 'dbdummy'. Now I need PostgreSQL to manage several 
 applications for my company:

 - webmail
 - software
 - mediawiki
 - phpbb forum

 Now what I've been doing is just creating multiple tables in the 
 'dbdummy' database but each table is owned by different users 
 depending on their role. Is this bad? Should I be creating new 
 databases for each application above rather than one single company 
 database?

 Just trying to understand good DBA design practice. This is obviously 
 a very general question but any feedback on what good or bad issues 
 would come from me dumping all my tables for applications in one 
 database or spread out across multiple databases on PostgreSQL.

I would create a separate database for each thing that has nothing to do 
with the other things.  I doubt mediawiki and phpbb will ever share 
any data; they are totally different applications, each is a 
self-contained world.  Ditto your webmail.  The other item there, 
'software', well, I have no idea what that means specifically.






Re: [GENERAL] Database Design Question

2011-02-02 Thread Joshua D. Drake
On Wed, 2011-02-02 at 11:08 -0800, John R Pierce wrote:
 On 02/02/11 10:32 AM, Carlos Mennens wrote:

 I would create a separate database for each thing that has nothing to do 
 with the other things.  I doubt mediawiki and phpbb will ever share 
 any data, they are totally different applications, each is a self 
 contained world.  ditto your webmail.   the other item there, 
 'software', well, I have no idea what that means specifically.

Forget separate databases. Use separate users with schemas.

JD

 
 

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [GENERAL] Streaming Rep 101 questions

2011-02-02 Thread Magnus Hagander
On Wed, Feb 2, 2011 at 16:52, Gauthier, Dave dave.gauth...@intel.com wrote:
 Hi:


 I'm going to be experimenting with streaming replication using v9.0.1.  Here
 are a few questions I have at the onset...

You should use 9.0.3


 1) Is it possible to replicate one database out of many that exist in the DB
 instance?  Or do you have to replicate them all?

No, you must do all.


 2) Is replication transaction based?  That is to say, does the act of
 replicating a transaction wait until the commit on the server side?

Yes, it's transaction based, in the sense that it's transactionally
safe. No, you don't have to wait until commit on the slave side (which
I assume you mean). That's sync rep, which is hopefully going to be an
option in 9.1.


 3) I will be replicating to a DB server 2 timezones away initially, but
 perhaps from the US- India at some future point.  Is the PG replication
 solution meant to do something like this (given sufficient bandwidth of
 course)?

Should work fine. As long as the connection is reasonably reliable
(doesn't go up and down every minute), a bit of latency shouldn't be a
problem.


 4) The slave DB instance will also be v9.0.1 on linux, but it will be a
 virtual machine.  Do you see any problem with that?

None other than potential performance issues if it's too slow.


 5) Is there a step-by-step how to document for this?

http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [GENERAL] Database Design Question

2011-02-02 Thread John R Pierce

On 02/02/11 11:24 AM, Joshua D. Drake wrote:

Forget separate databases. Use separate users with schemas.


for canned applications like mediawiki and phpbb?   not sure they 
support that.








Re: [GENERAL] Database Design Question

2011-02-02 Thread Gary Chambers

Forget separate databases. Use separate users with schemas.

for canned applications like mediawiki and phpbb?   not sure they support
that.


Mediawiki does -- I'm doing just that.  It's been liberating learning how
PostgreSQL deals with schemas (and applying that knowledge).

-- Gary Chambers



Re: [GENERAL] Changing SHMMAX

2011-02-02 Thread Vick Khera
On Wed, Feb 2, 2011 at 10:31 AM, Torsten Zühlsdorff
f...@meisterderspiele.de wrote:
 kernel.shmall = 90
 kernel.shmmax = 90

 into /etc/sysctl.conf.  Run sysctl -p to activate them.  However,
 this is a bit distribution-specific.

 If you're using FreeBSD you even have to restart the machine.


No, you do not.  kern.ipc.shmall and shmmax are run-time tunables in FreeBSD.

The only postgres related settings you need a reboot to adjust are
kern.ipc.semmni and semmns.



[GENERAL] Why does a normally fast query run so slow when the table is in a partition?

2011-02-02 Thread Bill Thoen
I've got a large  (and growing) database set up as a partitioned 
database. The partitions are physically broken out by state plus a 
unique id for each. There's roughly 20 million records in the whole 
thing just now. My question is, why does a simple query supplying both 
parts of the index key work nearly instantly as expected when I submit 
it to the appropriate partition table directly, but the same query when 
sent to the master table takes nearly 3/4 of a minute to return one 
record? Actually, running the queries with 'Explain analyze verbose'  
tells me what it chose, so I know it's slow because it chose to do a 
sequential scan on the master table, but what I'd like to know is why 
does it take so long to go through the master table looking for the 
partition for 'co'? According to the log, if I read it correctly, it 
took nearly 40 seconds just to scan through the 19 partition tables 
before it found the Colorado partition. Can someone tell me how to speed 
up that step?


Also (this might be relevant) I accidentally got 15 million records into 
the master table earlier, but those have all been deleted and I've run 
VACUUM ANALYZE on  the master table since then.


The following shows the details and the environment.  I'm using 
PostgreSQL 8.4.5 and running on CentOS 5.5


This is the master table. It has no records or indexes as per the PG manual.
fsa=# \d clu
           Table "vfm.clu"
   Column    |     Type      | Modifiers
-------------+---------------+-----------
 ogc_fid     | bigint        | not null
 geom        | geometry      |
 comments    | character(80) |
 statecd     | character(2)  |
 countycd    | character(3)  |
 tractnbr    | character(7)  |
 farmnbr     | character(7)  |
 clunbr      | numeric(7,0)  |
 acres       | numeric(8,2)  |
 fsa_acres   | numeric(8,2)  |
 heltypecd   | character(1)  |
 cluclscd    | numeric(2,0)  |
 cluid       | character(36) |
 admnstate   | character(2)  |
 admncounty  | character(3)  |
 source_disc | character(2)  | not null

This is one of the partition tables. It has the same structure, although 
the key fields are not in the same order as the master table. It is also 
indexed on source_disc + ogc_fid (and spatially as well). Its constraint 
is that only records with 'co' in the source_disc attribute can be added 
or found here.

fsa=# \d clu_co
          Table "vfm.clu_co"
   Column    |     Type      | Modifiers
-------------+---------------+-----------
 geom        | geometry      |
 comments    | character(80) |
 statecd     | character(2)  |
 countycd    | character(3)  |
 tractnbr    | character(7)  |
 farmnbr     | character(7)  |
 clunbr      | numeric(7,0)  |
 acres       | numeric(8,2)  |
 fsa_acres   | numeric(8,2)  |
 heltypecd   | character(1)  |
 cluclscd    | numeric(2,0)  |
 cluid       | character(36) |
 admnstate   | character(2)  |
 admncounty  | character(3)  |
 ogc_fid     | bigint        | not null
 source_disc | character(2)  | not null
Indexes:
    "clu_co_pkey" PRIMARY KEY, btree (source_disc, ogc_fid)
    "clu_co_geom" gist (geom)
Check constraints:
    "cd_id" CHECK (source_disc = 'co'::bpchar)
Inherits: clu


Here's the query that executes quickly in the partition table. Notice 
that it's using the index for a fast lookup.


fsa=# explain analyze verbose select :flds from clu_co where source_disc='co' and ogc_fid = 116337;
                                                     QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Index Scan using clu_co_pkey on clu_co  (cost=0.00..8.31 rows=1 width=48) (actual time=0.079..0.086 rows=1 loops=1)
   Output: source_disc, ogc_fid, statecd, countycd, tractnbr, farmnbr, clunbr, acres
   Index Cond: ((source_disc = 'co'::bpchar) AND (ogc_fid = 116337))
 Total runtime: 0.177 ms
(4 rows)


fsa=# select :flds from clu where source_disc='co' and ogc_fid = 116337;
 source_disc | ogc_fid | statecd | countycd | tractnbr | farmnbr | clunbr |  acres
-------------+---------+---------+----------+----------+---------+--------+---------
 co          |  116337 | 08      | 043      | 533      | 065     |      9 | 4677.79
(1 row)



The same query when sent through the master table. Notice it's using a 
sequential scan. But why does this operation take 38 seconds? How do I 
speed that up?


fsa=# explain analyze verbose select :flds from clu where source_disc='co' and ogc_fid = 116337;
                                                QUERY PLAN
----------------------------------------------------------------------------------------------------------
 Result  (cost=0.00..1098364.31 rows=2 width=52) (actual time=38367.332..38367.355 rows=1 loops=1)
   Output: vfm.clu.source_disc, vfm.clu.ogc_fid, vfm.clu.statecd, vfm.clu.countycd,
           vfm.clu.tractnbr, vfm.clu.farmnbr, vfm.clu.clunbr, vfm.clu.acres
   ->  Append  (cost=0.00..1098364.31 rows=2

Re: [GENERAL] Database Design Question

2011-02-02 Thread Chris Browne
carlos.menn...@gmail.com (Carlos Mennens) writes:
 I was sitting down thinking the other day about when is it good to
 generate a new database or just use an existing one. For example, let's
 say my company name is called 'databasedummy.org' and I have a
 database called 'dbdummy'. Now I need PostgreSQL to manage several
 applications for my company:

 - webmail
 - software
 - mediawiki
 - phpbb forum

 Now what I've been doing is just creating multiple tables in the
 'dbdummy' database but each table is owned by different users
 depending on their role. Is this bad? Should I be creating new
 databases for each application above rather than one single company
 database?

 Just trying to understand good DBA design practice. This is obviously
 a very general question but any feedback on what good or bad issues
 would come from me dumping all my tables for applications in one
 database or spread out across multiple databases on PostgreSQL.

 Thank you!

I think it's likely that these would properly have separate databases,
as...

 - There isn't *that* much data that is likely to be shared between
   these applications, so it probably doesn't add a lot of value to
   force them together.

 - If you integrate the databases together, then any maintenance on the
   database represents an outage for *ALL* those systems, whereas if
   they're separate, there's at least the possibility of outages being
   independent.

You'll have to think about the expected kinds of failure cases to
determine in which direction to go.
-- 
(format nil ~S@~S cbbrowne acm.org)
http://www3.sympatico.ca/cbbrowne/rdbms.html
Make sure your code does nothing gracefully.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How best to load modules?

2011-02-02 Thread Steve White
Hi,

Here is the best kludge so far.  To load the module 'tablefunc' from the
contrib/ directory, process the output of the 'pg_config' program with unix
commands.  The 'pg_config' program is often distributed in a package
separate from  postgresql.

\set tablefunc `pg_config | grep SHAREDIR | sed "s/SHAREDIR = \(.*\)/\1\/contrib\/tablefunc.sql/g"`
\i :tablefunc

This isn't very robust, but at least it allows me to load and unload stuff
from a single sql script on two different distros.
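The same SHAREDIR lookup can also be done programmatically. A minimal sketch that parses `pg_config` output (assuming its usual `NAME = value` line format; the function name here is made up for illustration):

```python
def contrib_path(pg_config_output: str, module: str) -> str:
    """Extract SHAREDIR from pg_config output and build a contrib script path.

    Assumes pg_config's usual "NAME = value" line format.
    """
    for line in pg_config_output.splitlines():
        key, sep, value = line.partition(" = ")
        if sep and key.strip() == "SHAREDIR":
            return value.strip() + "/contrib/" + module + ".sql"
    raise ValueError("SHAREDIR not found in pg_config output")


if __name__ == "__main__":
    sample = "BINDIR = /usr/bin\nSHAREDIR = /usr/share/postgresql\n"
    print(contrib_path(sample, "tablefunc"))
    # /usr/share/postgresql/contrib/tablefunc.sql
```

This is the same transformation the sed pipeline performs, just easier to extend (e.g. falling back to a hard-coded path when pg_config is missing).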

Cheers!


On 28.01.11, Steve White wrote:
 Hello, all!
 
 What are best practices regarding the loading of postgresql modules, say
 from the contrib/ directory; specifically, with regard to portability?
 
 I would like to distribute an SQL script which loads a module, and works
 with as little further fiddling as possible.
 
 known options
 =
 
  Within a session, or in a script, one can use 
    \i <explicit file path>
  But within a script this has the weakness that the file path varies from
  one system distribution to another.
  
  One can start psql with 
    psql ... -f <explicit file path>
  but that's a measure taken outside the script, to be done either within a
  session, or else by a further measure such as a shell script.
 
 Ideally, the location of the default modules directory (or installation
 directory) should be available within a session in some variable or from
 some function call.
 
 There are some pre-defined variables, listed in a session by
   show all;
 but I don't see anything like a directory path there.
 Maybe a built-in function returning this directory?  Searched to no avail:
   http://www.postgresql.org/docs/8.2/interactive/functions.html
 
 There has been talk about a bigger solution on
   http://wiki.postgresql.org/wiki/Module_Manager
 but little seems to have happened there in some years.
 
 An environment variable
   $libdir, 
 is mentioned
   http://www.postgresql.org/docs/8.2/static/runtime-config-client.html
 but this seems not to be present within a session.
 It seems to be expanded within the LANGUAGE C environment, for instance in
 tablefunc.sql
  -
 CREATE OR REPLACE FUNCTION crosstab2(text)
 RETURNS setof tablefunc_crosstab_2
 AS '$libdir/tablefunc','crosstab'
 LANGUAGE C STABLE STRICT;
  -
 
 Thanks!
 
 -- 
 | -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
 | Steve White +49(331)7499-202
 | E-ScienceZi. 27  Villa Turbulenz 
 | -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
 | Astrophysikalisches Institut Potsdam (AIP)
 | An der Sternwarte 16, D-14482 Potsdam
 |
 | Vorstand: Prof. Dr. Matthias Steinmetz, Peter A. Stolz
 |
 | Stiftung privaten Rechts, Stiftungsverzeichnis Brandenburg: III/7-71-026
 | -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -

-- 
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
| Steve White +49(331)7499-202
| E-ScienceZi. 27  Villa Turbulenz 
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
| Astrophysikalisches Institut Potsdam (AIP)
| An der Sternwarte 16, D-14482 Potsdam
|
| Vorstand: Prof. Dr. Matthias Steinmetz, Peter A. Stolz
|
| Stiftung privaten Rechts, Stiftungsverzeichnis Brandenburg: III/7-71-026
| -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -



[GENERAL] SCALE: seeking booth attendees

2011-02-02 Thread Richard Broersma
On February 26th and 27th (Saturday and Sunday), PostgreSQL is assigned an
exhibitor booth.  Between the hours of 10:00 am and 6:00 pm, we need booth
coverage.

The call is going out for booth attendees.  This will be an excellent
opportunity to meet PostgreSQL community leaders as well as community
locals!

If you're interested, please reply with an email off-list.

-- 
Regards,
Richard Broersma Jr.


P.S.

Also, if you're in town on Friday, be sure to check out PgDay on Friday
February 25th.
https://sites.google.com/site/pgdayla/


Re: [GENERAL] Database Design Question

2011-02-02 Thread Martijn van Oosterhout
On Wed, Feb 02, 2011 at 11:44:51AM -0800, John R Pierce wrote:
 On 02/02/11 11:24 AM, Joshua D. Drake wrote:
 Forget separate databases. Use separate users with schemas.

 for canned applications like mediawiki and phpbb?   not sure they  
 support that.


If they use different users you can easily do it by setting the default
search path per user.

ALTER USER phpbb SET search_path='phpbbschema';

As long as the apps don't play with the search path themselves it
should be fine.

Have a nice day,
-- 
Martijn van Oosterhout   klep...@svana.org   http://svana.org/kleptog/
 Patriotism is when love of your own people comes first; nationalism,
 when hate for people other than your own comes first. 
   - Charles de Gaulle




Re: [GENERAL] redirecting query statement and output to a marked up file, using psql

2011-02-02 Thread Bosco Rama
Wim Bertels wrote:
 
 --user2
 SET SESSION AUTHORIZATION user2;
 \pset format latex
 \echo ECHO queries
 \o report/test_user2.tex
 \i structure/test_user2.sql
 
 
 This doesn't seem to work,
 as the ECHO queries output isn't written to the file (test_user2.tex)

Actions are performed as they are encountered so put the \echo *after* the
\o, like this:

   SET SESSION AUTHORIZATION user2;
   \pset format latex
   \o report/test_user2.tex
   \echo ECHO queries
   \i structure/test_user2.sql

HTH,
Bosco.



[GENERAL] effective_io_concurrency

2011-02-02 Thread Yves Weißig
Hi pgsql-general group,

I was wondering if there is more information about this switch in the
configuration. Does it really work? Where in the source code can I
follow how it works? smgr.c seems to be an entry point, but where
exactly is it used?

Greets, Yves



Re: [GENERAL] Why does a normally fast query run so slow when the table is in a partition?

2011-02-02 Thread Steve Crawford

On 02/02/2011 12:17 PM, Bill Thoen wrote:
I've got a large  (and growing) database set up as a partitioned 
database

What is the setting of constraint_exclusion?
http://www.postgresql.org/docs/9.0/static/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION

Cheers,
Steve


Re: [GENERAL] Why does a normally fast query run so slow when the table is in a partition?

2011-02-02 Thread Steve Crawford

On 02/02/2011 01:35 PM, Bill Thoen wrote:

Steve Crawford wrote:

On 02/02/2011 12:17 PM, Bill Thoen wrote:
I've got a large  (and growing) database set up as a partitioned 
database

What is the setting of constraint_exclusion?
http://www.postgresql.org/docs/9.0/static/runtime-config-query.html#GUC-CONSTRAINT-EXCLUSION 



Cheers,
Steve

It's set to 'Partition'

That sounds good. Out of curiosity, what happens if you use an explicit 
cast?:

...where 'co'::char(2)...

I've seen lots of cases where the planner doesn't use indexes when the 
data-type differs sometimes even subtly. Might be the same for 
constraint exclusion.
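
Something along these lines (table and partition names invented here) makes it easy to check: with constraint exclusion working, EXPLAIN should show only the matching child table being scanned.

```sql
-- hypothetical parent table partitioned by a char(2) state code
EXPLAIN
SELECT count(*)
FROM parcels
WHERE state = 'co'::char(2);  -- cast matches the partitions' CHECK constraints
```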


Cheers,
Steve




[GENERAL] PQfinish blocking on non-existent IP address ...

2011-02-02 Thread Mad
/**

How do I stop PQfinish from blocking?
(try any LAN IP address that doesn't exist on your LAN.)

I compiled it with both VC and MinGW, same result.

(change C:\Program Files (x86) to C:\Program Files for 32bit Windows OS)

Command Line Compile in VC 32bit:
cl x.cpp -I"C:\Program Files (x86)\PostgreSQL\8.4\include" -link "C:\Program Files (x86)\PostgreSQL\8.4\lib\libpq.lib"

Command Line Compile in MinGW 32bit:
g++ x.cpp -ox.exe -I"C:\Program Files (x86)\PostgreSQL\8.4\include" -L"C:\Program Files (x86)\PostgreSQL\8.4\lib" -llibpq
 
***/

#include <stdlib.h>
#include <stdio.h>
#include <libpq-fe.h>

int main(int na, char** sa){
    printf("Connecting ...\n");
    /* PQconnectStart only begins the attempt; it does not wait */
    PGconn* lpcn = PQconnectStart("dbname=postgres host=192.168.250.60");
    printf("Connected\n");
    printf("Calling PQfinish\n");
    PQfinish(lpcn);
    printf("PQfinished\n");
    return 0;
}





[GENERAL] isn't insert into where not exists atomic?

2011-02-02 Thread Mage

Hello,

I just received an error message:

  PGError: ERROR:  duplicate key value violates unique constraint 
"chu_user_id_chat_room_id"
DETAIL:  Key (user_id, chat_room_id)=(8, 2) already exists.
CONTEXT:  SQL statement "insert into chat_room_users (user_id, chat_room_id, 
active_at) (select NEW.user_id, NEW.chat_room_id, now() where not exists (select 1 from 
chat_room_users where user_id = NEW.user_id and chat_room_id = NEW.chat_room_id))"
PL/pgSQL function "trf_chat_room_users_insert" line 3 at SQL statement
: INSERT INTO chat_room_users (user_id, chat_room_id, active_at) VALUES 
(8, 2, NULL)


The important line is:
insert into chat_room_users (user_id, chat_room_id, active_at) (select 
NEW.user_id, NEW.chat_room_id, now() where not exists (select 1 from 
chat_room_users where user_id = NEW.user_id and chat_room_id = 
NEW.chat_room_id))


I always thought this was atomic and could not fail. Was I wrong?

If it isn't then I have to rewrite my triggers. Do I have to use lock 
table instead of the above to avoid errors in parallel inserts?


The trigger looks like:

create or replace function trf_chat_room_users_insert() returns trigger 
as $$

begin
if NEW.active_at is null then
insert into chat_room_users (user_id, chat_room_id, 
active_at) (select NEW.user_id, NEW.chat_room_id, now() where not exists 
(select 1 from chat_room_users where user_id = NEW.user_id and 
chat_room_id = NEW.chat_room_id));

if not found then
update chat_room_users set active_at = now() 
where user_id = NEW.user_id and chat_room_id = NEW.chat_room_id;

end if;
return null;
end if;
return NEW;
end;
$$ language plpgsql;

And it meant to be insert or update.

Mage




Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-02 Thread Aleksey Tsalolikhin
Thank you for the discussion.

I'm on Postgres 8.4, and the hardware between Slony master and slave
is identical,
as is the autovacuum config.

We do have transactions that fail to commit, transactions that roll back.

I'm glad to have some idea of the cause of the difference in table size
between Slony Master and Slave.

If disk usage on the Master goes over 75% before my upgrade money
is approved, I will try Bill Moran's suggestion of doing a practice vacuum
run on the Slave, and then we'll take a maintenance window to VACUUM,
VACUUM FULL, REINDEX; or CLUSTER on the master.

THANK YOU!  This is a super-helpful list.  I really appreciate the positive
energy in the PostgreSQL community.   I'm looking forward to helping out
at the PostgreSQL booth at the So Cal Linux Expo later this month.

Thank you!
Aleksey



Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-02 Thread Scott Marlowe
On Wed, Feb 2, 2011 at 10:45 AM, Chris Browne cbbro...@acm.org wrote:
 peter.geoghega...@gmail.com (Peter Geoghegan) writes:
 I'm not sure why you'd advocate CLUSTER as a way to reclaim disk space.

 Because it works pretty well; it reorganizes the table on the basis of
 the order indicated by one index, and simultaneously:
  a) Shortens the table, removing all dead space;
  b) Regenerates all indices, so they too have no dead space.

It's important at this point to set fill factor before the cluster if
something besides the default 100% makes sense.  any randomly updated
table full of small records will usually benefit from a fill factor even
as high as 95% which is very little wasted space for a gain in HOT
updates starting in 8.3.  HOT saved our bacon at work.  They really
lowered the requirements for disk access / index update a LOT.  I wish
I'd have saved the pg_stat_index from 8.1 versus 8.3.   And IO
numbers.  Our load dropped by a power of ten more or less.
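
Sketched out with a made-up table name, the sequence would be roughly:

```sql
-- leave ~5% of each heap page free so updates can stay on-page (HOT)
ALTER TABLE mytable SET (fillfactor = 95);

-- rewrite the table in index order; this also rebuilds its indexes
-- (the USING form is 8.4+ syntax)
CLUSTER mytable USING mytable_pkey;
ANALYZE mytable;
```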



Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-02 Thread Peter Geoghegan
On 2 February 2011 05:41, Scott Marlowe scott.marl...@gmail.com wrote:

 I wouldn't increase index fill factor as an optimisation, unless you
 had the unusual situation of having very static data in the table.

 That makes no sense whatsoever.  You decrease fill factor (not
 increase btw) so there will be some space for future updates.  If he's
 getting bloat it may well help quite a bit to have a lower than 100%
 fill factor.

As I said, it depends on the profile of the data. Heavily or randomly
updated tables will benefit from reducing *index* fillfactor - it will
reduce index fragmentation. OTOH, indexes for static data can have
their fillfactors increased to 100% from the default of 90% without
consequence.

-- 
Regards,
Peter Geoghegan



Re: [GENERAL] PQfinish blocking on non-existent IP address ...

2011-02-02 Thread Mad
Hmm ... it would appear that it is actually WSACleanup() that is taking
forever. I added a WSAStartup() and a WSACleanup(), and it hung for a while
on WSACleanup() instead of PQfinish() :)

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Mad
Sent: Wednesday, February 02, 2011 6:03 PM
To: pgsql-general@postgresql.org
Cc: mad...@schif.org
Subject: [GENERAL] PQfinish blocking on non-existent IP address ...



Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-02 Thread Scott Marlowe
On Wed, Feb 2, 2011 at 8:49 PM, Peter Geoghegan
peter.geoghega...@gmail.com wrote:
 On 2 February 2011 05:41, Scott Marlowe scott.marl...@gmail.com wrote:

 I wouldn't increase index fill factor as an optimisation, unless you
 had the unusual situation of having very static data in the table.

 That makes no sense whatsoever.  You decrease fill factor (not
 increase btw) so there will be some space for future updates.  If he's
 getting bloat it may well help quite a bit to have a lower than 100%
 fill factor.

 As I said, it depends on the profile of the data. Heavily or randomly
 updated tables will benefit from reducing *index* fillfactor - it will
 reduce index fragmentation. OTOH, indexes for static data can have
 their fillfactors increased to 100% from the default of 90% without
 consequence.


Certainly.  I was talking table fill factor at the time, so that's why
I wasn't sure what you meant.



[GENERAL] set theory question

2011-02-02 Thread matty jones
I am looking for a good book on the math and/or theory behind relational
databases and associated topics. I am looking for some works on set theory,
algebra, or any other books/papers on the mechanics that databases are built
on.  I found one book online,
http://web.cecs.pdx.edu/~maier/TheoryBook/TRD.html, The Theory of Relational
Databases by David Maier and that is what got me interested in the
foundations of database design and theory.

Thanks,

Matt


[GENERAL] upgrade

2011-02-02 Thread William Bruton
How do I know which version to upgrade to from 8.1.4?

 

Regards,

 

William Bruton

 

Data Retrieval Corporation

13231 Champion Forest Dr Suite 401

Houston Tx 77069

Tel: 281 444-5398

Fax: 281 444-5397

24 Hrs: 832 752-0074

http://www.spidr.com/

 mailto:d...@spidr.com d...@spidr.com

 

DOWNHOLE RESULTS WITHOUT DOWNHOLE GAUGES!

 



Re: [GENERAL] upgrade

2011-02-02 Thread Scott Marlowe
On Wed, Feb 2, 2011 at 12:08 PM, William Bruton d...@spidr.com wrote:
 How do I know which version to upgrade to from 8.1.4?

you should first update to 8.1.latest so you've got all the bug fixes
available.  It's pretty close to painless, and unless the release
notes between 8.1.5 and 8.1.latest say you need to do anything out of
the ordinary, it's just a software update, no admin work to do.

Upgrading to a later version I'd go for 8.2 or 9.0.  8.3 removed a lot
of implicit casts (to text) that some (broken really) programs need to work
right.  So going to 8.3 or beyond you're making a commitment to
hunting down such code and fixing it.

9.0 is WORLDS ahead of 8.2 though, and well worth the effort, if
needed, to move to.



Re: [GENERAL] isn't insert into where not exists atomic?

2011-02-02 Thread Alban Hertroys
On 3 Feb 2011, at 2:17, Mage wrote:

 The trigger looks like:
 
 create or replace function trf_chat_room_users_insert() returns trigger as $$
 begin
if NEW.active_at is null then
insert into chat_room_users (user_id, chat_room_id, active_at) 
 (select NEW.user_id, NEW.chat_room_id, now() where not exists (select 1 from 
 chat_room_users where user_id = NEW.user_id and chat_room_id = 
 NEW.chat_room_id));
if not found then
update chat_room_users set active_at = now() where 
 user_id = NEW.user_id and chat_room_id = NEW.chat_room_id;
end if;
return null;
end if;
return NEW;
 end;
 $$ language plpgsql;


Your trigger is the wrong way around. Insert doesn't set found, but update does.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.
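
For the record: the race Mage hit is that two sessions can both pass the NOT EXISTS test before either commits, so the second insert then trips the unique constraint. The usual workaround (pre-dating any built-in upsert) is a retry loop that traps unique_violation, along the lines of the example in the PL/pgSQL docs. A sketch, assuming the unique constraint is on (user_id, chat_room_id):

```sql
create or replace function trf_chat_room_users_insert() returns trigger as $$
begin
    if NEW.active_at is null then
        loop
            -- first try to touch an existing row
            update chat_room_users set active_at = now()
             where user_id = NEW.user_id and chat_room_id = NEW.chat_room_id;
            if found then
                return null;
            end if;
            -- no row yet: try to insert; retry on a concurrent insert
            begin
                insert into chat_room_users (user_id, chat_room_id, active_at)
                values (NEW.user_id, NEW.chat_room_id, now());
                return null;
            exception when unique_violation then
                -- another session got there first; loop back to the update
            end;
        end loop;
    end if;
    return NEW;
end;
$$ language plpgsql;
```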

