Re: [GENERAL] pg_dump

2011-09-13 Thread John R Pierce

On 09/12/11 7:40 PM, Rogel Nocedo wrote:


C:\dbbackup does not exist yet.

Please advise.




Make the directory first.  You can't write a file to a nonexistent 
directory.
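For example (a sketch, not from the original thread; the Unix path stands in for the Windows `C:\dbbackup13092011` from the post, where the equivalent command is `md C:\dbbackup13092011`):

```shell
# Create the dump directory before invoking pg_dump.
mkdir -p /tmp/dbbackup13092011

# Then the original command can write into it (commented out here,
# since it needs a running server):
# pg_dump -h localhost -p 5433 -U postgres \
#     -f /tmp/dbbackup13092011/add.sql --column-inserts -t address my_db

ls -d /tmp/dbbackup13092011   # confirms the directory now exists
```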




--
john r pierce                          N 37, W 122
santa cruz ca                          mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Alternative JDBC driver

2011-09-13 Thread Jimmy K.
Hello,

I asked this question on JDBC list, but it seems "dead".

I found an alternative JDBC driver marked as Beta 2 (actually last
bumped to Beta 2.1): http://softperience.eu/pages/cmn/ngpgjdbc.xhtml.
We are still testing it. These features are of main interest to us:
- binary transfer
- the possibility of loading a large bytea in a JVM with a small heap
(we loaded about 1GB in a JVM with 256MB)
- XA

But I would like to ask whether any of you have tested it and/or
compared it. We are looking for a general opinion on whether it's
suitable for JavaEE + JPA/Hibernate.

Regards,
J.



Re: [GENERAL] Alternative JDBC driver

2011-09-13 Thread Dave Cramer
I looked at the link.

Where is the code ? What license ?

Dave Cramer

dave.cramer(at)credativ(dot)ca
http://www.credativ.ca




On Tue, Sep 13, 2011 at 4:18 AM, Jimmy K.  wrote:
> Hello,
>
> I asked this question on JDBC list, but it seems "dead".
>
> I found alternative JDBC driver marked as Beta 2 (actually last time
> bumped to Beta 2.1) http://softperience.eu/pages/cmn/ngpgjdbc.xhtml.
> We still test it. Those features are, mainly, in our interest
> - binary transfer
> - possibility of loading large bytea in JVM with small heap (we loaded
> about 1GB in JVM with 256MB)
> - XA
>
> But, I would like to ask, if someone of You have tested it, and / or
> compared. We actually looking for general opinion if it's suitable for
> JavaEE + JPA/Hibernate
>
> Regards,
> J.
>
>



Re: [GENERAL] Problem with the 9.1 one-click installer Windows7 64bit

2011-09-13 Thread Thomas Kellerer

Merlin Moncure, 12.09.2011 21:28:

With the second attempt, the installer again hang during initdb. Checking
the state using ProcessExplorer I could see that the installer script was
waiting for icacls.exe to set permissions for the user currently running the
installer. It was running [icacls.exe "thomas"] instead of [icacls.exe
"mydomain\thomas"] - although I have to admit that I don't know if that
would make a difference.

So I killed icacls.exe and the script proceeded, just to hang at the next
call to icacls.exe when it tried to set the privileges on the directory for
the postgres user, despite the fact that that user was already the owner and
had full control over it. So I killed icacls.exe again and then the script
finally finished without problems. The service was registered and
successfully started.

UAC is turned off on my computer.



hm, why is icacls hanging? does it do that if you run it from the command line?



I re-ran the installer and copied the actual commandline the installer was 
using.

It turned out that it tries the following:
(Note that the selected data directory is c:\Daten\db\pgdata91)

   icacls  C:\ /grant "tkellerer":RX

(Changing the permission for the whole harddisk!!)

Then when I kill that process, the installer starts a new one with

  icacls  "C:\Daten" /grant "tkellerer":RX

Then I kill that one again, a new one is started with:

  icacls  C:\ /grant "postgres":RX

Then after killing that one, a new one is started with:

  icacls  "C:\Daten" /grant "postgres":RX

So it didn't actually hang, but changing the permissions for the entire 
hard disk (170GB used) and the complete c:\Daten (~20GB) directory twice would 
have taken ages.

It seems there is something fishy about the way the icacls.exe command line is 
assembled.

Regards
Thomas




Re: [GENERAL] pg_dump

2011-09-13 Thread Raymond O'Donnell
On 13/09/2011 03:40, Rogel Nocedo wrote:
> Hi!
>
> How can I specify a directory where my backup files will be
> placed?  I am calling pg_dump
> 
> C:\Program Files (x86)\PostgreSQL\9.0\bin>pg_dump -i -h localhost -p
> 5433 -U postgres -f add.sql --column-inserts -t address my_db
> 
> C:\Program Files (x86)\PostgreSQL\9.0\bin>pg_dump -i -h localhost -p
> 5433 -U postgres -f phone.sql --column-inserts -t phone my_db
> 
> I tried these, but they did not work.
> 
> C:\Program Files (x86)\PostgreSQL\9.0\bin>pg_dump -i -h localhost -p
> 5433 -U postgres -f C:\dbbackup13092011\add.sql --column-inserts -t
> address my_db
> 
> C:\Program Files (x86)\PostgreSQL\9.0\bin>pg_dump -i -h localhost -p
> 5433 -U postgres -f C:\dbbackup13092011\phone.sql --column-inserts -t
> phone my_db
> 
> C:\dbbackup does not exist yet.

You need to create the directory first - pg_dump won't create it for you.

BTW, if you're running a 64-bit OS (from the paths above), there's a
64-bit Windows version of PG.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Re: Need help with what I think is likely a simple query - for each distinct val, return only one record with the most recent date.

2011-09-13 Thread Vincent de Phily
On Monday 12 September 2011 22:51:54 Reid Thompson wrote:
> test=# select distinct on (val1) val1, val2, val3 from (SELECT
> max(val3) OVER (PARTITION BY val1), * FROM sampledata) as sq where val3
> = max order by val1;

Other things I've tried (I was limited to PG 8.3, so no "OVER (PARTITION...)" 
support):

SELECT DISTINCT ON (val1) val1, val2, val3 FROM table ORDER BY val1, val3 DESC;

SELECT val1, val2, val3 FROM table WHERE id IN (
   SELECT sq.i FROM (
      SELECT val1, max(val3) FROM table GROUP BY 1
   ) AS sq (v, i))

My case was a bit different since I wanted the record for DISTINCT(A,B) 
instead of just DISTINCT(A), and since I had a primary key available on the 
table. But let it be food for thought.



However, none of those queries is efficient or beautiful, so I ended 
up populating a "last_values" table via a trigger, which is far more efficient 
if it fits your needs:

CREATE TABLE last_values (foo integer PRIMARY KEY, bar integer, baz integer);

CREATE OR REPLACE FUNCTION insert_last_value() RETURNS TRIGGER AS $$
BEGIN
   UPDATE last_values SET val1=NEW.val1... WHERE ...;
   IF NOT found THEN
      BEGIN
         INSERT INTO last_values (...) VALUES (NEW.*);
      EXCEPTION
         WHEN unique_violation THEN
            UPDATE last_values SET ... WHERE ...;
      END;
   END IF;
   RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER last_values_trigger AFTER INSERT ON values FOR EACH ROW EXECUTE 
PROCEDURE insert_last_value();

-- 
Vincent de Phily



Re: [GENERAL] Has Pg 9.1.0 been released today?

2011-09-13 Thread Venkat Balaji
Yes. I would be excited to know if there is a possibility of a multi-master
replication system on Postgres.

We will soon be using 9.1 streaming replication.

Thanks
Venkat

On Tue, Sep 13, 2011 at 1:31 AM, Aleksey Tsalolikhin <
atsaloli.t...@gmail.com> wrote:

> Congratulations on the release of 9.1.0!
>
> Lots of great features, I for one can't wait to try out unlogged
> tables, that should help a lot in our environment.
>
> Now that you have streaming replication both async and sync, are you
> working on multi-master replication?  *excited*  Or what's the
> roadmap?
>
> Thanks again and keep up the great work!
>
> Aleksey
>
>


[GENERAL] Jenkins

2011-09-13 Thread salah jubeh
Hello.


This might be the wrong place to post my question, but any help is appreciated.

Has anyone used Jenkins for PostgreSQL unit testing? What unit-testing 
plug-ins are available, and is there a tutorial for that?

Thanks in advance

Re: [GENERAL] Has Pg 9.1.0 been released today?

2011-09-13 Thread Leonardo Carneiro
On Tue, Sep 13, 2011 at 8:12 AM, Venkat Balaji wrote:

> Yes. I would be excited to know if there is a possibility of multi-master
> replication system on Postgres.
>
> We will be soon using 9.1 Streaming replication.
>
> Thanks
> Venkat
>
>
> On Tue, Sep 13, 2011 at 1:31 AM, Aleksey Tsalolikhin <
> atsaloli.t...@gmail.com> wrote:
>
>> Congratulations on the release of 9.1.0!
>>
>> Lots of great features, I for one can't wait to try out unlogged
>> tables, that should help a lot in our environment.
>>
>> Now that you have streaming replication both async and sync, are you
>> working on multi-master replication?  *excited*  Or what's the
>> roadmap?
>>
>> Thanks again and keep up the great work!
>>
>> Aleksey
>>
>>
>
>
Is there any doc about this feature?

   - Add protocol support for sending file system backups to standby servers
   using the streaming replication network connection (Magnus Hagander, Heikki
   Linnakangas)
   This avoids the requirement of manually transferring a file system backup
   when setting up a standby server.

It's a very, VERY welcome feature. Thanks to all developers, testers and other
people involved in this great software.


Re: [GENERAL] 8.4.4 locked after power failure

2011-09-13 Thread Grzegorz Jaśkiewicz
I'm guessing I won't get much more from devs , without providing more
info here which unfortunately has been lost.




[GENERAL] Window Function API Performance

2011-09-13 Thread Schneider, Thilo
Dear all,

first let me thank the PostgreSQL developers for implementing the incredibly 
helpful window functions. This is one of the features I use most often.

But, as my requirements are perhaps a bit special, I always want more ;)
Now I am trying to implement my own window function using C-Language-Functions 
and the window function API, which, by the way, gives a very nice interface for 
developing my own window functions.

My function needs a very long time to execute while the underlying algorithm 
should be relatively fast.

Currently I access rows of the window partition using the 
WinGetFuncArgInPartition function. However, what I noticed is that this 
function seems incredibly slow when the partition looked at is rather large - 
for my case about 1245000 rows. I call that function about 10 times for each 
row at the first row of each partition (using more or less a random access 
schema) and compute some summary statistics. When I just randomly stop the 
running function with a debugger, it nearly always seems to be caught in 
BufFileRead(), thus my assumption is that this function and the call to 
WinGetFuncArgInPartition is the bottleneck of my function.

Is anybody able to explain why I get such bad runtimes? And even better, do you 
see a way to speed up WinGetFuncArgInPartition or replace it by something more 
performant? Would linear access be faster?

Thank you for your support.
Thilo Schneider

P.S.: I know my question might be better suited the hackers-list. However, as 
all questions should be asked elsewhere first, let's have a try :)




Fraport AG
Frankfurt Airport Services Worldwide
60547 Frankfurt am Main


Re: [GENERAL] Need help with what I think is likely a simple query - for each distinct val, return only the record with the most recent date.

2011-09-13 Thread jesuthefrog
I would think to do it something like:
   SELECT DISTINCT ON (val) val, date FROM sampledata ORDER BY val ASC, date DESC;

I haven't tested this, but it's similar to things I've done recently,
and I'm pretty sure this will do what you want.

On Mon, Sep 12, 2011 at 8:54 PM, Reid Thompson  wrote:
> Could someone point me in the right direction..
> Thanks - reid
>
> Given the example data,
> how do I write a query that will give me the resultset:
>
> 1    2011-01-01
> 2    2011-01-06
> 3    2011-01-05
> 4    2011-01-09
> 5    2011-01-05
> 6    2011-01-08
>
> I.E. for each distinct val, return the record with the most recent date.
>
> ex data
> val  date
> 1    2011-01-01
> 2    2011-01-02
> 3    2011-01-03
> 4    2011-01-04
> 5    2011-01-05
> 5    2011-01-01
> 4    2011-01-02
> 6    2011-01-03
> 4    2011-01-04
> 3    2011-01-05
> 2    2011-01-06
> 4    2011-01-07
> 6    2011-01-08
> 4    2011-01-09
> 5    2011-01-01
> 2    2011-01-02
> 4    2011-01-03
> 2    2011-01-04
> 1    2011-01-01
> 2    2011-01-02
> 3    2011-01-03
> 4    2011-01-04
> 3    2011-01-05
> 1    2011-01-01
> 2    2011-01-02
> 3    2011-01-03
> 4    2011-01-04
> 5    2011-01-01
> ---
>
> $ cat sampledata|sort -k1,2
> 1    2011-01-01
> 1    2011-01-01
> 1    2011-01-01
> 2    2011-01-02
> 2    2011-01-02
> 2    2011-01-02
> 2    2011-01-02
> 2    2011-01-04
> 2    2011-01-06
> 3    2011-01-03
> 3    2011-01-03
> 3    2011-01-03
> 3    2011-01-05
> 3    2011-01-05
> 4    2011-01-02
> 4    2011-01-03
> 4    2011-01-04
> 4    2011-01-04
> 4    2011-01-04
> 4    2011-01-04
> 4    2011-01-07
> 4    2011-01-09
> 5    2011-01-01
> 5    2011-01-01
> 5    2011-01-01
> 5    2011-01-05
> 6    2011-01-03
> 6    2011-01-08
>
>
>
>
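As a cross-check on any SQL answer, the expected resultset can also be derived outside the database, in the spirit of the `cat sampledata|sort` shown above. A small sketch (assuming whitespace-separated val/date columns): sort by val ascending and date descending, then keep the first line per val — the same effect as `SELECT DISTINCT ON (val) ... ORDER BY val, date DESC`:

```shell
# Recreate the sample data from the post:
cat > /tmp/sampledata <<'EOF'
1 2011-01-01
2 2011-01-02
3 2011-01-03
4 2011-01-04
5 2011-01-05
5 2011-01-01
4 2011-01-02
6 2011-01-03
4 2011-01-04
3 2011-01-05
2 2011-01-06
4 2011-01-07
6 2011-01-08
4 2011-01-09
5 2011-01-01
2 2011-01-02
4 2011-01-03
2 2011-01-04
1 2011-01-01
2 2011-01-02
3 2011-01-03
4 2011-01-04
3 2011-01-05
1 2011-01-01
2 2011-01-02
3 2011-01-03
4 2011-01-04
5 2011-01-01
EOF

# sort: val ascending, date descending (ISO dates sort lexically);
# awk keeps only the first line seen for each val, i.e. the most recent date.
sort -k1,1 -k2,2r /tmp/sampledata | awk '!seen[$1]++'
```

This prints the target resultset from the original question (1 2011-01-01, 2 2011-01-06, 3 2011-01-05, 4 2011-01-09, 5 2011-01-05, 6 2011-01-08).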






[GENERAL] PostgreSQL 9.1, replica and unlogged tables

2011-09-13 Thread Ferruccio Zamuner
I'm starting to play with PostgreSQL 9.1, thank you all for this nice 
and sweet piece of software.


I've two hosts in my cluster:

a) postgresql master
b) postgresql standby

I've created two tables on master:

create table test_logged (id serial, nome text);
create unlogged table test_unlogged (id serial, nome text);


Both tables appear on the standby too, but on the standby the following query:

select * from test_unlogged;

gives me the following message:

ERROR:  cannot access temporary or unlogged relations during recovery


I understand that unlogged tables are not replicated, but I expected either:
1) not to see the unlogged tables defined on the standby,
OR
2) to see them empty on the standby and to use them to store a different 
set of records on each standby (like web sessions) that need not be 
replicated in the cluster.


Robe on #postgresql suggested that I run another PostgreSQL instance on 
each cluster host node to store local volatile data (like web app sessions).

Is this actually the best option?


Thank you in advance,\ferz


PS: I've written some simple tests and have seen that inserts on 
unlogged tables are 10 times faster.




Re: [GENERAL] 8.4.4 locked after power failure

2011-09-13 Thread Merlin Moncure
2011/9/13 Grzegorz Jaśkiewicz :
> I'm guessing I won't get much more from devs , without providing more
> info here which unfortunately has been lost.

yup -- you destroyed all the evidence.  if it happens again, try
posting some more info, particularly what's going on with locks,
pg_stat_activity, maybe an strace of any stuck processes, etc.  also
irc can be a good place to get some quick help if it's time sensitive.

merlin



Re: [GENERAL] Re: Need help with what I think is likely a simple query - for each distinct val, return only one record with the most recent date.

2011-09-13 Thread Hannes Erven
Reid,


> where any one of these 3
>
> 11   2011-01-01
> 11   2011-01-01
> 13   2011-01-01
>
> or any one of these 2
> 31   2011-01-05
> 32   2011-01-05
>
> are suitable for val = 1, val = 3 respectively.


Can you please describe in words what you are trying to accomplish? When
I look at your data and expected output, I'd say you want this:

   For each distinct value of "val1", return any value of "val2" and
   the lowest value of "date".


This is actually quite simple - you could also use max(), avg(), ...
instead of min for val2:

SELECT val1, min(val2), min(date)
FROM data
GROUP BY val1


Best regards

-hannes



Re: [GENERAL] 8.4.4 locked after power failure

2011-09-13 Thread Grzegorz Jaśkiewicz
2011/9/13 Merlin Moncure :
> 2011/9/13 Grzegorz Jaśkiewicz :
>> I'm guessing I won't get much more from devs , without providing more
>> info here which unfortunately has been lost.
>
> yup -- you destroyed all the evidence.  if it happens again, try
> posting some more info, particularly what's going on with locks,
> pg_stat_activity, maybe an strace of any stuck processes, etc.  also
> irc can be a good place to get some quick help if its time sensitive.

See, I was hoping that just the fact that it occurred might trigger
some suspicions on the -devel side of things. I know it would in my
product.



-- 
GJ



Re: [GENERAL] 8.4.4 locked after power failure

2011-09-13 Thread Merlin Moncure
2011/9/13 Grzegorz Jaśkiewicz :
> 2011/9/13 Merlin Moncure :
>> 2011/9/13 Grzegorz Jaśkiewicz :
>>> I'm guessing I won't get much more from devs , without providing more
>>> info here which unfortunately has been lost.
>>
>> yup -- you destroyed all the evidence.  if it happens again, try
>> posting some more info, particularly what's going on with locks,
>> pg_stat_activity, maybe an strace of any stuck processes, etc.  also
>> irc can be a good place to get some quick help if its time sensitive.
>
> See, I was hoping that just the fact that it occurred might trigger
> some suspicions on the -devel side of things. I know it would in my
> product.

sure.  In lieu of useful and/or specific information, the #1 suspicion
is always going to be 'operator error', because, more often than not,
that's what it is.  Having seen and read your posts on the various
lists for some time now I know that if you report a problem then there
likely is a problem, but without more detail this is essentially a
fishing expedition :-).

merlin



[GENERAL] pg_restore must failure on attempt

2011-09-13 Thread artdias90
As I try to restore my data, I get the warning message:

pg_restore: [archiver] input file does not appear to be a valid archive (too
short?)


Ok, but the job ends with green status, and I want it to finish with an
error. Is there any way of using $? there?
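One hedged sketch of an answer: pg_restore returns a nonzero exit status on a fatal error, so the job script can test `$?` (or run under `set -e`) to propagate the failure. Here `sh -c 'exit 1'` stands in for a failing `pg_restore` invocation, since a real restore needs a server:

```shell
# Stand-in for:  pg_restore -d mydb backup.dump
status=0
sh -c 'exit 1' || status=$?

if [ "$status" -ne 0 ]; then
    echo "restore failed with status $status" >&2
    # a real job script would propagate it here:  exit "$status"
fi
```

The CI job then goes red because the script itself exits nonzero, rather than swallowing pg_restore's status.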

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/pg-restore-must-failure-on-attempt-tp4798842p4798842.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] PostgreSQL 9.1, replica and unlogged tables

2011-09-13 Thread Merlin Moncure
On Tue, Sep 13, 2011 at 9:11 AM, Ferruccio Zamuner  wrote:
> I'm starting to play with PostgreSQL 9.1, thank you all for this nice and
> sweet piece of software.
>
> I've two hosts in my cluster:
>
> a) postgresql master
> b) postgresql standby
>
> I've created two tables on master:
>
> create table test_logged (id serial, nome text);
> create unlogged table test_unlogged (id serial, nome text);
>
>
> Both tables appears on standby too but on standby following query:
>
> select * from test_unlogged;
>
> gives me following message:
>
> ERROR:  cannot access temporary or unlogged relations during recovery
>
>
> I understand that unlogged table are not replicated, but I expected:
> 1) not see defined unlogged tables on standby
> OR
> 2) see them void on standby and use them to store different set of records
> for each standby (like web sessions) those need not to be replicated in the
> cluster.
>
> Robe on #postgresql suggest me to run another postgresql instance on each
> custer host node to store local volatile data (like web app sessions).
> Is it this the best option actually?

depends.  The postgresql system tables which contain your schema are
replicated along with everything else which is why the table is
visible on the standby -- however the data itself is not replicated.
I somewhat prefer the existing behavior vs the alternatives you list
-- it just seems the most regular.

Writing to any table on the standby is strictly forbidden so you can
forget having your own volatile copy.  Regarding setting up a volatile
postgresql instance, that's too difficult to answer based on the
information given, I'd say only do that if you absolutely can't work
your requirements around a standard HS/SR setup.  One possible
workaround for managing volatile data on the standby would be using
function-managed data stores (like a pl/perl hash, etc.).  Note that
those data stores won't honor MVCC, so use caution.

merlin



Re: [GENERAL] Re: Need help with what I think is likely a simple query - for each distinct val, return only one record with the most recent date.

2011-09-13 Thread Reid Thompson
On Tue, 2011-09-13 at 16:39 +0200, Hannes Erven wrote:
> Reid,
> 
> 
> > where any one of these 3
> >
> > 11   2011-01-01
> > 11   2011-01-01
> > 13   2011-01-01
> >
> > or any one of these 2
> > 31   2011-01-05
> > 32   2011-01-05
> >
> > are suitable for val = 1, val = 3 respectively.
> 
> 
> Can you please describe in words what you are trying to accomplish? When
> I look at your data and expected output, I'd say you want this:
> 
>For each distinct value of "val1", return any value of "val2" and
>the lowest value of "date".

for each distinct value of "val1', return the highest value(most recent)
of "date" and the value of "val2" associated with that date






Re: [GENERAL] Has Pg 9.1.0 been released today?

2011-09-13 Thread John R Pierce

On 09/12/11 1:01 PM, Aleksey Tsalolikhin wrote:

Now that you have streaming replication both async and sync, are you
working on multi-master replication?


General-case multimaster replication is extremely hard to do 'right'.   
All solutions compromise data integrity and/or create huge performance 
bottlenecks.




--
john r pierce                          N 37, W 122
santa cruz ca                          mid-left coast




Re: [GENERAL] warm standby - apply wal archives

2011-09-13 Thread MirrorX
Just an update from my tests:

I restored from the backup. The db is about 2.5TB and the wal archives were
about 300GB. The recovery of the db was completed after 3 hours. Thanks to all
for your help.

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/warm-standby-apply-wal-archives-tp4770567p4799786.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.



Re: [GENERAL] Window Function API Performance

2011-09-13 Thread Tom Lane
"Schneider, Thilo"  writes:
> Currently I access rows of the window partition using the
> WinGetFuncArgInPartition function. However, what I noticed is that
> this function seems incredibly slow when the partition looked at is
> rather large - for my case about 1245000 rows.

It might help to increase work_mem ... if that's not large enough to
hold the whole partition in a tuplestore, performance will definitely
suffer.
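For instance (a sketch; the value is a hypothetical placeholder to be sized to the partition, not a recommendation):

```sql
-- Session-local bump so the ~1,245,000-row partition's tuplestore
-- fits in memory instead of spilling to BufFile:
SET work_mem = '512MB';
```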

Also, make sure you're using a release new enough to have this patch:

Author: Tom Lane 
Branch: master Release: REL9_1_BR [244407a71] 2010-12-10 11:33:38 -0500
Branch: REL9_0_STABLE Release: REL9_0_2 [14a58a1c9] 2010-12-10 11:33:38 -0500
Branch: REL8_4_STABLE Release: REL8_4_6 [999e82cc8] 2010-12-10 11:33:38 -0500

Fix efficiency problems in tuplestore_trim().

The original coding in tuplestore_trim() was only meant to work efficiently
in cases where each trim call deleted most of the tuples in the store.
Which, in fact, was the pattern of the original usage with a Material node
supporting mark/restore operations underneath a MergeJoin.  However,
WindowAgg now uses tuplestores and it has considerably less friendly
trimming behavior.  In particular it can attempt to trim one tuple at a
time off a large tuplestore.  tuplestore_trim() had O(N^2) runtime in this
situation because of repeatedly shifting its tuple pointer array.  Fix by
avoiding shifting the array until a reasonably large number of tuples have
been deleted.  This can waste some pointer space, but we do still reclaim
the tuples themselves, so the percentage wastage should be pretty small.

Per Jie Li's report of slow percent_rank() evaluation.  cume_dist() and
ntile() would certainly be affected as well, along with any other window
function that has a moving frame start and requires reading substantially
ahead of the current row.

Back-patch to 8.4, where window functions were introduced.  There's no
need to tweak it before that.

I think Hitoshi-san has some ideas for future performance improvements
in this area, but partitions bigger than working memory are probably
always going to be bad news.

regards, tom lane



[GENERAL] Query performance help with 'shadow table' approach.

2011-09-13 Thread Brian Fehrle

Hi all,
I've got a large table that has 15 million + rows in it, and a set 
of queries I've been trying to speed up. The table has a primary key 
column, and a couple hundred other columns.


These queries basically do a 'select max(primary_key_column) from table 
group by column1, column2'. Because of the group by, we end up with a 
sequential scan of the entire table, which proves to be costly.


Since the table has a ton of columns, I set up a smaller table that will 
house a copy of some of the data that the query uses: the primary key 
column, and the two columns I do my 'group by' on. My application is 
smart enough to update this 'shadow' table whenever the main table is 
updated, so it will accurately mirror the other table. This shadow table 
will also only contain one row for every column1 and column2 combination 
(due to the group by), and for those rows, will have the max of the 
primary key. Even with this, the 'shadow' table will have about 14 
million rows, compared to the 15 million in the main table.


Here is an example query that I'm working with:
postgres=# explain select T2.pkey_sid, T2.column54, T2.column44, 
T2.column67 FROM

public.mytable AS T2
JOIN public.mytable_shadow AS T3
ON (T2.pkey_sid = T3.pkey_sid)
WHERE T3.column1 >= 1072310434 AND T3.column1 <= 1074124834;
  QUERY PLAN
---
 Hash Join  (cost=118310.65..2250928.27 rows=409600 width=8)
   Hash Cond: (t2.pkey_sid = t3.pkey_sid)
   ->  Seq Scan on mytable t2  (cost=0.00..2075725.51 rows=15394251 
width=8)

   ->  Hash  (cost=113190.65..113190.65 rows=409600 width=8)
 ->  Bitmap Heap Scan on mytable_shadow t3 
(cost=12473.65..113190.65 rows=409600 width=8)
   Recheck Cond: ((1072310434 <= column1) AND (column1 <= 
1074124834))
   ->  Bitmap Index Scan on mytable_shadow_pkey  
(cost=0.00..12371.25 rows=409600 width=0)
 Index Cond: ((1072310434 <= column1) AND (column1 
<= 1074124834))

(8 rows)

So the issue here comes in retrieving the needed data from my main 
table. The number of resulting rows is estimated to be 409,600, and 
retrieving the primary keys associated with those rows is actually 
really easy. However, when we take those 409,600 rows back to the main 
table to retrieve the other columns I need, the planner is just doing a 
sequential scan as it's most likely going to be faster than hitting the 
index then retrieving the columns I need for all 400K+ rows.


Things to note:
1. If I reduce my where clause's range, then the sequential scan turns 
into an index scan, but sadly this can't always be done.
2. I have appropriate indexes where they need to be. The issue is in the 
query planner not using them due to it (i assume) just being faster to 
scan the whole table when the data set it needs is as large as it is.
3. Without this shadow table, my query would look _something_ like this 
(The idea being, retrieve a certain set of columns from the rows with 
the max(primary key) based on my group by):
select pkey_sid, column54, column44, column67 from public.mytable 
where pkey_sid in (select max(pkey_sid) from public.mytable group by 
column1, column2);
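If the goal is simply one row per (column1, column2) carrying the max primary key, a DISTINCT ON formulation may be worth comparing against the IN-subquery above. This is a sketch, not tested against this schema; it assumes pkey_sid is what should break ties, and it benefits from an index on (column1, column2, pkey_sid):

```sql
-- One row per (column1, column2), keeping the row with the largest pkey_sid:
SELECT DISTINCT ON (column1, column2)
       pkey_sid, column54, column44, column67
FROM public.mytable
ORDER BY column1, column2, pkey_sid DESC;
```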



So I need to see how I can speed this up. Is my approach misguided, or 
are there other ways I can go about it? Any thoughts, suggestions, or 
info would be greatly appreciated. And I tried to explain it all easily, 
if I can be more clear let me know.


Thanks,
- Brian F






Re: [GENERAL] Query performance help with 'shadow table' approach.

2011-09-13 Thread Ondrej Ivanič
Hi,

On 14 September 2011 07:44, Brian Fehrle  wrote:
> 2. I have appropriate indexes where they need to be. The issue is in the
> query planner not using them due to it (i assume) just being faster to scan
> the whole table when the data set it needs is as large as it is.

Try reducing random_page_cost to 2, which biases the planner towards
index scans (set random_page_cost = 2 before the query; the default
seq_page_cost and random_page_cost are 1 and 4 respectively), and run
"explain analyze". Sometimes it is worth disabling nested loop joins
(set enable_nestloop = off). Finally, you can increase
default_statistics_target (or use ALTER TABLE ... SET STATISTICS) to
100 (8.4 has this as the default) on selected columns or the table,
and run analyze on that table.
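
For reference, the suggestions above look roughly like this in a psql session. The table and column names are taken from the earlier post, and the values are starting points for experimentation, not tuned recommendations:

```sql
-- Session-local planner experiments; only the ALTER TABLE is persistent.
SET random_page_cost = 2;        -- default is 4; seq_page_cost defaults to 1

EXPLAIN ANALYZE
SELECT pkey_sid, column54, column44
FROM public.mytable
WHERE pkey_sid IN (SELECT max(pkey_sid)
                   FROM public.mytable
                   GROUP BY column1, column2);

SET enable_nestloop = off;       -- only to test whether nested loops are the bottleneck

-- Raise the statistics target on the grouped columns, then re-analyze:
ALTER TABLE public.mytable ALTER COLUMN column1 SET STATISTICS 100;
ALTER TABLE public.mytable ALTER COLUMN column2 SET STATISTICS 100;
ANALYZE public.mytable;
```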

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL benchmarked on XFS vs ZFS vs btrfs vs ext4

2011-09-13 Thread Toby Corkindale

Hi,
Some months ago, I ran some (probably naive) benchmarks looking at how 
pgbench performed on an identical system with differing filesystems. (on 
Linux).


Since then the kernel-level version of ZFS became usable, and there have 
been improvements to btrfs, and no doubt various updates in the Linux 
kernel and PostgreSQL that should help performance.


I ran the tests on Ubuntu 11.04 with Pg 9.0 first, then upgraded the 
system to Ubuntu 11.10 (beta) with Pg 9.1 and ran them again.


The latter combination showed a considerable performance improvement 
overall - although I didn't investigate to find out whether this was due 
to kernel improvements, postgres improvements, or virtio improvements.


The results are measured in transactions-per-second, with higher numbers 
being better.


Results:

ext4 (data=writeback,relatime):
  natty:   248
  oneiric: 297

ext4 (data=writeback,relatime,nobarrier):
  natty:   didn't test
  oneiric: 1409

XFS (relatime):
  natty:   didn't test
  oneiric: 171

btrfs (relatime):
  natty:   61.5
  oneiric: 91

btrfs (relatime,nodatacow):
  natty:   didn't test
  oneiric: 128

ZFS (defaults):
  natty:   171
  oneiric: 996


Conclusion:
Last time I ran these tests, xfs and ext4 pulled very similar results, 
and both were miles ahead of btrfs. This time around, ext4 has managed 
to get a significantly faster result than xfs.


However we have a new contender - ZFS performed *extremely* well on the 
latest Ubuntu setup - achieving triple the performance of regular ext4!
I'm not sure how it achieved this, and whether we're losing some kind of 
data protection (eg. like the "barrier" options in XFS and ext4).

If ext4 has barriers disabled, it surpasses even ZFS's high score.

Oddly, ZFS performed wildly differently on ubuntu 11.04 vs 11.10b. I 
can't explain this. Any ideas?



Cheers,
Toby

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.4.4 locked after power failure

2011-09-13 Thread Craig Ringer

On 09/13/2011 11:04 PM, Grzegorz Jaśkiewicz wrote:

2011/9/13 Merlin Moncure:

2011/9/13 Grzegorz Jaśkiewicz:

I'm guessing I won't get much more from devs , without providing more
info here which unfortunately has been lost.


yup -- you destroyed all the evidence.  if it happens again, try
posting some more info, particularly what's going on with locks,
pg_stat_activity, maybe an strace of any stuck processes, etc.  also
irc can be a good place to get some quick help if its time sensitive.


See, I was hoping that just the fact that it occurred might trigger
some suspicions on the -devel side of things. I know it would in my
product.


Of course it does. How would _you_ go about reproducing it or tracing it 
so you could even begin to track it down, though? There just isn't 
enough information to work with.


It's like someone contacting you and explaining that they saved a file 
containing a huge embedded image from your office suite software, but 
when they went to open the file later it crashed the office suite. They 
were in a hurry and didn't have the disk space to keep the old one, so 
they had to redo the work and save over the file. It's good to get the 
report, but ... what do you do about it? They don't have a faulty file, 
a reproducible test case, or any good prospects for creating one.


--
Craig Ringer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Has Pg 9.1.0 been released today?

2011-09-13 Thread Craig Ringer

On 09/13/2011 04:01 AM, Aleksey Tsalolikhin wrote:

Congratulations on the release of 9.1.0!

Lots of great features, I for one can't wait to try out unlogged
tables, that should help a lot in our environment.

Now that you have streaming replication both async and sync, are you
working on multi-master replication?  *excited*  Or what's the
roadmap?


I haven't heard anything about multimaster, but if you're interested, 
search the archives of the pgsql-hackers mailing list for discussions on it.


Personally I can't see SR being helpful as the base of an MM 
implementation; it's totally reliant on the idea of one server being the 
source of all WAL data. I suspect that MM would have to be a completely 
different affair. I know a lot of people are dubious about MM because of 
its inherent performance limitations and the difficulty of getting it to 
be both correct and even vaguely fast-ish. You need shared storage or 
some protocol for doing I/O via other masters; you need a network 
locking protocol that handles the numerous kinds of locking required and 
somehow does it fast, etc. The network locking protocol alone would be a 
major effort, especially as you'd want to do optimistic locking where 
possible. That's where transactions don't block; instead they fail on 
commit if a conflicting transaction already committed.


Personally I'm vaguely interested in the idea of selective replication, 
where some tables or databases can be omitted from replication (or even 
sent to a different replication client) but still WAL-logged and 
crash-safe on the master. Doing this with tablespace granularity would 
possibly make sense. Pg already stores heap data as individual files, 
one or more per index/table/etc, so if it could split WAL-logging out 
into per-tablespace logs then some clients could elect not to carry some 
tablespaces, and they could be treated as unlogged tables on that client.


That said, "vaguely interested in" means I haven't the time to even 
begin learning the appropriate parts of the codebase, nor the enthusiasm 
for it. I don't need the feature and don't even use SR, I just know some 
others would benefit from it and have seen requests for selective 
replication here before.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Has Pg 9.1.0 been released today?

2011-09-13 Thread Joshua D. Drake


On 09/13/2011 06:43 PM, Craig Ringer wrote:


On 09/13/2011 04:01 AM, Aleksey Tsalolikhin wrote:

Congratulations on the release of 9.1.0!

Lots of great features, I for one can't wait to try out unlogged
tables, that should help a lot in our environment.

Now that you have streaming replication both async and sync, are you
working on multi-master replication? *excited* Or what's the
roadmap?


I would take a look at postgres-r. It is under active and supported 
development.


Sincerely,

Joshua D. Drake
--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Need help with what I think is likely a simple query - for each distinct val, return only one record with the most recent date.

2011-09-13 Thread Eduardo Piombino
I'm sorry Reid, driving back home I realized that the previous query I
suggested didn't do what I was expecting, because it would compute all val2
values for each val1, even if they belonged to another group (not to a
particular val1/date pair), or in other words, to another date.

I considered this in the previous post but the resulting query appeared
too complex for such a simple task, so I disregarded it. But after all, it
seems the little extra work was necessary, because you wanted the exact
val2 associated with the max(date) for each val1.

I've come up with this alternative: basically a "key extractor" followed
by a join back into the original table to look up val2 for each key.
As I've said before, the data you provided showed that there can be
multiple rows for each key made of val1 and the max(date) for that
specific val1.

So, you will still have to define a criteria on which val2 to keep.
Since I don't know the reason for this query, I've suggested an array_agg so
that it is more clear to you.

select
   a.val1,
   a.date,
   -- given there can be multiple rows for any val1/date pair, this is
   -- where you are allowed to define which one you want (or all of them)
   array_agg(mytable.val2)
from (
   -- this gives you only the rows satisfying val1/max(date)
   select val1, max(date) as date from mytable group by val1
) a
   -- this join retrieves val2 for the "key" formed by val1/max(date),
   -- but be prepared for many rows
   inner join mytable on a.val1 = mytable.val1 and a.date = mytable.date
group by
   a.val1,
   a.date
order by
   val1;

test case:


select val1, val2, date from mytable;

1;16;"2011-09-13";1
1;15;"2011-09-13";2
1;14;"2011-09-13";3

1;23;"2011-09-12";4
1;22;"2011-09-12";5
1;21;"2011-09-12";6

2;6;"2011-09-13";7
2;5;"2011-09-13";8
2;4;"2011-09-13";9

2;3;"2011-09-14";10
2;2;"2011-09-14";11
2;1;"2011-09-14";12

This is the result coming from the new query, you see, only val2's of 14, 15
and 16 are computed (you will still eventually have to select one from it,
using a more specific aggregate, like max, min, avg, etc.)
1;"2011-09-13";"{14,15,16}"
2;"2011-09-14";"{1,2,3}"


This is the result coming from the original, simpler (but flawed) query,
which as it clearly shows computes val2's of 14, 15, 16, 23, 22 and 21,
the last 3 (23, 22 and 21) being from another date associated with val1
(not the max date, so they shouldn't have been considered at all).

select val1, max(date), array_agg(val2) from mytable group by val1

1;"2011-09-13";"{14,15,16,23,22,21}"
2;"2011-09-14";"{1,2,3,4,5,6}"
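

Yet another way to express this (available since PostgreSQL 8.4), sketched with a window function instead of the join back: row_number() picks exactly one row per val1, here the one with the latest date (ties on the date broken arbitrarily, as with the array_agg approach):

```sql
select val1, date, val2
from (
    select val1, date, val2,
           row_number() over (partition by val1 order by date desc) as rn
    from mytable
) t
where rn = 1
order by val1;
```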


hope it helps.
regards,
eduardo


On Tue, Sep 13, 2011 at 1:13 PM, Reid Thompson wrote:

> On Tue, 2011-09-13 at 16:39 +0200, Hannes Erven wrote:
> > Reid,
> >
> >
> > > where any one of these 3
> > >
> > > 11   2011-01-01
> > > 11   2011-01-01
> > > 13   2011-01-01
> > >
> > > or any one of these 2
> > > 31   2011-01-05
> > > 32   2011-01-05
> > >
> > > are suitable for val = 1, val = 3 respectively.
> >
> >
> > Can you please describe in words what you are trying to accomplish? When
> > I look at your data and expected output, I'd say you want this:
> >
> >For each distinct value of "val1", return any value of "val2" and
> >the lowest value of "date".
>
> for each distinct value of "val1", return the highest value (most recent)
> of "date" and the value of "val2" associated with that date
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] PostgreSQL benchmarked on XFS vs ZFS vs btrfs vs ext4

2011-09-13 Thread Andy Colson

On 09/13/2011 08:15 PM, Toby Corkindale wrote:


Did you test unplugging the power cable in the middle of a test to see which 
would come back up?

-Andy

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_restore must failure on attempt

2011-09-13 Thread Adrian Klaver
On Tuesday, September 13, 2011 7:57:44 am artdias90 wrote:
> As I try to restore my data, I get the warning message:
> 
> pg_restore: [archiver] input file does not appear to be a valid archive
> (too short?)
> 
> 
> Ok, but the job ends with green status, I want it to finish with error. Is
> there any way of using $? there?

First guess is you are using pg_restore to restore a text-based dump file.  If 
the database dump is in text format, you will need to use psql instead.
> 
> --


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with the 9.1 one-click installer Windows7 64bit

2011-09-13 Thread Craig Ringer

On 09/13/2011 02:44 PM, Thomas Kellerer wrote:


It turned out that it tries the following:
(Note that the selected data directory is c:\Daten\db\pgdata91)

icacls C:\ /grant "tkellerer":RX

(Changing the permission for the whole harddisk!!)


I forwarded your message to Dave Page in case the EDB folks needed to 
look into this. He commented that:


"Aside from the fact that icacls is hanging for reasons unknown, it 
appears to be doing what it is designed to do - it traverses up the path 
from the data directory to the root directory, granting read/execute 
permissions on each dir as it goes. It's a non-recursive grant (the 
default - /T makes it recurse), and is done because a great deal of the 
failures we used to see were caused by the user not having read or 
execute permissions on one of the parents of the data directory (or 
installation directory)."


... which to me explains why:


Then when I kill that process, the installer starts a new one with

icacls "C:\Daten" /grant "tkellerer":RX




So it didn't actually hang, but changing the permissions for the entire
harddisk (170GB taken) and the complete c:\Daten (~20GB) directory twice
would have taken ages.


As Dave noted, it's a non-recursive grant. It isn't changing the 
permissions for C:\ and everything under it. It's only changing the 
permissions for C:\ itself. It's then stepping down the path of 
parents down to the datadir and doing the same thing to make sure you 
have the required rights on every folder all the way down.


See:
http://technet.microsoft.com/en-us/library/cc753525(WS.10).aspx

and note the lack of the /t flag when icacls is called.

Given that, it's not clear why it's taking so long. A lock of some kind, 
maybe? Anyway, it's *not* recursively changing the permissions on your HDD.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] using trigger to change statusin one table from the modification in other table

2011-09-13 Thread Marcos Hercules Santos
Hello guys,

i'm trying to use trigger in these two tables below, in order to do
the following;

To cancel the booking of an accommodation, since the customer do the
host (status = "Cancelled") in the book_rooms table. Changing the
state of accommodation to "occupied" (Accommodation Table).


CREATE TABLE Accommodations
(IdAccommododation serial,
name varchar(20) not null,
high_season_price numeric(5,2) not null,
low_season_price numeric(5,2) not null,
Status varchar(12) not null check (Status in ('occupied', 'available', 'maintenance')),
PRIMARY KEY (IdAccommododation),
UNIQUE(name));

CREATE TABLE book_rooms
(Idbookroom serial,
IdHost int not null,
IdAccommododation int not null,
booking_date Date not null,
InitialDate Date not null,
EndDate Date not null,
Status varchar(10) not null check (Status in ('active', 'Cancelled')),
PRIMARY KEY (Idbookroom),
FOREIGN KEY (IdHost) REFERENCES Hosts(IdHost),
FOREIGN KEY (IdAccommododation) REFERENCES Accommodations(IdAccommododation),
UNIQUE(IdHost, IdAccommododation, booking_date));
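

A minimal sketch of such a trigger in PL/pgSQL. It follows the wording above and sets the accommodation's status to 'occupied' when a booking row is updated to 'Cancelled'; swap in 'available' if that is actually the intended state, since the description is ambiguous:

```sql
CREATE OR REPLACE FUNCTION sync_accommodation_status() RETURNS trigger AS $$
BEGIN
    -- Fire only on the transition into 'Cancelled'
    IF NEW.Status = 'Cancelled' AND OLD.Status <> 'Cancelled' THEN
        UPDATE Accommodations
           SET Status = 'occupied'   -- per the wording above; possibly 'available'
         WHERE IdAccommododation = NEW.IdAccommododation;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER book_rooms_status_sync
    AFTER UPDATE OF Status ON book_rooms
    FOR EACH ROW
    EXECUTE PROCEDURE sync_accommodation_status();
```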



Any help will be appreciated.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem with the 9.1 one-click installer Windows7 64bit

2011-09-13 Thread Thomas Kellerer

Craig Ringer, 14.09.2011 06:20:

I forwarded your message to Dave Page in case the EDB folks needed to
look into this. He commented that:

"Aside from the fact that icacls is hanging for reasons unknown, it
appears to be doing what it is designed to do - it traverses up the
path from the data directory to the root directory, granting
read/execute permissions on each dir as it goes. It's a non-recursive
grant (the default - /T makes it recurse), and is done because a
great deal of the failures we used to see were caused by the user not
having read or execute permissions on one of the parents of the data
directory (or installation directory)."

... which to me explains why:


Then when I kill that process, the installer starts a new one with

icacls "C:\Daten" /grant "tkellerer":RX


As Dave noted, it's a non-recursive grant. It isn't changing the
permissions for C:\ and everything under it. It's only changing the
permissions for C:\ itself. It's then stepping down the path of
parents down to the datadir and doing the same thing to make sure you
have the required rights on every folder all the way down.

See: http://technet.microsoft.com/en-us/library/cc753525(WS.10).aspx

and note the lack of the /t flag when icacls is called.

Given that, it's not clear why it's taking so long. A lock of some
kind, maybe? Anyway, it's *not* recursively changing the permissions
on your HDD.


Thanks for the feedback.

Those steps do make sense - and I appreciate the installer doing that, as there 
were a lot of problems in various forums regarding the permissions ;)

But I have no idea why it's hanging (or taking that long).

I ran the installer on a Windows XP box (32bit) that is not part of a domain, 
and there everything worked just fine.

I currently don't have time to re-test this and wait until it finishes (or 
check what icacls.exe is waiting for). But maybe next week I can try to find the 
real reason for that.

Regards
Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general