[GENERAL] DDL statement blocked on long-running query with no runtime dependency

2014-05-06 Thread Tim Kane
Hi all,

I’ve just noticed that some DDL statements will block if there are other
queries accessing the relation in question, even where those queries are not
dependent on the underlying structures (such as a new index, created after
the initial query).

For instance…

 SELECT long_running_query FROM foo;   -- Query A,
 
 --   meanwhile…
 
 CREATE UNIQUE INDEX ON foo USING btree (blah);
 -- Fine, no problem
 
 DROP INDEX foo_blah_idx;
 -- The above DDL statement will block until Query A completes, even though the
 query cannot possibly rely on the newly created index
 


Is this behaviour by design? Is it worth allowing DDL statements to occur
where it can be determined that no currently running query will be affected?

Cheers,

Tim




[GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread Marcus Engene

Hi,

I have a local db behind a firewall etc. Basically, I'd like to do what 
I would do locally...


create table abc
as
select
*
from
local_expensive_view;

abc - on RDS
local_expensive_view - on local machine

How would you go about doing this?

Thanks,
Marcus



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DDL statement blocked on long-running query with no runtime dependency

2014-05-06 Thread Tom Lane
Tim Kane tim.k...@gmail.com writes:
 [ DROP INDEX requires exclusive lock on index's table ]

 Is this behaviour by design?

Yes.  Even if you assumed that the DROP had complete information about
all concurrent queries (which it does not, and we could not cheaply
provide that), it would still be possible for an incoming query to start
using the index immediately after DROP looks.  Table-level locks are the
only means we have to prevent such race conditions.

In theory we could narrow the scope of the exclusive locking to just
the targeted index, but that would not actually help much in practice:
all incoming queries would still need to conflict with the DROP, because
the planner will want to inspect every index on the table to see if it's
potentially useful for the new query.

Recent versions of PG do have a DROP INDEX CONCURRENTLY operation, which
goes through a multiple-transaction sequence involving marking the index
invalid, and a lot of waiting, in order to avoid blocking other
transactions.  I'm not sure that that answers your concern though, as
what you seem to want is for the DROP to not wait.

regards, tom lane
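A minimal sketch of the concurrent variant Tom mentions, using the index name from the example upthread (note that DROP INDEX CONCURRENTLY cannot run inside a transaction block):

```sql
-- Waits for queries already using the index to finish, instead of
-- taking an exclusive lock that blocks everything on the table.
DROP INDEX CONCURRENTLY foo_blah_idx;
```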




[GENERAL] Server process crash - Segmentation fault

2014-05-06 Thread Leif Jensen
   Hello.

   I was running PostgreSQL 9.1.4 when I got a server process crash 
(Segmentation fault), as shown in the postgres log below. I tried upgrading to the 
newest version, 9.3.4, but it gives exactly the same problem.

   It is an (ecpg-based) C program that does tons of these scroll cursor 
exercises. Until recently this worked too, but changes to a totally different part 
of the program made this happen. (I have made way too many changes to this 
other part to be able to roll back the code :-( ). The system generates data 
all the time for this lookup, but I can grab the SQL from the postgres log and 
run it through psql and get the result I expect, so I don't see how it can be 
data related.

   Please help,

 Leif

.
.
.
22864 2014-05-06 15:37:35.350 CEST LOG:  statement: close execcurs
22864 2014-05-06 15:37:35.350 CEST LOG:  statement: deallocate 
ApplDBConn_22854_f6adeb70_query
22864 2014-05-06 15:37:35.352 CEST DEBUG:  parse 
ApplDBConn_22854_f6adeb70_query: SELECT data_type FROM 
information_schema.columns WHERE table_name = 'l2_hb_water_hours_sum' AND 
column_name = '';
22864 2014-05-06 15:37:35.353 CEST LOG:  statement: declare execcurs  scroll 
cursor  for SELECT data_type FROM information_schema.columns WHERE table_name = 
'l2_hb_water_hours_sum' AND column_name = '
';
22864 2014-05-06 15:37:35.356 CEST LOG:  statement: fetch first in execcurs
22864 2014-05-06 15:37:35.358 CEST LOG:  statement: close execcurs
22864 2014-05-06 15:37:35.358 CEST LOG:  statement: deallocate 
ApplDBConn_22854_f6adeb70_query
22864 2014-05-06 15:37:35.359 CEST LOG:  statement: commit
22864 2014-05-06 15:37:35.359 CEST LOG:  statement: start transaction read only
22864 2014-05-06 15:37:35.360 CEST DEBUG:  parse 
ApplDBConn_22854_f6adeb70_query: SELECT montime, year, month, day, hh, 
gal_hour, exp_hour, unsched_hour FROM l2_hb_water_hours_sum  WHERE  l2_hb_water_
hours_sum.ctrlid =  86  ORDER BY year,month,day,hh OFFSET (SELECT CASE WHEN 
count(*) > 2000 THEN count(*) -2000 ELSE 0 END FROM l2_hb_water_hours_sum
WHERE  l2_hb_water_hours_sum.ctrlid =  86 );
22864 2014-05-06 15:37:35.365 CEST LOG:  statement: declare execcurs  scroll 
cursor  for SELECT montime, year, month, day, hh, gal_hour, exp_hour, 
unsched_hour FROM l2_hb_water_hours_sum  WHERE  l2_hb
_water_hours_sum.ctrlid =  86  ORDER BY year,month,day,hh OFFSET (SELECT CASE 
WHEN count(*) > 2000 THEN count(*) -2000 ELSE 0 END FROM l2_hb_water_hours_sum  
  WHERE  l2_hb_water_hours_sum.ctrlid =  8
6 );
22864 2014-05-06 15:37:35.432 CEST LOG:  statement: fetch first in execcurs
21702 2014-05-06 15:37:35.440 CEST DEBUG:  server process (PID 22864) was 
terminated by signal 11: Segmentation fault
21702 2014-05-06 15:37:35.440 CEST DETAIL:  Failed process was running: fetch 
first in execcurs
21702 2014-05-06 15:37:35.440 CEST LOG:  server process (PID 22864) was 
terminated by signal 11: Segmentation fault
21702 2014-05-06 15:37:35.440 CEST DETAIL:  Failed process was running: fetch 
first in execcurs
21702 2014-05-06 15:37:35.440 CEST LOG:  terminating any other active server 
processes
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 22896
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 22893
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 22882
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 22751
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 22749
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 22748
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 21705
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 21704
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 21706
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 21707
22751 2014-05-06 15:37:35.440 CEST WARNING:  terminating connection because of 
crash of another server process
22751 2014-05-06 15:37:35.440 CEST DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because another 
server process exited abnormally and possibl
y corrupted shared memory.
22751 2014-05-06 15:37:35.440 CEST HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.
21702 2014-05-06 15:37:35.440 CEST DEBUG:  sending SIGQUIT to process 21708




Re: [GENERAL] Server process crash - Segmentation fault

2014-05-06 Thread Adrian Klaver

On 05/06/2014 07:08 AM, Leif Jensen wrote:

Hello.

I was running PostgreSQL 9.1.4 when I got a server process crash 
(Segmentation fault), as shown in the postgres log below. I tried upgrading to the 
newest version, 9.3.4, but it gives exactly the same problem.

It is an (ecpg-based) C program that does tons of these scroll cursor exercises. 
Until recently this worked too, but changes to a totally different part of the program 
made this happen. (I have made way too many changes to this other part to be able to 
roll back the code :-( ). The system generates data all the time for this 
lookup, but I can grab the SQL from the postgres log and run it through psql and get the 
result I expect, so I don't see how it can be data related.

Please help,

  Leif

.
.
.
22864 2014-05-06 15:37:35.350 CEST LOG:  statement: close execcurs
22864 2014-05-06 15:37:35.350 CEST LOG:  statement: deallocate 
ApplDBConn_22854_f6adeb70_query
22864 2014-05-06 15:37:35.352 CEST DEBUG:  parse 
ApplDBConn_22854_f6adeb70_query: SELECT data_type FROM 
information_schema.columns WHERE table_name = 'l2_hb_water_hours_sum' AND 
column_name = '';
22864 2014-05-06 15:37:35.353 CEST LOG:  statement: declare execcurs  scroll 
cursor  for SELECT data_type FROM information_schema.columns WHERE table_name = 
'l2_hb_water_hours_sum' AND column_name = '
';
22864 2014-05-06 15:37:35.356 CEST LOG:  statement: fetch first in execcurs
22864 2014-05-06 15:37:35.358 CEST LOG:  statement: close execcurs
22864 2014-05-06 15:37:35.358 CEST LOG:  statement: deallocate 
ApplDBConn_22854_f6adeb70_query
22864 2014-05-06 15:37:35.359 CEST LOG:  statement: commit
22864 2014-05-06 15:37:35.359 CEST LOG:  statement: start transaction read only
22864 2014-05-06 15:37:35.360 CEST DEBUG:  parse 
ApplDBConn_22854_f6adeb70_query: SELECT montime, year, month, day, hh, 
gal_hour, exp_hour, unsched_hour FROM l2_hb_water_hours_sum  WHERE  l2_hb_water_
hours_sum.ctrlid =  86  ORDER BY year,month,day,hh OFFSET (SELECT CASE WHEN 
count(*) > 2000 THEN count(*) -2000 ELSE 0 END FROM l2_hb_water_hours_sum
WHERE  l2_hb_water_hours_sum.ctrlid =  86 );
22864 2014-05-06 15:37:35.365 CEST LOG:  statement: declare execcurs  scroll 
cursor  for SELECT montime, year, month, day, hh, gal_hour, exp_hour, 
unsched_hour FROM l2_hb_water_hours_sum  WHERE  l2_hb
_water_hours_sum.ctrlid =  86  ORDER BY year,month,day,hh OFFSET (SELECT CASE WHEN 
count(*) > 2000 THEN count(*) -2000 ELSE 0 END FROM l2_hb_water_hours_sum
WHERE  l2_hb_water_hours_sum.ctrlid =  8
6 );


The code that generates the above would be helpful. The thing that 
catches my eye is that the first time you use 
ApplDBConn_22854_f6adeb70_query the parse and cursor queries are the 
same and all is good. The second time they are not, and you get a 
failure. Without seeing what is going on in your code it is hard to tell 
whether this is significant or not.



22864 2014-05-06 15:37:35.432 CEST LOG:  statement: fetch first in execcurs
21702 2014-05-06 15:37:35.440 CEST DEBUG:  server process (PID 22864) was 
terminated by signal 11: Segmentation fault
21702 2014-05-06 15:37:35.440 CEST DETAIL:  Failed process was running: fetch 
first in execcurs
21702 2014-05-06 15:37:35.440 CEST LOG:  server process (PID 22864) was 
terminated by signal 11: Segmentation fault
21702 2014-05-06 15:37:35.440 CEST DETAIL:  Failed process was running: fetch 
first in execcurs
21702 2014-05-06 15:37:35.440 CEST LOG:  terminating any other active server 
processes



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread bricklen
On Tue, May 6, 2014 at 5:52 AM, Marcus Engene meng...@engene.se wrote:

 Hi,

 I have a local db behind a firewall etc. Basically, I'd like to do what
 I would do locally...

 create table abc
 as
 select
 *
 from
 local_expensive_view;

 abc - on RDS
 local_expensive_view - on local machine

 How would you go about doing this?

 Thanks,
 Marcus


A very quick search shows that rds supports dblink, so perhaps that would
work.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html


Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread Marcus Engene

On 06/05/14 16:58, bricklen wrote:


A very quick search shows that rds supports dblink, so perhaps that 
would work.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html


Then I'd need to open our servers to external visits. It would be lovely 
if dblink_exec could push a subselect of data instead of pulling 
from RDS. Does this make sense?


Thanks,
Marcus





Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread bricklen
On Tue, May 6, 2014 at 8:07 AM, Marcus Engene meng...@engene.se wrote:

 On 06/05/14 16:58, bricklen wrote:


 A very quick search shows that rds supports dblink, so perhaps that would
 work.
 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/
 CHAP_PostgreSQL.html


 Then I'd need to open our servers to external visits. It would be lovely
 if dblink_exec could push a subselect of data instead of pulling from
 RDS. Does this make sense?


Is the idea to pull data from the RDS to your local machine? If so, dblink
or possibly plproxy[1] should be able to do that.

[1] https://wiki.postgresql.org/wiki/PL/Proxy


Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread Marcus Engene

On 06/05/14 17:15, bricklen wrote:
On Tue, May 6, 2014 at 8:07 AM, Marcus Engene meng...@engene.se 
mailto:meng...@engene.se wrote:


On 06/05/14 16:58, bricklen wrote:


A very quick search shows that rds supports dblink, so perhaps
that would work.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html


Then I'd need to open our servers to external visits. It would be
lovely if dblink_exec could push a subselect of data instead of
pulling from RDS. Does this make sense?


Is the idea to pull data from the RDS to your local machine? If so, 
dblink or possibly plproxy[1] should be able to do that.


[1] https://wiki.postgresql.org/wiki/PL/Proxy

Sorry, no, I want to push data from my local machine, but I've only seen 
examples of push with dblink_exec and literal values rather than a 
subselect or something.


Best regards,
Marcus
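
For reference, one common way to push rather than pull, needing only an outbound connection from the local machine to RDS, is to pipe COPY between two psql sessions. This is only a sketch: the host, user, and database names are placeholders, and it assumes the target table abc already exists on RDS:

```shell
# Stream the view's rows out of the local server and straight into RDS;
# no inbound access to the local machine is required.
psql -d localdb \
     -c "COPY (SELECT * FROM local_expensive_view) TO STDOUT" \
  | psql -h myinstance.example.rds.amazonaws.com -U rdsuser -d rdsdb \
     -c "COPY abc FROM STDIN"
```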



Re: [GENERAL] copy expensive local view to an RDS instance

2014-05-06 Thread Paul Jungwirth
 A very quick search shows that rds supports dblink
 Then I'd need to open our servers to external visits.

This is sort of getting away from Postgres, but if the RDS instance is
in a VPC, you could put a VPN on the VPC so dblink wouldn't have to go
over the open Internet.

Paul


On Tue, May 6, 2014 at 8:07 AM, Marcus Engene meng...@engene.se wrote:
 On 06/05/14 16:58, bricklen wrote:


 A very quick search shows that rds supports dblink, so perhaps that would
 work.
 http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html


 Then I'd need to open our servers to external visits. It would be lovely if
 dblink_exec could push a subselect of data instead of pulling from RDS.
 Does this make sense?


 Thanks,
 Marcus






-- 
_
Pulchritudo splendor veritatis.




[GENERAL] DDL statement blocked on long-running query with no runtime dependency

2014-05-06 Thread Tim Kane
Hi all,

I’ve just noticed that some DDL statements will block if there are other 
queries accessing the relation in question, even where those queries are not 
dependent on the underlying structures.

For instance…

SELECT long_running_query FROM foo;   -- Query A,

--   meanwhile…

CREATE UNIQUE INDEX ON foo USING btree (blah);
-- Fine, no problem

DROP INDEX foo_blah_idx;
-- The above DDL statement will block until Query A completes, even though the 
query cannot possibly rely on the newly created index



Is this behaviour by design? Is it worth allowing DDL statements to occur where 
it can be determined that no currently running query will be affected?






Tim Kane
Senior Development Engineer Lead
t: +44 20 7092 8700
m:
blinkbox music - the easiest way to listen to the music you love, for free
www.blinkboxmusic.com



[GENERAL] HP-UX 11.23

2014-05-06 Thread Steve Wilkins
Hello,

Is there a depot available for 11.23?  I have the source for PostgreSQL 9.0.3 
and am trying to just get the ODBC shared library to compile, but I am having 
several issues; so, a depot would be great.

Thank you,

Steve Wilkins
Software Engineer III
EVRAZ INC. NA

Phone:  (503) 978-6144
Fax:   (503) 240-5250
Email:   steve.wilk...@evrazincna.com



Re: [GENERAL] HP-UX 11.23

2014-05-06 Thread Adrian Klaver

On 05/06/2014 07:07 AM, Steve Wilkins wrote:

Hello,

Is there a depot available for 11.23?  I have the source for PostgreSQL
9.0.3 and am trying to just get the ODBC shared library to compile, but I
am having several issues; so, a depot would be great.


The only one I could find is for Postgres 9.3.2 and 9.3.4:

http://hpux.connect.org.uk/hppd/hpux/Users/postgresql-9.3.2/

http://hpux.connect.org.uk/hppd/hpux/Users/postgresql-9.3.4/

If the issue is with the Postgres ODBC library I would suggest asking on 
the pgsql-odbc list:


http://www.postgresql.org/list/pgsql-odbc/




Thank you,

*Steve Wilkins*

Software Engineer III

*EVRAZ INC. NA*

Phone:  (503) 978-6144

Fax:   (503) 240-5250

Email: steve.wilk...@evrazincna.com




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] any psql \copy tricks for default-value columns without source data?

2014-05-06 Thread David G Johnston
So, I am trying to import a file into a table and want to assign a sequence
value to each record as it is imported.

I know that I can pre-process the input file and simply add the needed data
but I am curious if maybe there is some trick to having defaults populate
for missing columns WITHOUT explicitly specifying each and every column that
is present?

Thanks!

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/any-psql-copy-tricks-for-default-value-columns-without-source-data-tp5802795.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] any psql \copy tricks for default-value columns without source data?

2014-05-06 Thread Ryan Kelly
On Tue, May 05/06/14, 2014 at 01:22:20PM -0700, David G Johnston wrote:
 So, I am trying to import a file into a table and want to assign a sequence
 value to each record as it is imported.

 I know that I can pre-process the input file and simply add the needed data
 but I am curious if maybe there is some trick to having defaults populate
 for missing columns WITHOUT explicitly specifying each and every column that
 is present?

There isn't really any way to get around specifying the columns. I wrote
a tool to help import files into the database, you could probably make
use of it:

https://github.com/f0rk/csv2table

Assuming the table already exists and your csv has columns with the same
names as the columns in your table:

csv2table --file /path/to/your/file.csv --no-create --copy --backslash | psql

If the table doesn't exist, you could do:

csv2table --file /path/to/your/file.csv --copy --backslash | vipe | psql

And edit the create statement to add a SERIAL column (the copy command
will specify all of the columns in the file for you).

It's a 95% solution I use to get delimited files into the database. It's
not perfect but it works most of the time. If you have any issues or
feature requests feel free to open an issue on github.

-Ryan Kelly
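
For completeness, a sketch of the explicit-column-list approach Ryan describes (table and file names here are hypothetical): any column omitted from the list receives its default, so a serial key fills in automatically:

```sql
-- id is a serial column and is not listed in the \copy below, so it
-- takes its sequence default for every imported row.
CREATE TABLE import_target (
    id serial PRIMARY KEY,
    a  text,
    b  integer
);

-- psql client-side copy; list only the columns present in the file.
\copy import_target (a, b) FROM 'data.csv' WITH (FORMAT csv)
```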





Re: [GENERAL] any psql \copy tricks for default-value columns without source data?

2014-05-06 Thread John R Pierce

On 5/6/2014 1:22 PM, David G Johnston wrote:

I know that I can pre-process the input file and simply add the needed data
but I am curious if maybe there is some trick to having defaults populate
for missing columns WITHOUT explicitly specifying each and every column that
is present?


if you didn't specify the columns in your file, how would you expect it 
to know what's there and not there?




--
john r pierce  37N 122W
somewhere on the middle of the left coast





[GENERAL] Re: any psql \copy tricks for default-value columns without source data?

2014-05-06 Thread David G Johnston
On Tue, May 6, 2014 at 4:48 PM, John R Pierce [via PostgreSQL] 
ml-node+s1045698n5802802...@n5.nabble.com wrote:

 On 5/6/2014 1:22 PM, David G Johnston wrote:
  I know that I can pre-process the input file and simply add the needed
 data
  but I am curious if maybe there is some trick to having defaults
 populate
  for missing columns WITHOUT explicitly specifying each and every column
 that
  is present?

 if you didn't specify the columns in your file, how would you expect it
 to know whats there and not there?


The default copy behavior is column-order dependent.  If your input file
has 10 columns and the table has 10 columns they get matched up 1-to-1 and
everything works just fine.  It would be nice if there was some way to say
that if the table has 12 columns but the file has 10 columns that the first
10 columns of the table get matched to the file and the remaining two
columns use their default values; that way you can add default columns to
the end of the table and still do an auto-matching import.

David J.
​




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/any-psql-copy-tricks-for-default-value-columns-without-source-data-tp5802795p5802804.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

Re: [GENERAL] Monitoring Pg servers with Microsoft SCOM

2014-05-06 Thread Glen Eustace

On 7/05/2014, at 3:07 am, pgsql-general-ow...@postgresql.org wrote:

 I would just use check_postgres (perl) from your agent script...

Having downloaded and had a look at this script I would agree.  I'll let 
check_postgres do the heavy lifting and just try to build an MP that 
collects the output from the various actions.

Thanks.
 



Re: [GENERAL] DDL statement blocked on long-running query with no runtime dependency

2014-05-06 Thread Tim Kane

Thanks Tom, I see there are a lot of considerations. I guess I just found it
to be an interesting scenario, although not entirely unexpected. It’s not a
big deal of course.

Cheers,

Tim


From:  Tom Lane t...@sss.pgh.pa.us
Date:  Tuesday, 6 May 2014 15:05
To:  Tim Kane tim.k...@gmail.com
Cc:  pgsql-general General pgsql-general@postgresql.org
Subject:  Re: [GENERAL] DDL statement blocked on long-running query with no
runtime dependency

Tim Kane tim.k...@gmail.com writes:
  [ DROP INDEX requires exclusive lock on index's table ]

  Is this behaviour by design?

Yes.  Even if you assumed that the DROP had complete information about
all concurrent queries (which it does not, and we could not cheaply
provide that), it would still be possible for an incoming query to start
using the index immediately after DROP looks.  Table-level locks are the
only means we have to prevent such race conditions.

In theory we could narrow the scope of the exclusive locking to just
the targeted index, but that would not actually help much in practice:
all incoming queries would still need to conflict with the DROP, because
the planner will want to inspect every index on the table to see if it's
potentially useful for the new query.

Recent versions of PG do have a DROP INDEX CONCURRENTLY operation, which
goes through a multiple-transaction sequence involving marking the index
invalid, and a lot of waiting, in order to avoid blocking other
transactions.  I'm not sure that that answers your concern though, as
what you seem to want is for the DROP to not wait.

regards, tom lane





[GENERAL] Oracle to PostgreSQL replication

2014-05-06 Thread Sameer Kumar
Hi,


I need to setup a replication process for continuously replicating changes
happening in an Oracle Database to a PostgreSQL database.


My Oracle database is version 11.2, set up as a cluster with RAC.
My Postgres database version is 9.2.

Oracle is running on Solaris and PostgreSQL is running on RHEL.

Is there any commercial or open source tool available to achieve this?

I was wondering whether anyone has used a foreign data wrapper or ...

Best Regards,

*Sameer Kumar | Database Consultant*

*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533

M: *+65 8110 0350*  T: +65 6438 3504 | www.ashnik.com




This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).