Re: [GENERAL] [ADMIN] How Many PG_Locks are considered too many

2015-07-30 Thread Renato Oliveira
Thank you appreciated

Sent from my iPhone

> On 30 Jul 2015, at 20:05, Merlin Moncure  wrote:
> 
>> On Thu, Jul 30, 2015 at 9:19 AM, John Scalia  wrote:
>> Seconding Peter on this one; it's a lot more important should one of those
>> locks be hanging around, say for hours or days, not how many have come and
>> gone.
> 
> Also, it's good to focus on *ungranted* locks. Typically the only
> time I care about granted locks is to find out which process is
> keeping my other process from getting its lock granted.
> 
> merlin
> 





Re: [GENERAL] Lots of stuck queries after upgrade to 9.4

2015-07-30 Thread Tatsuo Ishii
Heikki,

> A-ha, I succeeded in reproducing this now on my laptop, with pgbench! It
> seems to be important to have a very large number of connections:
> 
> pgbench -n -c400 -j4 -T600 -P5
> 
> That got stuck after a few minutes. I'm using commit_delay=100.

Because the original reporter seems to have difficulty executing your
test case right away, I thought I would try your test case myself.

Unfortunately, I have not succeeded in reproducing the stuck problem
using 9.4.4. I am inclined to increase commit_delay and/or the -c value
of pgbench and try again. Is there anything else I should try?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




[GENERAL] PSA: linux kernel bug in TRIM support corrupts data with md raid levels 0 and 10

2015-07-30 Thread Merlin Moncure
http://www.spinics.net/lists/raid/msg49452.html (via slashdot)

merlin




Re: [GENERAL] Question about copy from with timestamp format

2015-07-30 Thread Alban Hertroys

> On 30 Jul 2015, at 17:59, Sherrylyn Branchaw  wrote:
> 
> From here:
> 
> http://www.postgresql.org/docs/9.4/interactive/sql-copy.html
> 
> "COPY can only be used with plain tables, not with views. However, you can 
> write COPY (SELECT * FROM viewname) TO "
> 
> Right, so you can COPY FROM a view, but not, as far as I can tell, TO a view, 
> unless Alban found a workaround.

Well, that's why I said it _might_ work.

COPY is a bit special, I'm afraid. For starters, although it works _like_ doing 
a bunch of INSERTs, it doesn't perform actual INSERTs. Apparently, that also 
means it won't fire an INSERT rule and thus can't be used with an updatable 
view. There are no rules on such a view (rules rewrite the query) that would 
work for COPY.

Now perhaps that sounds like a COPY rule is warranted for cases like these, but 
that doesn't help, exactly because the COPY command has no place in its syntax 
for expressions (such as this type conversion). INSERT does, hence we can write 
a rule for it…

In hindsight it all makes sense. That doesn't bring you any closer to a 
solution, unfortunately.
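
For reference, the intermediate-table route Adrian suggested might look
roughly like this (only a sketch: the staging table name, the file path and
the 'YYYYMMDDHH24' format string are assumptions based on the sample value
2015072913 quoted elsewhere in the thread):

-- load the raw text into a throwaway staging table
CREATE TEMP TABLE ts_test_staging (ts_fld text);
COPY ts_test_staging FROM '/path/to/data.csv' WITH (FORMAT csv);

-- convert while moving the rows into the real table
INSERT INTO ts_test (ts_fld)
SELECT to_timestamp(ts_fld, 'YYYYMMDDHH24')
FROM ts_test_staging;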

> On Thu, Jul 30, 2015 at 11:48 AM, Adrian Klaver  
> wrote:
> On 07/30/2015 08:44 AM, Sherrylyn Branchaw wrote:
> I was thinking that perhaps an updatable view might do the trick?
> 
> Interesting idea! Are you able to get it to work? I keep getting 'ERROR:
>   cannot copy to view "view_ts_test"' even before my trigger fires.
> Inserting, though, works fine.
> 
> From here:
> 
> http://www.postgresql.org/docs/9.4/interactive/sql-copy.html
> 
> "COPY can only be used with plain tables, not with views. However, you can 
> write COPY (SELECT * FROM viewname) TO "
> 
> 
> Still curious why the triggers I'm writing won't fire before my
> statement errors out on copying to a view, or inserting an out-of-range
> timestamp, when the trigger would resolve all the illegal operations if
> it just fired first.
> 
> 
> On Thu, Jul 30, 2015 at 5:57 AM, Alban Hertroys wrote:
> 
> 
> > On 30 Jul 2015, at 2:27, Sherrylyn Branchaw wrote:
> >
> > Based on your PS asking about data types and commenting that you don't 
> want to put hour in a separate column, it sounds like this is a brand-new 
> table you're creating. If so, and if this is a one-time COPY operation, you 
> can create a text column for the initial import. Then after you're done 
> importing, you can execute
> >
> > ALTER TABLE ts_test ALTER COLUMN ts_fld TYPE TIMESTAMP USING 
> (to_timestamp(ts_fld, 'MMDDHH24'));
> >
> > to convert the format of the imported data to a timestamp. Then you're 
> set.
> >
> > If there will be ongoing imports of more files like this, though, 
> you'll need the intermediate table solution offered by Adrian.
> 
> Or keep both columns and update those where the text-column is NOT
> NULL and the timestamp column is NULL.
> 
> > I was going to suggest a trigger, but it turns out that the data type 
> checking happens even before the BEFORE trigger fires, so you don't get a 
> chance to massage your data before actually inserting it. I got 'ERROR:  
> date/time field value out of range: "2015072913 "' before the 
> trigger even fired. I
> wonder if that's deliberate? I was able to implement a workaround by
> adding a raw_ts_fld column of type text, but an extra column might
> be too ugly for you relative to a temp table, I don't know.
> 
> I was thinking that perhaps an updatable view might do the trick?
> 
> You would need to create a view with the timestamp column converted
> to text in the format in your CSV file. Next you add an INSERT rule
> that does the conversion from text to timestamp and inserts the row
> in the actual table. Finally, you use the view in the COPY statement
> instead of the table.
> Added bonus, you can now also use the view to export your table to
> the same CSV format.
> 
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
> 
> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: [GENERAL] [ADMIN] How Many PG_Locks are considered too many

2015-07-30 Thread Merlin Moncure
On Thu, Jul 30, 2015 at 9:19 AM, John Scalia  wrote:
> Seconding Peter on this one; it's a lot more important should one of those
> locks be hanging around, say for hours or days, not how many have come and
> gone.

Also, it's good to focus on *ungranted* locks. Typically the only
time I care about granted locks is to find out which process is
keeping my other process from getting its lock granted.
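
For example, a query along these lines lists the ungranted requests together
with the sessions that already hold a lock on the same object (a sketch
against the 9.4-era pg_locks/pg_stat_activity catalogs; adjust the join and
output columns to taste):

-- waiting (ungranted) lock requests paired with the sessions that
-- hold a granted lock on the same object
SELECT w.pid                AS waiting_pid,
       w.locktype,
       w.relation::regclass AS relation,
       h.pid                AS holding_pid,
       h.mode               AS held_mode,
       a.query              AS holder_query
FROM pg_locks w
JOIN pg_locks h
  ON h.granted
 AND h.pid <> w.pid
 AND h.locktype IS NOT DISTINCT FROM w.locktype
 AND h.database IS NOT DISTINCT FROM w.database
 AND h.relation IS NOT DISTINCT FROM w.relation
 AND h.transactionid IS NOT DISTINCT FROM w.transactionid
JOIN pg_stat_activity a ON a.pid = h.pid
WHERE NOT w.granted;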

merlin




Re: [GENERAL] Lots of stuck queries after upgrade to 9.4

2015-07-30 Thread Spiros Ioannou
That would probably need a replica of the data, which is not possible
(the tablespace is 4TB).

*Spiros Ioannou, IT Manager, inAccess, www.inaccess.com
M: +30 6973-903808, T: +30 210-6802-358*

On 30 July 2015 at 21:47, Scott Marlowe  wrote:

> You might want to try pgreplay: http://laurenz.github.io/pgreplay/
>
> On Thu, Jul 30, 2015 at 7:23 AM, Spiros Ioannou 
> wrote:
>
>> I'm very sorry but we don't have a synthetic load generator for our
>> testing setup, only production and that is on SLA. I would be happy to test
>> the next release though.
>>
>>
>>
>>
>>
>>
>>
>>
>> *Spiros Ioannou, IT Manager, inAccess, www.inaccess.com
>> M: +30 6973-903808, T: +30 210-6802-358*
>>
>> On 29 July 2015 at 13:42, Heikki Linnakangas  wrote:
>>
>>> On 07/28/2015 11:36 PM, Heikki Linnakangas wrote:
>>>
 A-ha, I succeeded in reproducing this now on my laptop, with pgbench! It
 seems to be important to have a very large number of connections:

 pgbench -n -c400 -j4 -T600 -P5

 That got stuck after a few minutes. I'm using commit_delay=100.

 Now that I have something to work with, I'll investigate this more
 tomorrow.

>>>
>>> Ok, it seems that this is caused by the same issue that I found with my
>>> synthetic test case, after all. It is possible to get a lockup because of
>>> it.
>>>
>>> For the archives, here's a hopefully easier-to-understand explanation of
>>> how the lockup happens. It involves three backends. A and C are inserting
>>> WAL records, while B is flushing the WAL with commit_delay. The byte
>>> positions 2000, 2100, 2200, and 2300 are offsets within a WAL page. 2000
>>> points to the beginning of the page, while the others are later positions
>>> on the same page. WaitToFinish() is an abbreviation for
>>> WaitXLogInsertionsToFinish(). "Update pos X" means a call to
>>> WALInsertLockUpdateInsertingAt(X). "Reserve A-B" means a call to
>>> ReserveXLogInsertLocation, which returned StartPos A and EndPos B.
>>>
>>> Backend A                Backend B                Backend C
>>> ---------                ---------                ---------
>>> Acquire InsertLock 2
>>> Reserve 2100-2200
>>>                          Calls WaitToFinish()
>>>                            reservedUpto is 2200
>>>                            sees that Lock 1 is
>>>                            free
>>>                                                   Acquire InsertLock 1
>>>                                                   Reserve 2200-2300
>>>                                                   GetXLogBuffer(2200)
>>>                                                    page not in cache
>>>                                                    Update pos 2000
>>>                                                    AdvanceXLInsertBuffer()
>>>                                                     run until about to
>>>                                                     acquire WALWriteLock
>>> GetXLogBuffer(2100)
>>>  page not in cache
>>>  Update pos 2000
>>>  AdvanceXLInsertBuffer()
>>>   Acquire WALWriteLock
>>>   write out old page
>>>   initialize new page
>>>   Release WALWriteLock
>>> finishes insertion
>>> release InsertLock 2
>>>                          WaitToFinish() continues
>>>                            sees that lock 2 is
>>>                            free. Returns 2200.
>>>
>>>                          Acquire WALWriteLock
>>>                          Call WaitToFinish(2200)
>>>                            blocks on Lock 1,
>>>                            whose initializedUpto
>>>                            is 2000.
>>>
>>> At this point, there is a deadlock between B and C. B is waiting for C
>>> to release the lock or update its insertingAt value past 2200, while C is
>>> waiting for WALWriteLock, held by B.
>>>
>>> To fix that, let's fix GetXLogBuffer() to always advertise the exact
>>> position, not the beginning of the page (except when inserting the first
>>> record on the page, just after the page header, see comments).
>>>
>>> This fixes the problem for me. I've been running pgbench for about 30
>>> minutes without lockups now, while without the patch it locked up within a
>>> couple of minutes. Spiros, can you easily test this patch in your
>>> environment? Would be nice to get a confirmation that this fixes the
>>> problem for you too.
>>>
>>> - Heikki
>>>
>>>
>>
>
>
> --
> To understand recursion, one must first understand recursion.
>


Re: [GENERAL] Lots of stuck queries after upgrade to 9.4

2015-07-30 Thread Scott Marlowe
You might want to try pgreplay: http://laurenz.github.io/pgreplay/

On Thu, Jul 30, 2015 at 7:23 AM, Spiros Ioannou  wrote:

> I'm very sorry but we don't have a synthetic load generator for our
> testing setup, only production and that is on SLA. I would be happy to test
> the next release though.
>
>
>
>
>
>
>
>
> *Spiros Ioannou, IT Manager, inAccess, www.inaccess.com
> M: +30 6973-903808, T: +30 210-6802-358*
>
> On 29 July 2015 at 13:42, Heikki Linnakangas  wrote:
>
>> On 07/28/2015 11:36 PM, Heikki Linnakangas wrote:
>>
>>> A-ha, I succeeded in reproducing this now on my laptop, with pgbench! It
>>> seems to be important to have a very large number of connections:
>>>
>>> pgbench -n -c400 -j4 -T600 -P5
>>>
>>> That got stuck after a few minutes. I'm using commit_delay=100.
>>>
>>> Now that I have something to work with, I'll investigate this more
>>> tomorrow.
>>>
>>
>> Ok, it seems that this is caused by the same issue that I found with my
>> synthetic test case, after all. It is possible to get a lockup because of
>> it.
>>
>> For the archives, here's a hopefully easier-to-understand explanation of
>> how the lockup happens. It involves three backends. A and C are inserting
>> WAL records, while B is flushing the WAL with commit_delay. The byte
>> positions 2000, 2100, 2200, and 2300 are offsets within a WAL page. 2000
>> points to the beginning of the page, while the others are later positions
>> on the same page. WaitToFinish() is an abbreviation for
>> WaitXLogInsertionsToFinish(). "Update pos X" means a call to
>> WALInsertLockUpdateInsertingAt(X). "Reserve A-B" means a call to
>> ReserveXLogInsertLocation, which returned StartPos A and EndPos B.
>>
>> Backend A                Backend B                Backend C
>> ---------                ---------                ---------
>> Acquire InsertLock 2
>> Reserve 2100-2200
>>                          Calls WaitToFinish()
>>                            reservedUpto is 2200
>>                            sees that Lock 1 is
>>                            free
>>                                                   Acquire InsertLock 1
>>                                                   Reserve 2200-2300
>>                                                   GetXLogBuffer(2200)
>>                                                    page not in cache
>>                                                    Update pos 2000
>>                                                    AdvanceXLInsertBuffer()
>>                                                     run until about to
>>                                                     acquire WALWriteLock
>> GetXLogBuffer(2100)
>>  page not in cache
>>  Update pos 2000
>>  AdvanceXLInsertBuffer()
>>   Acquire WALWriteLock
>>   write out old page
>>   initialize new page
>>   Release WALWriteLock
>> finishes insertion
>> release InsertLock 2
>>                          WaitToFinish() continues
>>                            sees that lock 2 is
>>                            free. Returns 2200.
>>
>>                          Acquire WALWriteLock
>>                          Call WaitToFinish(2200)
>>                            blocks on Lock 1,
>>                            whose initializedUpto
>>                            is 2000.
>>
>> At this point, there is a deadlock between B and C. B is waiting for C to
>> release the lock or update its insertingAt value past 2200, while C is
>> waiting for WALInsertLock, held by B.
>>
>> To fix that, let's fix GetXLogBuffer() to always advertise the exact
>> position, not the beginning of the page (except when inserting the first
>> record on the page, just after the page header, see comments).
>>
>> This fixes the problem for me. I've been running pgbench for about 30
>> minutes without lockups now, while without the patch it locked up within a
>> couple of minutes. Spiros, can you easily test this patch in your
>> environment? Would be nice to get a confirmation that this fixes the
>> problem for you too.
>>
>> - Heikki
>>
>>
>


-- 
To understand recursion, one must first understand recursion.


Re: [GENERAL] Question about copy from with timestamp format

2015-07-30 Thread Adrian Klaver

On 07/30/2015 08:59 AM, Sherrylyn Branchaw wrote:

 From here:

http://www.postgresql.org/docs/9.4/interactive/sql-copy.html

"COPY can only be used with plain tables, not with views. However, you
can write COPY (SELECT * FROM viewname) TO "

Right, so you can COPY FROM a view, but not, as far as I can tell, TO a
view, unless Alban found a workaround.


As to your other question:

"Still curious why the triggers I'm writing won't fire before my
statement errors out on copying to a view, or inserting an out-of-range
timestamp, when the trigger would resolve all the illegal operations if
it just fired first."

See this thread:

http://www.postgresql.org/message-id/aanlktik6-snypabiaztippjkqxborfvazb1efk0cn...@mail.gmail.com

in particular this post:

http://www.postgresql.org/message-id/24930.1276658...@sss.pgh.pa.us


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Question about copy from with timestamp format

2015-07-30 Thread Adrian Klaver

On 07/30/2015 08:59 AM, Sherrylyn Branchaw wrote:

 From here:

http://www.postgresql.org/docs/9.4/interactive/sql-copy.html

"COPY can only be used with plain tables, not with views. However, you
can write COPY (SELECT * FROM viewname) TO "

Right, so you can COPY FROM a view, but not, as far as I can tell, TO a
view, unless Alban found a workaround.


Just to be clear, COPY FROM is from file to table and COPY TO is from
table/view to file.
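
So exporting through a view works, for example (a sketch using the view name
from Sherrylyn's error message and a made-up output path):

COPY (SELECT * FROM view_ts_test) TO '/tmp/ts_export.csv' WITH (FORMAT csv);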




On Thu, Jul 30, 2015 at 11:48 AM, Adrian Klaver wrote:

On 07/30/2015 08:44 AM, Sherrylyn Branchaw wrote:

I was thinking that perhaps an updatable view might do the trick?

Interesting idea! Are you able to get it to work? I keep getting
'ERROR:
   cannot copy to view "view_ts_test"' even before my trigger fires.
Inserting, though, works fine.


 From here:

http://www.postgresql.org/docs/9.4/interactive/sql-copy.html

"COPY can only be used with plain tables, not with views. However,
you can write COPY (SELECT * FROM viewname) TO "


Still curious why the triggers I'm writing won't fire before my
statement errors out on copying to a view, or inserting an
out-of-range
timestamp, when the trigger would resolve all the illegal
operations if
it just fired first.



On Thu, Jul 30, 2015 at 5:57 AM, Alban Hertroys wrote:


 > On 30 Jul 2015, at 2:27, Sherrylyn Branchaw wrote:
 >
 > Based on your PS asking about data types and commenting
that you don't want to put hour in a separate column, it sounds
like this is a brand-new table you're creating. If so, and if
this is a one-time COPY operation, you can create a text column
for the initial import. Then after you're done importing, you
can execute
 >
 > ALTER TABLE ts_test ALTER COLUMN ts_fld TYPE TIMESTAMP
USING (to_timestamp(ts_fld, 'MMDDHH24'));
 >
 > to convert the format of the imported data to a
timestamp. Then you're set.
 >
 > If there will be ongoing imports of more files like this,
though, you'll need the intermediate table solution offered by
Adrian.

 Or keep both columns and update those where the text-column
is NOT
 NULL and the timestamp column is NULL.

 > I was going to suggest a trigger, but it turns out that
the data type checking happens even before the BEFORE trigger
fires, so you don't get a chance to massage your data before
actually inserting it. I got 'ERROR:  date/time field value out
of range: "2015072913"' before the trigger even fired. I
 wonder if that's deliberate? I was able to implement a
workaround by
 adding a raw_ts_fld column of type text, but an extra
column might
 be too ugly for you relative to a temp table, I don't know.

 I was thinking that perhaps an updatable view might do the
trick?

 You would need to create a view with the timestamp column
converted
 to text in the format in your CSV file. Next you add an
INSERT rule
 that does the conversion from text to timestamp and inserts
the row
 in the actual table. Finally, you use the view in the COPY
statement
 instead of the table.
 Added bonus, you can now also use the view to export your
table to
 the same CSV format.

 Alban Hertroys
 --
 If you can't see the forest for the trees,
 cut the trees and you'll find there is no forest.




--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Question about copy from with timestamp format

2015-07-30 Thread Sherrylyn Branchaw
From here:

http://www.postgresql.org/docs/9.4/interactive/sql-copy.html

"COPY can only be used with plain tables, not with views. However, you can
write COPY (SELECT * FROM viewname) TO "

Right, so you can COPY FROM a view, but not, as far as I can tell, TO a
view, unless Alban found a workaround.

On Thu, Jul 30, 2015 at 11:48 AM, Adrian Klaver 
wrote:

> On 07/30/2015 08:44 AM, Sherrylyn Branchaw wrote:
>
>> I was thinking that perhaps an updatable view might do the trick?
>>
>> Interesting idea! Are you able to get it to work? I keep getting 'ERROR:
>>   cannot copy to view "view_ts_test"' even before my trigger fires.
>> Inserting, though, works fine.
>>
>
> From here:
>
> http://www.postgresql.org/docs/9.4/interactive/sql-copy.html
>
> "COPY can only be used with plain tables, not with views. However, you can
> write COPY (SELECT * FROM viewname) TO "
>
>
>> Still curious why the triggers I'm writing won't fire before my
>> statement errors out on copying to a view, or inserting an out-of-range
>> timestamp, when the trigger would resolve all the illegal operations if
>> it just fired first.
>>
>
>
>> On Thu, Jul 30, 2015 at 5:57 AM, Alban Hertroys wrote:
>>
>>
>> > On 30 Jul 2015, at 2:27, Sherrylyn Branchaw wrote:
>> >
>> > Based on your PS asking about data types and commenting that you
>> don't want to put hour in a separate column, it sounds like this is a
>> brand-new table you're creating. If so, and if this is a one-time COPY
>> operation, you can create a text column for the initial import. Then after
>> you're done importing, you can execute
>> >
>> > ALTER TABLE ts_test ALTER COLUMN ts_fld TYPE TIMESTAMP USING
>> (to_timestamp(ts_fld, 'MMDDHH24'));
>> >
>> > to convert the format of the imported data to a timestamp. Then
>> you're set.
>> >
>> > If there will be ongoing imports of more files like this, though,
>> you'll need the intermediate table solution offered by Adrian.
>>
>> Or keep both columns and update those where the text-column is NOT
>> NULL and the timestamp column is NULL.
>>
>> > I was going to suggest a trigger, but it turns out that the data
>> type checking happens even before the BEFORE trigger fires, so you don't
>> get a chance to massage your data before actually inserting it. I got
>> 'ERROR:  date/time field value out of range: "2015072913 "'
>> before the trigger even fired. I
>> wonder if that's deliberate? I was able to implement a workaround by
>> adding a raw_ts_fld column of type text, but an extra column might
>> be too ugly for you relative to a temp table, I don't know.
>>
>> I was thinking that perhaps an updatable view might do the trick?
>>
>> You would need to create a view with the timestamp column converted
>> to text in the format in your CSV file. Next you add an INSERT rule
>> that does the conversion from text to timestamp and inserts the row
>> in the actual table. Finally, you use the view in the COPY statement
>> instead of the table.
>> Added bonus, you can now also use the view to export your table to
>> the same CSV format.
>>
>> Alban Hertroys
>> --
>> If you can't see the forest for the trees,
>> cut the trees and you'll find there is no forest.
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Question about copy from with timestamp format

2015-07-30 Thread Adrian Klaver

On 07/30/2015 08:44 AM, Sherrylyn Branchaw wrote:

I was thinking that perhaps an updatable view might do the trick?

Interesting idea! Are you able to get it to work? I keep getting 'ERROR:
  cannot copy to view "view_ts_test"' even before my trigger fires.
Inserting, though, works fine.


From here:

http://www.postgresql.org/docs/9.4/interactive/sql-copy.html

"COPY can only be used with plain tables, not with views. However, you 
can write COPY (SELECT * FROM viewname) TO "




Still curious why the triggers I'm writing won't fire before my
statement errors out on copying to a view, or inserting an out-of-range
timestamp, when the trigger would resolve all the illegal operations if
it just fired first.




On Thu, Jul 30, 2015 at 5:57 AM, Alban Hertroys wrote:


> On 30 Jul 2015, at 2:27, Sherrylyn Branchaw wrote:
>
> Based on your PS asking about data types and commenting that you don't 
want to put hour in a separate column, it sounds like this is a brand-new table 
you're creating. If so, and if this is a one-time COPY operation, you can create a 
text column for the initial import. Then after you're done importing, you can 
execute
>
> ALTER TABLE ts_test ALTER COLUMN ts_fld TYPE TIMESTAMP USING 
(to_timestamp(ts_fld, 'MMDDHH24'));
>
> to convert the format of the imported data to a timestamp. Then you're 
set.
>
> If there will be ongoing imports of more files like this, though, you'll 
need the intermediate table solution offered by Adrian.

Or keep both columns and update those where the text-column is NOT
NULL and the timestamp column is NULL.

> I was going to suggest a trigger, but it turns out that the data type checking happens 
even before the BEFORE trigger fires, so you don't get a chance to massage your data before 
actually inserting it. I got 'ERROR:  date/time field value out of range: "2015072913"' before the trigger even fired. I
wonder if that's deliberate? I was able to implement a workaround by
adding a raw_ts_fld column of type text, but an extra column might
be too ugly for you relative to a temp table, I don't know.

I was thinking that perhaps an updatable view might do the trick?

You would need to create a view with the timestamp column converted
to text in the format in your CSV file. Next you add an INSERT rule
that does the conversion from text to timestamp and inserts the row
in the actual table. Finally, you use the view in the COPY statement
instead of the table.
Added bonus, you can now also use the view to export your table to
the same CSV format.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Transaction ID Wraparound Monitoring

2015-07-30 Thread Adrian Klaver

On 07/30/2015 08:41 AM, Jan Keirse wrote:

On Thu, Jul 30, 2015 at 2:56 PM, Adrian Klaver
 wrote:

On 07/30/2015 02:55 AM, Jan Keirse wrote:


Hello,

we have some very write heavy databases and I have our monitoring
system watch the transaction age of my databases to be alerted before
we get into problems in case autovacuum can't keep up to avoid
transaction ID wraparound.

The query I am executing is this:
SELECT max(age(pg_database.datfrozenxid)) / 2147483648.0 * 100.0 AS
"Percentage of transaction ID's used" FROM pg_database;

My belief was that if this reaches 100 the database will stop
accepting writes and one must vacuum. I have set alerts on 50 and 90,
the result is around 9, so my belief was autovacuum is working fine
for my workload.
I often see autovacuum kicking in to prevent XID Wraparound, I
thought that was just to be on the safe side and vacuum well before
it's too late.

However today I saw this post:

http://blog.getsentry.com/2015/07/23/transaction-id-wraparound-in-postgres.html

The following line has me worried:
... that database is going to reach a situation where the XID counter
has reached its maximum value. The absolute peak is something around 2
billion, but it can be far lower than that in some situations...

Could someone shed some light on this? Is my query insufficient? Can
the transaction wraparound freeze problem indeed occur earlier? And
if so, could someone suggest a better query to monitor?



I would look at:

http://www.postgresql.org/docs/9.4/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

Which includes some query examples.


Yes, I have seen that documentation and it is because of it that I
believed that my queries were ok, but now I think I may be
misinterpreting or misunderstanding the documentation and have to look
at more information, like autovacuum_multixact_freeze_max_age?



Well if you click on the parameters in the above page you will go to 
their definitions:


So for autovacuum_multixact_freeze_max_age:

http://www.postgresql.org/docs/9.4/interactive/runtime-config-autovacuum.html#GUC-AUTOVACUUM-MULTIXACT-FREEZE-MAX-AGE


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Question about copy from with timestamp format

2015-07-30 Thread Sherrylyn Branchaw
I was thinking that perhaps an updatable view might do the trick?

Interesting idea! Are you able to get it to work? I keep getting 'ERROR:
 cannot copy to view "view_ts_test"' even before my trigger fires.
Inserting, though, works fine.

Still curious why the triggers I'm writing won't fire before my statement
errors out on copying to a view, or inserting an out-of-range timestamp,
when the trigger would resolve all the illegal operations if it just fired
first.

On Thu, Jul 30, 2015 at 5:57 AM, Alban Hertroys  wrote:

>
> > On 30 Jul 2015, at 2:27, Sherrylyn Branchaw  wrote:
> >
> > Based on your PS asking about data types and commenting that you don't
> want to put hour in a separate column, it sounds like this is a brand-new
> table you're creating. If so, and if this is a one-time COPY operation, you
> can create a text column for the initial import. Then after you're done
> importing, you can execute
> >
> > ALTER TABLE ts_test ALTER COLUMN ts_fld TYPE TIMESTAMP USING
> (to_timestamp(ts_fld, 'MMDDHH24'));
> >
> > to convert the format of the imported data to a timestamp. Then you're
> set.
> >
> > If there will be ongoing imports of more files like this, though, you'll
> need the intermediate table solution offered by Adrian.
>
> Or keep both columns and update those where the text-column is NOT NULL
> and the timestamp column is NULL.
>
> > I was going to suggest a trigger, but it turns out that the data type
> checking happens even before the BEFORE trigger fires, so you don't get a
> chance to massage your data before actually inserting it. I got 'ERROR:
> date/time field value out of range: "2015072913"' before the trigger even
> fired. I wonder if that's deliberate? I was able to implement a workaround
> by adding a raw_ts_fld column of type text, but an extra column might be
> too ugly for you relative to a temp table, I don't know.
>
> I was thinking that perhaps an updatable view might do the trick?
>
> You would need to create a view with the timestamp column converted to
> text in the format in your CSV file. Next you add an INSERT rule that does
> the conversion from text to timestamp and inserts the row in the actual
> table. Finally, you use the view in the COPY statement instead of the table.
> Added bonus, you can now also use the view to export your table to the
> same CSV format.
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>


Re: [GENERAL] Transaction ID Wraparound Monitoring

2015-07-30 Thread Jan Keirse
On Thu, Jul 30, 2015 at 2:56 PM, Adrian Klaver
 wrote:
> On 07/30/2015 02:55 AM, Jan Keirse wrote:
>>
>> Hello,
>>
>> we have some very write heavy databases and I have our monitoring
>> system watch the transaction age of my databases to be alerted before
>> we get into problems in case autovacuum can't keep up to avoid
>> transaction ID wraparound.
>>
>> The query I am executing is this:
>> SELECT max(age(pg_database.datfrozenxid)) / 2147483648.0 * 100.0 AS
>> "Percentage of transaction ID's used" FROM pg_database;
>>
>> My belief was that if this reaches 100 the database will stop
>> accepting writes and one must vacuum. I have set alerts on 50 and 90,
>> the result is around 9, so my belief was autovacuum is working fine
>> for my workload.
>> I often see autovacuum kicking in to prevent XID Wraparound, I
>> thought that was just to be on the safe side and vacuum well before
>> it's too late.
>>
>> However today I saw this post:
>>
>> http://blog.getsentry.com/2015/07/23/transaction-id-wraparound-in-postgres.html
>>
>> The following line has me worried:
>> ... that database is going to reach a situation where the XID counter
>> has reached its maximum value. The absolute peak is something around 2
>> billion, but it can be far lower than that in some situations...
>>
>> Could someone shed some light on this? Is my query insufficient? Can
>> the transaction wraparound freeze problem indeed occur earlier? And
>> if so, could someone suggest a better query to monitor?
>
>
> I would look at:
>
> http://www.postgresql.org/docs/9.4/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
>
> Which includes some query examples.

Yes, I have seen that documentation and it is because of it that I
believed that my queries were ok, but now I think I may be
misinterpreting or misunderstanding the documentation and have to look
at more information, like autovacuum_multixact_freeze_max_age?



Re: [GENERAL] user connection not recorded?

2015-07-30 Thread Tom Lane
Adrian Klaver  writes:
> Alright, but the part that has me confused is this comment in the code:

>   * In standalone mode and in autovacuum worker processes, we use a fixed
>   * ID, otherwise we figure it out from the authenticated user name.

> and this

> else if (IsBackgroundWorker)

> I read the above to mean background processes also follow that path.

Yeah, I suppose so, since they don't really have a username to use.

regards, tom lane




Re: [GENERAL] instr detail

2015-07-30 Thread Ramesh T
Here is the example; thanks for the help:
http://stackoverflow.com/questions/2965655/how-to-find-the-first-and-last-occurrences-of-a-specific-character-inside-a-stri

On Thu, Jul 30, 2015 at 7:38 PM, Ramesh T 
wrote:

> select position('.' in '.T.homas')
> result
> 
> 1
> it returns the first position, but I need the last occurrence of '.'.
> actual result
> --
> 3
> Any help appreciated.
>
>
> On Thu, Jul 30, 2015 at 5:00 AM, Tom Lane  wrote:
>
>> Melvin Davidson  writes:
>> > Based om the definition of Oracle instr(), the equivalent PostgreSQL
>> > function would be
>> > position(substring in string).
>>
>> See http://www.postgresql.org/docs/9.4/static/plpgsql-porting.html
>> particularly the "appendix" at the bottom.  I'm not sure that code
>> is still the best way to do it (it's very old), but it's there.
>>
>> regards, tom lane
>>
>
>


Re: [GENERAL] instr detail

2015-07-30 Thread Ramesh T
select position('.' in '.T.homas')
result

1
it returns the first position, but I need the last occurrence of '.'.
actual result
--
3
Any help appreciated.
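
One way to get the last occurrence on stock 9.4 is to search the reversed
string (a sketch; check it against your own data):

SELECT length('.T.homas')
       - position('.' in reverse('.T.homas'))
       + 1 AS last_dot_position;   -- returns 3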


On Thu, Jul 30, 2015 at 5:00 AM, Tom Lane  wrote:

> Melvin Davidson  writes:
> > Based om the definition of Oracle instr(), the equivalent PostgreSQL
> > function would be
> > position(substring in string).
>
> See http://www.postgresql.org/docs/9.4/static/plpgsql-porting.html
> particularly the "appendix" at the bottom.  I'm not sure that code
> is still the best way to do it (it's very old), but it's there.
>
> regards, tom lane
>


Re: [GENERAL] user connection not recorded?

2015-07-30 Thread Adrian Klaver

On 07/30/2015 07:46 AM, Tom Lane wrote:

Adrian Klaver  writes:

Eventually got around to figuring that. So just for my reference, the
code snippet I showed from postinit.c seems to show a path where a
username is not used but is  substituted with BOOTSTRAP_SUPERUSERID.


That's single-user mode.


Alright, but the part that has me confused is this comment in the code:

 * In standalone mode and in autovacuum worker processes, we use a fixed
 * ID, otherwise we figure it out from the authenticated user name.

and this

else if (IsBackgroundWorker)

I read the above to mean background processes also follow that path. I
have been trying to figure out more about what goes on internally in
Postgres, and reading the source seems the best way to do that. Reading it
wrong is not helpful, so I would appreciate corrections to my
understanding or lack thereof.





Am I following that correctly and what is BOOTSTRAP_SUPERUSERID?


BOOTSTRAP_SUPERUSERID is the OID of the precreated superuser.

The reason for that hack is to ensure you can still log in, in single-user
mode, even if you've done something stupid like "delete from pg_authid".
You'll be running under a user OID that doesn't actually exist in
pg_authid, but it won't matter because the code will believe you're a
superuser and will never go looking for the missing row.  Then you
can create yourself a new superuser, and resolve not to do that again.


Not that I have ever done anything like that :)



regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] user connection not recorded?

2015-07-30 Thread Tom Lane
Adrian Klaver  writes:
> Eventually got around to figuring that. So just for my reference, the 
> code snippet I showed from postinit.c seems to show a path where a 
> username is not used but is  substituted with BOOTSTRAP_SUPERUSERID.

That's single-user mode.

> Am I following that correctly and what is BOOTSTRAP_SUPERUSERID?

BOOTSTRAP_SUPERUSERID is the OID of the precreated superuser.

The reason for that hack is to ensure you can still log in, in single-user
mode, even if you've done something stupid like "delete from pg_authid".
You'll be running under a user OID that doesn't actually exist in
pg_authid, but it won't matter because the code will believe you're a
superuser and will never go looking for the missing row.  Then you
can create yourself a new superuser, and resolve not to do that again.
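
In that single-user session the repair step might be as simple as this sketch
(the role name here is made up):

CREATE ROLE rescue_admin SUPERUSER LOGIN PASSWORD 'change-me';  -- recreate a usable superuser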

regards, tom lane




Re: [GENERAL] user connection not recorded?

2015-07-30 Thread Adrian Klaver

On 07/30/2015 07:21 AM, Tom Lane wrote:

Adrian Klaver  writes:

On 07/30/2015 06:42 AM, Melvin Davidson wrote:

I can understand that the host is not available in nslookup, but why is
the user not being recorded?



A quick look at the source shows that Postgres system process can have NULL 
username:


Well, the real point is that that message comes out before we've collected
the startup packet, so we don't *have* a username yet.  All we know is
that somebody's opened a TCP connection.  The later "connection
authorized" message will tell you what database user name they gave.


Eventually got around to figuring that. So just for my reference, the 
code snippet I showed from postinit.c seems to show a path where a 
username is not used but is  substituted with BOOTSTRAP_SUPERUSERID.


Am I following that correctly and what is BOOTSTRAP_SUPERUSERID?



If you see lots more "connection received" than "connection authorized"
then somebody is connecting to your postmaster and failing to
authenticate, which is usually a good thing to investigate.

regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] user connection not recorded?

2015-07-30 Thread Adrian Klaver

On 07/30/2015 07:12 AM, Melvin Davidson wrote:

Thanks  for the quick reply Adrian.


Further testing shows this can happen in other situations.

So:

aklaver@panda:~> psql -d test
psql (9.4.2)
Type "help" for help.

test=>

where trust is set up for local connections yields:

[unknown]-2015-07-30 07:25:36.614 PDT-0LOG:  connection received: 
host=[local]
aklaver-2015-07-30 07:25:36.615 PDT-0LOG:  connection authorized: 
user=aklaver database=test


So [unknown] is just a placeholder used until a username is determined.



On Thu, Jul 30, 2015 at 10:09 AM, Adrian Klaver wrote:

On 07/30/2015 06:42 AM, Melvin Davidson wrote:
> I have a puzzling question.
>
> All through the error log, there are connections for [unknown] user.
> EG:
> 2015-07-30 00:00:00 CDT [6577]: [1-1]: : [unknown]: LOG:  connection
> received: host=173.239.101.98  port=50687
>
> The log_line_prefix   is  %t [%p]: [%l-1]: %h: %u:
>
> I can understand that the host is not available in nslookup, but why is
> the user not being recorded?

A quick look at the source shows that Postgres system process can
have NULL username:

postinit.c:

  else if (IsBackgroundWorker)
 {
 if (username == NULL)
 {
 InitializeSessionUserIdStandalone();
 am_superuser = true;

which is then turned into 'unknown'

elog.c:

case 'u':
 if (MyProcPort)
 {
   const char *username = MyProcPort->user_name;

   if (username == NULL || *username == '\0')
  username = _("[unknown]");


 >
 > *Melvin Davidson*


--
Adrian Klaver
adrian.kla...@aklaver.com 




--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] [ADMIN] How Many PG_Locks are considered too many

2015-07-30 Thread John Scalia
Seconding Peter on this one; it's a lot more important should one of those
locks be hanging around, say for hours or days, not how many have come and
gone.
--
Jay

On Thu, Jul 30, 2015 at 8:57 AM, Renato Oliveira <
renato.olive...@cantabcapital.com> wrote:

> Peter thank you much appreciated
>
> Sent from my iPhone
>
> > On 30 Jul 2015, at 14:54, Peter Eisentraut  wrote:
> >
> >> On 7/30/15 6:13 AM, Renato Oliveira wrote:
> >> We have a Nagios plugin, which monitors pg_locks and almost daily we see
> >> 3000 to 4 pg_locks.
> >>
> >> Can we just ignore them, can we let them grow without worrying?
> >>
> >> How many pg_locks are considered unsafe for any given postgres server?
> >
> > That depends on how many concurrent clients you have and what they are
> > doing.  Every table access will at least create a share lock of some
> > kind, so if you have a lot of activity that does a lot of things, you
> > will see a lot of locks, but that doesn't impact database performance in
> > a significant way.
> >
> > I don't think monitoring the absolute number of locks is useful.  You
> > might want to chart it, to compare over time.  If you want to monitor
> > locks, you could monitor lock waits, which you can get by checking the
> > server log.
> >
>
>
>
> --
> Sent via pgsql-admin mailing list (pgsql-ad...@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>


Re: [GENERAL] user connection not recorded?

2015-07-30 Thread Melvin Davidson
Thanks  for the quick reply Adrian.

On Thu, Jul 30, 2015 at 10:09 AM, Adrian Klaver 
wrote:

> On 07/30/2015 06:42 AM, Melvin Davidson wrote:
> > I have a puzzling question.
> >
> > All through the error log, there are connections for [unknown] user.
> > EG:
> > 2015-07-30 00:00:00 CDT [6577]: [1-1]: : [unknown]: LOG:  connection
> > received: host=173.239.101.98 port=50687
> >
> > The log_line_prefix   is  %t [%p]: [%l-1]: %h: %u:
> >
> > I can understand that the host is not available in nslookup, but why is
> > the user not being recorded?
>
> A quick look at the source shows that Postgres system process can have
> NULL username:
>
> postinit.c:
>
>  else if (IsBackgroundWorker)
> {
> if (username == NULL)
> {
> InitializeSessionUserIdStandalone();
> am_superuser = true;
>
> which is then turned into 'unknown'
>
> elog.c:
>
> case 'u':
> if (MyProcPort)
> {
>   const char *username = MyProcPort->user_name;
>
>   if (username == NULL || *username == '\0')
>  username = _("[unknown]");
>
>
> >
> > *Melvin Davidson*
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] [ADMIN] How Many PG_Locks are considered too many

2015-07-30 Thread Renato Oliveira
Peter thank you much appreciated

Sent from my iPhone

> On 30 Jul 2015, at 14:54, Peter Eisentraut  wrote:
> 
>> On 7/30/15 6:13 AM, Renato Oliveira wrote:
>> We have a Nagios plugin, which monitors pg_locks and almost daily we see
>> 3000 to 4 pg_locks.
>> 
>> Can we just ignore them, can we let them grow without worrying?
>> 
>> How many pg_locks are considered unsafe for any given postgres server?
> 
> That depends on how many concurrent clients you have and what they are
> doing.  Every table access will at least create a share lock of some
> kind, so if you have a lot of activity that does a lot of things, you
> will see a lot of locks, but that doesn't impact database performance in
> a significant way.
> 
> I don't think monitoring the absolute number of locks is useful.  You
> might want to chart it, to compare over time.  If you want to monitor
> locks, you could monitor lock waits, which you can get by checking the
> server log.
> 





[GENERAL] user connection not recorded?

2015-07-30 Thread Melvin Davidson
I have a puzzling question.

All through the error log, there are connections for [unknown] user.
EG:
2015-07-30 00:00:00 CDT [6577]: [1-1]: : [unknown]: LOG:  connection
received: host=173.239.101.98 port=50687

The log_line_prefix   is  %t [%p]: [%l-1]: %h: %u:

I can understand that the host is not available in nslookup, but why is the
user not being recorded?

*Melvin Davidson*


Re: [GENERAL] [ADMIN] How Many PG_Locks are considered too many

2015-07-30 Thread Peter Eisentraut
On 7/30/15 6:13 AM, Renato Oliveira wrote:
> We have a Nagios plugin, which monitors pg_locks and almost daily we see
> 3000 to 4 pg_locks.
> 
> Can we just ignore them, can we let them grow without worrying?
> 
> How many pg_locks are considered unsafe for any given postgres server?

That depends on how many concurrent clients you have and what they are
doing.  Every table access will at least create a share lock of some
kind, so if you have a lot of activity that does a lot of things, you
will see a lot of locks, but that doesn't impact database performance in
a significant way.

I don't think monitoring the absolute number of locks is useful.  You
might want to chart it, to compare over time.  If you want to monitor
locks, you could monitor lock waits, which you can get by checking the
server log.
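
For the log-based approach, enabling lock-wait logging is a one-liner (a
sketch; ALTER SYSTEM is available from 9.4, and a wait is only logged once it
exceeds deadlock_timeout):

ALTER SYSTEM SET log_lock_waits = on;   -- report lock waits longer than deadlock_timeout
SELECT pg_reload_conf();                -- pick up the change without a restart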





Re: [GENERAL] Logical decoding off of a replica?

2015-07-30 Thread Curt Micol
Great, thank you for the response!

On Thu, Jul 30, 2015 at 7:33 AM, Andres Freund  wrote:
> On 2015-07-28 17:54:57 +, Curt Micol wrote:
>> Hello,
>>
>> I've been working to get a replica setup to perform logical decoding
>> and haven't been able to get the right configuration. I've tried
>> everything I can think of. While researching I found this post on
>> Stack Overflow:
>> http://stackoverflow.com/questions/30351628/logical-decoding-on-a-standby-node
>>
>> Mostly I want to verify that logical decoding is not currently
>> supported on a replica.
>
> Correct, logical decoding is currently not supported on a replica. I'm
> not aware of recent progress towards allowing that, but I still consider
> it not to be too hard to implement.
>
> Regards,
>
> Andres



-- 
# Curt Micol




Re: [GENERAL] Transaction ID Wraparound Monitoring

2015-07-30 Thread Adrian Klaver

On 07/30/2015 02:55 AM, Jan Keirse wrote:

Hello,

we have some very write heavy databases and I have our monitoring
system watch the transaction age of my databases to be alerted before
we get into problems in case autovacuum can't keep up to avoid
transaction ID wraparound.

The query I am executing is this:
SELECT max(age(pg_database.datfrozenxid)) / 2147483648.0 * 100.0 AS
"Percentage of transaction ID's used" FROM pg_database;

My belief was that if this reaches 100 the database will stop
accepting writes and one must vacuum. I have set alerts on 50 and 90,
the result is around 9, so my belief was autovacuum is working fine
for my workload.
I often see autovacuum kicking in to prevent XID Wraparound, I
thought that was just to be on the safe side and vacuum well before
it's too late.

However today I saw this post:
http://blog.getsentry.com/2015/07/23/transaction-id-wraparound-in-postgres.html

The following line has me worried:
... that database is going to reach a situation where the XID counter
has reached its maximum value. The absolute peak is something around 2
billion, but it can be far lower than that in some situations...

Could someone shed some light on this? Is my query insufficient? Can
the transaction wraparound freeze problem indeed occur earlier? And
if so, could someone suggest a better query to monitor?


I would look at:

http://www.postgresql.org/docs/9.4/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

Which includes some query examples.
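
For instance, the kind of checks shown on that page look roughly like this
(a sketch, slightly simplified from the docs):

-- age of the oldest unfrozen XID in each database
SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;

-- tables closest to forcing an anti-wraparound autovacuum
SELECT c.oid::regclass AS table_name, age(c.relfrozenxid) AS xid_age
FROM pg_class c
WHERE c.relkind = 'r'
ORDER BY 2 DESC
LIMIT 20;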



Kind Regards,

Jan Keirse




--
Adrian Klaver
adrian.kla...@aklaver.com




[GENERAL] Exclusively locking parent tables while disinheriting children.

2015-07-30 Thread Rowan Collins

Hi,

When working with partition sets, we're seeing occasional errors of 
"could not find inherited attribute..." in Select queries. This is 
apparently caused when an "ALTER TABLE ... NO INHERIT" runs concurrently 
with another transaction selecting from the relevant child table.


I found an old bug report filed against 8.3 back in 2008 [1]. I can still
reproduce the test case in that report on 9.1.11 and 9.3.5, and it seems
to match what we're seeing in production.


Tom Lane said at the time that a lock would cause more problems than it 
solved [2], but when I add an explicit lock statement ("LOCK TABLE ONLY 
p_tbl, c1_tbl IN ACCESS EXCLUSIVE MODE;") between "BEGIN" and "ALTER 
TABLE", I get the behaviour I would expect - the SELECT blocks until the 
transaction is committed, then returns rows from the remaining child table.
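
Concretely, the pattern being described is something like this sketch (using
the p_tbl/c1_tbl names from that test case):

BEGIN;
-- take the exclusive locks up front so concurrent SELECTs block instead of erroring
LOCK TABLE ONLY p_tbl, c1_tbl IN ACCESS EXCLUSIVE MODE;
ALTER TABLE c1_tbl NO INHERIT p_tbl;
COMMIT;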


So what I want to understand is what the risks of adding this lock are -
under what circumstances would I expect to see deadlocks if I manually
added this lock to my partition maintenance functions?


If there aren't any, should the database itself acquire this lock during 
the ALTER TABLE process? There is mention in previous discussions of 
DROP TABLE also not taking a lock, but even if that case isn't fixable, 
fixing NO INHERIT would at least provide a documented (and quite 
intuitive) way to achieve this safely - always disinherit your children 
before dropping them.



[1] 
http://www.postgresql.org/message-id/200806171229.m5HCTfsI091593%40wwwmaster.postgresql.org

[2] http://www.postgresql.org/message-id/19666.1213709303%40sss.pgh.pa.us

Regards,
--
Rowan Collins
[IMSoP]




[GENERAL] How Many PG_Locks are considered too many

2015-07-30 Thread Renato Oliveira
Hi

I have a few questions; if anyone could help me, it would be very much appreciated.

We have a Nagios plugin, which monitors pg_locks and almost daily we see 3000 
to 4 pg_locks.

Can we just ignore them, can we let them grow without worrying?
How many pg_locks are considered unsafe for any given postgres server?

Thank you

Renato




Re: [GENERAL] Question about copy from with timestamp format

2015-07-30 Thread Alban Hertroys

> On 30 Jul 2015, at 2:27, Sherrylyn Branchaw  wrote:
> 
> Based on your PS asking about data types and commenting that you don't want 
> to put hour in a separate column, it sounds like this is a brand-new table 
> you're creating. If so, and if this is a one-time COPY operation, you can 
> create a text column for the initial import. Then after you're done 
> importing, you can execute 
> 
> ALTER TABLE ts_test ALTER COLUMN ts_fld TYPE TIMESTAMP USING 
> (to_timestamp(ts_fld, 'MMDDHH24'));
> 
> to convert the format of the imported data to a timestamp. Then you're set.
> 
> If there will be ongoing imports of more files like this, though, you'll need 
> the intermediate table solution offered by Adrian.

Or keep both columns and update those where the text-column is NOT NULL and the 
timestamp column is NULL.

> I was going to suggest a trigger, but it turns out that the data type 
> checking happens even before the BEFORE trigger fires, so you don't get a 
> chance to massage your data before actually inserting it. I got 'ERROR:  
> date/time field value out of range: "2015072913"' before the trigger even 
> fired. I wonder if that's deliberate? I was able to implement a workaround by 
> adding a raw_ts_fld column of type text, but an extra column might be too 
> ugly for you relative to a temp table, I don't know.

I was thinking that perhaps an updatable view might do the trick?

You would need to create a view with the timestamp column converted to text in 
the format in your CSV file. Next you add an INSERT rule that does the 
conversion from text to timestamp and inserts the row in the actual table. 
Finally, you use the view in the COPY statement instead of the table.
Added bonus, you can now also use the view to export your table to the same CSV 
format.
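
A rough sketch of that idea, borrowing the ts_test/ts_fld names from the
ALTER TABLE example quoted above and assuming a 'YYYYMMDDHH24' format for the
sample value 2015072913 (note that, as other messages in this thread report,
COPY refuses to target the view, so in practice only the export side of this
works):

CREATE VIEW ts_test_csv AS
SELECT to_char(ts_fld, 'YYYYMMDDHH24') AS ts_fld
FROM ts_test;

CREATE RULE ts_test_csv_insert AS
ON INSERT TO ts_test_csv
DO INSTEAD
INSERT INTO ts_test (ts_fld)
VALUES (to_timestamp(NEW.ts_fld, 'YYYYMMDDHH24'));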

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





[GENERAL] Transaction ID Wraparound Monitoring

2015-07-30 Thread Jan Keirse
Hello,

we have some very write heavy databases and I have our monitoring
system watch the transaction age of my databases to be alerted before
we get into problems in case autovacuum can't keep up to avoid
transaction ID wraparound.

The query I am executing is this:
SELECT max(age(pg_database.datfrozenxid)) / 2147483648.0 * 100.0 AS
"Percentage of transaction ID's used" FROM pg_database;

My belief was that if this reaches 100 the database will stop
accepting writes and one must vacuum. I have set alerts on 50 and 90,
the result is around 9, so my belief was autovacuum is working fine
for my workload.
I often see autovacuum kicking in to prevent XID Wraparound, I
thought that was just to be on the safe side and vacuum well before
it's too late.

However today I saw this post:
http://blog.getsentry.com/2015/07/23/transaction-id-wraparound-in-postgres.html

The following line has me worried:
... that database is going to reach a situation where the XID counter
has reached its maximum value. The absolute peak is something around 2
billion, but it can be far lower than that in some situations...

Could someone shed some light on this? Is my query insufficient? Can
the transaction wraparound freeze problem indeed occur earlier? And
if so, could someone suggest a better query to monitor?

Kind Regards,

Jan Keirse



Re: [GENERAL] Logical decoding off of a replica?

2015-07-30 Thread Andres Freund
On 2015-07-28 17:54:57 +, Curt Micol wrote:
> Hello,
> 
> I've been working to get a replica setup to perform logical decoding
> and haven't been able to get the right configuration. I've tried
> everything I can think of. While researching I found this post on
> Stack Overflow:
> http://stackoverflow.com/questions/30351628/logical-decoding-on-a-standby-node
> 
> Mostly I want to verify that logical decoding is not currently
> supported on a replica.

Correct, logical decoding is currently not supported on a replica. I'm
not aware of recent progress towards allowing that, but I still consider
it not to be too hard to implement.

Regards,

Andres

