Re: [GENERAL] Question about wal files / pg_xlogs

2016-08-05 Thread Patrick B
Hi guys,


I can now confirm that removing the IONICE command solved my problem.

Thanks a lot !
Patrick


[GENERAL] Logical Decoding Failover

2016-08-05 Thread Colin Morelli
Hey all,

I'm using logical decoding in my application to capture change streams and
ship them to Kafka. However, logical replication slots aren't included in
the WAL and thus don't make it to replicas. In the case of a failover, it's
not clear what (if anything) can be done to continue receiving a change
stream.

I noticed there was a failover slot patch that looks like it won't be
making it for 9.6 (if at all), but does anyone else have suggestions for
handling failover + logical decoding? Is there an answer, or is it limited
to being a very manual process for now?
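
For what it's worth, my current understanding is that the manual process
amounts to recreating the slot by hand on the promoted replica and living
with the gap. A minimal sketch, assuming a slot named 'my_slot' and the
test_decoding output plugin:

-- on the newly promoted primary; the old slot does not carry over
SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');

-- the new slot only sees changes from its creation point onward, so the
-- consumer has to deduplicate against Kafka or re-snapshot the tables
SELECT * FROM pg_logical_slot_get_changes('my_slot', NULL, NULL);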

Thanks in advance.

Best,
Colin


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-08-05 Thread Tatsuo Ishii
> Yes, the VACUUM truncation is still an issue. But statements are
> retryable, just like deadlocks.
> 
> Unfortunately the truncation logic always kicks in for small tables of
> less than 16 blocks. It's more forgiving on bigger tables.

Oh, I didn't know that. Thanks for the info.

> Maybe we could defer the truncation on the standby in some cases.

Do we want to add this to the TODO list?

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] perc 9 series fastpath feature for database data

2016-08-05 Thread Pavel Suderevsky
Hi,

I need advice regarding RAID Controller configuration for PostgreSQL data.
Known common best practices:
- RAID cache at least 512MB
- BBU is must have
- cache mode = write back
- disk write cache mode = disabled for HDDs and consumer-level SSDs; enabled
for enterprise-level SSDs with backup capacitors
- barrier=0
- vm.dirty_background_bytes=67108864 and vm.dirty_bytes=536870912 for RAID
with 512MB cache.


But now I want to ask about special configuration for Dell PowerEdge RAID
Controller (PERC) 9 series with 2Gb write cache due to Fastpath feature
listed in Dell's documentation:

> FastPath is a feature that improves application performance by delivering
> high I/O per second (IOPS) for the Solid State Drives (SSD).

> To enable FastPath on a virtual disk, the Dell PowerEdge RAID Controller
> (PERC) 9 series cache policies need to be set to Write-Through and No Read
> Ahead. This enables FastPath to use the proper data path through the
> controller based on command (read/write), IO size, and RAID type.


The plan is to use Dell enterprise-level SSDs in RAID 10 for the PostgreSQL
data.

It seems that with this RAID controller and these SSDs the best practice
would be slightly adjusted (a concrete sketch follows the list):
- cache mode = write through
- disk write cache mode = disabled
- barrier=0
- vm.dirty_background_bytes=16777216 and vm.dirty_bytes=134217728
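
A concrete sketch of those adjustments (device name and mount point are
illustrative):

# /etc/sysctl.d/30-postgres-io.conf
vm.dirty_background_bytes = 16777216
vm.dirty_bytes = 134217728

# /etc/fstab entry for the data volume (ext4, barriers off)
/dev/sdb1  /var/lib/pgsql/data  ext4  noatime,barrier=0  0  2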

Wouldn't this configuration be dangerous in case of a power-loss disaster?
In particular, isn't it dangerous to use barrier=0 with a Write-Through RAID
cache mode?

Thanks in advance.


Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Christian Ohler
On Fri, Aug 5, 2016 at 1:26 PM, Rob Sargent  wrote:
> On 08/05/2016 02:15 PM, Christian Ohler wrote:
>>
>> I'm looking for a statement (or sequence of statements) that, when run
>> within a transaction, tells the client if any writes are happening in
>> that transaction – basically an interface similar to my proposed
>> solution.  I have some database wrapper code on the client that passes
>> along arbitrary statements to Postgres, and I'd like for that wrapper
>> code to be able to determine whether the current transaction is a
>> write (for various purposes including logging).  It would be nice not
>> to make the client-side wrapper code dependent on instrumentation of
>> the database schema itself.
>>
>> What can you tell me about my proposed solution?  Does it do what I
>> describe I want from it?  Are there limitations I should be aware of?
>
> At what point do you intend to inform the client that the db will be
(significantly) altered? Your planned call is within the transaction and
> presumably late in the sequence (so the locks have been created). Not sure
> if your client can see any results until after the transaction has been
> largely executed.  Does that matter?

If I'm understanding you correctly, it does not matter – I'm looking
for a method to determine whether the current transaction includes any
writes _so far_.

Thanks,
Christian.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Christian Ohler
On Fri, Aug 5, 2016 at 1:24 PM, Tom Lane  wrote:
> Christian Ohler  writes:
>> Thanks, fair point.  I should have mentioned that I know about triggers but
>> was hoping to find a less invasive mechanism (IIUC, I'd have to install a
>> trigger on every table) – it seems to me that Postgres should just be able
>> to tell me whether COMMIT will do anything, it obviously has to track that
>> somehow (or some approximation of it).
>
> You could check to see if the current transaction has had an XID assigned,
> or if it's emitted any WAL records.  There are already tests for those
> sorts of conditions in various places, though I do not think they're
> exposed at the SQL level.

Yes, checking if the current transaction has an XID assigned was the
idea behind my proposed solution above:

SELECT count(*) FROM pg_locks WHERE pid=pg_backend_pid() AND
locktype='transactionid' AND mode='ExclusiveLock' AND granted;

(See my original email for why I think this checks whether an XID is
assigned.)  It works in my superficial tests.  Does it look like I'm
checking for the right conditions?

Checking for WAL records is an interesting idea, thanks – it seems
like that would be an even more direct test than whether an XID has
been assigned.  I looked at pg_current_xlog_insert_location(), but
that seems to be global, not limited to the current transaction, so
doesn't seem usable for this.
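
For reference, here is that check wrapped up the way my client-side code
calls it (the function name is just for illustration):

CREATE OR REPLACE FUNCTION tx_has_writes() RETURNS boolean
LANGUAGE sql STABLE AS $$
  SELECT EXISTS (
    SELECT 1 FROM pg_locks
    WHERE pid = pg_backend_pid()
      AND locktype = 'transactionid'
      AND mode = 'ExclusiveLock'
      AND granted
  );
$$;

-- BEGIN;
-- SELECT tx_has_writes();                 -- false: no XID assigned yet
-- UPDATE foo SET bar = 1 WHERE bar = 1;
-- SELECT tx_has_writes();                 -- true: the UPDATE assigned an XID
-- COMMIT;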


>> Another thing I should have mentioned is that I don't consider incrementing
>> a sequence to be a modification.
>
> Things might not work the way you want on that...

It's fine if they result in false positives (but don't seem to for my
above method).

Thanks,
Christian.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Rob Sargent

On 08/05/2016 02:15 PM, Christian Ohler wrote:

On Fri, Aug 5, 2016 at 12:55 PM, Rob Sargent  wrote:

What sort of interface are you looking for? Where/when would you grab the
information? Do what with it?  Log triggers are the typical pattern here (with
packages just for that sort of thing).

I'm looking for a statement (or sequence of statements) that, when run
within a transaction, tells the client if any writes are happening in
that transaction – basically an interface similar to my proposed
solution.  I have some database wrapper code on the client that passes
along arbitrary statements to Postgres, and I'd like for that wrapper
code to be able to determine whether the current transaction is a
write (for various purposes including logging).  It would be nice not
to make the client-side wrapper code dependent on instrumentation of
the database schema itself.

What can you tell me about my proposed solution?  Does it do what I
describe I want from it?  Are there limitations I should be aware of?
At what point do you intend to inform the client that the db will be 
(significantly) altered? Your planned call is within the transaction
and presumably late in the sequence (so the locks have been created). 
Not sure if your client can see any results until after the transaction 
has been largely executed.  Does that matter?




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Tom Lane
Christian Ohler  writes:
> Thanks, fair point.  I should have mentioned that I know about triggers but
> was hoping to find a less invasive mechanism (IIUC, I'd have to install a
> trigger on every table) – it seems to me that Postgres should just be able
> to tell me whether COMMIT will do anything, it obviously has to track that
> somehow (or some approximation of it).

You could check to see if the current transaction has had an XID assigned,
or if it's emitted any WAL records.  There are already tests for those
sorts of conditions in various places, though I do not think they're
exposed at the SQL level.

> Another thing I should have mentioned is that I don't consider incrementing
> a sequence to be a modification.

Things might not work the way you want on that...

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Christian Ohler
On Fri, Aug 5, 2016 at 12:55 PM, Rob Sargent  wrote:
>
> What sort of interface are you looking for? Where/when would you grab the
> information? Do what with it?  Log triggers are the typical pattern here
> (with packages just for that sort of thing).

I'm looking for a statement (or sequence of statements) that, when run
within a transaction, tells the client if any writes are happening in
that transaction – basically an interface similar to my proposed
solution.  I have some database wrapper code on the client that passes
along arbitrary statements to Postgres, and I'd like for that wrapper
code to be able to determine whether the current transaction is a
write (for various purposes including logging).  It would be nice not
to make the client-side wrapper code dependent on instrumentation of
the database schema itself.

What can you tell me about my proposed solution?  Does it do what I
describe I want from it?  Are there limitations I should be aware of?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Rob Sargent



On 08/05/2016 01:48 PM, Christian Ohler wrote:
Thanks, fair point.  I should have mentioned that I know about 
triggers but was hoping to find a less invasive mechanism (IIUC, I'd 
have to install a trigger on every table) – it seems to me that 
Postgres should just be able to tell me whether COMMIT will do 
anything, it obviously has to track that somehow (or some 
approximation of it).


Another thing I should have mentioned is that I don't consider 
incrementing a sequence to be a modification.



On Fri, Aug 5, 2016 at 12:35 PM, Alex Ignatov wrote:


Hi! Make a trigger function.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




On Fri, Aug 5, 2016 at 10:25 PM +0300, "Christian Ohler" wrote:

Hi,

I'm trying to find a way to have Postgres tell me if the
current transaction would modify the database if I committed it
now.  I can live with a conservative approximation (sometimes
– ideally, rarely – get a "yes" even though nothing would be
modified, but never get a "no" even though there are pending
modifications).  It's acceptable (probably even desirable) if
a no-op write operation like "UPDATE foo SET bar = 1 WHERE bar
= 1" is considered a modification.

(The use case is an audit log mechanism vaguely similar to
pgMemento.)


This sentence from
https://www.postgresql.org/docs/9.5/static/view-pg-locks.html :

> If a permanent ID is assigned to the transaction (which normally happens
> only if the transaction changes the state of the database), it also holds
> an exclusive lock on its permanent transaction ID until it ends.

makes me think that I can perhaps do it as follows:

SELECT count(*) FROM pg_locks WHERE pid=pg_backend_pid() AND
locktype='transactionid' AND mode='ExclusiveLock' AND granted;

Is that right?  "Permanent transaction ID" refers to the XID,
correct?  Are there other, better ways?  Are there ways to
avoid false positives due to temp tables?

Thanks in advance,
Christian.


What sort of interface are you looking for? Where/when would you grab
the information? Do what with it? Log triggers are the typical pattern
here (with packages just for that sort of thing).




Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Christian Ohler
Thanks, fair point.  I should have mentioned that I know about triggers but
was hoping to find a less invasive mechanism (IIUC, I'd have to install a
trigger on every table) – it seems to me that Postgres should just be able
to tell me whether COMMIT will do anything, it obviously has to track that
somehow (or some approximation of it).

Another thing I should have mentioned is that I don't consider incrementing
a sequence to be a modification.


On Fri, Aug 5, 2016 at 12:35 PM, Alex Ignatov 
wrote:

> Hi! Make a trigger function.
>
> Alex Ignatov
> Postgres Professional: http://www.postgrespro.com
> Russian Postgres Company
>
>
>
>
> On Fri, Aug 5, 2016 at 10:25 PM +0300, "Christian Ohler" 
> wrote:
>
> Hi,
>>
>> I'm trying to find a way to have Postgres tell me if the current
>> transaction would modify the database if I committed it now.  I can live with a
>> conservative approximation (sometimes – ideally, rarely – get a "yes" even
>> though nothing would be modified, but never get a "no" even though there
>> are pending modifications).  It's acceptable (probably even desirable) if a
>> no-op write operation like "UPDATE foo SET bar = 1 WHERE bar = 1" is
>> considered a modification.
>>
>> (The use case is an audit log mechanism vaguely similar to pgMemento.)
>>
>>
>> This sentence from
>> https://www.postgresql.org/docs/9.5/static/view-pg-locks.html :
>>
>> > If a permanent ID is assigned to the transaction (which normally happens
>> > only if the transaction changes the state of the database), it also
>> holds
>> > an exclusive lock on its permanent transaction ID until it ends.
>>
>> makes me think that I can perhaps do it as follows:
>>
>> SELECT count(*) FROM pg_locks WHERE pid=pg_backend_pid() AND
>> locktype='transactionid' AND mode='ExclusiveLock' AND granted;
>>
>> Is that right?  "Permanent transaction ID" refers to the XID, correct?
>> Are there other, better ways?  Are there ways to avoid false positives due
>> to temp tables?
>>
>> Thanks in advance,
>> Christian.
>>
>>


Re: [GENERAL] Streaming Replica Master-Salve Config.

2016-08-05 Thread John R Pierce

On 8/4/2016 9:15 AM, Eduardo Morras wrote:

If you set max_connections too high, those connections will compete/fight for
the same resources: CPU processing, I/O to disks, memory and caches, locks; and
postgres will spend more time managing the resources than doing real work.
Believe me (or us), set it as we say and use a bouncer like pgbouncer. It can
run on the same server.


idle connections only use a small amount of memory, a process, a socket,
and some file handles. When you have multiple databases, it's
impossible to share a connection pool across them.


the OP is talking about having 350 'tenants' each with their own 
database and user on a single server.


your 1 connection per core suggestion is ludicrous for this
scenario. In many database applications, most connections are idle
most of the time.  Sure, you don't want much over about 2-4X your CPU
thread count actively doing queries at the same time if you want the
max transactions/second aggregate throughput, but you can still get
acceptable performance at several times that, depending on the
workload. In my benchmarks the aggregate TPS rolls off fairly slowly
for quite a ways past the 2-4 connections per hardware thread or core
level, at least doing simple OLTP stuff on a high-concurrency storage
system (lots of fast disks in RAID 10).






--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Alex Ignatov
Hi! Make a trigger function.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




On Fri, Aug 5, 2016 at 10:25 PM +0300, "Christian Ohler"  
wrote:


Hi,
I'm trying to find a way to have Postgres tell me if the current transaction 
would modify the database if I committed it now.  I can live with a conservative
approximation (sometimes – ideally, rarely – get a "yes" even though nothing 
would be modified, but never get a "no" even though there are pending 
modifications).  It's acceptable (probably even desirable) if a no-op write 
operation like "UPDATE foo SET bar = 1 WHERE bar = 1" is considered a 
modification.
(The use case is an audit log mechanism vaguely similar to pgMemento.)

This sentence from
https://www.postgresql.org/docs/9.5/static/view-pg-locks.html :

> If a permanent ID is assigned to the transaction (which normally happens
> only if the transaction changes the state of the database), it also holds
> an exclusive lock on its permanent transaction ID until it ends.

makes me think that I can perhaps do it as follows:
SELECT count(*) FROM pg_locks WHERE pid=pg_backend_pid() AND 
locktype='transactionid' AND mode='ExclusiveLock' AND granted;

Is that right?  "Permanent transaction ID" refers to the XID, correct?  Are 
there other, better ways?  Are there ways to avoid false positives due to temp 
tables?
Thanks in advance,
Christian.


[GENERAL] Detecting if current transaction is modifying the database

2016-08-05 Thread Christian Ohler
Hi,

I'm trying to find a way to have Postgres tell me if the current
transaction would modify the database if I committed it now.  I can live with a
conservative approximation (sometimes – ideally, rarely – get a "yes" even
though nothing would be modified, but never get a "no" even though there
are pending modifications).  It's acceptable (probably even desirable) if a
no-op write operation like "UPDATE foo SET bar = 1 WHERE bar = 1" is
considered a modification.

(The use case is an audit log mechanism vaguely similar to pgMemento.)


This sentence from
https://www.postgresql.org/docs/9.5/static/view-pg-locks.html :

> If a permanent ID is assigned to the transaction (which normally happens
> only if the transaction changes the state of the database), it also holds
> an exclusive lock on its permanent transaction ID until it ends.

makes me think that I can perhaps do it as follows:

SELECT count(*) FROM pg_locks WHERE pid=pg_backend_pid() AND
locktype='transactionid' AND mode='ExclusiveLock' AND granted;

Is that right?  "Permanent transaction ID" refers to the XID, correct?  Are
there other, better ways?  Are there ways to avoid false positives due to
temp tables?

Thanks in advance,
Christian.


Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Alex Ignatov


On 05.08.2016 18:54, Tom Lane wrote:

Alex Ignatov  writes:

On 05.08.2016 17:51, Tom Lane wrote:

Sure.  Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shared_buffers.
Otherwise, you might find yourself having to throw an "out of disk
space" error after having already committed the relevant INSERTs.

What about running out of space when filling WAL files?

What about it?  That will be reported before committing, too.

What Grigory wants would imply committing and then sometime later
saying "oh, wait ... remember that data we told you we'd committed?
We lied."

Temp tables do indeed disappear at session end (and a fortiori after
a crash), but that doesn't create an excuse for them not to have
normal transactional behavior within the session.

regards, tom lane


If a temp table fits in temp_buffers, why do we have to reserve disk space
for that table?


If we commit after filling a temp table, ok => not enough temp_buffers for
the new temp table, write the first one to disk => not enough space for the
temp file, ok: our system can't work any further anyway.


I can't see any problem in writing temp table data to disk only when
temp_buffers is full.


Any arguments against that behavior?

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Grigory Smolkin



On 08/05/2016 07:18 PM, Andrew Sullivan wrote:

On Fri, Aug 05, 2016 at 06:02:08PM +0300, Grigory Smolkin wrote:

But it's a temporary table, so it's equal to saying 'I don't care about this
data', and I can get 'out of disk space' regardless of using temporary
tables.

What are we winning here?

Surely, that the transaction operates in a predictable way?  A temp
table doesn't say, "I don't care about this data," it says, "I don't
care about this data over the long haul."  I've had lots of data go
through temp tables that I really really wanted to get into some other
place later, and it'd suck if the transaction failed half way through
because it turns out there's nowhere to put the data I've just staged.

A



But in that case you lose your data in case of a power outage, deadlock,
or network problem.
As it seems to me, you can either 'care about your data' and use regular
tables, protected by WAL, or not, and use temp tables.
What I am trying to understand is: are temp tables really worth that many
disk operations? First we create an empty file, then reserve space for it,
and then we write data in case of temp_buffers overflow. If there are many
temp tables, this starts to eat a lot of I/O.

Wouldn't it be more effective to create the file for a temp table on demand?
I think for most temp table operations, temp_buffers memory will be enough.


--
Grigory Smolkin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Andrew Sullivan
On Fri, Aug 05, 2016 at 06:02:08PM +0300, Grigory Smolkin wrote:
> But it's a temporary table, so it's equal to saying 'I don't care about this
> data', and I can get 'out of disk space' regardless of using temporary
> tables.
> 
> What are we winning here?

Surely, that the transaction operates in a predictable way?  A temp
table doesn't say, "I don't care about this data," it says, "I don't
care about this data over the long haul."  I've had lots of data go
through temp tables that I really really wanted to get into some other
place later, and it'd suck if the transaction failed half way through
because it turns out there's nowhere to put the data I've just staged.

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Grigory Smolkin

I can get an error anyway on the first SELECT because of a hint bit write, no?


On 08/05/2016 06:54 PM, Tom Lane wrote:

Alex Ignatov  writes:

On 05.08.2016 17:51, Tom Lane wrote:

Sure.  Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shared_buffers.
Otherwise, you might find yourself having to throw an "out of disk
space" error after having already committed the relevant INSERTs.

What about running out of space when filling WAL files?

What about it?  That will be reported before committing, too.

What Grigory wants would imply committing and then sometime later
saying "oh, wait ... remember that data we told you we'd committed?
We lied."

Temp tables do indeed disappear at session end (and a fortiori after
a crash), but that doesn't create an excuse for them not to have
normal transactional behavior within the session.

regards, tom lane


--
Grigory Smolkin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Tom Lane
Alex Ignatov  writes:
> On 05.08.2016 17:51, Tom Lane wrote:
>> Sure.  Just like it reserves space for ordinary tables right away,
>> long before there's any need to push the data out of shared_buffers.
>> Otherwise, you might find yourself having to throw an "out of disk
>> space" error after having already committed the relevant INSERTs.

> What about running out of space when filling WAL files?

What about it?  That will be reported before committing, too.

What Grigory wants would imply committing and then sometime later
saying "oh, wait ... remember that data we told you we'd committed?
We lied."

Temp tables do indeed disappear at session end (and a fortiori after
a crash), but that doesn't create an excuse for them not to have
normal transactional behavior within the session.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Tom Lane
Grigory Smolkin  writes:
> Thank you for your answer.
> But it's a temporary table, so it's equal to saying 'I don't care about
> this data', and I can get 'out of disk space' regardless of using
> temporary tables.

> What are we winning here?

Sane behavior.  Would you really want this:

=> CREATE TEMP TABLE foo(...);
CREATE TABLE
=> INSERT INTO foo SELECT lots-o-rows;
INSERT 
... much later ...
=> SELECT * FROM unrelated_temp_table;
ERROR: out of disk space

If we might suffer out-of-disk-space while flushing a buffer, that's
what we'd risk.  So we allocate the disk space before accepting the
INSERT in the first place.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Alex Ignatov


On 05.08.2016 17:51, Tom Lane wrote:

Grigory Smolkin  writes:

I've noticed an interesting aspect of how temp tables work. It appears
postgres is trying to reserve space on disk for temp tables even before
temp_buffers overflows.

Sure.  Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shared_buffers.
Otherwise, you might find yourself having to throw an "out of disk
space" error after having already committed the relevant INSERTs.

regards, tom lane


What about running out of space when filling WAL files? Just roll back and
nothing else. We can run into this even with space reservation for WAL files.


In this situation with temp tables, we reserve space long before we ever
need it if the temp table is smaller than temp_buffers.
Why not just roll back the transaction in an "out of space" situation? With
this preliminary reservation we use HDD resources that in fact we don't
need. With a high rate of temp table creation, we saturate the HDD
with "just in case" reservations.


Is there any other explanation besides the "out of space" problem? Maybe
there are some fundamental things tied to that?


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Grigory Smolkin

Thank you for your answer.

But it's a temporary table, so it's equal to saying 'I don't care about
this data', and I can get 'out of disk space' regardless of using
temporary tables.


What are we winning here?


On 08/05/2016 05:51 PM, Tom Lane wrote:

Grigory Smolkin  writes:

I've noticed an interesting aspect of how temp tables work. It appears
postgres is trying to reserve space on disk for temp tables even before
temp_buffers overflows.

Sure.  Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shared_buffers.
Otherwise, you might find yourself having to throw an "out of disk
space" error after having already committed the relevant INSERTs.

regards, tom lane


--
Grigory Smolkin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] fun fact about temp tables

2016-08-05 Thread Tom Lane
Grigory Smolkin  writes:
> I've noticed an interesting aspect of how temp tables work. It appears
> postgres is trying to reserve space on disk for temp tables even before
> temp_buffers overflows.

Sure.  Just like it reserves space for ordinary tables right away,
long before there's any need to push the data out of shared_buffers.
Otherwise, you might find yourself having to throw an "out of disk
space" error after having already committed the relevant INSERTs.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] fun fact about temp tables

2016-08-05 Thread Grigory Smolkin

Hello, everyone!

I've noticed an interesting aspect of how temp tables work. It appears
postgres is trying to reserve space on disk for temp tables even before
temp_buffers overflows.


test4=# show temp_buffers ;
 temp_buffers
--------------
 8MB

test4=# create temp table t(a int, b int);

strace:

-

open("base/65677/t3_73931", O_RDONLY)   = -1 ENOENT (No such file or 
directory)

stat("base/65677", {st_mode=S_IFDIR|0700, st_size=12288, ...}) = 0
open("base/65677/t3_73931", O_RDWR|O_CREAT|O_EXCL, 0600) = 6
open("base/65677/12828_fsm", O_RDWR)= 8
lseek(8, 0, SEEK_END)   = 24576
open("base/65677/12958_fsm", O_RDWR)= 9
lseek(9, 0, SEEK_END)   = 24576
open("base/65677/12851_fsm", O_RDWR)= 12
lseek(12, 0, SEEK_END)  = 24576
open("base/65677/12840_fsm", O_RDWR)= 13
lseek(13, 0, SEEK_END)  = 24576
open("base/65677/12840", O_RDWR)= 14
lseek(14, 0, SEEK_END)  = 360448
close(6)= 0

--

test4=# INSERT INTO t (a, b) SELECT NULL, i FROM generate_series(1,1000) i;
---

open("base/65677/t3_73931_fsm", O_RDWR) = -1 ENOENT (No such file or 
directory)

open("base/65677/t3_73931", O_RDWR) = 15
lseek(15, 0, SEEK_END)  = 0
lseek(15, 0, SEEK_END)  = 0
write(15, 
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 
8192) = 8192


---


test4=# select pg_size_pretty(pg_total_relation_size('t'));
 pg_size_pretty
----------------
 64 kB
(1 row)


Postgres fills the relation file with nulls page by page. Isn't that just
kind of killing the whole idea of temp tables?




--
Grigory Smolkin
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company



Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-08-05 Thread Simon Riggs
On 5 August 2016 at 14:06, Tatsuo Ishii  wrote:

> On primary:
> test=# vacuum verbose t1;
> INFO:  vacuuming "public.t1"
> INFO:  "t1": removed 3 row versions in 1 pages
> INFO:  "t1": found 3 removable, 0 nonremovable row versions in 1 out of 1 
> pages
> DETAIL:  0 dead row versions cannot be removed yet.
> There were 0 unused item pointers.
> Skipped 0 pages due to buffer pins.
> 0 pages are entirely empty.
> CPU 0.00s/0.00u sec elapsed 0.00 sec.
> INFO:  "t1": truncated 1 to 0 pages
> DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
> VACUUM
>
> After while on standby:
> test=# select * from t1;
> FATAL:  terminating connection due to conflict with recovery
> DETAIL:  User was holding a relation lock for too long.
> HINT:  In a moment you should be able to reconnect to the database and repeat 
> your command.
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.

Yes, the VACUUM truncation is still an issue. But statements are
retryable, just like deadlocks.

Unfortunately the truncation logic always kicks in for small tables of less
than 16 blocks. It's more forgiving on bigger tables.

Maybe we could defer the truncation on the standby in some cases.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Question about wal files / pg_xlogs

2016-08-05 Thread Alex Ignatov

Hello!

As far as I can see, the files are delivered not with a delay but with a time shift.

1. Can you show me restore_command on slave?

2. Also, can you check the archived WAL creation time on the slaves in the
archive location after you copied them with archive_command? Is it near the
WAL creation time in pg_xlog? Or different?


3. How did you check timezone equivalence between the master and the slave?
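
For comparison, typical commands look something like this sketch (paths are
illustrative):

# postgresql.conf on the master
archive_command = 'rsync -a %p slave:/var/lib/pgsql/9.2/wal_archive/%f'

# recovery.conf on the slave
restore_command = 'cp /var/lib/pgsql/9.2/wal_archive/%f %p'

Also, to see how far behind replay actually is, you can run this on the
slave:

SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;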


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

On 04.08.2016 05:21, Patrick B wrote:

Hi all,

I'm currently using PostgreSQL 9.2. I noticed that the WAL files are
being generated by the master fine, no problems. But on the slaves, there
seems to be a delay in the delivery of those WAL files.


I have two slaves using streaming replication and WAL file shipping
from the master.


*On the master:*

ls -ltr /var/lib/pgsql/9.2/data/pg_xlogs/
Aug  4 02:18 0002159D00D1
Aug  4 02:18 0002159D00D2
Aug  4 02:18 0002159D00D3


select * from pg_current_xlog_location();
159D/D6C8DAF8


So, seems to be ok.


*On the slave:*


ls -ltr /var/lib/pgsql/9.2/wal_archive:
Aug  4 00:58 0002159C0071
Aug  4 00:58 0002159C0072
Aug  4 00:58 0002159C0073


See the time difference? 2 hours? It seems the files are being
delivered with a 2-hour delay.


The streaming replication is working fine... But if that goes down, 
I'll need the wal_files up to date to recover the database.


How can I see what's going on? What would be the steps? any tips?

Cheers
Patrick.




Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-08-05 Thread Tatsuo Ishii
>>> https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK
>>
>> I wonder if their problem could be fixed by using
>> hot_standby_feedback.  I have encountered a similar problem, but it seems
>> hot_standby_feedback was not any help in this case:
>>
>> https://www.postgresql.org/message-id/20130829.164457.863984798767991096.t-ishii%40sraoss.co.jp
> 
> There have been various bugs and enhancements over the years, not all
> of which were backpatched.

That particular case still does not work with PostgreSQL 9.5.3.

On primary:
create table t1(i int);
insert into t1 values(1),(2),(3);

On standby:
begin;
test=# select version();
                           version
--------------------------------------------------------------------------------
 PostgreSQL 9.5.3 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
4.8.4-2ubuntu1~14.04.1) 4.8.4, 64-bit
(1 row)

select * from t1;
 i 
---
 1
 2
 3
(3 rows)

On primary:
delete from t1;

On standby:
select * from t1;
 i 
---
(0 rows)

On primary:
test=# vacuum verbose t1;
INFO:  vacuuming "public.t1"
INFO:  "t1": removed 3 row versions in 1 pages
INFO:  "t1": found 3 removable, 0 nonremovable row versions in 1 out of 1 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "t1": truncated 1 to 0 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM

After while on standby:
test=# select * from t1;
FATAL:  terminating connection due to conflict with recovery
DETAIL:  User was holding a relation lock for too long.
HINT:  In a moment you should be able to reconnect to the database and repeat 
your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] how to replace last 4 digital phone number into star using regexp_replace?

2016-08-05 Thread Michail Shurutov
You should group the first 7 digits to keep them, "([0-9]{7})", and
replace the last 4 digits, \d{4}:
postgres@postgres=# select
regexp_replace('1891018,1391018,232,abc','([0-9]{7})\d{4}',E'\\1****','g')
;
 regexp_replace
----------------------------------
 1891018****,1391018****,232,abc
(1 row)

Time: 73,528 ms
postgres@postgres=#
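
The same call can be applied to a column in place (a sketch; the table and
column names are made up):

UPDATE contacts
   SET phones = regexp_replace(phones, '([0-9]{7})\d{4}', E'\\1****', 'g')
 WHERE phones ~ '[0-9]{11}';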

05.08.2016 13:35, sunpeng wrote:
> hi, friends,
>    Our phone number format is 11 digits; I want to replace the last 4
> digits with stars, for example:
>    Replace this string including 11-digit phone numbers:
>    '1891018,1391018,232,abc'
> into:
>    '1891018****,1391018****,232,abc'
>    I try to use:
>    select
> regexp_replace('1891018,1391018,232,abc','[0-9]{7}(\d+)','****','g')
>    But it doesn't work, it will return:
>    "****,****,232,abc"
>    Can anyone help me? Thanks a million!
>
> Best Regards
> peng

-- 
WBR, Michail Shurutov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] how to replace last 4 digital phone number into star using regexp_replace?

2016-08-05 Thread sunpeng
hi, friends,
   Our phone number format is 11 digits; I want to replace the last 4 digits
with stars, for example:
   Replace this string including 11-digit phone numbers:
   '1891018,1391018,232,abc'
into:
   '1891018****,1391018****,232,abc'
   I try to use:
   select
regexp_replace('1891018,1391018,232,abc','[0-9]{7}(\d+)','****','g')
   But it doesn't work, it will return:
   "****,****,232,abc"
   Can anyone help me? Thanks a million!

Best Regards
peng


Re: [SPAM] Re: [GENERAL] Streaming Replica Master-Salve Config.

2016-08-05 Thread Moreno Andreo

On 04/08/2016 18:15, Eduardo Morras wrote:

[...]
a) As others said, max_connections = 200 is too high. Set it at your number of 
cores (I use number of cores -1)

Excuse me for cross-threading, but I have to make things clearer for myself.
That's one of the things I find hard to figure out how to approach in my
architecture.
My server has 350 DBs and 350 users, each user with its own DB. Every user
has a "persistent" connection (used to replicate with rubyrep) and some
"burst" connections to connect to colleagues' DBs when necessary.

I'm going to split it across 2 servers, but it doesn't change things.
Even with pgbouncer, how can I manage having all these replication 
connections (plus "burst" connections) with pgbouncer and a low (<100) 
max_connections? Is it even possible?
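
To make the question concrete, this is roughly what I had imagined for
pgbouncer (values are just illustrative):

; pgbouncer.ini
[databases]
* = host=127.0.0.1 port=5432

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 2

But I don't see how the 350 long-lived rubyrep connections would fit into a
transaction-mode pool like that.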

Thanks
Moreno.

---   ---
Eduardo Morras 







--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-08-05 Thread Simon Riggs
On 5 August 2016 at 09:16, Tatsuo Ishii  wrote:
>> On 27/07/16 18:54, Chris Travers wrote:
>>> Another one I think they obliquely referred to (in the subtle problems
>>> section) was the fact that if you have longer-running queries on the
>>> replica with a lot of updates, you can get funny auto-vacuum-induced
>>> errors (writes from autovacuum on the master can interrupt queries on
>>> the slave).  BTW if there is interest in what could be done for that,
>>> something which allows autovacuum to decide how long to wait before
>>> cleaning up dead tuples would be a great enhancement.
>>
>> You mean something like hot_standby_feedback?
>>
>> https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK
>
> I wonder if their problem could be fixed by using
> hot_standby_feedback.  I have encountered a similar problem, but it seems
> hot_standby_feedback was not any help in this case:
>
> https://www.postgresql.org/message-id/20130829.164457.863984798767991096.t-ishii%40sraoss.co.jp

There have been various bugs and enhancements over the years, not all
of which were backpatched.

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Uber migrated from Postgres to MySQL

2016-08-05 Thread Tatsuo Ishii
> On 27/07/16 18:54, Chris Travers wrote:
>> Another one I think they obliquely referred to (in the subtle problems
>> section) was the fact that if you have longer-running queries on the
>> replica with a lot of updates, you can get funny auto-vacuum-induced
>> errors (writes from autovacuum on the master can interrupt queries on
>> the slave).  BTW if there is interest in what could be done for that,
>> something which allows autovacuum to decide how long to wait before
>> cleaning up dead tuples would be a great enhancement.
> 
> You mean something like hot_standby_feedback?
> 
> https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK

I wonder if their problem could be fixed by using
hot_standby_feedback.  I have encountered a similar problem, but it seems
hot_standby_feedback was not any help in this case:

https://www.postgresql.org/message-id/20130829.164457.863984798767991096.t-ishii%40sraoss.co.jp
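
(For anyone trying it: the setting goes into postgresql.conf on the standby
and takes effect on reload,

hot_standby_feedback = on

but as the message above shows, it was no help against the relation lock
conflict caused by the VACUUM truncation.)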

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general