Re: [HACKERS] pg_basebackup -x/X doesn't play well with archive_mode wal_keep_segments

2015-02-12 Thread Sergey Konoplev
Hi,

On Mon, Jan 5, 2015 at 4:34 AM, Andres Freund and...@2ndquadrant.com wrote:
  pg_receivexlog: could not create archive status file
  mmm/archive_status/00010003.done: No such file or
  directory
 
  Dang. Stupid typo. And my tests didn't catch it, because I had
  archive_directory in the target directory :(

I started getting these errors after upgrading from 9.2.8 to 9.2.10.
Is it something critical that requires a version downgrade, or can I just
ignore these errors?

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] pg_basebackup -x/X doesn't play well with archive_mode wal_keep_segments

2015-02-12 Thread Sergey Konoplev
On Thu, Feb 12, 2015 at 11:13 AM, Andres Freund and...@2ndquadrant.com wrote:
I started getting these errors after upgrading from 9.2.8 to 9.2.10.
Is it something critical that requires a version downgrade, or can I just
ignore these errors?

 What errors are you getting in precisely which circumstances? You're using 
 pg_receivexlog?

Errors like this one

pg_receivexlog: could not create archive status file
/mnt/archive/wal/archive_status/000402AF00B7.done: No
such file or directory
pg_receivexlog: disconnected..

on

Linux xyz 3.2.0-76-generic #111-Ubuntu SMP
PostgreSQL 9.2.10

Yes, I use pg_receivexlog. I also use a wrapper/watchdog script around
pg_receivexlog which tracks failures and restarts the latter.
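In essence the watchdog does something like this (a minimal sketch, not
the actual manage_pitr.sh; the paths and connection options here are
assumptions):

while true; do
    # restart pg_receivexlog whenever it exits, logging the failure
    pg_receivexlog -D /mnt/archive/wal -h master1 -U replication ||
        echo "$(date) ERROR manage_pitr.sh: pg_receivexlog failed" >> $LOG
    sleep 60
done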

The WAL file times correlate with the pg_receivexlog failures.

postgres@xyz:~$ ls -ltr /mnt/archive/wal/ | tail
-rw--- 1 postgres postgres 16777216 Feb 12 10:58 000402B60011
-rw--- 1 postgres postgres 16777216 Feb 12 11:02 000402B60012
-rw--- 1 postgres postgres 16777216 Feb 12 11:06 000402B60013
-rw--- 1 postgres postgres 16777216 Feb 12 11:11 000402B60014
-rw--- 1 postgres postgres 16777216 Feb 12 11:15 000402B60015
-rw--- 1 postgres postgres 16777216 Feb 12 11:19 000402B60016
-rw--- 1 postgres postgres 16777216 Feb 12 11:23 000402B60017
-rw--- 1 postgres postgres 16777216 Feb 12 11:27 000402B60018
-rw--- 1 postgres postgres 16777216 Feb 12 11:30 000402B60019
-rw--- 1 postgres postgres 16777216 Feb 12 11:32
000402B6001A.partial

postgres@xyz:~$ cat /var/log/pgcookbook/manage_pitr-wal.log | tail
Thu Feb 12 11:15:18 PST 2015 ERROR manage_pitr.sh: Problem occured
during WAL archiving: pg_receivexlog: could not create archive status
file /mnt/archive/wal/archive_status/000402B60015.done:
No such file or directory
pg_receivexlog: disconnected..
Thu Feb 12 11:19:33 PST 2015 ERROR manage_pitr.sh: Problem occured
during WAL archiving: pg_receivexlog: could not create archive status
file /mnt/archive/wal/archive_status/000402B60016.done:
No such file or directory
pg_receivexlog: disconnected..
Thu Feb 12 11:23:38 PST 2015 ERROR manage_pitr.sh: Problem occured
during WAL archiving: pg_receivexlog: could not create archive status
file /mnt/archive/wal/archive_status/000402B60017.done:
No such file or directory
pg_receivexlog: disconnected..
Thu Feb 12 11:27:32 PST 2015 ERROR manage_pitr.sh: Problem occured
during WAL archiving: pg_receivexlog: could not create archive status
file /mnt/archive/wal/archive_status/000402B60018.done:
No such file or directory
pg_receivexlog: disconnected..
Thu Feb 12 11:30:34 PST 2015 ERROR manage_pitr.sh: Problem occured
during WAL archiving: pg_receivexlog: could not create archive status
file /mnt/archive/wal/archive_status/000402B60019.done:
No such file or directory
pg_receivexlog: disconnected..

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] pg_basebackup -x/X doesn't play well with archive_mode wal_keep_segments

2015-02-12 Thread Sergey Konoplev
On Thu, Feb 12, 2015 at 11:40 AM, Andres Freund and...@2ndquadrant.com wrote:
 This obviously should not be the case. I'll have a look in a couple of hours. 
 Until then you can likely  just work around the problem by creating the 
 archive_status directory.
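With the directory layout from the logs above, that workaround is a
one-liner (adjust the path to your archive location):

postgres@xyz:~$ mkdir -p /mnt/archive/wal/archive_status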

Thank you. Just let me know if you need some extra info or debugging.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] pg_basebackup -x/X doesn't play well with archive_mode wal_keep_segments

2015-02-12 Thread Sergey Konoplev
On Thu, Feb 12, 2015 at 4:18 PM, Andres Freund and...@2ndquadrant.com wrote:
 No need for debugging. It's plain and simply a (cherry-pick) conflict I
 resolved wrongly during backpatching. 9.3, 9.4 and master do not have
 that problem. That whole fix was quite painful because every single
 release had significantly different code :(. pg_basebackup/ is pretty
 messy.
 I'm not sure why my testsuite didn't trigger that problem. Possibly
 because a retry makes things work :(

 Somewhat luckily it's 9.2 only (9.3, 9.4 and master look correct, earlier
 releases don't have pg_receivexlog) and can quite easily be worked
 around by creating the archive_status directory.

The workaround works perfectly for me in this case; I'm going to
upgrade to 9.4 soon anyway.

Thank you, Andres.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] pg_upgrade and epoch

2014-09-02 Thread Sergey Konoplev
On Tue, Sep 2, 2014 at 7:59 PM, Bruce Momjian br...@momjian.us wrote:
 On Wed, Apr 23, 2014 at 12:41:41PM -0700, Sergey Konoplev wrote:
 On Wed, Apr 23, 2014 at 5:26 AM, Bruce Momjian br...@momjian.us wrote:
  Sergey, are you seeing a problem only because you are
  interacting with other systems that didn't reset their epoch?

 I faced this after upgrading clusters with PgQ Skytools3 installed
 only. They didn't interact with any other systems.

 I have developed the attached patch which causes pg_upgrade to preserve
 the transaction epoch.  I plan to apply this for PG 9.5.

That is great news. Thank you, Bruce.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] pg_upgrade and epoch

2014-04-23 Thread Sergey Konoplev
On Wed, Apr 23, 2014 at 5:26 AM, Bruce Momjian br...@momjian.us wrote:
 Sergey, are you seeing a problem only because you are
 interacting with other systems that didn't reset their epoch?

I faced this after upgrading clusters with PgQ Skytools3 installed
only. They didn't interact with any other systems.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] pg_upgrade and epoch

2014-04-22 Thread Sergey Konoplev
On Tue, Apr 22, 2014 at 6:33 PM, Sergey Burladyan eshkin...@gmail.com wrote:
 Currently pg_upgrade copies the XID into the new cluster, but not its epoch. Why?

 Without the epoch from the old cluster, txid_current() in the upgraded database
 returns a lower value than before the upgrade. This breaks, for example, PgQ, and
 it must be fixed by hand after the upgrade with pg_resetxlog.

 PS: see
 http://lists.pgfoundry.org/pipermail/skytools-users/2014-April/001812.html

BTW, I haven't managed to make a test case yet. Recently, when I was
migrating several servers to skytools3 and upgrading from 9.0 to 9.2,
I noticed that the epoch was copied and the timeline id was 0 after the
upgrade, but skytools3 sometimes still didn't like it. Note the
"sometimes" here: in some cases everything was okay, but in some it
wasn't. I still can't explain this, but incrementing the timeline id
always helped.
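For reference, the by-hand pg_resetxlog fix mentioned above is roughly
the following (a sketch; the paths are hypothetical, the new cluster must
be stopped, and the epoch is the number before the '/' in the old
cluster's "Latest checkpoint's NextXID" line):

pg_controldata /path/to/old/data | grep NextXID
pg_resetxlog -e <epoch> /path/to/new/data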

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] pg_upgrade and epoch

2014-04-22 Thread Sergey Konoplev
On Tue, Apr 22, 2014 at 8:08 PM, Sergey Burladyan eshkin...@gmail.com wrote:
 On Wed, Apr 23, 2014 at 6:38 AM, Sergey Konoplev gray...@gmail.com wrote:
 BTW, I haven't managed to make a test case yet. Recently, when I was
 migrating several servers to skytools3 and upgrading from 9.0 to 9.2,
 I noticed that the epoch was copied and the timeline id was 0 after the upgrade, but
 ...

Sorry, just noticed a typo: * timeline id = next xid

 This is strange; if I am not mistaken, the XID is copied by copy_clog_xlog_xid():
 http://doxygen.postgresql.org/pg__upgrade_8c_source.html#l00398
 and there is no epoch (-e XIDEPOCH) in the pg_resetxlog call args.

I can only guess here.

+ Bruce Momjian

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] Cube extension kNN support

2014-03-31 Thread Sergey Konoplev
On Thu, Mar 27, 2014 at 3:26 PM, Sergey Konoplev gray...@gmail.com wrote:
 On Sun, Sep 22, 2013 at 4:38 PM, Stas Kelvich stas.kelv...@gmail.com wrote:
 Here is the patch that introduces kNN search for cubes with euclidean, 
 taxicab and chebyshev distances.

 What is the status of this patch?

Referring to a private conversation with Alexander Korotkov, the
patch is currently in a WIP state and, hopefully, will be ready for 9.5.
I'm ready to actively participate in testing it on a real-world
production data set.

I'm not sure if it is doable at all, but is there any possibility of
implementing it here, or, what would be just great, are there any
ready or half-ready solutions for a Hamming distance based kNN search?
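For context, a kNN cube query with the patch should look something like
this (a sketch; the <-> euclidean distance operator name is my assumption
based on the patch description, and the table is hypothetical):

create index on points using gist (c);
select c from points order by c <-> cube(array[0.5, 0.5, 0.5]) limit 10;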

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] Cube extension kNN support

2014-03-31 Thread Sergey Konoplev
On Mon, Mar 31, 2014 at 12:09 PM, Alexander Korotkov
aekorot...@gmail.com wrote:
 I'm not sure if it is doable at all, but is there any possibility of
 implementing it here, or, what would be just great, are there any
 ready or half-ready solutions for a Hamming distance based kNN search?

 Cube deals with float8 numbers. There is another patch making it work with
 other number types. But Hamming distance is for bit vectors, isn't it?

It can be generalized to character vectors, though I agree that it
was an off-topic question to some extent. I was wondering if there
were any Postgres-related indexable Hamming/Manhattan distance
experiments/thoughts/discussions, and whether kNN can be used here or
not, because from my understanding it can be represented as spatial (I
might be very wrong here).

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] Cube extension kNN support

2014-03-27 Thread Sergey Konoplev
Hi everyone,

On Sun, Sep 22, 2013 at 4:38 PM, Stas Kelvich stas.kelv...@gmail.com wrote:
 Here is the patch that introduces kNN search for cubes with euclidean, 
 taxicab and chebyshev distances.

What is the status of this patch?

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] Streaming replication bug in 9.3.2, WAL contains references to invalid pages

2014-01-03 Thread Sergey Konoplev
On Thu, Jan 2, 2014 at 11:59 AM, Christophe Pettus x...@thebuild.com wrote:
 In both cases, the indicated relation was a primary key index.  In one case, 
 rebuilding the primary key index caused the problem to go away permanently 
 (to date).  In the second case, the problem returned even after a full dump / 
 restore of the master database (that is, after a dump / restore of the 
 master, and reimaging the secondary, the problem returned at the same primary 
 key index, although of course with a different OID value).

 It looks like this has been experienced on 9.2.6, as well:

 
 http://www.postgresql.org/message-id/flat/CAL_0b1s4QCkFy_55kk_8XWcJPs7wsgVWf8vn4=jxe6v4r7h...@mail.gmail.com

This problem worries me a lot too. If someone is interested I still
have a file system copy of the buggy cluster including WAL.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] Polymorphic function calls

2013-12-30 Thread Sergey Konoplev
On Mon, Dec 30, 2013 at 2:03 AM, knizhnik knizh...@garret.ru wrote:
 On 12/30/2013 01:22 AM, Sergey Konoplev wrote:
 On Sun, Dec 29, 2013 at 8:44 AM, knizhnik knizh...@garret.ru wrote:
 But passing derived_table type instead of base_table type to volume()
 function for a record belonging to derived_table seems to be possible and not
 contradicting anything, isn't it?

 Correct.

 Postgres chooses a function based on the passed signature. When you
 specify base_table it will choose volume(base_table) and for
 base_table it will be volume(derived_table) as well.

 I think you mean and for derived_table it will be base_table as well.

Sure. Sorry for the typo.

 Certainly I understand the reasons for such behavior and that it would be
 difficult to introduce some other non-contradictory model...

 But polymorphism is one of the basic features of the OO approach. Definitely
 PostgreSQL is not an OODBMS. But it supports inheritance.
 And I wonder if it is possible/desirable to make an exception to the function
 invocation rules for derived tables?

 Actually a query on table with inherited tables is implemented as join of
 several independent table traversals.

I would say it is more like a union.

 But for all derived tables we restrict the context to the scope of the base table.
 If it is possible to keep the real record type when it is passed to some
 function, we can support polymorphic calls...

 Will it violate some principles/rules? I am not sure...

Well, it is not implemented in Postgres.

 FYI, there is a common practice of following the DRY principle with
 inheritance and polymorphic functions in Postgres. Using your example, it
 might look like this:

 create function volume(r base_table) returns integer as $$ begin
 return r.x*r.y;
 end; $$ language plpgsql strict stable;

 create function volume(r derived_table) returns integer as $$ begin
 return volume(r::base_table) *r.z;
 end; $$ language plpgsql strict stable;

 I do not completely understand you here.
 A function of a derived class can refer to the overridden function when its
 result depends on the result of the overridden function.
 But it is not always true; for example, you can derive class Circle from
 Ellipse, but the formula for a circle's volume does not need to use the
 implementation of volume for an ellipse.

Sure, it is not a universal thing.

 In any case, my question was not how to implement a polymorphic volume
 function.
 I asked whether it is possible to change PostgreSQL's function lookup rules to
 support something like virtual calls.

You can use this hack to achieve it:

create or replace function volume(r base_table, t text) returns integer as $$
declare v integer;
begin
    execute format('select volume(r) from %I r where r.x = %L', t, r.x) into v;
    return v;
end; $$ language plpgsql;

select volume(r, tableoid::regclass::text) from base_table r;
 volume
--------
      2
     60
(2 rows)

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] Polymorphic function calls

2013-12-29 Thread Sergey Konoplev
On Sun, Dec 29, 2013 at 8:44 AM, knizhnik knizh...@garret.ru wrote:
 create function volume(r base_table) returns integer as $$ begin return
 r.x*r.y; end; $$ language plpgsql strict stable;
 create function volume(r derived_table) returns integer as $$ begin return
 r.x*r.y*r.z; end; $$ language plpgsql strict stable;

[...]

 Execution plans of the first and second queries are very similar.
 The difference is that the type of r_1 in the first query is base_table.
 It is obvious that the query should return a fixed set of columns, so

 select * from base_table;

 cannot return the z column.
 But passing derived_table type instead of base_table type to volume()
 function for a record belonging to derived_table seems to be possible and not
 contradicting anything, isn't it?

Correct.

Postgres chooses a function based on the passed signature. When you
specify base_table it will choose volume(base_table) and for
base_table it will be volume(derived_table) as well.

FYI, there is a common practice of following the DRY principle with
inheritance and polymorphic functions in Postgres. Using your example, it
might look like this:

create function volume(r base_table) returns integer as $$ begin
return r.x*r.y;
end; $$ language plpgsql strict stable;

create function volume(r derived_table) returns integer as $$ begin
return volume(r::base_table) *r.z;
end; $$ language plpgsql strict stable;

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] [PATCH] Use MAP_HUGETLB where supported (v3)

2013-10-30 Thread Sergey Konoplev
On Tue, Oct 29, 2013 at 9:31 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Sergey Konoplev gray...@gmail.com writes:
 On Wed, Oct 23, 2013 at 11:03 PM, Abhijit Menon-Sen a...@2ndquadrant.com 
 wrote:
 This is a slightly reworked version of the patch submitted by Richard
 Poole last month, which was based on Christian Kruse's earlier patch.

 Is it possible that this patch will be included in a minor version of
 9.3? IMHO hugepages is a very important ability that postgres lost in
 9.3, and it would be great to have it back ASAP.

 Say what?  There's never been any hugepages support in Postgres.

There was an ability to back shared memory with hugepages when using
<= 9.2. I have used it on ~30 servers for several years, and it brings an
8-17% performance gain depending on the memory size. Here you will find
several paragraphs describing how to do it:
https://github.com/grayhemp/pgcookbook/blob/master/database_server_configuration.md.
Just search for the 'hugepages' word on the page.
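In short, my reading of the recipe from that document (a sketch, not the
full instructions; the page count is an assumption and must be sized to
shared_buffers):

# reserve 2MB huge pages for the shared memory segment
sysctl -w vm.nr_hugepages=4608
# then rebuild postgres with SHM_HUGETLB added to the shmget() flags
# in src/backend/port/sysv_shmem.c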

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] [PATCH] Use MAP_HUGETLB where supported (v3)

2013-10-30 Thread Sergey Konoplev
On Wed, Oct 30, 2013 at 8:11 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Sergey Konoplev gray...@gmail.com writes:
 On Tue, Oct 29, 2013 at 9:31 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Say what?  There's never been any hugepages support in Postgres.

 There was an ability to back shared memory with hugepages when using
 <= 9.2. I have used it on ~30 servers for several years, and it brings an
 8-17% performance gain depending on the memory size. Here you will find
 several paragraphs describing how to do it:
 https://github.com/grayhemp/pgcookbook/blob/master/database_server_configuration.md.

 What this describes is how to modify Postgres to request huge pages.
 That's hardly built-in support.

I wasn't talking about built-in support. I was talking about an ability (a
way) to back shared_buffers with hugepages.

 In any case, as David already explained, we don't do feature additions
 in minor releases.  We'd be especially unlikely to make an exception
 for this, since it has uncertain portability and benefits.  Anything
 that carries portability risks has got to go through a beta testing
 cycle before we'll unleash it on the masses.

Yes, I got the idea. Thanks to both of you for the clarification.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] [PATCH] Use MAP_HUGETLB where supported (v3)

2013-10-30 Thread Sergey Konoplev
On Wed, Oct 30, 2013 at 11:50 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
  There was an ability to back shared memory with hugepages when using
  <= 9.2. I have used it on ~30 servers for several years, and it brings an
  8-17% performance gain depending on the memory size. Here you will find
  several paragraphs describing how to do it:
  https://github.com/grayhemp/pgcookbook/blob/master/database_server_configuration.md.
 
  What this describes is how to modify Postgres to request huge pages.
  That's hardly built-in support.

 I wasn't talking about built-in support. I was talking about an ability (a
 way) to back shared_buffers with hugepages.

 Then what you need is to set
 dynamic_shared_memory_type = sysv
 in postgresql.conf.

I could neither find this parameter in the docs, nor does it work when I
specify it in postgresql.conf.

LOG:  unrecognized configuration parameter
dynamic_shared_memory_type in file
/etc/postgresql/9.3/main/postgresql.conf line 114
FATAL:  configuration file /etc/postgresql/9.3/main/postgresql.conf
contains errors

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] [PATCH] Use MAP_HUGETLB where supported (v3)

2013-10-30 Thread Sergey Konoplev
On Wed, Oct 30, 2013 at 12:17 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
  I wasn't talking about built-in support. I was talking about an ability (a
  way) to back shared_buffers with hugepages.

 Then what you need is to set
 dynamic_shared_memory_type = sysv
 in postgresql.conf.

 The above is mistaken -- there's no way to disable the mmap() segment in
 9.3, other than recompiling with EXEC_BACKEND which is probably
 undesirable for other reasons.

Alternatively, I assume it could be linked with libhugetlbfs, in which case
you don't need any source modifications. However, I am not sure it will
work with shared memory.

 I don't think I had ever heard of that recipe to use huge pages in
 previous versions; since the win is probably significant in some
 systems, we could have made this configurable.

There are several articles on the web describing how to do this, besides
mine. And the win becomes most significant when you have 64GB or more
on your server.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] [PATCH] Use MAP_HUGETLB where supported (v3)

2013-10-30 Thread Sergey Konoplev
On Wed, Oct 30, 2013 at 12:51 PM, Sergey Konoplev gray...@gmail.com wrote:
 On Wed, Oct 30, 2013 at 12:17 PM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
  I wasn't talking about built-in support. I was talking about an ability (a
  way) to back shared_buffers with hugepages.

 Then what you need is to set
 dynamic_shared_memory_type = sysv
 in postgresql.conf.

 The above is mistaken -- there's no way to disable the mmap() segment in
 9.3, other than recompiling with EXEC_BACKEND which is probably
 undesirable for other reasons.

 Alternatively, I assume it could be linked with libhugetlbfs, in which case
 you don't need any source modifications. However, I am not sure it will
 work with shared memory.

BTW, I managed to run 9.3 backed with hugepages after I put
HUGETLB_MORECORE (see man libhugetlbfs) into the environment yesterday,
but, after some time of working, it failed with the messages shown below.

syslog:

Oct 29 17:53:13 grayhemp kernel: [150579.903875] PID 7584 killed due
to inadequate hugepage pool

postgres:

libhugetlbfslibhugetlbfs2013-10-29 17:53:21 PDT LOG:  server process
(PID 7584) was terminated by signal 7: Bus error
2013-10-29 17:53:21 PDT LOG:  terminating any other active server processes
2013-10-29 17:53:21 PDT WARNING:  terminating connection because of crash of
another server process
2013-10-29 17:53:21 PDT DETAIL:  The postmaster has commanded this
server process to roll back the current transaction and exit, because
another server process exited abnormally and possibly corrupted shared
memory.

My theory is that it happened after the amount of huge pages
(vm.nr_overcommit_hugepages + vm.nr_hugepages) was exceeded, but I
might be wrong.

Does anybody have some thoughts on why it happened and how to work around it?

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] [PATCH] Use MAP_HUGETLB where supported (v3)

2013-10-29 Thread Sergey Konoplev
Hi,

On Wed, Oct 23, 2013 at 11:03 PM, Abhijit Menon-Sen a...@2ndquadrant.com 
wrote:
 This is a slightly reworked version of the patch submitted by Richard
 Poole last month, which was based on Christian Kruse's earlier patch.

Is it possible that this patch will be included in a minor version of
9.3? IMHO hugepages is a very important ability that postgres lost in
9.3, and it would be great to have it back ASAP.

Thank you.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




[HACKERS] Any reasons to not move pgstattuple to core?

2013-10-03 Thread Sergey Konoplev
Hi,

In my practice I quite often face the problem of bloated tables. I
usually use pgstattuple to perform investigations. I have also created a
tool that uses an UPDATE-based method to smoothly remove bloat
(https://github.com/grayhemp/pgtoolkit), and it partially depends on
pgstattuple too. To be more precise, it gets much more effective with
pgstattuple.
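A typical investigation with it looks something like this (a sketch; the
table name is hypothetical):

create extension pgstattuple;
select free_percent, dead_tuple_percent
from pgstattuple('some_bloated_table');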

Sometimes its installation leads to a headache, because it requires
approval from security and admins; it is also a problem when I have
read-only access or no access to the database at all (e.g. when
consulting somebody by IM or phone). I think I am not the only person
who has faced these nuances.

Given this, I would like to know if it is possible to move
pgstattuple to core? And if it is, I would like to request this
feature.

Thank you.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] Any reasons to not move pgstattuple to core?

2013-10-03 Thread Sergey Konoplev
On Thu, Oct 3, 2013 at 3:55 PM, Peter Geoghegan p...@heroku.com wrote:
 On Thu, Oct 3, 2013 at 3:34 PM, Robert Haas robertmh...@gmail.com wrote:
 Well, this is a general problem with any extension - somebody might
 want it on a system on which the admin is unable or unwilling to
 install it.  But we can't put every possible extension in core.

 The flip-side is that we could have made an awful lot of built-in
 things extensions, but for whatever reason chose not to. I'm not
 necessarily in favor of putting pgstattuple in core, but the question
 should be asked: Why should we do this here? In what way is
 pgstattuple like or not like the other things that are in core?

I would highlight that it has become a kind of routine tool. Also,
it is sometimes required to solve problems, not to build new features,
so it often cannot wait.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




[HACKERS] System catalog bloat removing safety

2013-09-18 Thread Sergey Konoplev
Hi all,

How safe is it to use the technique described at the link below with
system catalog tables to remove bloat?

(in a couple of words, it is about moving tuples to the beginning of
the table with a special way of updating)

http://www.depesz.com/2010/10/17/reduce-bloat-of-table-without-longexclusive-locks/
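My understanding of the technique in SQL terms is roughly the following
(a sketch; the table, column, and page number are hypothetical):

-- "touch" the rows on the table's last pages so their new versions land
-- in free space earlier in the file, then let vacuum truncate the tail
update some_table set id = id
where ctid = any (array(select format('(%s,%s)', 41234, s)::tid
                        from generate_series(1, 100) s));
vacuum some_table;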

Are there any caveats?

Thank you.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] System catalog bloat removing safety

2013-09-18 Thread Sergey Konoplev
On Wed, Sep 18, 2013 at 2:06 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-09-17 23:12:24 -0700, Sergey Konoplev wrote:
 How safe is it to use the technique described by the link below with
 system catalog tables to remove bloat?
 (in a couple of words it is about moving tuples to the beginning of
 the table with a special way of updating)
 http://www.depesz.com/2010/10/17/reduce-bloat-of-table-without-longexclusive-locks/

 At least for pg_index it's not safe, we use the xmin of rows when
 indcheckxmin = true.
 I am not aware of other cases, but I'd be hesitant to do so.

Thank you, Andres, I did not know this. Just wondering: why was it
made of type bool and not xid?

 You have bloat because of lots of temporary tables IIRC?

Actually, I am developing a bloat-removing tool based on the updating technique:

https://code.google.com/p/pgtoolkit/source/browse/trunk/README

For now, the --system-catalog feature is marked as experimental. As for
pg_index, I think the solution is to strictly exclude it from
processing.

Does anyone have ideas about what other system tables might be affected
besides pg_index?
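A quick way to spot the pg_index rows Andres is talking about (sketch):

select indexrelid::regclass from pg_index where indcheckxmin;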

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] System catalog vacuum issues

2013-08-22 Thread Sergey Konoplev
On Wed, Aug 21, 2013 at 2:33 PM, Jim Nasby j...@nasby.net wrote:
 That is the problem. Exactly what Jim was writing about. Autovacuum
 has no chance to clean dead tuples at the end of the table because
 they are created too intensively. In the latest versions autovacuum
 behaves so that it stops working when a concurrent lock is acquired.
 As he suggested, you should use vacuum in cron; however, it might make
 other processes that create/drop tables wait.


 Hrm... even if vacuum cost delay is set? I recall some talk about doing some
 minimal waiting for the lock, but thought that'd only happen if cost delay
 was 0.

 That really doesn't matter though. The whole idea of a cron'd vacuum is to
 *stop bloat from happening to begin with*. If there's no bloat to begin
 with, getting the lock to truncate will be a non-issue.

Well, according to the pgstattuple log the OP showed, free percent jumps
from 1.82 to 70.07 in one minute, so I suppose an empty tail is
inevitable anyway, and there should be locks taken to truncate by vacuum,
if I understand things correctly.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] System catalog vacuum issues

2013-08-19 Thread Sergey Konoplev
On Thu, Aug 15, 2013 at 7:03 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
 Do you have some processes that intensively create tables or columns
 and then delete them or create them in transaction and rollback the
 transaction?

 There are many processes that create and drop temporary tables.

That is the problem. Exactly what Jim was writing about. Autovacuum
has no chance to clean dead tuples at the end of the table because
they are created too intensively. In the latest versions autovacuum
behaves so that it stops working when a concurrent lock is acquired.
As he suggested, you should use vacuum in cron; however, it might make
other processes that create/drop tables wait.

Another solution would be to factor the temp table usage out of the
logic. Could you please describe what you are using temp tables for?
There might be another, more effective solution to this issue.
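For the record, the cron'd vacuum mentioned above can be as simple as
this crontab entry (a sketch; the database name is hypothetical):

*/5 * * * * psql -d mydb -c 'vacuum pg_catalog.pg_attribute'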

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] System catalog vacuum issues

2013-08-14 Thread Sergey Konoplev
On Tue, Aug 13, 2013 at 10:31 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
 I used to use VACUUM FULL periodically to resolve the issue, but the problem
 arises again in 2-3 months.
 Here are the statistics (from pgstattuple). I ran VACUUM FULL on 2013-08-07.

      date    | relpages | reltuples | table_len | tuple_count | tuple_percent | dead_tuple_count | dead_tuple_len | free_space | free_percent | autovacuum_count
  ------------+----------+-----------+-----------+-------------+---------------+------------------+----------------+------------+--------------+------------------
   2013-08-08 |    39029 |    109096 | 319725568 |       37950 |          1.66 |            52540 |        7355600 |  296440048 |        92.72 |

Are you sure you did VACUUM FULL pg_attribute on Aug 7? Could you
please confirm that free_percent rose from 0 to 92% in one day?

Do you have some processes that intensively create tables or columns
and then delete them or create them in transaction and rollback the
transaction?

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




Re: [HACKERS] System catalog vacuum issues

2013-08-06 Thread Sergey Konoplev
On Mon, Aug 5, 2013 at 10:56 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
 dcdb=# select relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
 from pg_stat_sys_tables where relname = 'pg_attribute';
    relname    | n_live_tup | n_dead_tup |          last_vacuum          |        last_autovacuum
 --------------+------------+------------+-------------------------------+-------------------------------
  pg_attribute |   39318086 |     395478 | 2013-08-06 14:47:48.187259+09 | 2013-08-06 13:43:03.162286+09

What does pgstattuple show on this table?
http://www.postgresql.org/docs/9.2/interactive/pgstattuple.html

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com




Re: [HACKERS] System catalog vacuum issues

2013-08-06 Thread Sergey Konoplev
On Tue, Aug 6, 2013 at 12:37 AM, Vlad Arkhipov arhi...@dc.baikal.ru wrote:
 What does pgstattuple show on this table?
 http://www.postgresql.org/docs/9.2/interactive/pgstattuple.html


 dcdb=# select * from pgstattuple('pg_catalog.pg_attribute');
  table_len  | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
 ------------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
  6363938816 |       48786 |   6830040 |          0.11 |          1459439 |      204321460 |               3.21 | 5939017376 |        93.32
 (1 row)

I guess you need to VACUUM FULL pg_attribute, if it is possible in
your situation of course. If it is not, let me know; I have another
tricky way of solving this problem in mind.
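That is, something like the following (note that VACUUM FULL takes an
access exclusive lock on the catalog, so plan a maintenance window):

vacuum full pg_catalog.pg_attribute;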

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com




Re: [HACKERS] WAL segments (names) not in a sequence

2013-05-23 Thread Sergey Konoplev
On Thu, May 23, 2013 at 1:25 AM, Amit Langote amitlangot...@gmail.com wrote:
 Okay, now I understand. Also, looking at his ls -l pg_xlog, I could
 find that the modified timestamps of all those pre-allocated segments are
 about the same (around 12:10), whereas the latest modified time (15:37)
 is of segment 0001000E00A7.

 Wonder if whatever configuration he is using is sub-optimal, such that this
 many WAL segments can be recycled upon a checkpoint? Or is this okay?

Is archive_mode=on?
What is archive_command?
Is the server in recovery mode?
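For reference, all three can be checked from psql (a sketch;
pg_is_in_recovery() assumes 9.0+):

show archive_mode;
show archive_command;
select pg_is_in_recovery();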

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com




Re: [HACKERS] WAL segments (names) not in a sequence

2013-05-23 Thread Sergey Konoplev
On Thu, May 23, 2013 at 6:18 AM, German Becker german.bec...@gmail.com wrote:
 Let me describe the process I follow to get to this. What I am doing is
 testing a migration from 8.3 to 9.1. The way I plan to do it is the
 following:
 1) Create the schema
 2) import the biggest tables, which are not updated, only growing, with COPY
 (this is about 35GB of data)
 3) import the small, changing part of the data


 The target system is 9.1 with streaming replication.
 For steps 1 and 2, I set a restore configuration that, among other things
 like more work_mem, sets archive_mode=off and wal_level=minimal (attached
 is the difference between restore and normal).
 The archive_command is just a cp wrapped in a shell script in case I need to
 change it.

You cannot migrate between major versions with WAL-based or
streaming replication.

Use either a full dump/restore, or a schema-only dump/restore plus
trigger-based replication (londiste, slony), to migrate the data.
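The simplest form of the first option (a sketch; the host name is
hypothetical, and the newer 9.1 binaries should be used):

pg_dumpall -h old83host | psql -d postgres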

--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com




Re: [HACKERS] high io BUT huge amount of free memory

2013-04-22 Thread Sergey Konoplev
On Mon, Apr 22, 2013 at 11:22 AM, Миша Тюрин tmih...@bk.ru wrote:
 free -g
              total   used   free  shared  buffers  cached
 Mem:          378    250    128       0        0     229
 -/+ buffers/cache:    20    357

 and
 ! disk usage 100% (128GB free! WHY?)

 disk throughput - up to 30MB/s (24r + 6w)
 io - up to 2.5-3K/s (0.5w + 2-2.5r)

What do iostat -xk 10 and vmstat -SM 10 show?



 3) so maybe I've got something like this
 http://www.databasesoup.com/2012/04/red-hat-kernel-cache-clearing-issue.html
 or this
 http://comments.gmane.org/gmane.comp.db.sqlite.general/79457

 4) now I think I should either
 a) upgrade the Linux kernel, or
 b) set shared buffers to something like 300-320GB
 my warm working set is about 300-400GB
 the whole DB - 700GB

 typical workload - pk-index-scans

 --
 looking forward
 thanks

 Mikhail






--
Kind regards,
Sergey Konoplev
Database and Software Consultant

Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com




Re: [HACKERS] [GENERAL] Multiple Slave Failover with PITR

2012-09-03 Thread Sergey Konoplev
On Sun, Sep 2, 2012 at 4:12 PM, Bruce Momjian br...@momjian.us wrote:
 Do we ever want to document a way to connect slaves to a new master,
 rather than recreating the slave?

I have written instructions for myself, and I am planning to publish
them on http://code.google.com/p/pgcookbook/. See the attachment.

I hope you will find them useful. If anybody would like to provide any
criticism, I will highly appreciate it.


 ---

 On Tue, Mar 27, 2012 at 10:47:48AM -0700, Ken Brush wrote:
 Hello everyone,

 I notice that the documentation at:
 http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial

 Doesn't contain steps in a Multiple Slave setup for re-establishing
 them after a slave has become the new master.

 Based on the documentation, here are the most fail-proof steps I came up 
 with:

 1. Master dies :(
 2. Touch the trigger file on the most caught up slave.
 3. Slave is now the new master :)
 4. use pg_basebackup or other binary replication trick (rsync, tar
 over ssh, etc...) to bring the other slaves up to speed with the new
 master.
 5. start the other slaves pointing to the new master.

 But, that can take time (about 1-2 hours) with my medium sized DB
 (580GB currently).

 After testing a few different ideas that I gleaned from posts on the
 mail list, I came up with this alternative method:

 1. Master dies :(
 2. Touch the trigger file on the most caught up slave
 3. Slave is now the new master.
 4. On the other slaves do the following:
 5. Shutdown postgres on the slave
 6. Delete every file in /data/pgsql/data/pg_xlog
 7. Modify the recovery.conf file to point to the new master and
 include the line recovery_target_timeline='latest'
 8. Copy the history file from the new master to the slave (it's the
 most recent #.history file in the xlog directory)
 9. Startup postgres on the slave and watch it sync up to the new
 master (about 1-5 minutes usually)

 My question is this. Is the alternative method adequate? I tested it a
 bit and couldn't find any problems with data loss or inconsistency.

 I still use the fail-proof method above to re-incorporate the old
 master as a new slave.

 Sincerely,
 -Ken


 --
   Bruce Momjian  br...@momjian.us        http://momjian.us
   EnterpriseDB                           http://enterprisedb.com

   + It's impossible for everything to be true. +





-- 
Sergey Konoplev

a database and software architect
http://www.linkedin.com/in/grayhemp

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +79160686204
Streaming replication based failover

Let us suppose that there is a hot standby replication setup in a cluster. The 
db1 server is the master and the dbX servers are replicas. Also suppose that we 
need to fail over to one of our slaves.

First we need to decide which replica will be the new master.

In case of master failure you need to find the most caught up replica. To do 
this, compare the WAL replay locations on the replicas and choose the one with 
the biggest value.

postgres=# SELECT pg_last_xlog_replay_location();
 pg_last_xlog_replay_location 
------------------------------
 BAA/37DA2888
(1 row)

Note that if you choose one that is not the most caught up, then other replicas 
that have replayed later WAL entries must be reconfigured from scratch. Otherwise 
their data can be corrupted and you will not get any warnings about it.

In case of a planned switchover, choose the one that will be the new master on 
your own.

Then stop all the slaves except the new master. Let's say the remaining slave is 
db2.

Use the command below to help the master and the remaining slave get ahead of 
the other slaves if you are not sure that they already are. The command creates 
a minimal WAL entry.

postgres=# SELECT txid_current();

Now touch the failover file on the remaining slave to promote it to the new 
master.

postgres@db2: ~ $ touch /path/to/failover

And you will see something similar in the logs.

2011-04-05 11:46:48 MSD @ 69971 [4d9ac05d.11153-5]:LOG:  trigger file found: 
/db/data/failover
2011-04-05 11:46:48 MSD @ 69974 [4d9ac05d.11156-2]:FATAL:  terminating 
walreceiver process due to administrator command
2011-04-05 11:46:48 MSD @ 69971 [4d9ac05d.11153-6]:LOG:  redo done at 0/4012A68
2011-04-05 11:46:48 MSD @ 69971 [4d9ac05d.11153-7]:LOG:  last completed 
transaction was at log time 2011-04-05 11:41:29.199406+04
2011-04-05 11:46:48 MSD @ 69971 [4d9ac05d.11153-8]:LOG:  selected new timeline 
ID: 2
2011-04-05 11:46:49 MSD @ 69971 [4d9ac05d.11153-9]:LOG:  archive recovery 
complete
2011-04-05 11:46:49 MSD @ 69972 [4d9ac05d.11154-1]:LOG:  checkpoint starting: 
end-of-recovery immediate wait
2011-04-05 11:46:49

Re: [HACKERS] pg_statistic, lack of documentation

2012-01-27 Thread Sergey Konoplev
On Fri, Jan 27, 2012 at 9:14 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, Jan 14, 2012 at 7:34 AM, Sergey Konoplev gray...@gmail.com wrote:
 I've added a note to that effect to the documentation for ANALYZE,
 which seems like a more appropriate place than the pg_statistic
 documentation.

Thank you.


 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company



-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp



[HACKERS] pg_statistic, lack of documentation

2012-01-14 Thread Sergey Konoplev
Hi,

http://www.postgresql.org/docs/9.1/interactive/catalog-pg-statistic.html

It specifies that entries are created by ANALYZE, but does not mention
that if a table is empty, the entry for it is not created. It is probably
worth adding to the docs. The test case is below.

grayhemp@[local]:5432 test=# create table t1 (i integer);
CREATE TABLE
grayhemp@[local]:5432 test=# select stanullfrac, stawidth from pg_statistic where starelid = 't1'::regclass;
 stanullfrac | stawidth
-------------+----------
(0 rows)

grayhemp@[local]:5432 test=# analyze t1;
ANALYZE
grayhemp@[local]:5432 test=# select stanullfrac, stawidth from pg_statistic where starelid = 't1'::regclass;
 stanullfrac | stawidth
-------------+----------
(0 rows)

grayhemp@[local]:5432 test=# insert into t1 values (1);
INSERT 0 1
grayhemp@[local]:5432 test=# select stanullfrac, stawidth from pg_statistic where starelid = 't1'::regclass;
 stanullfrac | stawidth
-------------+----------
(0 rows)

grayhemp@[local]:5432 test=# analyze t1;
ANALYZE
grayhemp@[local]:5432 test=# select stanullfrac, stawidth from pg_statistic where starelid = 't1'::regclass;
 stanullfrac | stawidth
-------------+----------
           0 |        4
(1 row)

-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp



[HACKERS] Re: [GENERAL] Odd VACUUM behavior when it is expected to truncate last empty pages

2011-08-04 Thread Sergey Konoplev
Thank you very much; your explanation helped a lot.

This is the tool I needed the solution for,
http://code.google.com/p/pc-tools/, if you are interested.

On 4 August 2011 01:10, Pavan Deolasee pavan.deola...@gmail.com wrote:
 On Wed, Aug 3, 2011 at 12:33 PM, Pavan Deolasee
 pavan.deola...@gmail.com wrote:


 The only problem, other than a surprising behavior that you noted,
 that I see with this approach is that we might repeatedly try to
 truncate a relation which in fact does not have anything to truncate.
 The worst  thing is we might unnecessarily take an exclusive lock on
 the table.


 So it seems we tried to fix this issue sometime back
 http://archives.postgresql.org/pgsql-hackers/2008-12/msg01994.php

 But I don't quite understand how the fix would really work.
 nonempty_pages would most likely be set at a value lower than relpages
 if the last page in the relation is all-visible according to the
 visibility map. Did we mean to test (nonempty_pages > 0) there? But
 even that may not work except for the case when there are no dead
 tuples in the relation.

 Thanks,
 Pavan

 --
 Pavan Deolasee
 EnterpriseDB     http://www.enterprisedb.com




-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp



Re: [HACKERS] function_name.parameter_name

2010-09-07 Thread Sergey Konoplev
Hi,

On 7 September 2010 20:35, Tom Lane t...@sss.pgh.pa.us wrote:
 How does $subject differ from what we already do?  See
 http://www.postgresql.org/docs/9.0/static/plpgsql-structure.html

So will it be possible to do things like this?

1.
CREATE FUNCTION func_name(arg_name text) RETURNS integer AS $$
BEGIN
RAISE INFO '%', func_name.arg_name;
...

2.
CREATE FUNCTION func_name() RETURNS integer AS $$
DECLARE
var_name text := 'bla';
BEGIN
RAISE INFO '%', func_name.var_name;
...

3.
CREATE FUNCTION func_very_very_very_very_long_name() RETURNS integer AS $$
<<func_alias>>
DECLARE
var_name text := 'bla';
BEGIN
RAISE INFO '%', func_alias.var_name;
...


-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802



Re: [HACKERS] Unpredictable shark slowdown after migrating to 8.4

2009-11-18 Thread Sergey Konoplev
Thank you for the hints.

 Why only those modes?  I'd search for locks with granted=false, then see
 all the other locks held by the process that's holding the conflicting
 lock with granted=true (i.e. the one you're waiting on).


Something like this?

SELECT
granted,
pid,
virtualxid,
transactionid,
virtualtransaction,
count(1) AS locks,
current_query
FROM
pg_locks AS l
LEFT JOIN pg_stat_activity AS a ON
pid = procpid
GROUP BY 1, 2, 3, 4, 5, 7
ORDER BY 1, 6 DESC;


And two more queries to do extended analysis of its results after restarting PG:

SELECT
pg_stat_activity.datname,
pg_class.relname,
pg_locks.transactionid,
pg_locks.mode,
pg_locks.granted,
pg_stat_activity.usename,
pg_stat_activity.current_query,
pg_stat_activity.query_start,
age(now(),pg_stat_activity.query_start) AS age,
pg_stat_activity.procpid
FROM
pg_stat_activity,
pg_locks
LEFT OUTER JOIN pg_class ON
pg_locks.relation = pg_class.oid
WHERE
pg_locks.pid = pg_stat_activity.procpid
ORDER BY
query_start;


SELECT * FROM pg_locks;


Are there other things I should do when the problem rises again?

-- 
Regards,
Sergey Konoplev



Re: [HACKERS] Unpredictable shark slowdown after migrating to 8.4

2009-11-17 Thread Sergey Konoplev
On Mon, Nov 16, 2009 at 9:56 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 Sergey Konoplev escribió:

 I tried to get locks with this queries

 Did you try pg_locks?


I tried to monitor locks with pgrowlocks. Isn't that a better way? If it
isn't, what should I pay attention to in pg_locks?

I've just written this query:

SELECT pid, count(1) AS locks, current_query
FROM pg_locks AS l LEFT JOIN pg_stat_activity AS a ON pid = procpid
WHERE l.mode IN ('RowExclusiveLock', 'ShareUpdateExclusiveLock',
'ExclusiveLock')
GROUP BY 1,3 ORDER BY 2 DESC LIMIT 10;

Would it be what we need?

-- 
Regards,
Sergey Konoplev



Re: [HACKERS] Unpredictable shark slowdown after migrating to 8.4

2009-11-17 Thread Sergey Konoplev

 Can you show us the non-commented settings from your postgresql.conf?

Working postgresql.conf http://pastie.org/702748


 Can you show us what the vmstat output looks like when everything is
 running normally?  It looks like the blocks out are pretty high, but I
 don't know how that compares to normal for you.


Here it is http://pastie.org/702742



-- 
Regards,
Sergey Konoplev



Re: [HACKERS] Unpredictable shark slowdown after migrating to 8.4

2009-11-17 Thread Sergey Konoplev
On Mon, Nov 16, 2009 at 10:17 PM, Andres Freund and...@anarazel.de wrote:
 On Wednesday 11 November 2009 18:50:46 Sergey Konoplev wrote:
 Hello community,


 Second time after the migration 8.3.7 -> 8.4.1 I was caught by this
 problem. The migration was 8 days ago.
 (note: I have never seen such a situation on 8.3)
 Is 8.4 configured similarly to 8.3?


It is.

8.3 conf - http://pastie.org/702752
8.4 conf - http://pastie.org/702748


-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in English and Russian
http://gray-hemp.blogspot.com/search/label/postgresql/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Unpredictable shark slowdown after migrating to 8.4

2009-11-16 Thread Sergey Konoplev
On Thu, Nov 12, 2009 at 4:42 AM, Robert Haas robertmh...@gmail.com wrote:
 On Wed, Nov 11, 2009 at 12:50 PM, Sergey Konoplev gray...@gmail.com wrote:
 Was this situation mentioned before, and is there a solution or
 workaround? (I didn't find any.) If not, please give me a clue where to
 dig, or what information I should provide.

 I think you should use log_min_duration_statement or auto_explain to
 figure out which queries are giving you grief. I don't believe that
 8.4 is in general slower than 8.3, so there must be something about
 how you are using it that is making it slower for you.  But without
 more information it's hard to speculate what it might be.  It's also
 not entirely clear that your 8.4 setup is really the same as your 8.3
 setup.  You might have different configuration, differences in your
 tables or table data, differences in workload, etc.  Without
 controlling for all those factors it's hard to draw any conclusions.

Well, I turned on log_min_duration_statement, set up auto_explain,
pg_stat_statements, pgrowlocks, and pg_buffercache, wrote some monitoring
queries, and started waiting for the situation to repeat.
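
For reference, that monitoring setup might look roughly like this in
postgresql.conf (illustrative values only, not my actual settings):

shared_preload_libraries = 'auto_explain,pg_stat_statements'
log_min_duration_statement = 1000       # log statements running over 1s
auto_explain.log_min_duration = 1000    # log plans of statements over 1s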

Today it happened! The situation was exactly the same as I described in
my previous letter. One more thing I noticed about CPU user time this
time: after the connection count got close to the pgbouncer threshold,
it dropped from ~800 to ~10 very fast.

Log monitoring showed me that the query plans were the same as usual
(thanks, auto_explain).

I reset pg_stat_statements and a few minutes later selected from it. I
noticed that the slowest queries were on tables with a high number of
updates (but isn't that usual?).
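
The check itself was simple; a sketch of it, using the 8.4 column names:

SELECT pg_stat_statements_reset();
-- ... a few minutes later ...
SELECT calls, total_time, rows, query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;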

I tried to get locks with these queries

SELECT
    t.tablename,
    (SELECT count(*)
     FROM pgrowlocks(schemaname || '.' || t.tablename)) AS locks
FROM pg_tables t
WHERE schemaname = 'public'
ORDER BY 2 DESC
LIMIT 10;

SELECT * FROM pgrowlocks('public.person_online');

but nothing was returned.

Here is portions of vmstat and iostat results http://pastie.org/701326

This time the situation was again saved by a PG restart. Obviously, all
I've provided tells almost nothing, and I'm very confused by it. So
please tell me what else I could do to get more info.

 Also, I don't believe this is an appropriate topic for pgsql-hackers.
 If you have EXPLAIN ANALYZE results for the affected queries, try
 pgsql-performance.

I do have them, but those results are good, and the same as when nothing
has happened and everything is all right.

-- 
Regards,
Sergey Konoplev

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Unpredictable shark slowdown after migrating to 8.4

2009-11-11 Thread Sergey Konoplev
Hello community,


For the second time since migrating from 8.3.7 to 8.4.1 I have been
caught by this problem. The migration was 8 days ago.
(Note: I never saw such a situation on 8.3.)

Environment:

PostgreSQL 8.4.1 + patch
http://archives.postgresql.org/pgsql-committers/2009-10/msg00056.php
CentOS release 5.4 (Final)
SunFire X4600 M2; 3U; 8xOpteron 8380 2.5GHz; 96GB; 12x146GB 15K RPM
DB size ~90GB, TPS ~1000

Symptoms:

In a short period of time, total query execution time increases dramatically.

Time   Sum duration (ms)
17:17  12811
17:18  8870
17:19  33744
17:20  9991
17:21  13392
17:22  163928
17:23  1151709
17:24  12112797  -- here it is capped by the connection threshold
17:25  12305390
17:26  12970853
17:27  12957648

LA changes rather insignificantly, from ~6 to ~8.
CPU user time increases from ~350 to ~600, system from ~50 to ~250.
There is neither significant additional disk activity nor iowait.
Another thing I noticed is that autoanalyze finished on tables that, a
few minutes later, were used in the first (and most of the) queries
canceled by timeout.


The first time, I blamed multiple locks on one of the tables mentioned
above. There was a heavy delete. I started deleting rows manually in
small batches and found a record that hung my delete for a long time
(many times longer than the statement timeout) even when I tried to
delete it by PK. The situation was saved by disabling some functionality
that uses this table until the next day, when I got rid of the
aggressive deletes.
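
To illustrate what deleting in small batches means here (the table name,
key, and condition are hypothetical, since the actual ones aren't in
this letter):

-- Hypothetical names; removes at most 1000 matching rows per run.
DELETE FROM some_table
WHERE obj_id IN (
    SELECT obj_id
    FROM some_table
    WHERE needs_cleanup    -- hypothetical condition
    LIMIT 1000
);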

The second time, I didn't find any reason that explains the situation.

Was this situation mentioned before, and is there a solution or
workaround? (I didn't find any.) If not, please give me a clue where to
dig, or what information I should provide.

Thank you.


-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in english  russian
http://gray-hemp.blogspot.com/search/label/postgresql/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] 8.4.1 strange GiST (btree_gist?) messages + index row size error (possible BUG, test case + test data)

2009-10-26 Thread Sergey Konoplev
Hi, All

I faced this odd situation while migrating my data from 8.3.7 to 8.4.1.
After setting up the instance, applying the schema dump without indexes,
and loading the data, I tried to create the indexes and got many repeated
"picksplit method for column 2 of index ... doesn't support secondary
split" messages after creating each GiST index containing 2+ columns.
Finally I got the message "index row requires 10440 bytes, maximum size
is 8191" while creating one complex index.

Test-case:

1. Install PG 8.4.1 + btree_gist, set log_statement = 'all', create test_db

2. Create test table

CREATE TABLE test_table
(
  obj_id bigint NOT NULL,
  obj_status_did smallint NOT NULL DEFAULT 5,
  obj_created timestamp with time zone NOT NULL DEFAULT now(),
  obj_main_pic_obj_id bigint,
  obj_tsvector tsvector NOT NULL DEFAULT ''::tsvector,
  person_photo_is_best boolean NOT NULL DEFAULT false,
  person_vislvl smallint NOT NULL DEFAULT 9,
  CONSTRAINT pk_test_table PRIMARY KEY (obj_id)
);

3. Load a data into the table

Data dump is here http://drop.io/rdccygi (it was created with COPY
(SELECT...) TO '...' using psql from 8.3.7)

COPY test_table FROM '/tmp/data.dump';

4. Try to create these indexes

CREATE INDEX i_test_table__created_tsvector ON test_table USING gist
(obj_created, obj_tsvector) WHERE obj_status_did = 1;

CREATE INDEX i_test_table__tsvector_vislvl_by_photo_created ON
test_table USING btree (obj_tsvector, person_vislvl,
(COALESCE(person_photo_is_best::integer, 0)) DESC,
sign(COALESCE(obj_main_pic_obj_id, 0::bigint)::double precision) DESC,
obj_created DESC) WHERE obj_status_did = 1;

And you will see something like this http://drop.io/5tla8sg

P.S. One thing I forgot to write: I tried it on Ubuntu 9.04, and PG was
built from source.

-- 
Regards,
Sergey Konoplev

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Feature request: DEFAULT as input value of function argument

2009-09-03 Thread Sergey Konoplev
 IMHO a convenient solution would be to make it possible to specify
 something like COLUMN_DEFAULT as an input value of the function.

 I wonder if it's possible.


So, what do you think of it?

-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in english  russian
http://gray-hemp.blogspot.com/search/label/postgresql/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Feature request: DEFAULT as input value of function argument

2009-09-02 Thread Sergey Konoplev
Hi, community

The case:

E.g. I need to write a function that updates a table. In the function I
have to specify the columns I need to update as arguments, and write an
UPDATE of those columns using the arguments. Everything is OK except the
case when I need to skip some column in the UPDATE so that the column's
DEFAULT value is used.

IMHO a convenient solution would be to make it possible to specify
something like COLUMN_DEFAULT as an input value of the function.

I wonder if it's possible.
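
To illustrate the request, a sketch with hypothetical table and function
names (COLUMN_DEFAULT is the proposed syntax; it does not exist today):

CREATE TABLE person (
    id     bigint PRIMARY KEY,
    name   text,
    status smallint NOT NULL DEFAULT 1
);

-- Hypothetical wrapper: every column must get a value from an argument,
-- so there is no way to fall back to the column's DEFAULT.
CREATE FUNCTION set_person(p_id bigint, p_name text, p_status smallint)
RETURNS void AS $$
    UPDATE person
    SET name = p_name,
        status = p_status   -- cannot say "use the DEFAULT" here
    WHERE id = p_id;
$$ LANGUAGE sql;

-- The proposal would allow something like:
--   SELECT set_person(1, 'Alice', COLUMN_DEFAULT);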

-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in english  russian
http://gray-hemp.blogspot.com/search/label/postgresql/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Crash in gist insertion on pathological box data

2009-03-27 Thread Sergey Konoplev
On Thu, Mar 26, 2009 at 5:39 PM, Andrew Gierth
and...@tao11.riddles.org.uk wrote:
 A user on IRC reported a crash (backend segfault) in GiST insertion
 (in 8.3.5 but I can reproduce this in today's HEAD) that turns out
 to be due to misbehaviour of gist_box_picksplit.

 The nature of the problem is this: if gist_box_picksplit doesn't find
 a good disposition on the first try, then it tries to split the data
 again based on the positions of the box centers. But there's a problem
 here with floating-point rounding; it's possible for the average of N
 floating-point values to be strictly greater (or less) than all of the
 values individually, and the function then returns with, for example,
 all the entries assigned to the left node, and nothing in the right
 node. This causes gistSplit to try and split the left node again, with
 predictable results.


I probably have a workaround. As I understand the problem, it affects
GiST indexes with a single box-type field only. After googling picksplit
and reading some info, I supposed that if another (distinctive) field
were appended to the index (after the box field), then the other (old)
picksplit code path would be taken instead of the new (buggy) one.
Andrew confirmed my assumption on IRC. So I found all the GiST indexes
with a single box field and recreated them with an extra column (a
bigint PK field). So far our DB has been working for 22 hours without
crashes or errors.
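
A sketch of the recreation step with illustrative names (btree_gist has
to be installed already so that the bigint column is accepted in a GiST
index; on 8.3/8.4 that means running the contrib btree_gist.sql script
first):

-- Illustrative names; the real tables and indexes aren't listed here.
DROP INDEX i_some_table__box;
CREATE INDEX i_some_table__box_id
    ON some_table USING gist (box_col, obj_id);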

Of course, not being a pg-hacker, I can't guarantee that my assumption
is absolutely correct, and I welcome your criticism.

-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in english  russian
http://gray-hemp.blogspot.com/search/label/postgresql/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Crash in gist insertion on pathological box data

2009-03-26 Thread Sergey Konoplev
On Thu, Mar 26, 2009 at 5:39 PM, Andrew Gierth
and...@tao11.riddles.org.uk wrote:
 A user on IRC reported a crash (backend segfault) in GiST insertion
 (in 8.3.5 but I can reproduce this in today's HEAD) that turns out
 to be due to misbehaviour of gist_box_picksplit.


Andrew, thank you for the test case and report.

P.S. The user Andrew mentioned above is me, and if you have any
questions for me I am ready to answer them.

-- 
Regards,
Sergey Konoplev
--
PostgreSQL articles in english  russian
http://gray-hemp.blogspot.com/search/label/postgresql/

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers