Re: [GENERAL] Feature request: fsync and commit_delay options per database

2015-06-30 Thread Bráulio Bhavamitra
On Tue, Jun 30, 2015 at 3:43 AM, Jeff Janes jeff.ja...@gmail.com wrote:
 2015-06-29 15:18 GMT-07:00 Bráulio Bhavamitra brauli...@gmail.com:

 Hello all,

 After reading
 http://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing
 I've tried to use commit_delay to make commits really slow in a test
 environment. Unfortunately, the maximum value is 100 ms (100,000
 microseconds).

 Besides increasing it, it would be great to have these two options
 (fsync and commit_delay) per database, that is, valid only for
 databases configured with them. That would greatly speed up test
 running and still make the cluster available for other real
 databases.

 Is this feature or something similar planned?


 fsync is inherently across the cluster, so that can't be set per database.
 You can configure a different commit_delay in each database on the cluster
 using alter database jjanes set commit_delay to 1000; for example, but if
 different databases have different settings they will interact with each
 other in complex, unintuitive ways.  And it is not really clear what you are
 trying to accomplish by doing this.
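
Jeff's per-database suggestion can be sketched as follows (the database name and delay value are illustrative, not from the thread):

```sql
-- Set commit_delay (in microseconds) for one database only;
-- it takes effect for new sessions connecting to that database.
ALTER DATABASE testdb SET commit_delay TO 1000;

-- Verify the per-database override:
SELECT datname, unnest(setconfig) AS setting
FROM pg_db_role_setting
JOIN pg_database ON pg_database.oid = setdatabase
WHERE datname = 'testdb';
```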
Great! But for commit_delay to be a usable parameter for in-memory
test databases, it should allow much higher delays. I would be
happy with 10 minutes, for instance. Is there a reason for the
100 ms limit?


 Running multiple clusters on the same server is pretty easy to do, as long
 as your client allows you to configure which port number it connects to.  If
 you really want fsync on for one database and off for another, put each
 database in a different cluster.
Nice, will try that too, but would prefer the commit_delay setup above.


 Cheers,

 Jeff



-- 
Fight for your ideology. Be one with your ideology. Live by your
ideology. Die for your ideology. -- P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] plpgsql question: select into multiple variables ?

2015-06-30 Thread Day, David
Hi Yari,

Thanks for the response.
You did make the “simplified concept” function more rational.

However,
This was kind of a nonsense function to demonstrate the problem I was having
with the “select fields” and the “into variables”.
As Adrian Klaver and Tom Lane pointed out, the real problem was that the casts
I was using were confusing the parser and were unnecessary.
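
The fix the thread converged on can be sketched as: drop the casts from the INTO targets, since the variables are already declared as date. For example:

```sql
-- INTO targets are plain variable names; no ::date casts needed,
-- because first_weekend and last_weekend are declared DATE.
SELECT MIN(cal_date), MAX(cal_date)
  INTO first_weekend, last_weekend
  FROM sys.calendar
 WHERE month_of_year = extract(MONTH FROM current_date)::int
   AND year_of_date  = extract(YEAR  FROM current_date)::int
   AND day_of_week IN ('Sat', 'Sun');
```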

Appreciate your thought and effort.


Regards


Dave



From: Yasin Sari [mailto:yasinsar...@googlemail.com]
Sent: Tuesday, June 30, 2015 3:26 AM
To: Day, David
Subject: Re: [GENERAL] plpgsql question: select into multiple variables ?

Hi David,

this works for me.

CREATE OR REPLACE FUNCTION sys.time_test (
  out first_weekend date,
  out last_weekend date
)
RETURNS SETOF record AS
$body$
BEGIN


  SELECT COALESCE(MIN(cal_date), '01-jun-2014'),
         COALESCE(MAX(cal_date), '01-jun-2014')
    INTO first_weekend, last_weekend
    FROM sys.calendar
   WHERE month_of_year = (extract(MONTH FROM current_date))::int
     AND year_of_date = (extract(YEAR FROM current_date))::int
     AND day_of_week IN ('Sat', 'Sun');
return next;

END
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100 ROWS 1000;

On Mon, Jun 29, 2015 at 10:07 PM, Day, David d...@redcom.com wrote:
Hi,



Postgres version 9.3.9


What is wrong with my usage of the plpgsql SELECT INTO concept?
I have a function that looks into a calendar table to find the first and
last weekend dates of a month.

In this simplified concept function I end up with a NULL for the first or
last weekend variable.


create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE), MAX(CAL_DATE)
    INTO first_weekend::date, last_weekend::date
    FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
  year_of_date = (extract(YEAR FROM current_date))::int AND
 day_of_week IN ( 'Sat','Sun');

 RETURN( COALESCE(last_weekend,'01-jun-2014'));

END
$$
LANGUAGE plpgsql volatile;


If I execute the same select logic from a psql shell I get the correct result.


ace_db=# SELECT MIN(CAL_DATE), MAX(CAL_DATE) FROM sys.calendar cal
ace_db-# WHERE cal.month_of_year = (extract(MONTH FROM current_date))::int AND
ace_db-#       cal.year_of_date = (extract(YEAR FROM current_date))::int AND
ace_db-#       cal.day_of_week IN ('Sat','Sun');
    min     |    max
------------+------------
 2015-06-06 | 2015-06-28
(1 row)


If I simplify to a single variable it works, i.e.:


create or replace function sys.time_test ()
returns date as
$$
DECLARE
   first_weekend date;
   last_weekend date;
BEGIN

  SELECT MIN(CAL_DATE) INTO first_weekend::date FROM sys.calendar
  WHERE month_of_year = (extract(MONTH FROM current_date))::int AND
  year_of_date = (extract(YEAR FROM current_date))::int AND
 day_of_week IN ( 'Sat','Sun');

 RETURN( COALESCE(first_weekend,'01-jun-2014'));

END
$$
LANGUAGE plpgsql volatile;



I suppose I could adjust my actual function to use two selects, one for
each variable.
However, I thought that according to the documentation the targets could/must
match the result columns for SELECT INTO?


Thoughts?


Thanks


Dave Day







[GENERAL] Hardware question

2015-06-30 Thread Leonard Boyce
Hoping to tap into the hive mind here.

I'm looking to upgrade drives on some of our db servers and am hoping
someone has run/tested a similar setup in the past and can share their
wisdom.

I have a pair of Dell R910s with H700 controllers and am looking to
replace the existing drives with SSDs. Right now I'm looking at
populating both systems with 16 Intel 400 GB S3700 drives in a RAID-10
array, which seems to fall right in the middle price/performance-wise.

Does anyone have any experience using the Intel S3700s (not Dell
branded version) with the H700 controller?

Anything I should be aware of? Are there any known issues?

Thanks,
Leonard




[GENERAL] how to extract the page address from the ctid

2015-06-30 Thread Marc Mamin

Hello,

while waiting for the new BRIN and improved GIN Indexes :-)

I'd like to get a rough feeling of how some column's values are distributed
across the internal pages.

e.g.:

SELECT c, count(*)
FROM (
   SELECT substring(ctid::text, '^.([^,]+)'),
          count(distinct myColumn) AS c
   FROM myTable
   GROUP BY substring(ctid::text, '^.([^,]+)')
) foo
GROUP BY c ORDER BY 1 DESC;


Is there a quicker way to extract the page reference from the ctid?
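
One shortcut that is sometimes used (my own suggestion, not something from the thread): the text form of a ctid, e.g. '(12,3)', happens to parse as a point, so the block number can be extracted without a regular expression:

```sql
-- Round-trip ctid through point; the first coordinate is the
-- page (block) number of the tuple.
SELECT (ctid::text::point)[0]::bigint AS page_number, myColumn
FROM myTable
LIMIT 5;
```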

regards,

Marc Mamin
 




Re: [GENERAL] how to extract the page address from the ctid

2015-06-30 Thread Michael Paquier
On Tue, Jun 30, 2015 at 10:56 PM, Marc Mamin m.ma...@intershop.de wrote:
 while waiting for the new BRIN and improved GIN Indexes :-)

 I'd like to get a rough feeling, how some column's values are distributed on 
 the internal pages.
 Is there a quicker way to extract the page reference from the ctid?

Isn't this something that pageinspect can do directly for you? It has
been extended for BRIN indexes.
http://www.postgresql.org/docs/devel/static/pageinspect.html
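For reference, a minimal pageinspect sketch (the extension ships with contrib; the table name is illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Inspect line pointers and tuple headers of block 0 of a table:
SELECT lp, t_ctid, t_xmin, t_xmax
FROM heap_page_items(get_raw_page('myTable', 0));
```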
-- 
Michael




[GENERAL] Streaming Questions and Sizing

2015-06-30 Thread Andy Erskine
I have a DB of 123 GB currently, and this is streaming to a secondary DB
which also shows a size of 123 GB (using pgAdmin).

Both DBs reside on 250 GB partitions. On the master I'm using 60% of
capacity, which seems expected.
On the secondary I am using 88% of the disk's capacity. I assume this is
something to do with the WAL segments? Currently
wal_keep_segments = 500.

If this is the issue, how do I go about tuning this to what is required?

If I make the segments too small then a restore may end up with an
unsynchronised database, right?

thanks.
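
A quick sanity check of the arithmetic behind wal_keep_segments (assuming the default 16 MB segment size, which the poster did not state):

```
# WAL retention sketch (assumption: default 16 MB WAL segment size)
# wal_keep_segments = 500  ->  500 x 16 MB = 8 GB of WAL kept in pg_xlog
# On a 250 GB volume that is only ~3% of capacity, so it is worth checking
# for other growth (old base backups, server logs, table bloat) before
# blaming WAL retention for the 88% vs 60% difference.
```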


[GENERAL] Ubuntu 14.04 LTS install problem

2015-06-30 Thread Urs Berner

I installed Ubuntu 14.04 LTS and ran
apt-get update ... apt-get upgrade ...

then looked at
www.postgresql.org/download/linux/ubuntu
and added the apt repository /etc/apt/sources.list.d/pgdg.list.

When I try

apt-get install postgresql-9.4

I get an error roughly like:
 postgresql-9.4 depends on postgresql-common (= 142~) which is not going
 to be installed ... you have broken packages ...

any ideas?
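
For reference, the pgdg.list entry for 14.04 usually looks like this (a sketch following the download page the poster mentions; "trusty" is the 14.04 codename):

```
# /etc/apt/sources.list.d/pgdg.list
deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main

# Import the repository signing key and refresh, then retry the install:
#   wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
#   apt-get update
#   apt-get install postgresql-9.4
```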

Urs








Re: [GENERAL] PgPool Configuration Document required

2015-06-30 Thread Melvin Davidson
A Google search works fine for that:


http://pgpool.net/mediawiki/index.php/Documentation

http://www.pgpool.net/mediawiki/images/pgpool-en-3.3.0.pdf


On Tue, Jun 30, 2015 at 12:05 PM, Jimit Amin jimitam...@gmail.com wrote:

 Dear Sir/Mam,

 Can I have technical documentation for configuration of PgPool?




-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


[GENERAL] PgPool Configuration Document required

2015-06-30 Thread Jimit Amin
Dear Sir/Mam,

Can I have technical documentation for configuration of PgPool?


Re: [GENERAL] PgPool Configuration Document required

2015-06-30 Thread Joshua D. Drake


On 06/30/2015 09:05 AM, Jimit Amin wrote:

Dear Sir/Mam,

Can I have technical documentation for configuration of PgPool?



http://www.pgpool.net/mediawiki/index.php/Main_Page

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] PgPool Configuration Document required

2015-06-30 Thread Jimit Amin
Dear Sir,

Sorry for this type of question. I have already configured PgPool but am
facing 3 issues; that's why I thought I may have done the setup wrong.

Setup: streaming replication with 1 master and 1 read-only server, plus
PgPool. The issues:

1. I have already listed nextval in the function blacklist, but sometimes
PgPool still fires nextval on the read-only server. I have the log file.

2. I am not able to see data modified by the same transaction:

Begin;
Update  ; --- master server

Select that data ... ; --- I am not able to see the data changed by the
previous update, because this select executed on the read-only server

3. From the application I am executing a procedure which returns a cursor.
I am getting an error like "unnamed portal does not exist"; checking the
log I found that the procedure call was executed on the read-only slave
server, and the "fetch all in unnamed portal" was executed on the slave
server as well.

Thanks,
Jimit Amin
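
For issue 1, the relevant pgpool-II setting is the function blacklist; a sketch (parameter name as in pgpool-II 3.x; the exact function list is an assumption on my part):

```
# pgpool.conf: treat these functions as writes so statements calling
# them are routed to the primary rather than a read-only standby.
black_function_list = 'nextval,setval,currval,lastval'
```

Note that pgpool matches these names against the query text, so schema-qualified calls may need their own pattern entries.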
On 30 Jun 2015 21:52, Joshua D. Drake j...@commandprompt.com wrote:


 On 06/30/2015 09:05 AM, Jimit Amin wrote:

 Dear Sir/Mam,

 Can I have technical documentation for configuration of PgPool?


 http://www.pgpool.net/mediawiki/index.php/Main_Page

 --
 Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
 PostgreSQL Centered full stack support, consulting and development.
 Announcing I'm offended is basically telling the world you can't
 control your own emotions, so everyone else should do it for you.



Re: [GENERAL] Which replication is the best for our case ?

2015-06-30 Thread Arthur Silva
On Tue, Jun 30, 2015 at 1:57 PM, ben.play benjamin.co...@playrion.com
wrote:


Hello,

Streaming replication will do just fine from ServerA to ServerB, but as for
the rest of the data flow I'm afraid we will need more details.
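
A minimal sketch of the ServerA-to-ServerB streaming leg (9.x-era configuration; host names and the replication user are placeholders):

```
# On serverB, recovery.conf (PostgreSQL 9.x):
standby_mode = 'on'
primary_conninfo = 'host=serverA port=5432 user=replicator'

# On serverA, postgresql.conf:
wal_level = hot_standby
max_wal_senders = 3
```

Keep in mind a streaming standby is read-only: the calculation jobs on ServerB could read from the replica, but their writes would have to go back to ServerA.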


Re: [GENERAL] PgPool Configuration Document required

2015-06-30 Thread Joshua D. Drake


On 06/30/2015 09:35 AM, Jimit Amin wrote:

Dear Sir,

Sorry for this type of question, I have already configured PgPool but
facing 3 issue. That's why I thought that I may have done wrong setup.



I would suggest the pgpool mailing lists.

JD

--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing I'm offended is basically telling the world you can't
control your own emotions, so everyone else should do it for you.




Re: [GENERAL] Which replication is the best for our case ?

2015-06-30 Thread ben.play
Hi guys,

Thank you a lot for your answers. 

In fact, I tried to write the easiest explanation of my problem in order to
be understood...
My project is developed with Symfony and Doctrine (BERK, I know...).

The project is more than 2 years old, and Doctrine makes some bad decisions
and locks whole tables for a while.
We are redeveloping the project without Doctrine, but that will not be ready
for another year...

To be more precise: we have a database of more than 400 GB and ONE table
with more than 100 GB of data. This is huge for Doctrine. When the cron
runs, it writes a lot to disk in temporary files (although we have 128 GB
of RAM...). Of course, each table is well indexed...

That is why I'm thinking about replication: my server A (master) is for my
users... and my server B is reserved for calculations (and it is server B
which writes to the base).

This is an image of my dream system:
http://postgresql.nabble.com/file/n5855916/9e41ce1f-38ea-4fba-a437-a43c598e655c.jpg
 
(If you can't see the image :
http://tof.canardpc.com/view/9e41ce1f-38ea-4fba-a437-a43c598e655c.jpg)


Thank you a lot for your help !







[GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread Lukasz Wrobel
Hello.

I have multiple problems with my database, the biggest of which is how to
find out what is actually wrong.

First of all, I have a 9.3 Postgres database that has been running for about
a month. Right now the queries on that database are running very slowly (a
select with a simple where on a non-indexed column of a table with about
5000 records takes 1.5 s; a complicated Hibernate select with 7 joins over
tables of about 5000 records takes about 15 s; an insert or update on a
table with 35000 records takes up to 20 minutes).

The tables and indexes on those tables are bloated to the point where this
query: https://wiki.postgresql.org/wiki/Show_database_bloat shows wasted
bytes in hundreds of MB.

For whatever reason there is also no data in pg_stat* tables.

So due to the long query times, there are multiple errors in my application
logs like "No free connection available", "Could not synchronize database
state with session", or "Failed to rollback transaction", and the
application fails to start in the required time.

The only thing that helps fix the situation seems to be a vacuum full of the
entire database. Regular vacuum doesn't even lower the dead tuple count
(which grows by the thousands during application launch). Reindexing all
the indexes in the database didn't help either. All autovacuum
parameters are at their defaults.

There doesn't seem to be any issues with disk space, memory or CPU, as
neither of those is even 50% used (as per df and top).

Is there any good tool that will monitor the queries and generate a report
with useful information on what might be the problem? I tried pgBadger,
but all I got were specific queries and their times, and the long query
times are just one of the symptoms of what's wrong with the database, not
the cause.

Perhaps I'm missing some indexes on the tables (creating them on the
columns on which the where clause was used in the long queries seemed to
halve their times). Also, how can I monitor my transactions and whether
they are closed properly?

I will be grateful for any help and if you need more details I can provide
them if possible.

Best regards.
Lukasz


Re: [GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread Pavel Stehule
Hi

What is the output of the VACUUM VERBOSE statement?

VACUUM can be blocked by some forgotten transaction. Check your
pg_stat_activity view for old processes in the 'idle in transaction' state.
Such connections cannot be reused, and you may see error messages about
missing connections. I have found this issue many times in Java applications
that don't handle transactions correctly. A forgotten 2PC transaction can
have the same effect.

When VACUUM has not been executed for a long time, the fastest repair is an
export via pg_dump and a reload. Another way is dropping all indexes,
running VACUUM FULL, and creating fresh indexes.

Autovacuum is based on the tracking statistics: you should see your tables
in pg_stat_user_tables, where you can also check the autovacuum timestamps.
Sometimes autovacuum has too low a priority and gets cancelled often.
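
A query along these lines shows the vacuum history described above (column names as in 9.3):

```sql
-- Per-table vacuum history and dead-tuple counts:
SELECT relname, last_vacuum, last_autovacuum,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```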

Regards

Pavel Stehule




Re: [GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread William Dunn
Hello Lukasz,

On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
lukasz.wro...@motorolasolutions.com wrote:


 There doesn't seem to be any issues with disk space, memory or CPU, as
 neither of those is even 50% used (as per df and top).


Are you using the default PostgreSQL configuration settings, or have you
custom tuned them? The default settings are targeted for wide compatibility
and are not optimized for performance. If PostgreSQL is performing badly
and using a small amount of system resources it is likely some tuning is
needed. See docs:
http://www.postgresql.org/docs/current/static/runtime-config.html


On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
lukasz.wro...@motorolasolutions.com wrote:


 For whatever reason there is also no data in pg_stat* tables.


You can also turn on tracking (for statistics views) by enabling statistics
collection in the config
http://www.postgresql.org/docs/current/static/runtime-config-statistics.html
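
The statistics-collector settings referenced above live in postgresql.conf; their defaults are already on, so empty pg_stat views usually mean they were switched off somewhere (parameter names as in 9.3):

```
# postgresql.conf statistics collector settings (defaults shown)
track_activities = on   # per-connection current-command tracking
track_counts = on       # row-level statistics; required by autovacuum
```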

Will J. Dunn
willjdunn.com




Re: [GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread William Dunn
On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
lukasz.wro...@motorolasolutions.com wrote:


 Perhaps I'm missing some indexes on the tables (creating them on the
 columns on which the where clause was used in the long queries seemed to
 halve their times). Also how can I monitor my transactions and if they are
 closed properly?


To track transactions that have not been left idle but not committed or
rolled back you would:

1) Set track_activities true in the config (doc:
http://www.postgresql.org/docs/current/static/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES
)
2) Query the pg_stat_activity view for connections where state = 'idle in
transaction' (doc:
http://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW
)

As you would suspect, transactions that have been left idle in
transaction prevent vacuum from removing old tuples (because they are
still in scope for that transaction)
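
Step 2 above can be written as (9.2+ column names):

```sql
-- Find connections sitting idle inside an open transaction,
-- oldest transaction first:
SELECT pid, usename, state, now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_age DESC;
```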

Will J. Dunn
willjdunn.com





Re: [GENERAL] Which replication is the best for our case ?

2015-06-30 Thread John R Pierce

On 6/30/2015 9:57 AM, ben.play wrote:

To be more precise : We have a database with more than 400 Gb and ONE table
with more than 100 Gb of data. This is huge for doctrine. When the cron
runs, it writes a lot on the disks in temporary file (although we have 128
GB of Ram...). Of course, each table is well indexes...

That is why I'm thinking about replication : My server A (master) is for my
users... and my server B is a server reserved for calculations (and this
server B which writes on the base)

This is a image of my dream system :
http://postgresql.nabble.com/file/n5855916/9e41ce1f-38ea-4fba-a437-a43c598e655c.jpg  
(If you can't see the image :

http://tof.canardpc.com/view/9e41ce1f-38ea-4fba-a437-a43c598e655c.jpg)



what happens when master A continues to update/insert into these tables
that your cron job is batch-updating on the offline copy? How would you
merge those changes back in?




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread William Dunn
Sorry, I meant to say: "To track transactions that *have been* left idle but
not committed or rolled back you would..."
(Typo in my earlier message.)

Will J. Dunn
willjdunn.com

On Tue, Jun 30, 2015 at 4:33 PM, William Dunn dunn...@gmail.com wrote:

 On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:


 Perhaps I'm missing some indexes on the tables (creating them on the
 columns on which the where clause was used in the long queries seemed to
 halve their times). Also how can I monitor my transactions and if they are
 closed properly?


 To track transactions that have not been left idle but not committed or
 rolled back you would:

 1) Set track_activities true in the config (doc:
 http://www.postgresql.org/docs/current/static/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES
 )
 2) Query the pg_stat_activity view for connections where state = 'idle in
 transaction' (doc:
 http://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ACTIVITY-VIEW
 )

 As you would suspect, transactions that have been left idle in
 transaction prevent vacuum from removing old tuples (because they are
 still in scope for that transaction)
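
Putting those two steps together, a sketch of the monitoring query
(columns as documented for 9.3's pg_stat_activity; adjust to taste):

```sql
-- Sessions sitting idle inside an open transaction, oldest first;
-- long-lived rows here hold back vacuum's cleanup horizon.
SELECT pid, datname, usename,
       now() - xact_start AS xact_age,
       query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;
```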

 *Will J. Dunn*
 *willjdunn.com http://willjdunn.com*

 On Tue, Jun 30, 2015 at 4:27 PM, William Dunn dunn...@gmail.com wrote:

 Hello Lukasz,

 On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:


 There doesn't seem to be any issues with disk space, memory or CPU, as
 neither of those is even 50% used (as per df and top).


 Are you using the default PostgreSQL configuration settings, or have you
 custom tuned them? The default settings are targeted for wide compatibility
 and are not optimized for performance. If PostgreSQL is performing badly
 and using a small amount of system resources it is likely some tuning is
 needed. See docs:
 http://www.postgresql.org/docs/current/static/runtime-config.html


 On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:


 For whatever reason there is also no data in pg_stat* tables.


 You can also turn on tracking (for statistics views) by enabling
 statistics collection in the config
 http://www.postgresql.org/docs/current/static/runtime-config-statistics.html
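
For reference, the knobs involved live in postgresql.conf; a sketch of what
to check (these reflect the 9.3 defaults, so if the statistics views are
empty something is overriding them):

```
# postgresql.conf -- statistics collector (on by default in 9.3)
track_activities = on   # populates state/query in pg_stat_activity
track_counts = on       # populates pg_stat_user_tables; required by autovacuum
```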

 *Will J. Dunn*
 *willjdunn.com http://willjdunn.com*

 On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:

 Hello.

 I have multiple problems with my database, the biggest of which is how
 to find out what is actually wrong.

 First of all I have a 9.3 postgres database that is running for about a
 month. Right now the queries on that database are running very slowly
 (select with a simple where on a non-indexed column on a table with about
 5000 records takes 1,5s, a complicated hibernate select with 7 joins on
 tables of about 5000 records takes about 15s, insert or update on a table
 with 35000 records takes up to 20 mins).

 The tables and indexes on those tables are bloated to the point where
 this query: https://wiki.postgresql.org/wiki/Show_database_bloat shows
 wasted bytes in hundreds of MB.

 For whatever reason there is also no data in pg_stat* tables.

 So due to the long query times, there are multiple errors in my
 application logs, like "No free connection available", "Could not
 synchronize database state with session", or "Failed to rollback
 transaction", and the application fails to start in the required time.

 The only thing that helps fix the situation seems to be vacuum full of
 the entire database. Regular vacuum doesn't even lower the dead tuples
 count (which appear by the thousands during application launching). Reindex
 of all the indexes in the database didn't help as well. All autovacuum
 parameters are default.

 There doesn't seem to be any issues with disk space, memory or CPU, as
 neither of those is even 50% used (as per df and top).

 Is there any good tool that will monitor the queries and generate a
 report with useful information on what might be the problem? I tried
 pg_badger, but all I got were specific queries and their times, but the
 long query times are just one of the symptoms of what's wrong with the
 database, not the cause.

 Perhaps I'm missing some indexes on the tables (creating them on the
 columns on which the where clause was used in the long queries seemed to
 halve their times). Also how can I monitor my transactions and if they are
 closed properly?

 I will be grateful for any help and if you need more details I can
 provide them if possible.

 Best regards.
 Lukasz






Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-30 Thread Andy Erskine
Cheers all.

On 30 June 2015 at 15:58, John R Pierce pie...@hogranch.com wrote:

 On 6/29/2015 10:55 PM, Andy Erskine wrote:

 agreed there is an element of risk.

 however a backup of the master will be carried out prior (ok there could
 potentially be a gap of data during downtime)

 unfortunately i have nothing else big enough (diskwise) to run my tests
 on.


 rent a virtual server for a few days from Amazon or someone.



 --
 john r pierce, recycling bits in santa cruz







-- 

*Andy Erskine*



*JDS Australia*
*P* 1300 780 432 | *M* 0431919301 | *F* 03 90124578

*E:* *andy.ersk...@jds.net.au andy.ersk...@jds.net.au*

*W*: www.jds.net.au
Level 8, 2 Russell Street, Melbourne, VIC 3000
GPO Box 4777, Melbourne VIC 3001



Re: [GENERAL] Feature request: fsync and commit_delay options per database

2015-06-30 Thread Jeff Janes
2015-06-29 15:18 GMT-07:00 Bráulio Bhavamitra brauli...@gmail.com:

 Hello all,

 After reading
 http://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing
 I've tried to use commit_delay to make commits really slow in a test
 environment. Unfortunately, the maximum value is 100ms (100_000
 microseconds).

 Besides increasing it, it would be great to have these two options
 (fsync and commit_delay) per database, that is, valid only for
 databases configured with them. That would greatly speed up test
 running and still make the cluster available for other real
 databases.

 Is this feature or something similar planned?


fsync is inherently cluster-wide, so that can't be set per database.
You can configure a different commit_delay in each database on the cluster
using, for example, ALTER DATABASE jjanes SET commit_delay TO 1000; but if
different databases have different settings they will interact with each
other in complex, unintuitive ways.  And it is not really clear what you
are trying to accomplish by doing this.
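
Spelled out, the per-database setup described above (a sketch; the database
name and value are placeholders):

```sql
-- commit_delay is in microseconds; the setting applies to new
-- sessions that connect to this database.
ALTER DATABASE testdb SET commit_delay TO 1000;

-- Verify from a fresh connection to testdb:
SHOW commit_delay;

-- Undo:
ALTER DATABASE testdb RESET commit_delay;
```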

Running multiple clusters on the same server is pretty easy to do, as long
as your client allows you to configure which port number it connects to.  If
you really want fsync on for one database and off for another, put each
database in a different cluster.

Cheers,

Jeff


Re: [GENERAL] Turn off streaming replication - leaving Master running

2015-06-30 Thread John R Pierce

On 6/29/2015 10:55 PM, Andy Erskine wrote:

agreed there is an element of risk.

however a backup of the master will be carried out prior (ok there 
could potentially be a gap of data during downtime)


unfortunately i have nothing else big enough (diskwise) to run my 
tests on.


rent a virtual server for a few days from Amazon or someone.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread Alvaro Herrera
Lukasz Wrobel wrote:
 Hello.
 
 I have multiple problems with my database, the biggest of which is how to
 find out what is actually wrong.
 
 First of all I have a 9.3 postgres database that is running for about a
 month. Right now the queries on that database are running very slowly
 (select with a simple where on a non-indexed column on a table with about
 5000 records takes 1,5s, a complicated hibernate select with 7 joins on
 tables of about 5000 records takes about 15s, insert or update on a table
 with 35000 records takes up to 20 mins).

What's your operating system?

What does pg_stat_user_tables tell you about the vacuum times for the
bloated tables?  Mainly, is autovacuum processing them at all?  If not,
are there log entries about autovacuum trouble (those would show up as
ERROR mentioning automatic vacuuming)?  If not, is autovacuum running at
all, and is the stats collector working properly?

I'd recommend setting log_autovacuum_min_duration to a value other than
the default -1 and see whether it is doing anything.

Also useful for debugging would be the VACUUM VERBOSE output for
problematic tables.

Maybe your tuple death rate is higher than what autovacuum can cope
with, with default settings.  In that case maybe you need a larger
autovacuum_max_workers setting and/or a decrease of
autovacuum_vacuum_cost_delay and/or a change of autovacuum_naptime.
Sometimes, manual vacuuming of individual problematic tables also helps.
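
To make that checklist concrete, a sketch of the commands involved (the
table name is a placeholder; note that on 9.3 log_autovacuum_min_duration
must be set in postgresql.conf rather than via ALTER SYSTEM):

```sql
-- Is autovacuum reaching the bloated tables at all?
SELECT relname, n_live_tup, n_dead_tup,
       last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;

-- After setting log_autovacuum_min_duration = 0 in postgresql.conf
-- (0 logs every autovacuum run), pick up the change without a restart:
SELECT pg_reload_conf();

-- Detailed report for one problematic table (name is hypothetical):
VACUUM VERBOSE my_bloated_table;
```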

-- 
Álvaro Herrera   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread William Dunn
Jerry,

On Tue, Jun 30, 2015 at 5:31 PM, Jerry Sievers gsiever...@comcast.net
 wrote:


 foodb/postgres
 =# \d pg_stat_activity|pg_prepared_xacts
 View pg_catalog.pg_prepared_xacts
Column|   Type   | Modifiers
 -+--+---
  transaction | xid  |
  gid | text |
  prepared| timestamp with time zone |
  owner   | name |
  database| name |

View pg_catalog.pg_stat_activity
   Column  |   Type   | Modifiers
 --+--+---
  datid| oid  |
  datname  | name |
  pid  | integer  |
  usesysid | oid  |
  usename  | name |
  application_name | text |
  client_addr  | inet |
  client_hostname  | text |
  client_port  | integer  |
  backend_start| timestamp with time zone |
  xact_start   | timestamp with time zone |
  query_start  | timestamp with time zone |
  state_change | timestamp with time zone |
  waiting  | boolean  |
  state| text |
  query| text |

 foodb/postgres
 =#


What exactly are you trying to tell us? If you want to provide someone
details about one of the system views, it is probably better to link them to
the official documentation, which lists not only the view's fields and their
datatypes but also their meanings, what they will be in their specific
Postgres version, and any additional notes the community deemed useful.

*Will J. Dunn*
*willjdunn.com http://willjdunn.com*

On Tue, Jun 30, 2015 at 5:31 PM, Jerry Sievers gsiever...@comcast.net
wrote:

 William Dunn dunn...@gmail.com writes:

  Sorry I meant to say, To track transactions that have been left idle
 but not committed or rolled back you would...
  Typo


 foodb/postgres
 =# \d pg_stat_activity|pg_prepared_xacts
 View pg_catalog.pg_prepared_xacts
Column|   Type   | Modifiers
 -+--+---
  transaction | xid  |
  gid | text |
  prepared| timestamp with time zone |
  owner   | name |
  database| name |

View pg_catalog.pg_stat_activity
   Column  |   Type   | Modifiers
 --+--+---
  datid| oid  |
  datname  | name |
  pid  | integer  |
  usesysid | oid  |
  usename  | name |
  application_name | text |
  client_addr  | inet |
  client_hostname  | text |
  client_port  | integer  |
  backend_start| timestamp with time zone |
  xact_start   | timestamp with time zone |
  query_start  | timestamp with time zone |
  state_change | timestamp with time zone |
  waiting  | boolean  |
  state| text |
  query| text |

 foodb/postgres
 =#


 
  Will J. Dunn
  willjdunn.com
 
  On Tue, Jun 30, 2015 at 4:33 PM, William Dunn dunn...@gmail.com wrote:
 
  On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:
 
  Perhaps I'm missing some indexes on the tables (creating them on
 the columns on which the where clause was used in the long queries seemed
 to halve their
  times). Also how can I monitor my transactions and if they are
 closed properly?
 
  To track transactions that have not been left idle but not committed
 or rolled back you would:
 
  1) Set track_activities true in the config (doc:
 http://www.postgresql.org/docs/current/static/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES
 )
  2) Query the pg_stat_activity view for connections where state =
 'idle in transaction' (doc:
 http://www.postgresql.org/docs/current/static/monitoring-stats.html#
  PG-STAT-ACTIVITY-VIEW)
 
  As you would suspect, transactions that have been left idle in
 transaction prevent vacuum from removing old tuples (because they are
 still in scope for that
  transaction)
 
  Will J. Dunn
  willjdunn.com
 
  On Tue, Jun 30, 2015 at 4:27 PM, William Dunn dunn...@gmail.com
 wrote:
 
  Hello Lukasz,
 
  On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:
 
  There doesn't seem to be any issues with disk space, memory
 or CPU, as neither of those is even 50% used (as 

Re: [GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread Jerry Sievers
William Dunn dunn...@gmail.com writes:

 Sorry I meant to say, To track transactions that have been left idle but not 
 committed or rolled back you would...
 Typo


foodb/postgres
=# \d pg_stat_activity|pg_prepared_xacts
View pg_catalog.pg_prepared_xacts
   Column|   Type   | Modifiers 
-+--+---
 transaction | xid  | 
 gid | text | 
 prepared| timestamp with time zone | 
 owner   | name | 
 database| name | 

   View pg_catalog.pg_stat_activity
  Column  |   Type   | Modifiers 
--+--+---
 datid| oid  | 
 datname  | name | 
 pid  | integer  | 
 usesysid | oid  | 
 usename  | name | 
 application_name | text | 
 client_addr  | inet | 
 client_hostname  | text | 
 client_port  | integer  | 
 backend_start| timestamp with time zone | 
 xact_start   | timestamp with time zone | 
 query_start  | timestamp with time zone | 
 state_change | timestamp with time zone | 
 waiting  | boolean  | 
 state| text | 
 query| text | 

foodb/postgres
=# 



 Will J. Dunn
 willjdunn.com

 On Tue, Jun 30, 2015 at 4:33 PM, William Dunn dunn...@gmail.com wrote:

 On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:

 Perhaps I'm missing some indexes on the tables (creating them on the 
 columns on which the where clause was used in the long queries seemed to 
 halve their
 times). Also how can I monitor my transactions and if they are closed 
 properly?

 To track transactions that have not been left idle but not committed or 
 rolled back you would:

 1) Set track_activities true in the config (doc: 
 http://www.postgresql.org/docs/current/static/runtime-config-statistics.html#GUC-TRACK-ACTIVITIES)
 2) Query the pg_stat_activity view for connections where state = 'idle 
 in transaction' (doc: 
 http://www.postgresql.org/docs/current/static/monitoring-stats.html#
 PG-STAT-ACTIVITY-VIEW)

 As you would suspect, transactions that have been left idle in 
 transaction prevent vacuum from removing old tuples (because they are still 
 in scope for that
 transaction)

 Will J. Dunn
 willjdunn.com

 On Tue, Jun 30, 2015 at 4:27 PM, William Dunn dunn...@gmail.com wrote:

 Hello Lukasz,

 On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:

 There doesn't seem to be any issues with disk space, memory or 
 CPU, as neither of those is even 50% used (as per df and top).

 Are you using the default PostgreSQL configuration settings, or have 
 you custom tuned them? The default settings are targeted for wide 
 compatibility and are not
 optimized for performance. If PostgreSQL is performing badly and 
 using a small amount of system resources it is likely some tuning is needed. 
 See docs: http://
 www.postgresql.org/docs/current/static/runtime-config.html 

 On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:

 For whatever reason there is also no data in pg_stat* tables.

 You can also turn on tracking (for statistics views) by enabling 
 statistics collection in the config 
 http://www.postgresql.org/docs/current/static/
 runtime-config-statistics.html

 Will J. Dunn
 willjdunn.com

 On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel 
 lukasz.wro...@motorolasolutions.com wrote:

 Hello.

 I have multiple problems with my database, the biggest of which 
 is how to find out what is actually wrong.

 First of all I have a 9.3 postgres database that is running for 
 about a month. Right now the queries on that database are running very slowly 
 (select with a
 simple where on a non-indexed column on a table with about 5000 
 records takes 1,5s, a complicated hibernate select with 7 joins on tables of 
 about 5000
 records takes about 15s, insert or update on a table with 35000 
 records takes up to 20 mins).

 The tables and indexes on those tables are bloated to the point 
 where this query: https://wiki.postgresql.org/wiki/Show_database_bloat shows 
 wasted bytes in
 hundreds of MB.

 For whatever reason there is also no data in pg_stat* tables.

 So due to the 

Re: [GENERAL] very slow queries and ineffective vacuum

2015-06-30 Thread Melvin Davidson
How about you start by giving us a little useful information? Show us
your longest query and the output of EXPLAIN your_longest_query;
Although you say you have indexes, they may not be the correct indexes that
you really need.
Also, how many physical disks do you have?
Do you have multiple tablespaces? If so, are your tables and indexes
assigned separate tablespaces?
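
As a sketch of what that information looks like (the query itself is a
placeholder; pg_class, pg_namespace, and pg_tablespace are standard
catalogs):

```sql
-- Plan with actual timings and buffer I/O for the slow query:
-- EXPLAIN (ANALYZE, BUFFERS) <your longest query>;

-- Which tablespace each table and index lives in
-- (NULL spcname means the database's default tablespace):
SELECT c.relname, c.relkind, t.spcname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE n.nspname = 'public' AND c.relkind IN ('r', 'i')
ORDER BY c.relname;
```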

On Tue, Jun 30, 2015 at 6:16 PM, Alvaro Herrera alvhe...@2ndquadrant.com
wrote:

 Lukasz Wrobel wrote:
  Hello.
 
  I have multiple problems with my database, the biggest of which is how to
  find out what is actually wrong.
 
  First of all I have a 9.3 postgres database that is running for about a
  month. Right now the queries on that database are running very slowly
  (select with a simple where on a non-indexed column on a table with
 about
  5000 records takes 1,5s, a complicated hibernate select with 7 joins on
  tables of about 5000 records takes about 15s, insert or update on a table
  with 35000 records takes up to 20 mins).

 What's your operating system?

 What does pg_stat_user_tables tell you about the vacuum times for the
 bloated tables?  Mainly, is autovacuum processing them at all?  If not,
 are there log entries about autovacuum trouble (those would show up as
 ERROR mentioning automatic vacuuming)?  If not, is autovacuum running at
 all, and is the stats collector working properly?

 I'd recommend setting log_autovacuum_min_duration to a value other than
 the default -1 and see whether it is doing anything.

 Also useful for debugging would be the VACUUM VERBOSE output for
 problematic tables.

 Maybe your tuple death rate is higher than what autovacuum can cope
 with, with default settings.  In that case maybe you need a larger
 autovacuum_max_workers setting and/or a decrease of
 autovacuum_vacuum_cost_delay and/or a change of autovacuum_naptime.
 Sometimes, manual vacuuming of individual problematic tables also helps.

 --
 Álvaro Herrera   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services






-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.