Re: [GENERAL] hot standby questions

2017-08-03 Thread Lucas Possamai
2017-08-04 5:58 GMT+12:00 Jeff Janes :

> On Wed, Aug 2, 2017 at 8:19 PM, armand pirvu 
> wrote:
>
>>
>> Hi
>>
>> Just trying to put together the hot_standby setup.
>> All the docs I read point to pg_basebackup as the preferred method for
>> setting up the base backup.
>> So far so good
>> But
>>
>> psql postgres -c "select pg_start_backup('backup')"
>> pg_basebackup -D /var/lib/pgsql/sample  -Ft -z -P
>> psql postgres -c "select pg_stop_backup()"
>>
>
> pg_basebackup does the equivalent of pg_start_backup and pg_stop_backup
> for you.  It is not helpful, and might even sometimes be harmful, to do
> them yourself when using pg_basebackup.
>
>
>>
>> Pretty much everywhere I looked, -x is not mentioned to be used
>>
>> So what gives? What did I miss? It's gotta be something
>>
>
> That certainly isn't my experience.  If you find sites that don't mention
> -x, -X, or --xlog-method, then I would be reluctant to take any of that
> site's other advice seriously.
>
> But note that in version 10, -x will go away and the default will be
> changed so that not specifying anything will be the same as -X stream.
> Perhaps you are reading advice aimed at a future version.
>
> Cheers,
>
> Jeff
>


I use pg_basebackup every day and the way I do it is:

pg_basebackup -D /destination --checkpoint=fast --xlog-method=stream


The --xlog-method=stream option will copy the WAL files as well (to the
pg_xlog directory).

That works pretty well for me. You can also add the compress option, as
sketched below.
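If you want compression, the tar format supports gzip. A sketch (note that
before PostgreSQL 10, WAL streaming only works with the plain format, so a
tar backup would use --xlog-method=fetch instead):

pg_basebackup -D /destination --checkpoint=fast --xlog-method=fetch -Ft -z -P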
Hope this was helpful.

Cheers,
Lucas


Re: [GENERAL] Monitoring of a hot standby with a largely idle master

2017-07-15 Thread Lucas Possamai
2017-07-13 20:15 GMT+12:00 Michael Paquier :

> On Thu, Jul 13, 2017 at 5:26 AM, Jeff Janes  wrote:
> >
> > I think that none of the recovery information functions
> > (https://www.postgresql.org/docs/9.6/static/functions-
> admin.html#FUNCTIONS-RECOVERY-INFO-TABLE)
> > can distinguish a hot standby which is connected to an idle master,
> versus
> > one which is disconnected.  For example, because the master has crashed,
> or
> > someone has changed the firewall rules.
> >
> > Is there a way to monitor from SQL the last time the standby was able to
> > contact the master and initiate streaming with it?  Other than trying to
> > write a function that parses it out of pg_log?
>
> Not directly I am afraid. One way I can think about is to poll
> periodically the state of pg_stat_replication on the primary or
> pg_stat_wal_receiver on the standby and save it in a custom table. The
> past information is not persistent as any replication-related data in
> catalogs is based on the shared memory state of the WAL senders and
> the WAL receiver, and those are wiped out at reconnection.
> --
> Michael
>
>
>
That works for me too! I do it this way: a cron job runs every X minutes,
and if the replication lag is higher than 1 second it sends me an email.

It works pretty well, and I used bash; a rough sketch follows.
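A minimal sketch of that kind of check, assuming it runs on the standby
(the threshold, recipient address, and use of mail/bc are illustrative,
not my actual script):

#!/bin/bash
# Seconds since the last replayed transaction on this standby.
LAG=$(psql -At -c "SELECT COALESCE(EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()), 0)")
if [ "$(echo "$LAG > 1" | bc)" = "1" ]; then
    # Note: on an idle master this naive lag keeps growing even when nothing
    # is wrong, which is exactly the caveat discussed above.
    echo "Standby replay lag is ${LAG}s" | mail -s "replication lag alert" dba@example.com
fi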

Lucas


Re: [GENERAL] current postgresql logfile being written to?

2017-06-21 Thread Lucas Possamai
2017-06-22 14:16 GMT+12:00 hvjunk :

>
> On 22 Jun 2017, at 4:06 AM, Lucas Possamai  wrote:
>
>
>
> 2017-06-22 13:54 GMT+12:00 hvjunk :
>
>> Hi there,
>>
>>  I was hoping for a method (like archive_command) to handle logfile
>> processing/archiving/compression, but unless doing it the logrotate way,
>> I don’t see anything that postgresql provides. Is that correct?
>>
>> The closest I could find is: pg_rotate_logfile()… but here my question is
>> where do I find the current active logfile(s) that postgresql is currently
>> writing to?
>> (At least that way I can handle all the files that that postgresql is not
>> writing to :) )
>>
>> Hendrik
>>
>>
>>
> I use logging_collector + log_rotation_age + log_filename + 
> log_min_duration_statement
> [1]
>
> Using those options PG automatically rotates the logs and keeps them for
> a week or more if you specify it.
>
> [1] https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html
>
>
>
> That I know, but which file is the postgresql server/cluster writing to
> right now?
>
>
>
In your postgresql.conf, check log_directory. If it's the default (on
Debian/Ubuntu packages), then:
/var/log/postgresql
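One rough way to spot the file currently being written (a sketch: it assumes
the Debian-style default directory above; the built-in pg_current_logfile()
function only exists from PostgreSQL 10 onward):

ls -t /var/log/postgresql | head -n 1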


Lucas


Re: [GENERAL] current postgresql logfile being written to?

2017-06-21 Thread Lucas Possamai
2017-06-22 13:54 GMT+12:00 hvjunk :

> Hi there,
>
>  I was hoping for a method (like archive_command) to handle logfile
> processing/archiving/compression, but unless doing it the logrotate way,
> I don’t see anything that postgresql provides. Is that correct?
>
> The closest I could find is: pg_rotate_logfile()… but here my question is
> where do I find the current active logfile(s) that postgresql is currently
> writing to?
> (At least that way I can handle all the files that that postgresql is not
> writing to :) )
>
> Hendrik
>
>
>
I use logging_collector + log_rotation_age + log_filename +
log_min_duration_statement
[1]

Using those options PG automatically rotates the logs and keeps them for a
week or more if you specify it; a sketch of the settings is below.

[1] https://www.postgresql.org/docs/9.6/static/runtime-config-logging.html
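A sketch of the relevant postgresql.conf settings (the values are
illustrative, not my actual configuration):

logging_collector = on
log_directory = 'pg_log'                # relative to the data directory
log_filename = 'postgresql-%a.log'      # one file per weekday
log_truncate_on_rotation = on           # overwrite week-old files, giving a 7-day window
log_rotation_age = 1d
log_min_duration_statement = 500        # also log statements slower than 500 ms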


Re: [GENERAL] Huge Pages - setting the right value

2017-06-11 Thread Lucas Possamai
2017-06-12 9:52 GMT+12:00 Andrew Kerber :

> Was that transparent hugepages or standard hugepages?  Databases commonly
> have problems dealing with transparent hugepages.
>
>


In my case, it was Transparent Hugepages.
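For reference, a quick way to check and disable them at runtime on many
Linux systems (the sysfs path is common but can vary by distro/kernel):

cat /sys/kernel/mm/transparent_hugepage/enabled    # shows e.g. [always] madvise never
echo never > /sys/kernel/mm/transparent_hugepage/enabled    # as root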


Lucas


Re: [GENERAL] Huge Pages - setting the right value

2017-06-11 Thread Lucas Possamai
2017-06-12 7:52 GMT+12:00 Andrew Kerber :

> I am sure it does not.
>
> Sent from my iPhone
>
> > On Jun 11, 2017, at 10:50 AM, pinker  wrote:
> >
> > Andrew Kerber wrote
> >> I can't give you an absolutely authoritative answer, but because of the
> >> way hugepages are implemented and allocated, I can't think how they
> could
> >> be used for other processes.  Linux hugepages are either 2m or 1g, far
> too
> >> large for any likely processes to require. They cannot be allocated in
> >> partial pages.
> >
> > thank you for your help.
> > My system is using 2MB pages for shared buffers. I have checked, and one
> > of my processes has used 606788kB of memory, so it could potentially use
> > ~300 huge pages, but can postgres use them for non-shared memory?
> >
> >
> >
> >
> > --
> > View this message in context:
> > http://www.postgresql-archive.org/Huge-Pages-setting-the-right-value-tp5952972p5965963.html
> > Sent from the PostgreSQL - general mailing list archive at Nabble.com.
> >
> >
>



In my case, we had HugePages enabled but not configured on our Master DB
server. When we increased the server resources (more RAM & CPU) we had lots
of issues with HugePages, especially I/O ones, and had to disable it.

Running Ubuntu 14.04 Server @ Amazon.


Lucas


Re: [GENERAL] the master has two sync slave, which slave can promote to master and the other slave as the new master's slave?

2017-03-18 Thread Lucas Possamai
2017-03-19 17:52 GMT+13:00 lin :

> Hi,
> everyone. I am puzzled about the question: the master A has two slaves,
> slave_1 and slave_2, and the version of postgres is 9.3. I want to promote
> one slave to master, and make the other slave the new master's slave, but
> which slave should be promoted?
>
> get the master-slave information from old master:
> postgres=# select * from pg_stat_replication;
> -[ RECORD 1 ]+--
> pid  | 15377
> usesysid | 10
> usename  | postgres
> application_name | slave_1
> client_addr  | 10.20.16.201
> client_hostname  |
> client_port  | 46086
> backend_start| 2017-03-19 12:42:02.026278+08
> state| streaming
> sent_location| 1/34FB
> write_location   | 1/34EB
> flush_location   | 1/34DB
> replay_location  | 1/34C4FFD8
> sync_priority| 0
> sync_state   | async
> -[ RECORD 2 ]+--
> pid  | 15411
> usesysid | 10
> usename  | postgres
> application_name | slave_2
> client_addr  | 10.20.16.200
> client_hostname  |
> client_port  | 51102
> backend_start| 2017-03-19 12:42:12.477886+08
> state| streaming
> sent_location| 1/414FD320
> write_location   | 1/414FD320
> flush_location   | 1/414FD320
> replay_location  | 1/3DD91468
> sync_priority| 0
> sync_state   | async
>
> 1. Which one can be promoted to master, slave_1 or slave_2?
> I think it is slave_2, because slave_2's write_location (1/414FD320) is
> ahead of slave_1's (1/34FB). Is that right?
>
> 2. If slave_1 is in sync state, which one can be promoted to master?  I am
> puzzled.
>
>
You can promote any slave server to become the master. It's up to you which
one you pick; more info can be found here [1].

*Example:*

slave_1 will be the new master and slave_2 will be the slave of that new
master.


Make sure all the WAL files and history files are copied to the slave_2
server, otherwise it won't be able to switch timelines.

[1] https://www.postgresql.org/docs/9.3/static/warm-standby-failover.html
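A sketch of that flow (paths and connection details are assumptions, not
from the thread):

# on slave_1:
pg_ctl -D /var/lib/pgsql/9.3/data promote

# on slave_2, edit recovery.conf to follow the new master, then restart:
#   primary_conninfo = 'host=slave_1_host port=5432 user=replicator'
#   recovery_target_timeline = 'latest'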

Lucas


Re: [GENERAL] Custom type column index - Postgres 9.1

2017-01-16 Thread Lucas Possamai
2017-01-17 10:15 GMT+13:00 Patrick B :

> Hi guys,
>
> I've got a custom data type column... The query I'm using its looking over
> 9 million rows.
>
> I've created a BTREE index but it didn't help on the speed. Is there any
> special index for custom types?
>
> Thanks
> Patrick
>

I'm also looking for this. I tried creating GIN indexes, but they didn't
work well... B-tree wasn't fast either.

Lucas


Re: [GENERAL] Plpgsql - Custom fields Postgres 9.5

2016-12-14 Thread Lucas Possamai
>
> ERROR:  column "date_start" does not exist
>
>
> Patrick
>

Patrick - trying on SQL Fiddle, I got that error when executing what
Adrian suggested.


Re: [GENERAL] Plpgsql - Custom fields Postgres 9.5

2016-12-14 Thread Lucas Possamai
2016-12-15 14:34 GMT+13:00 Adrian Klaver :

> On 12/14/2016 05:19 PM, Patrick B wrote:
>
> Reading the suggestions might help:)
>
> Another try:
>
> CREATE or REPLACE FUNCTION l_extract(date_start text, date_end text)
> RETURNS void AS $$
> begin
>   execute '
>   COPY
>   (
>   SELECT uuid, clientid, *
>   FROM logging
>   WHERE logtime BETWEEN date_start AND date_end
>   )
>   TO ''/var/lib/postgresql/'|| date_start ||'_logs.csv''';
> end
> $$ language 'plpgsql';
>
>
> select l_extract('201611015', '201612015');
>
select l_extract('201611015','201612015');
ERROR:  column "date_start" does not exist
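For what it's worth, that error happens because date_start and date_end sit
inside the quoted string, so COPY looks for columns with those names. A
sketch of one way to fix it, interpolating the parameters with
quote_literal:

CREATE OR REPLACE FUNCTION l_extract(date_start text, date_end text)
RETURNS void AS $$
begin
  execute '
    COPY (
      SELECT uuid, clientid, *
      FROM logging
      WHERE logtime BETWEEN ' || quote_literal(date_start) || '
                        AND ' || quote_literal(date_end) || '
    ) TO ' || quote_literal('/var/lib/postgresql/' || date_start || '_logs.csv');
end
$$ language plpgsql;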


Patrick


Re: [GENERAL] Moving pg_xlog

2016-12-01 Thread Lucas Possamai
2016-12-02 17:10 GMT+13:00 Michael Paquier :

> On Fri, Dec 2, 2016 at 1:04 PM, Melvin Davidson 
> wrote:
> >  Well, while the location of pg_xlog is not currently configurable, on
> Linux system the way to do it is  to:
> >  1. stop PostgreSQL
> >  2. move the pg_xlog directory to a separate partition
> >  3. create a symbolic link to point to the new partition
> >  4. restart PostgreSQL
>
> Similar flow on Windows, just use a junction point for the link.
> --
> Michael
>

I've done this on my Postgres 9.2 DB server running CentOS 6.7...

And it's pretty much what the guys told you already:


>  1. stop PostgreSQL
>  2. move the pg_xlog directory to a separate partition
>  3. create a symbolic link to point to the new partition
>  4. restart PostgreSQL
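Concretely, that flow looks something like this (a sketch; the service name
and paths are assumptions for a typical CentOS layout):

service postgresql-9.2 stop
mv /var/lib/pgsql/9.2/data/pg_xlog /new_partition/pg_xlog
ln -s /new_partition/pg_xlog /var/lib/pgsql/9.2/data/pg_xlog
service postgresql-9.2 start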

In my case, it significantly improved I/O performance.

Lucas


Re: [GENERAL] My DB has has 5TB, many operations are very slow (on Google Cloud Compute)

2016-10-11 Thread Lucas Possamai
>
> Instance RAM: 60GB
> Instance CPU: 16Cores



I got a 3TB Postgres Database running with a Dual Socket Octo Core Intel
Xeon E5-2630v3 with 128GB of RAM and SATA disks.

I think yes, you could increase your server's RAM. However, Melvin is
right: there is not enough information.

Lucas


Re: [GENERAL] FATAL: could not receive data from WAL stream

2016-09-19 Thread Lucas Possamai

Oops, sorry... sent to the wrong email.


Re: [GENERAL] FATAL: could not receive data from WAL stream

2016-09-19 Thread Lucas Possamai
2016-09-20 15:14 GMT+12:00 Venkata B Nagothi :

>
> On Tue, Sep 20, 2016 at 12:38 PM, Patrick B 
> wrote:
>
>> Hi guys,
>>
>> I got a slave server running Postgres 9.2 with streaming replication and
>> wal_archive in an EC2 Instance at Amazon.
>>
>> Postgres logs are showing me this error:
>>
>>> restored log file "0002179A00F8" from archive
>>> invalid record length at 179A/F8FFF3D0
>>> WAL segment `/var/lib/pgsql/9.2/archive/0003.history` not found
>>> streaming replication successfully connected to primary
>>> FATAL:  could not receive data from WAL stream: FATAL:  requested WAL
>>> segment 0002179A00F8 has already been removed
>>
>>
>> However, 0002179A00F8 file is inside
>> /var/lib/pgsql/9.2/archive directory:
>>
>>> postgres@devops:/var/lib/pgsql/9.2/archive$ ls -la | grep
>>> 0002179A00F8
>>> -rw--- 1 postgres postgres 16777216 Sep 16 05:16
>>> 0002179A00F8
>>
>>
>>
>> It's an Ubuntu instance, so my recovery.conf is:
>>
>>
>> */etc/postgresql/9.2/main/recovery.conf:*
>>
>>> restore_command = 'exec /var/lib/pgsql/bin/restore_wal_segment.bash
>>> "/var/lib/pgsql/9.2/wal_archive/%f" "%p"'
>>> archive_cleanup_command = '/var/lib/postgresql/bin/pg_archivecleaup_mv.bash'
>>> recovery_target_timeline = 'latest'
>>> standby_mode = on
>>> primary_conninfo = 'host=IP_MY_SLAVE port=5432 user=replicator
>>> application_name=devops'
>>
>>
>>
>> What can be happening, if the file is in there?
>>
>
> Do you mean to say that the WAL file "0002179A00F8" is
> available @ "/var/lib/pgsql/9.2/archive" location ?
>
> Regards,
> Venkata B N
>
> Fujitsu Australia
>
>



Yes.


Re: [GENERAL] postgres insert + select + values - Pgsql 9.5

2016-09-15 Thread Lucas Possamai
2016-09-16 10:07 GMT+12:00 Patrick B :

>
>>
>> A select can make up columns too, not just what you've got in a table,
>> so you can:
>>
>> select j_id, 'test-1 - comments' as comment from test2 where
>> customer_id=88897;
>>
>> and then you can simply insert that into your other table (you don't
>> need to specify the columns that are getting a default value):
>>
>> insert into test1 (j_id, comments)
>> select j_id, 'test-1 - comments' as comment from test2 where
>> customer_id=88897;
>>
>> https://www.postgresql.org/docs/current/static/sql-insert.html has more
>> info.
>>
>>
> Thanks Chris!
>
> But the problem is that test2 table has 180 rows with different j_id and I
> need to insert each one of them into test1 table.
>
> How can I do that?
> select j_id FROM test2 - will return 180 rows
>
>

Please have a look at this example, Patrick:
http://sqlfiddle.com/#!15/1773d/4
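To spell out why it works: the SELECT returns one row per j_id, so a single
INSERT ... SELECT statement inserts all 180 of them (a sketch using the
thread's table names):

INSERT INTO test1 (j_id, comments)
SELECT j_id, 'test-1 - comments'
FROM test2
WHERE customer_id = 88897;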

Lucas


Re: [GENERAL] Get date timestamp(3) without time zone column - PGSQL 9.5

2016-09-04 Thread Lucas Possamai
2016-09-05 15:17 GMT+12:00 Patrick B :

>
>> You might want to share the version of PostgreSQL you are using.
>>
>> You might want to try date_trunc and AT TIMEZONE function/operators-
>>
>> https://www.postgresql.org/docs/9.4/static/functions-datetim
>> e.html#FUNCTIONS-DATETIME-TRUNC
>>
>> SELECT date_trunc('day', tasks_start at TIME ZONE 'EST')
>>
>> Note: I have not tried this statement
>>
>> Is this something you are going to use often? If that is the case then
>> consider to re-model your query. The moment you use an expression on a
>> column it would not use a normal BTree index.
>>
>>
> Hmm... I see
>
>
> select date_trunc('day', TIMESTAMP '2016-08-10') FROM tasks
>
>
>  And I get:
>
> 2016-08-10 00:00:00
>
>
>
> I actually need just the date 2016-08-10, without 00:00:00...
> Any idea?
>
> Cheers
> Patrick
>


I agree with @Tom, your first sql should work... Based on the table name
"tasks" you provided, try this:


SELECT cast(tasks_start as date) FROM "tasks" WHERE (date(tasks_start) in
> ('2016-08-11'))
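The PostgreSQL-style :: cast is an equivalent shorthand:

SELECT tasks_start::date FROM "tasks" WHERE tasks_start::date = '2016-08-11';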



Lucas


Re: [GENERAL] Monitoring WAL Files - PostgreSQL 9.2

2016-07-06 Thread Lucas Possamai
Sorry guys.. wrong email

2016-07-07 9:18 GMT+12:00 Lucas Possamai :

> Hi everyone, how's it going?
>
> I know that in Postgres 9.4 there is pg_stat_archiver to check the
> archiving and the failures of the wal_files.
>
> How can I monitor this in version 9.2?
> Do you have any tips?
>
> Thanks!
> Lucas
>


[GENERAL] Monitoring WAL Files - PostgreSQL 9.2

2016-07-06 Thread Lucas Possamai
Hi everyone, how's it going?

I know that in Postgres 9.4 there is pg_stat_archiver to check the
archiving and the failures of the wal_files.

How can I monitor this in version 9.2?
Do you have any tips?

Thanks!
Lucas
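One rough way to watch archiving on 9.2 from the shell (a sketch; the data
directory path is an assumption, and .ready files are segments waiting to
be archived, so a growing count suggests archive_command is failing):

ls /var/lib/pgsql/9.2/data/pg_xlog/archive_status/ | grep -c '\.ready$'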


Re: [GENERAL] gin index postgres 9.2

2016-05-25 Thread Lucas Possamai
Yes, I thought so too.

The results are still different.


[GENERAL] gin index postgres 9.2

2016-05-25 Thread Lucas Possamai
I've got the following query:

SELECT COUNT(DISTINCT j0_.id) AS sclr0
FROM ja_customers j0_
WHERE ((LOWER(j0_.name_first) LIKE '%asd%'
OR LOWER(j0_.name_last) LIKE '%asd%'
OR LOWER(j0_.company) LIKE '%asd%'
OR LOWER(j0_.phone) LIKE '%asd%'
OR LOWER(j0_.mobile) LIKE '%asd%')
   AND j0_.deleted = 'f'
   AND j0_.clientid = 2565)
  AND j0_.clientid = 2565

It returns: 3

I created a GIN index;

CREATE INDEX CONCURRENTLY ON public.ja_customers USING gin (name_first
gin_trgm_ops, name_last gin_trgm_ops, company gin_trgm_ops, phone
gin_trgm_ops, mobile gin_trgm_ops);

New query to hit the new index:

SELECT COUNT(DISTINCT j0_.id) AS sclr0
FROM ja_customers j0_
WHERE j0_.name_first LIKE '%asd%'
OR j0_.name_last LIKE '%asd%'
OR j0_.company LIKE '%asd%'
OR j0_.phone LIKE '%asd%'
OR j0_.mobile LIKE '%asd%'
   AND j0_.deleted = 'f'
   AND j0_.clientid = 2565
  AND j0_.clientid = 2565

It returns: 532


The problem is that the new query returns different results...

Please, what am I missing, guys?
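For what it's worth, one likely culprit: the rewritten query dropped the
parentheses around the OR chain, and AND binds tighter than OR, so the
deleted/clientid filters only constrain the last branch. A parenthesized
version that keeps the original logic while staying indexable (a sketch;
pg_trgm's operator support makes ILIKE usable against the raw columns):

SELECT COUNT(DISTINCT j0_.id) AS sclr0
FROM ja_customers j0_
WHERE (j0_.name_first ILIKE '%asd%'
    OR j0_.name_last ILIKE '%asd%'
    OR j0_.company ILIKE '%asd%'
    OR j0_.phone ILIKE '%asd%'
    OR j0_.mobile ILIKE '%asd%')
  AND j0_.deleted = 'f'
  AND j0_.clientid = 2565;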


Cheers


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-23 Thread Lucas Possamai
On 24 May 2016 at 12:18, Jeff Janes  wrote:

> On Mon, May 23, 2016 at 4:23 PM, Lucas Possamai 
> wrote:
> >
> > That index has been added just 3 hours ago, and you can see that the
> > times_used goes over 41000 How is that possible??
>
> Well, that is what it is there for, right , to be used?  My ancient
> laptop can use an index that many times in less than 20 seconds,
> running flat out.
>
>
Yeah!
But I mean... over 70,000 times in 4 hours? That's a lot of usage! hahaha


> >
> > Don't think the query is right.
> >
> > Can you please check ?
>
> The query seems right to me.
>
> Cheers,
>
> Jeff
>

I think the query is OK... I just want to understand if that value is
correct :O

If it is, I'm happy with that. It just shows the work of finding and
creating the index was worth it.


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-23 Thread Lucas Possamai
Hello Melvin, how are you doing?


>>
> And what happens if you run this query?
>
> SELECT idstat.schemaname AS schema,
>idstat.relname AS table_name,
>indexrelname AS index_name,
>idstat.idx_scan AS times_used,
>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
> '.' || quote_ident(idstat.relname))) AS table_size,
>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
> '.' || quote_ident(indexrelname))) AS index_size,
>n_tup_upd + n_tup_ins + n_tup_del as num_writes,
>indexdef AS definition
> FROM pg_stat_user_indexes AS idstat
> JOIN pg_indexes ON indexrelname = indexname
> JOIN pg_stat_user_tables AS tabstat ON idstat.relname = tabstat.relname
> WHERE indexrelname = ' {YOUR QUERY NAME } ';
>


I've added some new indexes this week in my prod environment, and I used
your query to see whether they're being used or not.


Query:

> SELECT idstat.schemaname AS schema,
>idstat.relname AS table_name,
>indexrelname AS index_name,
>idstat.idx_scan AS times_used,
>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
> '.' || quote_ident(idstat.relname))) AS table_size,
>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
> '.' || quote_ident(indexrelname))) AS index_size,
>n_tup_upd + n_tup_ins + n_tup_del as num_writes,
>indexdef AS definition
> FROM pg_stat_user_indexes AS idstat
> JOIN pg_indexes ON indexrelname = indexname
> JOIN pg_stat_user_tables AS tabstat ON idstat.relname = tabstat.relname
> WHERE indexrelname = 'ix_ja_jobs_clientid_title_time_job';



Returns:

> schema     | public
> table_name | ja_jobs
> index_name | ix_ja_jobs_clientid_title_time_job
> times_used | 41536
> table_size | 3526 MB
> index_size | 484 MB
> num_writes | 38266927
> definition | CREATE INDEX "ix_ja_jobs_clientid_title_time_job" ON "ja_jobs"
>              USING "btree" ("clientid", "lower"(("title")::"text")
>              "varchar_pattern_ops", "time_job")


That index was added just 3 hours ago, and you can see that times_used goes
over 41,000. How is that possible??

I don't think the query is right.

Can you please check?

Thank you.


Re: [GENERAL] Increased I/O / Writes

2016-05-23 Thread Lucas Possamai
On 24 May 2016 at 09:40, Kevin Grittner  wrote:

> On Mon, May 23, 2016 at 4:32 PM, Lucas Possamai 
> wrote:
>
> > The spikes were happening because, after the reboot, the HUGE PAGES were
> > enabled.
> >
> > After disabling them, all got back to normal.
>
> Since you said earlier you were on PostgreSQL version 9.2, you must
> be talking about transparent huge pages.  Yeah, be sure those are
> configured to be disabled in a way that "sticks" on your OS.  When
> you get to version 9.4 you will notice that we support huge pages
> directly.  That would be expected to work without problems even
> though TRANSPARENT huge pages are debilitating.
>


Yep, you're right, Kevin.
Transparent huge pages...

We do have a script that disables them when the system is rebooted, but
that did not work.
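One way to make the setting stick across reboots (a sketch; it assumes the
distro still executes /etc/rc.local at boot and that nothing re-enables THP
later in the boot sequence):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag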

Anyway, the problem was solved.
Cheers
Lucas


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-23 Thread Lucas Possamai
Just an update here:

IF (TG_OP = 'INSERT') THEN
> UPDATE public.companies SET client_code_increment =
> (client_code_increment + 1) WHERE id = NEW.company_id;


The line above was updating client_code_increment even if the customer was
inserting data himself, which is wrong.
The client_code_increment must be updated only IF it is an insert AND the
customer did not insert data into the code column.

*Correction:*

 IF (TG_OP = 'INSERT') AND NEW.code IS NULL THEN
> UPDATE public.companies SET client_code_increment =
> (client_code_increment + 1) WHERE id = NEW.company_id;



Cheers
Lucas


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-23 Thread Lucas Possamai
Hi there,

The problem was solved by using lower():

*New index:*

> CREATE INDEX CONCURRENTLY  ON public.ja_jobs (clientid, lower(title)
> varchar_pattern_ops, time_job);

*New query:*

> SELECT DISTINCT title
> FROM public.ja_jobs WHERE lower(title) LIKE lower('RYAN WER')
> AND clientid = 31239
> AND time_job > 1457826264
> order BY title
> limit 10;


- Improvement of 1400%


Thanks

Lucas


Re: [GENERAL] Connections - Postgres 9.2

2016-05-23 Thread Lucas Possamai
On 17 May 2016 at 22:24, Sridhar N Bamandlapally 
wrote:

> Hi
>
> I control this way
>
> if "state_change" is from longtime and "state" is idle then I use
>
> function:
>   *pg_terminate_backend ( integer ) * ==> return TRUE if
> killed-successful else FALSE
>
> example:
> # select pg_terminate_backend ( pid ) from pg_stat_activity where
> state='idle' and state_change < (current_timestamp - interval '1 hour');
>
> may be helpful
>
> NOTE: we came to the decision to kill sessions idle for more than "1
> hour" after a lot of discussion with the application / implementation /
> stakeholder teams
>
> *removed history as it threw an error due to mail length
>
>
> Thanks
> Sridhar
> OpenText
>
>
>
Hi Sridhar!

Thanks for your help...

I also control it the same way, but was wondering if a pooler would bring
me performance improvements...


cheers


Re: [GENERAL] Increased I/O / Writes

2016-05-23 Thread Lucas Possamai
Just an update about this...


As you know, the server was rebooted.

The spikes were happening because, after the reboot, the HUGE PAGES were
enabled.

After disabling them, all got back to normal.

Cheers


Re: [GENERAL] Increased I/O / Writes

2016-05-17 Thread Lucas Possamai
Following
https://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/

and http://pgtune.leopard.in.ua/ and
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
I changed the shared_buffer from 51GB to 35GB.

Now, I'm getting spikes every 15 minutes:

FATAL:  sorry, too many clients already

The change was made 3 hours ago.

- we rebooted the server as well

Might it be the cache warming up again?


Re: [GENERAL] Increased I/O / Writes

2016-05-16 Thread Lucas Possamai
This is my postgresql.conf at the moment:

shared_buffers (51605 MB) +
effective_cache_size (96760 MB) +
work_mem (32 MB) +
max_connections (200)

= 148397.08 MB

My server has 128GB of RAM.

So, I'm using more RAM than I have (not good at all).
I'm glad that it wasn't me who put those confs in there :)
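For what it's worth, effective_cache_size is only a planner hint (nothing
is actually allocated for it), while work_mem can be consumed per sort/hash
per connection, so a more common worst-case estimate under those
assumptions would be:

shared_buffers (51605 MB) + work_mem (32 MB) x max_connections (200)
  = 51605 MB + 6400 MB
  = 58005 MB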


Anyway...
I was thinking about decreasing shared_buffers to something like 1/8 of
total RAM = 16GB.

What do you guys think about it?

Cheers
Lucas


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Lucas Possamai
On 17 May 2016 at 08:56, Venkata Balaji N  wrote:

>
> On Tue, May 17, 2016 at 6:28 AM, Lucas Possamai 
> wrote:
>
>> hmm.. thanks for all the answers guys...
>>
>>
>> One more question: Those IDLE connections.. are using the server's
>> resources?
>> To solve that problem I would need a Pool connection, right?
>>
>> Would the pool connection solve that IDLE connections? But more important
>> than that, are the IDLE connections using the machine's resources ?
>>
>
> Yes. There is no straight way to terminate the IDLE connections from the
> database end. You would need a connection pooler like pgBouncer which can
> help you terminate IDLE connections after a certain time.
>
> It would be much better if your Application can terminate connections to
> the database.
>
> Regards,
> Venkata B N
>
> Fujitsu Australia
>
>


Ok awesome.. thanks a lot!

Lucas


Re: [GENERAL] Connections - Postgres 9.2

2016-05-16 Thread Lucas Possamai
hmm.. thanks for all the answers guys...


One more question: those IDLE connections... are they using the server's
resources?
To solve that problem I would need a connection pooler, right?

Would the pooler solve the IDLE connections issue? But more important than
that, are the IDLE connections using the machine's resources?


cheers


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-12 Thread Lucas Possamai
>
> With those sizes, the gin index will probably be naturally kept mostly
> in the file-system cache, if it is used regularly.  So the original
> slowness of your first query is likely just a cold-cache problem.  Can
> you generate a stream of realistic queries and see what it stabilizes
> at?
>
>
> > I just wanted to understand why the GIN index is not working, but it
> works
> > here:
> https://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/
>
> In your first email, the gin index did "work", according to the
> execution plan.  It just wasn't as fast as you wanted.  In general,
> the longer the query string is between the %%, the worse it will
> perform (until version 9.6, and to a smaller degree even with 9.6).
> But it still seems oddly slow to me, unless you have a cold-cache and
> really bad (or overloaded) IO.
>
> >
> >>
> >>
> >> It would be interesting to see the output of explain (analyze,
> >> buffers) with track_io_timing turned on.
> >
> >
> > explain analyze buffer with track_io_timing turned on:
> ...
>
> That is the wrong query.  The CTE (i.e. the WITH part) is an
> optimization fence, so it can't use the gin index, simply because of
> the way you query is written.  (I think Melvin suggested it because he
> noticed that using the gin index actually slowed down the query, so he
> wanted to force it to not be used.)
>


Oh ok.

- Here is the explain (analyze, buffers) output for the original query I
posted here, with the gin index:

Query:

>  explain (analyze, buffers)
>  SELECT title
> FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
> and clientid = 31239  AND time_job > 1457826264
> order BY title
> limit 10


Explain analyze:

> Limit  (cost=390.07..390.08 rows=1 width=20) (actual
> time=3945.263..3945.280 rows=4 loops=1)
>   Buffers: shared hit=5956 read=10
>   I/O Timings: read=60.323
>   ->  Sort  (cost=390.07..390.08 rows=1 width=20) (actual
> time=3945.256..3945.260 rows=4 loops=1)
> Sort Key: "title"
> Sort Method: quicksort  Memory: 25kB
> Buffers: shared hit=5956 read=10
> I/O Timings: read=60.323
> ->  Bitmap Heap Scan on "ja_jobs"  (cost=386.05..390.06 rows=1
> width=20) (actual time=3944.857..3945.127 rows=4 loops=1)
>   Recheck Cond: (("clientid" = 31239) AND ("time_job" >
> 1457826264) AND (("title")::"text" ~~* '%RYAN WER%'::"text"))
>   Buffers: shared hit=5951 read=10
>   I/O Timings: read=60.323
>   ->  BitmapAnd  (cost=386.05..386.05 rows=1 width=0) (actual
> time=3929.540..3929.540 rows=0 loops=1)
> Buffers: shared hit=5950 read=7
> I/O Timings: read=45.021
> ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..50.16 rows=1660 width=0) (actual time=45.536..45.536 rows=795
> loops=1)
>   Index Cond: (("clientid" = 31239) AND
> ("time_job" > 1457826264))
>   Buffers: shared hit=5 read=7
>   I/O Timings: read=45.021
> ->  Bitmap Index Scan on "ix_jobs_trgm_gin"
>  (cost=0.00..335.64 rows=485 width=0) (actual time=3883.886..3883.886
> rows=32 loops=1)
>   Index Cond: (("title")::"text" ~~* '%RYAN WER%'::"text")
>   Buffers: shared hit=5945
> Total runtime: 3945.554 ms


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-12 Thread Lucas Possamai
> How big is the table?  The gin index?  shared_buffers?  RAM?  What
> kind of IO system do you have, and how many other things were going on
> with it?
>

- Just a reminder that I'm not running these tests on my prod server; I'm
running them on my test server, so the confs will be different.

The table is 9GB big
> The gin index is 400MB big
> shared_buffers = 1536MB
> RAM = 8 GB


I just wanted to understand why the GIN index is not working, but it works
here: https://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/


>
> It would be interesting to see the output of explain (analyze,
> buffers) with track_io_timing turned on.
>

explain analyze buffer with track_io_timing turned on:


Limit  (cost=93466.83..93466.83 rows=1 width=218) (actual
> time=24025.463..24025.478 rows=5 loops=1)
>   Buffers: shared hit=8 read=42285
>   I/O Timings: read=23599.672
>   CTE ja_jobs
> ->  HashAggregate  (cost=93451.05..93455.90 rows=485 width=20) (actual
> time=23946.801..23967.660 rows=16320 loops=1)
>   Buffers: shared hit=3 read=42285
>   I/O Timings: read=23599.672
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=877.70..93374.92
> rows=30453 width=20) (actual time=161.372..23835.632 rows=48472 loops=1)
> Recheck Cond: (("clientid" = 14635) AND ("time_job" >
> 1436731799))
> Buffers: shared hit=3 read=42285
> I/O Timings: read=23599.672
> ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..870.09 rows=30453 width=0) (actual time=133.920..133.920
> rows=48472 loops=1)
>   Index Cond: (("clientid" = 14635) AND ("time_job" >
> 1436731799))
>   Buffers: shared hit=3 read=244
>   I/O Timings: read=120.137
>   ->  Sort  (cost=10.92..10.93 rows=1 width=218) (actual
> time=24025.457..24025.462 rows=5 loops=1)
> Sort Key: "ja_jobs"."title"
> Sort Method: quicksort  Memory: 25kB
> Buffers: shared hit=8 read=42285
> I/O Timings: read=23599.672
> ->  CTE Scan on "ja_jobs"  (cost=0.00..10.91 rows=1 width=218)
> (actual time=23977.095..24025.325 rows=5 loops=1)
>   Filter: (("title")::"text" ~~* '%To Electrical%'::"text")
>   Rows Removed by Filter: 16315
>   Buffers: shared hit=3 read=42285
>   I/O Timings: read=23599.672
> Total runtime: 24028.551 ms




>
> There have been improvements in this area since 9.2, you should
> consider upgrading to at least 9.4.
>
>
Yep I know. The upgrade will happen, but I don't know when.


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Lucas Possamai
>
>
>>
> Trying redoing the query with CTE as below:
>
> WITH ja_jobs as
>   (SELECT DISTINCT title
>  FROM ja_jobs
> WHERE clientid = 31239  AND time_job > 1457826264
>   )
> SELECT title
>   FROM ja_jobs
>  WHERE title ILIKE 'RYAN WER%'
>  ORDER BY title
>  LIMIT 10;
>

hmm.. still slow =(


and it's not hitting the index (I had to change the clientid because the
previous one was in cache):

Limit  (cost=93790.08..93790.09 rows=1 width=218) (actual
> time=284.293..284.308 rows=5 loops=1)
>   Buffers: shared hit=42284
>   CTE ja_jobs
> ->  HashAggregate  (cost=93774.31..93779.16 rows=485 width=20) (actual
> time=207.235..228.141 rows=16320 loops=1)
>   Buffers: shared hit=42284
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=882.98..93697.86
> rows=30578 width=20) (actual time=21.942..133.380 rows=48472 loops=1)
> Recheck Cond: (("clientid" = 14635) AND ("time_job" >
> 1436731799))
> Buffers: shared hit=42284
> ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..875.34 rows=30578 width=0) (actual time=12.389..12.389
> rows=48472 loops=1)
>   Index Cond: (("clientid" = 14635) AND ("time_job" >
> 1436731799))
>   Buffers: shared hit=243
>   ->  Sort  (cost=10.92..10.93 rows=1 width=218) (actual
> time=284.289..284.293 rows=5 loops=1)
> Sort Key: "ja_jobs"."title"
> Sort Method: quicksort  Memory: 25kB
> Buffers: shared hit=42284
> ->  CTE Scan on "ja_jobs"  (cost=0.00..10.91 rows=1 width=218)
> (actual time=236.248..284.263 rows=5 loops=1)
>   Filter: (("title")::"text" ~~* '%To Electrical%'::"text")
>   Rows Removed by Filter: 16315
>   Buffers: shared hit=42284
> Total runtime: 287.633 ms


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Lucas Possamai
>
>
>>
> The main problem is WHERE title ILIKE '%RYAN WER%'
> When you put a % on the left of the text, there is no way to optimize
> that, so yes, it will be slow.
>
> If you can eliminate the leading percent and just have trailing, it will
> be much faster.
>
>

Hmm.. yep.. I suppose I can do that.

But, taking the left % off, the query is still slow:

Limit  (cost=418.57..418.58 rows=1 width=20) (actual
> time=4439.367..4439.381 rows=1 loops=1)
>   Buffers: shared hit=6847
>   ->  Unique  (cost=418.57..418.58 rows=1 width=20) (actual
> time=4439.363..4439.374 rows=1 loops=1)
> Buffers: shared hit=6847
> ->  Sort  (cost=418.57..418.58 rows=1 width=20) (actual
> time=4439.360..4439.365 rows=4 loops=1)
>   Sort Key: "title"
>   Sort Method: quicksort  Memory: 25kB
>   Buffers: shared hit=6847
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=414.55..418.56
> rows=1 width=20) (actual time=4439.312..4439.329 rows=4 loops=1)
> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
> 1457826264) AND (("title")::"text" ~~* 'RYAN SHOWER%'::"text"))
> Buffers: shared hit=6847
> ->  BitmapAnd  (cost=414.55..414.55 rows=1 width=0)
> (actual time=4439.280..4439.280 rows=0 loops=1)
>   Buffers: shared hit=6843
>   ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..50.67 rows=1711 width=0) (actual time=0.142..0.142 rows=795
> loops=1)
> Index Cond: (("clientid" = 31239) AND
> ("time_job" > 1457826264))
> Buffers: shared hit=8
>   ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin"
>  (cost=0.00..363.62 rows=483 width=0) (actual time=4439.014..4439.014
> rows=32 loops=1)
> Index Cond: (("title")::"text" ~~* 'RYAN
> SHOWER%'::"text")
> Buffers: shared hit=6835
> Total runtime: 4439.427 ms


Here [1] it appears to be working even with two %.. But it's not for me

[1] https://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/


Any idea? lol


[GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Lucas Possamai
Hi there!

I've got a simple but slow query:

 SELECT DISTINCT title
> FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
> and clientid = 31239  AND time_job > 1457826264
> order BY title
> limit 10


Explain analyze:

Limit  (cost=5946.40..5946.41 rows=1 width=19) (actual
> time=2746.759..2746.772 rows=1 loops=1)
>   ->  Unique  (cost=5946.40..5946.41 rows=1 width=19) (actual
> time=2746.753..2746.763 rows=1 loops=1)
> ->  Sort  (cost=5946.40..5946.41 rows=1 width=19) (actual
> time=2746.750..2746.754 rows=4 loops=1)
>   Sort Key: "title"
>   Sort Method: quicksort  Memory: 25kB
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=49.02..5946.39
> rows=1 width=19) (actual time=576.275..2746.609 rows=4 loops=1)
> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
> 1457826264))
> Filter: (("title")::"text" ~~* '%RYAN WER%'::"text")
> Rows Removed by Filter: 791
> ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..49.02 rows=1546 width=0) (actual time=100.870..100.870
> rows=795 loops=1)
>   Index Cond: (("clientid" = 31239) AND
> ("time_job" > 1457826264))
> Total runtime: 2746.879 ms


Then, I created a trgm index:

CREATE INDEX ix_ja_jobs_trgm_gin ON public.ja_jobs USING gin (title
gin_trgm_ops);

Explain analyze after the index (yes, I ran ANALYZE):

Limit  (cost=389.91..389.91 rows=1 width=20) (actual
> time=3720.511..3720.511 rows=0 loops=1)
>   ->  Unique  (cost=389.91..389.91 rows=1 width=20) (actual
> time=3720.507..3720.507 rows=0 loops=1)
> ->  Sort  (cost=389.91..389.91 rows=1 width=20) (actual
> time=3720.505..3720.505 rows=0 loops=1)
>   Sort Key: "title"
>   Sort Method: quicksort  Memory: 25kB
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=385.88..389.90
> rows=1 width=20) (actual time=3720.497..3720.497 rows=0 loops=1)
> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
> 1457826264) AND (("title")::"text" ~~ '%RYAN WER%'::"text"))
> Rows Removed by Index Recheck: 4
> ->  BitmapAnd  (cost=385.88..385.88 rows=1 width=0)
> (actual time=3720.469..3720.469 rows=0 loops=1)
>   ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..50.00 rows=1644 width=0) (actual time=0.142..0.142 rows=795
> loops=1)
> Index Cond: (("clientid" = 31239) AND
> ("time_job" > 1457826264))
>   ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin"
>  (cost=0.00..335.63 rows=484 width=0) (actual time=3720.213..3720.213
> rows=32 loops=1)
> Index Cond: (("title")::"text" ~~ '%RYAN
> WER%'::"text")
> Total runtime: 3720.653 ms



So... the query is still slow.
Do you guys know what can be done, related to the ILIKE?

cheers
Lucas


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
On 11 May 2016 at 09:50, Melvin Davidson  wrote:

> My bad,  WHERE indexrelname = ' {YOUR INDEX NAME } ';
>
>

Oh! ok...

public | ja_feedlog | ix_ja_feedlog_visitids | 1 | 94 GB | 1939 MB | 0 |
CREATE INDEX "ix_ja_feedlog_visitids" ON "ja_feedlog" USING "btree" ("visitid")

public | ja_feedlog | ix_ja_feedlog_visitids | 1 | 94 GB | 1939 MB | 0 |
CREATE INDEX "ix_ja_feedlog_visitids" ON "ja_feedlog" USING "btree" ("visitid")


Two rows for the same index.
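One plausible cause of the duplicates (an assumption on my part, since the
query joins on names only): a table or index with the same name elsewhere
makes the name-based joins multiply rows. Qualifying the joins by schema
should collapse them; the join part of the query would become:

FROM pg_stat_user_indexes AS idstat
JOIN pg_indexes ON indexrelname = indexname
               AND idstat.schemaname = pg_indexes.schemaname
JOIN pg_stat_user_tables AS tabstat ON idstat.relname = tabstat.relname
                                   AND idstat.schemaname = tabstat.schemaname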


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
>
>
> And what happens if you run this query?
>
> SELECT idstat.schemaname AS schema,
>idstat.relname AS table_name,
>indexrelname AS index_name,
>idstat.idx_scan AS times_used,
>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
> '.' || quote_ident(idstat.relname))) AS table_size,
>pg_size_pretty(pg_relation_size(quote_ident(idstat.schemaname) ||
> '.' || quote_ident(indexrelname))) AS index_size,
>n_tup_upd + n_tup_ins + n_tup_del as num_writes,
>indexdef AS definition
> FROM pg_stat_user_indexes AS idstat
> JOIN pg_indexes ON indexrelname = indexname
> JOIN pg_stat_user_tables AS tabstat ON idstat.relname = tabstat.relname
> WHERE indexrelname = ' {YOUR QUERY NAME } ';
>
>
Sorry, not sure what I should put into the WHERE clause...

But taking the WHERE off, it returns 600 rows.


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
>
>
>>
> >Some time ago I changed the pg_stat_temp directory from
> /var/lib/pgsq/whatever to /tmp
> Have you checked the postgres log to see if there are any errors about it
> not being able to write to the pg_stat_temp dir?
>
>
Yep no errors =\


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
>
>
> Sorry, I was too busy looking at the content.
>
> Has the size / # of rows changed recently? If the planner thinks it can
> load all the rows faster, it will use a seqscan regardless of whether you
> have an index.
>
> If that is the case, you can force index use by doing a
>
> SET enable_seqscan = off
>
> before executing the query.
>

Hmm... ok... but the situation is:

1 - I dropped the index
2 - Found a very slow query
3 - The "WHERE" clause was using the index that I've just dropped
4 - I ran the query in my test environment (Same DB as prod) with explain
analyze to see if the query was indeed using the index I've dropped
5 - Yes, the query was using the index
6 - re-created the index

7 - The total time went from 2000ms to 200ms

So, I don't think the index was actually unused.
I believe the stats are not working; I just don't know how to confirm
that, as I have nothing in my logs.


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
On 11 May 2016 at 09:06, Lucas Possamai  wrote:

>
>>>
>> My crystal ball is not working,  you have a PostgreSQL version?
>>
>
> Maybe you should have a look on the subject of this email...
>
>
>>
>> in postgresql.conf are track_activities and track_counts both on?
>>
>
> yes
>
>
>>
>> Did you ANALYZE the table after you re-added the index?
>>
>
> Yes
>


Also.. just to let you guys know:

Some time ago I changed the pg_stat_temp directory from
/var/lib/pgsq/whatever to /tmp


postgresql.conf:

> stats_temp_directory = '/tmp/pg_stat_tmp'

I tested it and it's "working":

> #su - postgres
> #cd /tmp/pg_stat_tmp
> #touch test.html


ls -la /tmp/pg_stat_tmp:

> -rw---  1 postgres postgres 263110 May 10 21:12 pgstat.stat


Re: [GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
>
>
>>
> My crystal ball is not working,  you have a PostgreSQL version?
>

Maybe you should have a look at the subject of this email...


>
> in postgresql.conf are track_activities and track_counts both on?
>

yes


>
> Did you ANALYZE the table after you re-added the index?
>

Yes


[GENERAL] Unused indexes - PostgreSQL 9.2

2016-05-10 Thread Lucas Possamai
Hi all,

I ran a query to search for unused indexes and reclaim some free space in
my DB:

SELECT
> --*,
> relid::regclass AS table,
> indexrelid::regclass AS index,
> --pg_size_pretty(pg_relation_size(indexrelid::regclass)) AS
> index_size,
> pg_relation_size(indexrelid::regclass) AS index_size,
> idx_tup_read,
> idx_tup_fetch,
> idx_scan
> FROM
> pg_stat_user_indexes
> JOIN pg_index USING (indexrelid)
> WHERE
> idx_scan = 0
> AND indisunique IS FALSE;


The query returns the columns:
idx_tup_read,
 idx_tup_fetch,
 idx_scan

*What I did was:*

1 - Ran the query above
2 - Selected one index and dropped it
3 - Found some slow queries... when I looked, a query was using one of the
indexes I'd dropped
4 - Re-created the index
5 - Ran the query with explain analyze (the query was indeed hitting the
index)
6 - Re-ran the first query above, and still, according to those statistics,
the index wasn't being used
7 - ?


So, my problem is: are the statistics not running? What happened to the
statistics?

Do you guys know how I can update the stats?
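For reference, a couple of built-ins that can help when you suspect stale
counters (use pg_stat_reset with care; it wipes all counters for the
current database):

SELECT pg_stat_clear_snapshot();   -- discard this session's cached stats snapshot
SELECT pg_stat_reset();            -- reset all statistics counters for this database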