Re: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-07-18 Thread Chris
Hi Dmitry,

I think this is a wonderful idea, but it will be tough. Let me share my experience:
—dbeaver:
It is multi-platform, so it works everywhere, but it has no standout features,
and it has quite a few bugs (our company has fixed some of them).
dbeaver is probably the most popular open source app for pg now; I know many
people who use it.
—pgadmin4:
I don't like a web client for a database. I have used it, and it is good in
that it has simple monitoring windows.
For now, I use JetBrains' product DataGrip. It is also written in Java, but it
is better than dbeaver.

Best Wishes,
Chris


> On 17 Jul 2018, at 06:21, Tim Cross  wrote:
> 
> 
> Dmitry Igrishin <dmit...@gmail.com> writes:
> 
>> Mon, 16 Jul 2018 at 1:14, Tim Cross :
>> 
>>> 
>>> Your idea to make it integrate with user's preferred editor is a good
>>> idea as editors are like opinions and certain anatomical parts -
>>> everyone has one! Finding an appropriate API to do this will be a
>>> challenge.
>>> 
>> I see two options here: the core of the tool acts as a long-lived server or
>> as a short-lived console application which communicates with the editor's
>> plugin via stdin/stdout.
>> Btw, what text editor do you prefer? :-)
>> 
> 
> Most of the time, I use Emacs on either Linux or macOS. With the support
> it has for running a psql process, it works pretty well for most
> things. There are pretty reasonable packages for writing SQL and
> 'static' completion. Getting things setup can take a bit of effort, but
> once it is working, it tends to work pretty well.
> 
> The two areas where it falls short are dynamic completion, i.e. completing on
> objects the user has created such as table names, column names, function
> names etc., and decent result formatting.
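> 
> (For what it's worth, a sketch of the kind of catalog query a
> dynamic-completion backend would need to run; the schema filter and the
> relkinds here are just assumptions for illustration:
> 
>   SELECT c.relname AS table_name, a.attname AS column_name
>     FROM pg_catalog.pg_class c
>     JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
>     JOIN pg_catalog.pg_attribute a ON a.attrelid = c.oid
>    WHERE n.nspname = 'public'
>      AND c.relkind IN ('r', 'v', 'm')
>      AND a.attnum > 0
>      AND NOT a.attisdropped
>    ORDER BY c.relname, a.attnum;
> 
> Result formatting is a different problem entirely.)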
> 
>>> 
>>> I seem to remember reading somewhere that Oracle was going to remove
>>> swing from the core java library. I've always been a little disappointed
>>> with Java UIs and found they don't give the cross-platform support that
>>> Java originally promised, plus OSX/macOS has not made Java as welcome as
>>> it used to be. If you do choose Java, it will need to work under openJDK
>>> as this is what most Linux users will have installed.
>>> 
>> For now, the possible options for the GUI part are Qt, wxWidgets or FLTK,
>> or even Electron.
> 
> I would look at either Qt or Electron (I believe Visual Studio Code is
> written using Electron, which is the other editor I use from time to
> time).
> 
> There was an Emacs project called Eclaim (I think) which interfaced with
> Eclipse services in order to provide dynamic completion when doing
> Java. That could be worth checking out for ideas to borrow.
> 
> Tim
> 
> -- 
> Tim Cross



cache lookup failed for attribute 1 of relation XXXXXX

2018-07-18 Thread Alessandro Aste
Hi, we have a logical backup process that has run every night for 5+ years.
We use this backup to restore a non-production environment, and it is taken
with pg_dump in parallel mode in directory format.
Postgres version is 9.6.6


Tonight schedule failed with the following error:


pg_dump: [archiver (db)] query failed: ERROR:  cache lookup failed for
attribute 1 of relation 2223152859

pg_dump: [archiver (db)] query was: SELECT t.tableoid, t.oid, t.relname
AS indexname, pg_catalog.pg_get_indexdef(i.indexrelid) AS indexdef,
t.relnatts AS indnkeys, i.indkey, i.indisclustered, i.indisreplident,
t.relpages, c.contype, c.conname, c.condeferrable, c.condeferred,
c.tableoid AS contableoid, c.oid AS conoid,
pg_catalog.pg_get_constraintdef(c.oid, false) AS condef, (SELECT spcname
FROM pg_catalog.pg_tablespace s WHERE s.oid = t.reltablespace) AS
tablespace, t.reloptions AS indreloptions FROM pg_catalog.pg_index i JOIN
pg_catalog.pg_class t ON (t.oid = i.indexrelid) LEFT JOIN
pg_catalog.pg_constraint c ON (i.indrelid = c.conrelid AND i.indexrelid =
c.conindid AND c.contype IN ('p','u','x')) WHERE i.indrelid =
'2223152859'::pg_catalog.oid AND i.indisvalid AND i.indisready ORDER BY
indexname

We attempted to run the backup manually a second time just a couple of minutes
later and it succeeded with no issues; the restore of the non-production
environment succeeded too. Not sure what this error is, though. I have never
seen it before.

An index with that indexrelid does not exist:

SELECT * FROM pg_catalog.pg_index WHERE indexrelid = '2223152859';
(0 rows)
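
For what it's worth, a quick way to also check whether the relation itself
(and its attribute rows) still exists (a sketch, using the OID from the error
message):

SELECT oid, relname, relkind
  FROM pg_catalog.pg_class
 WHERE oid = '2223152859'::pg_catalog.oid;

SELECT attnum, attname
  FROM pg_catalog.pg_attribute
 WHERE attrelid = '2223152859'::pg_catalog.oid
 ORDER BY attnum;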

Any idea about what happened? In the server logs I see only one occurrence
of that error and it is related to the failed pg_dump:

2018-07-19 01:04:26 GMT [127.0.0.1(52498)] [50816]: [13-1]
db=cmdv3,user=postgres ERROR:  cache lookup failed for attribute 1 of
relation 2223152859


Thanks much in advance


Re: User documentation vs Official Docs

2018-07-18 Thread Stephen Frost
Greetings Vick,

* Vick Khera (vi...@khera.org) wrote:
> I didn't know it existed either, mostly because I know how to ask google to
> do things, and the things I need to know are not covered here (yet). This
> does seem to me to be the ideal place to add more how-to documentation to
> augment all the reference docs we have.

Agreed.  It'd be great to have more tutorials in our official
documentation.

> As for "strong SEO", I think the official manual is already the top hit for
> almost everything postgres-related that I search for, so it seems to have
> good SEO. The only big improvement would be somehow to tell google to only show
> me the newest version of the manual, not all of the older ones too, for the
> same page.

Agreed, and there's ongoing work to improve the situation regarding the
different versions of the manual and getting Google to show the
"current" URL in preference to the other versions.

Thanks!

Stephen




Re: Shared buffers increased but cache hit ratio is still 85%

2018-07-18 Thread Tomas Vondra
On 07/18/2018 10:43 AM, Andreas Kretschmer wrote:
> 
> 
> On 18.07.2018 at 10:26, Hans Schou wrote:
>> Am I doing something wrong or should some history be cleared?
> 
> Reset the stats for that database. You can check the date of last reset
> with:
> 
> select stats_reset from pg_stat_database where datname = 'database_name';
> 
> and reset it with:
> 
> pg_stat_reset()
> 
> Reset all statistics counters for the current database to zero
> (requires superuser privileges by default, but EXECUTE for this function
> can be granted to others.)
> 

It might be better to note current values of the counters somewhere, and
compute a delta later (and use that to compute the cache hit ratio). The
issue is that pg_stat_reset() throws away all sorts of interesting and
important stats, including those driving autovacuum/autoanalyze.
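
A sketch of that approach (the snapshot table name here is just an example):

  -- take a snapshot of the counters now ...
  CREATE TABLE stats_snapshot AS
    SELECT now() AS taken_at, datname, blks_read, blks_hit
      FROM pg_stat_database;

  -- ... and later compute the hit ratio only for the interval since the snapshot
  SELECT d.datname,
         round(100.0 * (d.blks_hit - s.blks_hit)
               / nullif((d.blks_hit - s.blks_hit) + (d.blks_read - s.blks_read), 0), 2)
           AS cachehitratio_since_snapshot
    FROM pg_stat_database d
    JOIN stats_snapshot s USING (datname);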


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Slow WAL recovery for DROP TABLE

2018-07-18 Thread Sherrylyn Branchaw
> Hi, I have also reported a similar problem in the hackers mailing list,
but particularly on TRUNCATE TABLE.
https://www.postgresql.org/message-id/flat/D09B13F772D2274BB348A310EE3027C62FD6E6%40g01jpexmbkw24

Ooh, interesting. I admit I did not include TRUNCATE in my testing.

> The problem lies with the standby server’s replay as it does separate
scanning of the whole shared buffer for each DROP/TRUNCATE TABLE in order
to check if the table-to-delete is cached in shared buffer. Therefore, it
will take a long recovery time and sometimes fail for large tables
depending on shared_buffer size.

Also very interesting. We only (?) have 8 GB of shared buffers, and I see
from your message that you had 300 GB. All of our tables, both in prod and
in my reproduction, were empty, but there were hundreds of thousands of
them.

> The main problem here is the scanning of shared_buffers, which not only
affects drop/truncate table, but also drop database and vacuum as well.

I wondered about that. I didn't have any problem with a single drop
database, but the database dropped was a small one (albeit one with a few
hundred thousand empty tables), and I neither tested dropping a large
database nor dropping 100,000 databases. I didn't test vacuuming, but we do
heavy vacuuming on all our primaries frequently, and...hmm. Regular
vacuuming doesn't cause any problems that have made it onto my radar, but
VACUUM FULLs can cause WAL files to pile up on the primary's pg_xlog before
getting archived. I never investigated that, just throttled my VACUUM
FULLs, because they're only ever run manually. I will keep an eye on the
recovery time of individual files the next time I have to do this, which
will probably be soon.

> But I think any working minor solutions/fixes from developers are also
welcome, such as the recent committed patch for the multiple dropped tables
per transaction with large shared_buffers.

Agreed. Should I have sent or should I still send this to pgsql-hackers? I
wasn't sure, so I erred on the side of not bothering the developers until
I'd gotten some feedback here.

Best,
Sherrylyn


Re: Slow WAL recovery for DROP TABLE

2018-07-18 Thread Sherrylyn Branchaw
> There was a recent commit for a similar performance problem, which will
appear in 9.6.10.  But that was specifically for cases where there were
multiple dropped tables per transaction, and large shared_buffers.

Interesting, and good to know, thanks! I'm not sure we fall under either
(is 8 GB large? It's larger than the default, but I always thought large
was defined as "more than 8GB" for this setting), but it sounds like this
sort of problem is on the developers' radars. It's possible up to 4 tables
were dropped per transaction in prod, but I don't know if that's enough to
count as "multiple", and in testing, I reproduced the problem with 1 drop
per transaction.

> I can't reproduce your single-drop-per-transaction problem.  The replica
has no problem keeping up with the master.

It's possible that the problem only occurs when the replica is on inferior
hardware. I was unable to test equal servers in the time I had. I noticed
that when the superior server was the replica, it was able to keep up with
the inferior primary, but that dropping tables was the only action for which
the inferior server wasn't able to keep up as a standby, and the only action
for which the standalone outperformed the replica. I did not test truncates;
it's possible I would have seen the same problem with them.

> Can you share the reproduction scripts

For the table drops, I prepped by running these:

CREATE TABLE IF NOT EXISTS test1 (id int);
CREATE TABLE IF NOT EXISTS test2 (id int);
CREATE TABLE IF NOT EXISTS test3 (id int);
...
CREATE TABLE IF NOT EXISTS test10 (id int);

Then I dropped with these:

DROP TABLE IF EXISTS test1;
DROP TABLE IF EXISTS test2;
DROP TABLE IF EXISTS test3;
...
DROP TABLE IF EXISTS test10;

For the inserts, I did a

CREATE TABLE test (content1 TEXT, content2 TEXT, content3 TEXT);

followed by 300 of these statements, which insert a random string into each
of the three columns, in batches of 1 rows:

INSERT INTO test
SELECT array_to_string(ARRAY(SELECT chr((48 + round(random() * 9)) :: integer)
                             FROM generate_series(1,135)), ''),
       array_to_string(ARRAY(SELECT chr((48 + round(random() * 9)) :: integer)
                             FROM generate_series(1,135)), ''),
       array_to_string(ARRAY(SELECT chr((48 + round(random() * 9)) :: integer)
                             FROM generate_series(1,135)), '')
FROM (SELECT generate_series(1,1)) foo;

I loaded and timed each script as a file using

time psql -d mydb -f drop_tables.sql
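
For what it's worth, the empty test tables themselves can be generated with a
DO loop instead of a long script file (a sketch; the count is arbitrary):

DO $$
BEGIN
  FOR i IN 1..100000 LOOP
    EXECUTE format('CREATE TABLE IF NOT EXISTS test%s (id int)', i);
  END LOOP;
END;
$$;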

> any non-default config settings?  Especially the setting of
shared_buffers (on both master and replica, if different)

Our shared_buffers setting is 8 GB on all nodes.

In prod, we had a primary and two standbys. One standby was very similar to
the primary in hardware, but older and with fewer CPUs. The other standby
was far inferior in hardware. For testing, I used the two standbys, and was
able to reproduce when I made the superior standby the primary.

These are the non-default settings on the primary and the standby of
comparable hardware (other than changing file and directory paths, which
shouldn't affect anything):

listen_addresses = '*'
log_destination = 'stderr, syslog'
log_filename = 'postgresql-%Y-%m-%d.log'
log_line_prefix = '%p [%m]: %u %d %q %h %a %v %x'
syslog_facility = 'local1'
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_temp_files = 0
log_timezone = 'US/Central'
log_statement = 'ddl'
track_functions = pl
track_activity_query_size = 4096
timezone = US/Central
client_encoding = UTF8
lc_messages = 'C'
lc_monetary = 'C'
lc_numeric = 'C'
lc_time = 'C'
max_connections = 1024
tcp_keepalives_idle = 7200
tcp_keepalives_interval = 75
tcp_keepalives_count = 9
work_mem = '48 MB'
maintenance_work_mem = '1 GB'
max_locks_per_transaction = 8192
random_page_cost = 2.0
effective_cache_size = '94GB'
log_autovacuum_min_duration = 10s
autovacuum_naptime = 2min
autovacuum_vacuum_threshold = 500
autovacuum_analyze_threshold = 250
autovacuum_freeze_max_age = 10
autovacuum_vacuum_cost_delay = 10ms
autovacuum_vacuum_scale_factor = 0.01
autovacuum_analyze_scale_factor = 0.01
wal_buffers = 16MB
bgwriter_lru_maxpages = 250
max_wal_senders = 5
wal_keep_segments = 256
hot_standby = on
log_min_duration_statement = 2s
wal_level = replica
wal_log_hints = on
archive_mode = on
archive_command = 'ssh postgres@backupserver "test ! -f /opt/backup/db1/%f"
&& rsync -q %p postgres@backupserver:/opt/backup/db1/%f'
max_parallel_workers_per_gather = 4
max_worker_processes = 16

The only different setting on the second standby is effective_cache_size,
which is 24 GB instead of 94 GB.

To rule out fetching from the remote archive as the bottleneck, I tried
scp-ing a bunch of WAL files to the standby and resetting the restore
command to replay from the local archive. Same performance problem, and
only when dropping tables.

Best,
Sherrylyn


RES: Can't compile postgresql 11 on FreeBSD 11.1

2018-07-18 Thread Márcio Antônio Sepp


Now to the list...  


> >> I'm trying to compile PostgreSQL 11beta2 but these errors occur:
> >> checking readline.h usability... no
> 
> > Looks like you need whatever is the FreeBSD equivalent of readline-
> dev(el).
> 
> AFAICT FreeBSD doesn't do things that way.  On a nearby machine, I see
> 
> $ pkg which /usr/local/include/readline/readline.h
> /usr/local/include/readline/readline.h was installed by package
> readline-6.3.8
> 
> What's more likely the problem is that FreeBSD insists on installing 
> packages under /usr/local, but it does *not* set that up to be part of 
> gcc's default search paths.  (Security 1, usability 0.)  You need 
> these configure flags to do much of anything on that platform:
> 
>   --with-includes=/usr/local/include --with-libs=/usr/local/lib


That is right. After including the flags above, everything works fine.
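
For reference, the full configure invocation on FreeBSD then looks something
like this (a sketch; add whatever other options you normally use):

  ./configure --with-includes=/usr/local/include --with-libs=/usr/local/lib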

Thanks so much.





Re: Postgresql & PGPool packages minor version different on Standby server

2018-07-18 Thread Ibrahim Edib Kokdemir
Hi Vikas,
For the postgres service, we have the same case (each cluster member has a
different minor version of 9.6) and there is no problem with that.
But be careful about extensions, because sometimes an extension can require
an upgrade of its db structure, otherwise it won't work. So if there is a
case like the one I describe, you must upgrade the extension before putting
the server into production.
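
A sketch of that check and upgrade (the extension name is just an example):

  -- compare the installed version with the packaged default version
  SELECT extname, extversion FROM pg_extension;
  SELECT name, default_version, installed_version FROM pg_available_extensions;

  -- then, if needed:
  ALTER EXTENSION pg_stat_statements UPDATE;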

I don't know about pgpool.

Regards,
İbrahim.

On Wed, 18 Jul 2018, 17:42 Vikas Sharma,  wrote:

> Hi All,
>
> We have Postgresql 9.5 Cluster with streaming replication and pgpool. The
> version of Postgres is 9.5.5 and Pgpool-II version 3.2.15.
>
> There is now a hardware issue with the standby machine and it won't start up,
> so we are building a new standby machine.
>
> My question is about the minor version of postgresql 9.5 available now. On
> the postgresql repository site, I can't see postgres 9.5.5 and Pgpool-II-3.2.15;
> instead there are 9.5.13 and PGpool-II-3.5.15.
>
> Is it safe and fine to have postgres 9.5.5 and pgpool-II-3.2.15 on Master
> & postgres 9.5.13 and Pgpool-II-3.5.15 on Standby ?
> Please let me know.
>
> I know postgres doesn't keep the minor version in the PG_VERSION file in the
> data directory, so should it be OK to use different minor versions on master &
> standby?
>
> Thanks in advance.
>
> Regards
> Vikas
>


Postgresql & PGPool packages minor version different on Standby server

2018-07-18 Thread Vikas Sharma
Hi All,

We have Postgresql 9.5 Cluster with streaming replication and pgpool. The
version of Postgres is 9.5.5 and Pgpool-II version 3.2.15.

There is now a hardware issue with the standby machine and it won't start up,
so we are building a new standby machine.

My question is about the minor version of postgresql 9.5 available now. On
the postgresql repository site, I can't see postgres 9.5.5 and Pgpool-II-3.2.15;
instead there are 9.5.13 and PGpool-II-3.5.15.

Is it safe and fine to have postgres 9.5.5 and pgpool-II-3.2.15 on Master &
postgres 9.5.13 and Pgpool-II-3.5.15 on Standby ?
Please let me know.

I know postgres doesn't keep the minor version in the PG_VERSION file in the
data directory, so should it be OK to use different minor versions on master &
standby?

Thanks in advance.

Regards
Vikas


Re: A bit confused about "pgsql_tmp" vs "temp tablespace"

2018-07-18 Thread Adrian Klaver

On 07/18/2018 06:57 AM, Thomas Kellerer wrote:

Adrian Klaver wrote on 18.07.2018 at 15:06:

In the chapter "Database File layout" the pgsql_tmp is explained as follows:

     Temporary files (for operations such as sorting more data than can fit in 
memory)
     are created within PGDATA/base/pgsql_tmp, or within a pgsql_tmp 
subdirectory of
     a tablespace directory

However the documentation for "temp_tablespaces" states:

     Temporary files for purposes such as sorting large data sets are also 
created
     in these tablespaces.


How do these two things relate to each other?

Does this mean that if I do not explicitly create a dedicated "temp tablespace"
then the pgsql_tmp subdirectory is used?
But _if_ I do create a temp tablespace (by creating one, and adding it to
temp_tablespaces) then the sorting is done *there*?


Yes, for those objects that do not have a tablespace specified in their CREATE 
statement.



So far I thought that a temp tablespace is only used for temporary tables (and 
indexes on them) but that paragraph in the
temp_tablespaces documentation seems to indicate otherwise.


The Database File Layout section you quoted above says the same
thing. Basically setting temp_tablespaces just overrides where temp
objects and operation files are placed when a tablespace is not
specified in their creation.


Thanks.

I understand the relation between explicitly CREATEd objects and the temp 
tablespace(s).

But what about the (temp) space needed for e.g. sorting, grouping or 
intermediate results from CTEs or derived tables?
Is that also controlled through the temp_tablespaces?


Yes, all setting temp_tablespaces from '' to some_tablespace(s) does is
redirect the creation of unspecified temp files from the default
PGDATA/base/pgsql_tmp location to the named some_tablespace(s)/pgsql_tmp.


To verify this, create a tablespace, add it to temp_tablespaces, then do
temp operations and look at the_tablespace/pgsql_tmp/.
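
A concrete sketch of that verification (paths, names and sizes are just
examples; the tablespace directory must already exist and be owned by
postgres):

  CREATE TABLESPACE fast_temp LOCATION '/nvme/pg_fast_temp';
  SET temp_tablespaces = 'fast_temp';
  SET work_mem = '4MB';   -- keep it small so the sort below spills to disk
  SELECT count(*)
    FROM (SELECT g FROM generate_series(1, 10000000) g ORDER BY random()) s;
  -- while that runs, temp files should appear under
  -- /nvme/pg_fast_temp/PG_<version>_<catversion>/pgsql_tmp/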
--
Adrian Klaver
adrian.kla...@aklaver.com



Re: A bit confused about "pgsql_tmp" vs "temp tablespace"

2018-07-18 Thread Tom Lane
Thomas Kellerer  writes:
> But what about the (temp) space needed for e.g. sorting, grouping or 
> intermediate results from CTEs or derived tables? 
> Is that also controlled through the temp_tablespaces? 

Yes.

regards, tom lane



Re: A bit confused about "pgsql_tmp" vs "temp tablespace"

2018-07-18 Thread Thomas Kellerer
Adrian Klaver wrote on 18.07.2018 at 15:06:
>> In the chapter "Database File layout" the pgsql_tmp is explained as follows:
>>
>>     Temporary files (for operations such as sorting more data than can fit 
>> in memory)
>>     are created within PGDATA/base/pgsql_tmp, or within a pgsql_tmp 
>> subdirectory of
>>     a tablespace directory
>>
>> However the documentation for "temp_tablespaces" states:
>>
>>     Temporary files for purposes such as sorting large data sets are also 
>> created
>>     in these tablespaces.
>>
>>
>> How do these two things relate to each other?
>>
>> Does this mean that if I do not explicitly create a dedicated "temp
>> tablespace" then the pgsql_tmp subdirectory is used?
>> But _if_ I do create a temp tablespace (by creating one, and adding it to
>> temp_tablespaces) then the sorting is done *there*?
> 
> Yes, for those objects that do not have a tablespace specified in their 
> CREATE statement.
>
>>
>> So far I thought that a temp tablespace is only used for temporary tables 
>> (and indexes on them) but that paragraph in the
>> temp_tablespaces documentation seems to indicate otherwise.
> 
> The Database File Layout section you quoted above says the same
> thing. Basically setting temp_tablespaces just overrides where temp
> objects and operation files are placed when a tablespace is not
> specified in their creation.

Thanks.

I understand the relation between explicitly CREATEd objects and the temp 
tablespace(s).

But what about the (temp) space needed for e.g. sorting, grouping or 
intermediate results from CTEs or derived tables? 
Is that also controlled through the temp_tablespaces? 


 



Re: Is it ok to run vacuum full verbose command for live database for the tables which has more dead tuples?

2018-07-18 Thread Adrian Klaver

On 07/18/2018 02:13 AM, Raghavendra Rao J S V wrote:
We have thousands of tables, but around 20 to 40 of those tables are always
busy, and because of that those tables are bloating.


Define bloating?



In order to avoid this we are running a shell script which performs VACUUM
FULL on the tables which have more than ten thousand dead tuples.


Out of how many live tuples?
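
A quick way to see that is pg_stat_user_tables; a sketch, listing the biggest
offenders first:

  SELECT relname, n_live_tup, n_dead_tup,
         round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 2)
           AS dead_pct
    FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC
   LIMIT 40;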

While running this we stop all application processes and run VACUUM FULL on
the tables which have more dead tuples.


 1. Is it OK to run a *vacuum full verbose* command on a live database for
    the tables which have more dead tuples (greater than ten thousand)?
 2. Does it cause any *adverse* effect?


https://www.postgresql.org/docs/10/static/sql-vacuum.html

"FULL

Selects “full” vacuum, which can reclaim more space, but takes much 
longer and exclusively locks the table. This method also requires extra 
disk space, since it writes a new copy of the table and doesn't release 
the old copy until the operation is complete. Usually this should only 
be used when a significant amount of space needs to be reclaimed from 
within the table.

"




Please clarify. Thanks in advance.

--
Regards,
Raghavendra Rao J S V




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: A bit confused about "pgsql_tmp" vs "temp tablespace"

2018-07-18 Thread Adrian Klaver

On 07/18/2018 12:53 AM, Thomas Kellerer wrote:

In the chapter "Database File layout" the pgsql_tmp is explained as follows:

Temporary files (for operations such as sorting more data than can fit in 
memory)
are created within PGDATA/base/pgsql_tmp, or within a pgsql_tmp 
subdirectory of
a tablespace directory

However the documentation for "temp_tablespaces" states:

Temporary files for purposes such as sorting large data sets are also 
created
in these tablespaces.


How do these two things relate to each other?




Does this mean that if I do not explicitly create a dedicated "temp tablespace"
then the pgsql_tmp subdirectory is used?
But _if_ I do create a temp tablespace (by creating one, and adding it to
temp_tablespaces) then the sorting is done *there*?


Yes, for those objects that do not have a tablespace specified in their 
CREATE statement.




So far I thought that a temp tablespace is only used for temporary tables (and 
indexes on them) but that paragraph in the
temp_tablespaces documentation seems to indicate otherwise.


The Database File Layout section you quoted above says the same thing. 
Basically setting temp_tablespaces just overrides where temp objects and 
operation files are placed when a tablespace is not specified in their 
creation.




Background: we are setting up a new server that has a regular (large) SSD and 
very fast NVMe SSD (which is too small to hold all tables).
So we would like to put anything that is "temporary" onto the NVMe drive.

But I'm unsure if that is better done through a symlink for pgsql_tmp or a temp 
tablespace.
Currently no temporary tables are used (but that might change in the future), 
so only intermediate results (e.g. CTEs, sorting etc) would wind up there.

--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Is it ok to run vacuum full verbose command for live database for the tables which has more dead tuples?

2018-07-18 Thread Fabio Pardi
Hi Raghavendra


answers inline below:


On 18/07/18 11:13, Raghavendra Rao J S V wrote:
> We have thousands of tables, but around 20 to 40 of those tables are always
> busy, and because of that those tables are bloating.
>
> In order to avoid this we are running a shell script which performs VACUUM
> FULL on the tables which have more than ten thousand dead tuples. While
> running this we stop all application processes and run VACUUM FULL on the
> tables which have more dead tuples.
>
>  1. Is it OK to run a *vacuum full verbose* command on a live database for the
> tables which have more dead tuples (greater than ten thousand)?
>
nope

>  2. Does it cause any *adverse* effect?
>
>
exclusively locks the table. Recipe for disaster.



What's wrong with the normal operations of autovacuum?
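
A sketch of making autovacuum more aggressive for just the handful of hot
tables instead (the table name and the values here are examples only):

  ALTER TABLE busy_table SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_vacuum_threshold    = 1000,
    autovacuum_vacuum_cost_delay   = 2
  );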


regards,

fabio pardi



> Please clarify. Thanks in advance.
>
> -- 
> Regards,
> Raghavendra Rao J S V
>



Re: Shared buffers increased but cache hit ratio is still 85%

2018-07-18 Thread Hans Schou
On Wed, Jul 18, 2018 at 10:44 AM Andreas Kretschmer  wrote:

>
> pg_stat_reset()
>

Thanks, I guess we can see the result in a few days.

BTW, strange command: it only resets the current database and it can't take a
db as a parameter.


Is it ok to run vacuum full verbose command for live database for the tables which has more dead tuples?

2018-07-18 Thread Raghavendra Rao J S V
We have thousands of tables, but around 20 to 40 of those tables are always
busy, and because of that those tables are bloating.

In order to avoid this we are running a shell script which performs VACUUM
FULL on the tables which have more than ten thousand dead tuples. While
running this we stop all application processes and run VACUUM FULL on the
tables which have more dead tuples.


   1. Is it OK to run a *vacuum full verbose* command on a live database for
   the tables which have more dead tuples (greater than ten thousand)?
   2. Does it cause any *adverse* effect?


Please clarify. Thanks in advance.

-- 
Regards,
Raghavendra Rao J S V


Re: Shared buffers increased but cache hit ratio is still 85%

2018-07-18 Thread Andreas Kretschmer




On 18.07.2018 at 10:26, Hans Schou wrote:

Am I doing something wrong or should some history be cleared?


Reset the stats for that database. You can check the date of last reset 
with:


select stats_reset from pg_stat_database where datname = 'database_name';

and reset it with:

pg_stat_reset()

Reset all statistics counters for the current database to zero
(requires superuser privileges by default, but EXECUTE for this function
can be granted to others.)



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Shared buffers increased but cache hit ratio is still 85%

2018-07-18 Thread Hans Schou
Hi

I have a system with some databases and I have run the cache_hit_ratio.sql
script on it. It showed that the db acme777booking had a ratio of 85%. I then
changed the shared_buffers size from 0.5GB to 4GB, as the server has 16GB of
physical RAM. After 6 days of running I checked the ratio again and it is
still 85%.
Am I doing something wrong or should some history be cleared?


cache_hit_ratio.sql
    datname     | blks_read  |   blks_hit   | cachehitratio
----------------+------------+--------------+---------------
 acme777web     |   50225009 |   3157586919 |         98.43
 acmelog        |     462198 |     14332508 |         96.88
 acme777domain  | 7540616252 | 119574349075 |         94.07
 acme777booking |  337915568 |   1902310783 |         84.92
(4 rows)

pg_runtime.sql
           pg_start            |        runtime
-------------------------------+------------------------
 2018-07-12 06:49:48.654252+02 | 6 days 02:44:09.488978
(1 row)

get_version_num.sql
 Version text |  Num
--------------+-------
 9.1.9        | 90109
(1 row)

SELECT pg_stat_database.datname,
   pg_stat_database.blks_read,
   pg_stat_database.blks_hit,
   round((pg_stat_database.blks_hit::double precision
  / (pg_stat_database.blks_read
 + pg_stat_database.blks_hit
 +1)::double precision * 100::double precision)::numeric,
2) AS cachehitratio
   FROM pg_stat_database
  WHERE pg_stat_database.datname !~ '^(template(0|1)|postgres)$'::text
  ORDER BY round((pg_stat_database.blks_hit::double precision
 / (pg_stat_database.blks_read
+ pg_stat_database.blks_hit
+ 1)::double precision * 100::double
precision)::numeric, 2) DESC;

OS: Red Hat Enterprise Linux Server release 6.4 (Santiago)


A bit confused about "pgsql_tmp" vs "temp tablespace"

2018-07-18 Thread Thomas Kellerer
In the chapter "Database File layout" the pgsql_tmp is explained as follows:

   Temporary files (for operations such as sorting more data than can fit in 
memory) 
   are created within PGDATA/base/pgsql_tmp, or within a pgsql_tmp subdirectory 
of 
   a tablespace directory

However the documentation for "temp_tablespaces" states: 

   Temporary files for purposes such as sorting large data sets are also 
created 
   in these tablespaces.


How do these two things relate to each other?

Does this mean that if I do not explicitly create a dedicated "temp tablespace"
then the pgsql_tmp subdirectory is used?
But _if_ I do create a temp tablespace (by creating one, and adding it to
temp_tablespaces) then the sorting is done *there*?

So far I thought that a temp tablespace is only used for temporary tables (and 
indexes on them) but that paragraph in the 
temp_tablespaces documentation seems to indicate otherwise. 

Background: we are setting up a new server that has a regular (large) SSD and 
very fast NVMe SSD (which is too small to hold all tables). 
So we would like to put anything that is "temporary" onto the NVMe drive. 

But I'm unsure if that is better done through a symlink for pgsql_tmp or a temp 
tablespace. 
Currently no temporary tables are used (but that might change in the future), 
so only intermediate results (e.g. CTEs, sorting etc) would wind up there.