Re: ERROR: too many dynamic shared memory segments

2017-11-27 Thread Dilip Kumar
On Tue, Nov 28, 2017 at 4:18 AM, Thomas Munro  wrote:

> On Tue, Nov 28, 2017 at 10:05 AM, Jakub Glapa 
> wrote:
> > As for the crash. I dug up the initial log and it looks like a
> segmentation
> > fault...
> >
> > 2017-11-23 07:26:53 CET:192.168.10.83(35238):user@db:[30003]: ERROR:
> too
> > many dynamic shared memory segments
>
> I think there are two failure modes: one of your sessions showed the
> "too many ..." error (that's good, ran out of slots and said so and
> our error machinery worked as it should), and another crashed with a
> segfault, because it tried to use a NULL "area" pointer (bad).  I
> think this is a degenerate case where we completely failed to launch
> parallel query, but we ran the parallel query plan anyway and this
> code thinks that the DSA is available.  Oops.
>

 I think BitmapHeapScan should check whether the DSA is valid; if the DSA is
not valid then it should assume it's a non-parallel plan.

The attached patch should fix the issue.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


bug_fix_in_pbhs_when_dsa_not_initialized.patch
Description: Binary data


Re: Table and Index bloating

2017-11-27 Thread stevenchang1213
After reading the web pages, the index part should succeed, but tables may not
work if they have foreign key relations.
steven 
 Original message  From: Carlos Martinez  Date: 2017/11/27
23:13 (GMT+08:00)  To: pgsql-gene...@postgresql.org  Subject: Re: Table and Index
bloating
Hi.

Can we use pg_repack with standard community edition of postgresql? Yes.
and can it be used on the databases with streaming replication? Yes.

Best regards.

Carlos Martinez



On Mon, Nov 27, 2017 at 10:08 AM, Vikas Sharma  wrote:
> Thank you Stefan for the queries.
>
> Thanks you Carlos for pg_repack suggestion, Can we use pg_repack with
> standard community edition of postgresql? and can it be used on the
> databases with streaming replication?
>
> Regards
> Vikas Sharma
>
> On 27 November 2017 at 14:58, Carlos Martinez  wrote:
>>
>> Hi.
>>
>> You can consider use pg_repack to remove bloat
>> (http://reorg.github.io/pg_repack/)
>>
>> Best regards.
>>
>> Carlos Martinez
>>
>>
>> On Mon, Nov 27, 2017 at 4:17 AM, Stefan Fercot 
>> wrote:
>> > Hi,
>> >
>> > You can track that with some monitoring tools like check_pgactivity. The
>> > queries can be found here :
>> > https://github.com/ioguix/pgsql-bloat-estimation.
>> >
>> > Tables :
>> >
>> > https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql
>> > Indexes :
>> >
>> > https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat.sql
>> >
>> > Backups should also be part of your maintenance plan.
>> > Vacuum/Analyse/Reindex is already a good basis.
>> >
>> >
>> > Kind regards
>> >
>> >
>> > On 11/27/2017 10:10 AM, Vikas Sharma wrote:
>> >> Hi All,
>> >>
>> >> Could someone please provide the query/queries  to find table and
>> >> Index bloating in PgSql 9.3 onwards?
>> >> and are there any other maintenance tasks inPgSql except vacuum &
>> >> analyse.
>> >>
>> >> Regards
>> >> Vikas
>> >
>> > --
>> > Stefan FERCOT
>> > http://dalibo.com - http://dalibo.org
>> >
>> >
>>
>



Setting a serial column with serial object that has a name that is built dynamically

2017-11-27 Thread Robert Lakes
How do I set a serial column with the next value when the serial object has
a name that is built dynamically?

EXECUTE 'CREATE SEQUENCE '|| t_name || 'id_seq' || ' OWNED BY '||
t_name || '_cdc'||'.table_id';

EXECUTE 'ALTER TABLE ' || quote_ident(t_name || '_cdc') || ' ALTER
COLUMN table_id SET DEFAULT nextval(''tab_id_seq'');';


Re: ERROR: too many dynamic shared memory segments

2017-11-27 Thread Thomas Munro
On Tue, Nov 28, 2017 at 10:05 AM, Jakub Glapa  wrote:
> As for the crash. I dug up the initial log and it looks like a segmentation
> fault...
>
> 2017-11-23 07:26:53 CET:192.168.10.83(35238):user@db:[30003]: ERROR:  too
> many dynamic shared memory segments

Hmm.  Well this error can only occur in dsm_create() called without
DSM_CREATE_NULL_IF_MAXSEGMENTS.  parallel.c calls it with that flag
and dsa.c doesn't (perhaps it should, not sure, but that'd just change
the error message), so that means the error arose from dsa.c
trying to get more segments.  That would be when Parallel Bitmap Heap
Scan tried to allocate memory.

I hacked my copy of PostgreSQL so that it allows only 5 DSM slots and
managed to reproduce a segv crash by trying to run concurrent Parallel
Bitmap Heap Scans.  The stack looks like this:

  * frame #0: 0x0001083ace29
postgres`alloc_object(area=0x, size_class=10) + 25 at
dsa.c:1433
frame #1: 0x0001083acd14
postgres`dsa_allocate_extended(area=0x, size=72,
flags=4) + 1076 at dsa.c:785
frame #2: 0x000108059c33
postgres`tbm_prepare_shared_iterate(tbm=0x7f9743027660) + 67 at
tidbitmap.c:780
frame #3: 0x000108000d57
postgres`BitmapHeapNext(node=0x7f9743019c88) + 503 at
nodeBitmapHeapscan.c:156
frame #4: 0x000107fefc5b
postgres`ExecScanFetch(node=0x7f9743019c88,
accessMtd=(postgres`BitmapHeapNext at nodeBitmapHeapscan.c:77),
recheckMtd=(postgres`BitmapHeapRecheck at nodeBitmapHeapscan.c:710)) +
459 at execScan.c:95
frame #5: 0x000107fef983
postgres`ExecScan(node=0x7f9743019c88,
accessMtd=(postgres`BitmapHeapNext at nodeBitmapHeapscan.c:77),
recheckMtd=(postgres`BitmapHeapRecheck at nodeBitmapHeapscan.c:710)) +
147 at execScan.c:162
frame #6: 0x0001080008d1
postgres`ExecBitmapHeapScan(pstate=0x7f9743019c88) + 49 at
nodeBitmapHeapscan.c:735

(lldb) f 3
frame #3: 0x000108000d57
postgres`BitmapHeapNext(node=0x7f9743019c88) + 503 at
nodeBitmapHeapscan.c:156
   153 * dsa_pointer of the iterator state which will be used by
   154 * multiple processes to iterate jointly.
   155 */
-> 156 pstate->tbmiterator = tbm_prepare_shared_iterate(tbm);
   157 #ifdef USE_PREFETCH
   158 if (node->prefetch_maximum > 0)
   159
(lldb) print tbm->dsa
(dsa_area *) $3 = 0x
(lldb) print node->ss.ps.state->es_query_dsa
(dsa_area *) $5 = 0x
(lldb) f 17
frame #17: 0x00010800363b
postgres`ExecGather(pstate=0x7f9743019320) + 635 at
nodeGather.c:220
   217 * Get next tuple, either from one of our workers, or by running the plan
   218 * ourselves.
   219 */
-> 220 slot = gather_getnext(node);
   221 if (TupIsNull(slot))
   222 return NULL;
   223
(lldb) print *node->pei
(ParallelExecutorInfo) $8 = {
  planstate = 0x7f9743019640
  pcxt = 0x7f97450001b8
  buffer_usage = 0x000108b7e218
  instrumentation = 0x000108b7da38
  area = 0x
  param_exec = 0
  finished = '\0'
  tqueue = 0x
  reader = 0x
}
(lldb) print *node->pei->pcxt
warning: could not load any Objective-C class information. This will
significantly reduce the quality of type information available.
(ParallelContext) $9 = {
  node = {
prev = 0x00010855fb60
next = 0x00010855fb60
  }
  subid = 1
  nworkers = 0
  nworkers_launched = 0
  library_name = 0x7f9745000248 "postgres"
  function_name = 0x7f9745000268 "ParallelQueryMain"
  error_context_stack = 0x
  estimator = (space_for_chunks = 180352, number_of_keys = 19)
  seg = 0x
  private_memory = 0x000108b53038
  toc = 0x000108b53038
  worker = 0x
}

I think there are two failure modes: one of your sessions showed the
"too many ..." error (that's good, ran out of slots and said so and
our error machinery worked as it should), and another crashed with a
segfault, because it tried to use a NULL "area" pointer (bad).  I
think this is a degenerate case where we completely failed to launch
parallel query, but we ran the parallel query plan anyway and this
code thinks that the DSA is available.  Oops.

-- 
Thomas Munro
http://www.enterprisedb.com



Re: ERROR: too many dynamic shared memory segments

2017-11-27 Thread Jakub Glapa
Hi Thomas,
doubling max_connections has made the problem go away for now! Yay!

As for the crash. I dug up the initial log and it looks like a segmentation
fault...

2017-11-23 07:26:53 CET:192.168.10.83(35238):user@db:[30003]: ERROR:  too
many dynamic shared memory segments
2017-11-23 07:26:53 CET:192.168.10.83(35238):user@db:[30003]: STATEMENT:
 SELECT 
2017-11-23 07:26:55 CET::@:[29398]: LOG:  server process (PID 26992) was
terminated by signal 11: Segmentation fault
2017-11-23 07:26:55 CET::@:[29398]: DETAIL:  Failed process was running:
SELECT .
2017-11-23 07:26:55 CET::@:[29398]: LOG:  terminating any other active
server processes
2017-11-23 07:26:55 CET:192.168.10.83(35242):user@db:[30009]: WARNING:
 terminating connection because of crash of another server process
2017-11-23 07:26:55 CET:192.168.10.83(35242):user@db:[30009]: DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server process exited abnormally and
possibly corrupted shared memory.
2017-11-23 07:26:55 CET:192.168.10.83(35242):user@db:[30009]: HINT:  In a
moment you should be able to reconnect to the database and repeat your
command.
2017-11-23 07:26:55 CET:192.168.10.83(35268):user@db:[30097]: WARNING:
 terminating connection because of crash of another server process
2017-11-23 07:26:55 CET:192.168.10.83(35268):user@db:[30097]: DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server process exited abnormally and
possibly corrupted shared memory.
2017-11-23 07:26:55 CET:192.168.10.83(35268):user@db:[30097]: HINT:  In a
moment you should be able to reconnect to the database and repeat your
command.
2017-11-23 07:26:55 CET:192.168.10.83(46614):user@db:[31080]: WARNING:
 terminating connection because of crash of another server process
2017-11-23 07:26:55 CET:192.168.10.83(46614):user@db:[31080]: DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server process exited abnormally and
possibly corrupted shared memory.
2017-11-23 07:26:55 CET:192.168.10.83(46614):user@db: HINT:  In a moment
you should be able to reconnect to the database and repeat your command.
2017-11-23 07:26:55 CET:192.168.10.83(35238):user@db:[30003]: WARNING:
 terminating connection because of crash of another server process
2017-11-23 07:26:55 CET:192.168.10.83(35238):user@db:[30003]: DETAIL:  The
postmaster has commanded this server process to roll back the current
transaction and exit, because another server process exited abnormally and
possibly corrupted shared memory
2017-11-23 07:26:55 CET:192.168.10.83(35300):user@db:[30152]: FATAL:  the
database system is in recovery mode






--
regards,
pozdrawiam,
Jakub Glapa

On Mon, Nov 27, 2017 at 7:53 PM, Thomas Munro  wrote:

> On Tue, Nov 28, 2017 at 1:13 AM, Jakub Glapa 
> wrote:
> > The queries are somehow special.
> > We are still using the old style partitioning (list type) but we abuse
> it a
> > bit when querying.
> > When querying a set of partitions instead of doing it via parent table we
> > stitch together the required tables with UNION ALL (this requires less
> > locking) and was more performant in our benchmark (the new native
> > partitioning might improve this but we didn't research that yet).
> >
> > The queries are in form of
> > SELECT col1,col2,col3 FROM
> > (SELECT *
> >   FROM par1
> >   WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
> > '2017-11-26 23:59:59.999'
> >   UNION ALL SELECT *
> > FROM par2
> > WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend
> <=
> > '2017-11-26 23:59:59.999'
> >   UNION ALL SELECT *
> > FROM  par2
> > WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend
> <=
> > '2017-11-26 23:59:59.999'
> >
> >   WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
> > '2017-11-26 23:59:59.999'
> >
> > UNION ALL
> > ...
> > ) unionalias
> > WHERE 
> >
> >
> >
> > and indeed the query planner shows the usage of Bitmap Heap Scan
> > 
> >  ->  Subquery Scan on "*SELECT* 2"
> > (cost=3068.58..19793.94 rows=1 width=1284)
> >->  Gather
> (cost=3068.58..19793.93
> > rows=1 width=5014)
> >  Workers Planned: 2
> >  ->  Parallel Bitmap Heap
> Scan
> > on par_6  (cost=2068.58..18793.83 rows=1 width=5014)
> >Recheck Cond:
> > <>
> >Filter: <>
> >->  BitmapAnd
> > (cost=2068.58..2068.58 rows=30955 width=0)
> >  ->  BitmapOr
> > (cost=999.30..999.30 rows=42989 width=0)
> >   

Re: Setting a default for nextval sequence

2017-11-27 Thread Melvin Davidson
On Mon, Nov 27, 2017 at 2:24 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Mon, Nov 27, 2017 at 12:13 PM, Robert Lakes 
> wrote:
>
>> I'm attempting to set the default value for a serial column.  I created a
>> generic function that I am passing a table name as the only parameter.  I
>> had it working correctly, however, it does not seem to like the sequence
>> name being the same name for each audit table that is created through the
>> function.
>> So I changed the code to include the table name as part of the naming
>> convention.  Now, I need help on how to alter the serial column the new
>> value
>>
>> EXECUTE 'CREATE SEQUENCE '|| t_name ||'tab_id_seq'|| ' OWNED BY '||
>> t_name || '_cdc'||'.table_id';
>>
>> EXECUTE 'ALTER TABLE ' || quote_ident(t_name || '_cdc') || ' ALTER
>> COLUMN table_id SET DEFAULT nextval(''tab_id_seq'');';
>>
>
> Not tested but:
>
> EXECUTE format('CREATE SEQUENCE %I OWNED BY %I.table_id',
> t_name || 'tab_id_seq',
> t_name || '_cdc');
>
> EXECUTE format('ALTER TABLE %I ALTER COLUMN table_id SET DEFAULT
> nextval(%L);',
> t_name || '_cdc',
> t_name || 'tab_id_seq');
>
> David J.
>
>

Do you mean ?


https://www.postgresql.org/docs/9.6/static/functions-sequence.html

SELECT setval(regclass, bigint)  --- Set sequence's current value

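For instance, with a hypothetical sequence created by the function discussed
above (the name here is just an example):

SELECT setval('accountstab_id_seq'::regclass, 42);         -- next nextval() returns 43
SELECT setval('accountstab_id_seq'::regclass, 42, false);  -- next nextval() returns 42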

-- 
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: Setting a default for nextval sequence

2017-11-27 Thread David G. Johnston
On Mon, Nov 27, 2017 at 12:13 PM, Robert Lakes  wrote:

> I'm attempting to set the default value for a serial column.  I created a
> generic function that I am passing a table name as the only parameter.  I
> had it working correctly, however, it does not seem to like the sequence
> name being the same name for each audit table that is created through the
> function.
> So I changed the code to include the table name as part of the naming
> convention.  Now, I need help on how to alter the serial column the new
> value
>
> EXECUTE 'CREATE SEQUENCE '|| t_name ||'tab_id_seq'|| ' OWNED BY '|| t_name
> || '_cdc'||'.table_id';
>
> EXECUTE 'ALTER TABLE ' || quote_ident(t_name || '_cdc') || ' ALTER
> COLUMN table_id SET DEFAULT nextval(''tab_id_seq'');';
>

Not tested but:

EXECUTE format('CREATE SEQUENCE %I OWNED BY %I.table_id',
t_name || 'tab_id_seq',
t_name || '_cdc');

EXECUTE format('ALTER TABLE %I ALTER COLUMN table_id SET DEFAULT
nextval(%L);',
t_name || '_cdc',
t_name || 'tab_id_seq');
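For illustration only: with a hypothetical t_name = 'accounts', the two
EXECUTEs above would run statements equivalent to:

CREATE SEQUENCE accountstab_id_seq OWNED BY accounts_cdc.table_id;
ALTER TABLE accounts_cdc ALTER COLUMN table_id
SET DEFAULT nextval('accountstab_id_seq');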

David J.


Re: ERROR: too many dynamic shared memory segments

2017-11-27 Thread Tom Lane
Thomas Munro  writes:
> Ah, so you have many Gather nodes under Append?  That's one way to eat
> arbitrarily many DSM slots.  We allow for 64 + 2 * max_backends.  Does
> it help if you increase max_connections?  I am concerned about the
> crash failure mode you mentioned in the first email though: we should
> always be able to handle that condition gracefully.

I suspect this is an instance of the issue I complained about before [1]
that parallel query is unable to cope with worker start failure.

regards, tom lane

[1] https://www.postgresql.org/message-id/4905.1492813...@sss.pgh.pa.us



Setting a default for nextval sequence

2017-11-27 Thread Robert Lakes
I'm attempting to set the default value for a serial column.  I created a
generic function that I am passing a table name as the only parameter.  I
had it working correctly, however, it does not seem to like the sequence
name being the same name for each audit table that is created through the
function.
So I changed the code to include the table name as part of the naming
convention.  Now, I need help on how to alter the serial column to use the new
value.

EXECUTE 'CREATE SEQUENCE '|| t_name ||'tab_id_seq'|| ' OWNED BY '|| t_name
|| '_cdc'||'.table_id';

EXECUTE 'ALTER TABLE ' || quote_ident(t_name || '_cdc') || ' ALTER
COLUMN table_id SET DEFAULT nextval(''tab_id_seq'');';


Re: ERROR: too many dynamic shared memory segments

2017-11-27 Thread Thomas Munro
On Tue, Nov 28, 2017 at 1:13 AM, Jakub Glapa  wrote:
> The queries are somehow special.
> We are still using the old style partitioning (list type) but we abuse it a
> bit when querying.
> When querying a set of partitions instead of doing it via parent table we
> stitch together the required tables with UNION ALL (this requires less
> locking) and was more performant in our benchmark (the new native
> partitioning might improve this but we didn't research that yet).
>
> The queries are in form of
> SELECT col1,col2,col3 FROM
> (SELECT *
>   FROM par1
>   WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
> '2017-11-26 23:59:59.999'
>   UNION ALL SELECT *
> FROM par2
> WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
> '2017-11-26 23:59:59.999'
>   UNION ALL SELECT *
> FROM  par2
> WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
> '2017-11-26 23:59:59.999'
>
>   WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
> '2017-11-26 23:59:59.999'
>
> UNION ALL
> ...
> ) unionalias
> WHERE 
>
>
>
> and indeed the query planner shows the usage of Bitmap Heap Scan
> 
>  ->  Subquery Scan on "*SELECT* 2"
> (cost=3068.58..19793.94 rows=1 width=1284)
>->  Gather  (cost=3068.58..19793.93
> rows=1 width=5014)
>  Workers Planned: 2
>  ->  Parallel Bitmap Heap Scan
> on par_6  (cost=2068.58..18793.83 rows=1 width=5014)
>Recheck Cond:
> <>
>Filter: <>
>->  BitmapAnd
> (cost=2068.58..2068.58 rows=30955 width=0)
>  ->  BitmapOr
> (cost=999.30..999.30 rows=42989 width=0)
>->  Bitmap
> Index Scan on par_6_datasource  (cost=0.00..990.21 rows=42922 width=0)
>  Index
> Cond: ((datasource)::text = 'one'::text)
>->  Bitmap
> Index Scan on par_6_datasource  (cost=0.00..4.30 rows=1 width=0)
>  Index
> Cond: ((datasource)::text = 'two'::text)
>->  Bitmap
> Index Scan on par_6_datasource  (cost=0.00..4.79 rows=67 width=0)
>  Index
> Cond: ((datasource)::text = 'three'::text)
>  ->  Bitmap Index
> Scan on par_6_rangestart  (cost=0.00..1069.02 rows=47564 width=0)
>Index Cond:
> (rangestart >= '2017-01-01 00:00:00+01'::timestamp with time zone)
>  ->  Subquery Scan on "*SELECT* 3"
> (cost=761.33..7944.99 rows=1 width=1086)
>->  Bitmap Heap Scan on par_7
> (cost=761.33..7944.98 rows=1 width=4816)
>Recheck Cond:
> <>
>Filter: <>
>  ->  BitmapAnd
> (cost=761.33..761.33 rows=7045 width=0)
>->  Bitmap Index Scan on
> par_7_rangestart  (cost=0.00..380.35 rows=14942 width=0)
>  Index Cond:
> (rangestart >= '2017-01-01 00:00:00+01'::timestamp with time zone)
>->  BitmapOr
> (cost=380.72..380.72 rows=12248 width=0)
>  ->  Bitmap Index
> Scan on par_7_datasource  (cost=0.00..372.00 rows=12228 width=0)
>Index Cond:
> ((datasource)::text = 'one'::text)
>  ->  Bitmap Index
> Scan on par_7_datasource  (cost=0.00..4.36 rows=10 width=0)
>Index Cond:
> ((datasource)::text = 'two'::text)
>  ->  Bitmap Index
> Scan on par_7_datasource  (cost=0.00..4.36 rows=10 width=0)
>Index Cond:
> ((datasource)::text = 'three'::text)
>
> 
>
>
> In this particular query there were over _100_ partitions connected with the
> UNION ALL operator.

Ah, so you have many Gather nodes under Append?  That's one way to eat
arbitrarily many DSM slots.  We allow for 64 + 2 * max_backends.  Does
it help if you increase max_connections?  I am concerned about the
crash failure mode you mentioned in the first email though: we should
always be able to handle that condition gracefully.
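
A rough sketch of that workaround (the values are examples, not
recommendations; max_connections needs a server restart to take effect):

ALTER SYSTEM SET max_connections = 200;   -- raises the 64 + 2 * max_backends DSM slot limit
-- or, per session, avoid the extra DSM usage by disabling parallel gathers:
SET max_parallel_workers_per_gather = 0;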

Re: equalant of msdb in sql server

2017-11-27 Thread John R Pierce

On 11/26/2017 8:10 PM, chandra sekhar wrote:
What is the query to get list of the archive log files  to capture 
start date and time of each archive log



ls -l nfsserver:/path/to/archive



--
john r pierce, recycling bits in santa cruz




Re: How clear the cache on postgresql?

2017-11-27 Thread Jérôme Étévé
Hi,

A way to trick EXPLAIN/EXPLAIN ANALYZE to do what you mean is to play
with the optimisation variables:

https://www.postgresql.org/docs/9.6/static/runtime-config-query.html
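
For example (the table and query here are hypothetical):

SET enable_seqscan = off;        -- discourage sequential scans for this session
SET enable_bitmapscan = off;     -- likewise for bitmap scans
EXPLAIN ANALYZE SELECT * FROM some_table WHERE id = 42;
RESET enable_seqscan;
RESET enable_bitmapscan;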

J.

On 25 November 2017 at 15:01, Adam Tauno Williams
 wrote:
> On Fri, 2017-11-24 at 11:58 -0800, John R Pierce wrote:
>> On 11/24/2017 11:43 AM, Michael Nolan wrote:
>> > There are so many different levels of caching going on--within
>> > Postgresql, within the OS, within a disk array or SAN, and at the
>> > individual drive--that there may no longer be a meaningful way to
>> > perform this measurement.
>> generally, power cycling the server will flush all the hardware
>> caches AND the OS cache.
>
> Given that a real-world application will almost never experience an
> empty-cache scenario I question the usefulness of clearing the
> cache(s).   I would capture transactions from a production database in
> order to create a load test that mimics real-world load.
>
> --
> Meetings Coordinator, Michigan Association of Railroad Passengers
> 537 Shirley St NE Grand Rapids, MI 49503-1754 Phone: 616.581.8010
> E-mail: awill...@whitemice.org GPG#D95ED383 Web: http://www.marp.org
>



-- 
Jerome Eteve
http://www.justgiving.com/jetevesober
+44(0)7738864546
http://www.eteve.net/



Re: Table and Index bloating

2017-11-27 Thread stevenchang1213
hello,
https://www.postgresql.org/docs/9.3/static/pgstattuple.html
(with some enhancements in later versions)
You can also try CREATE INDEX CONCURRENTLY to rebuild your indexes and achieve
the effect of a REINDEX without blocking writes.
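A minimal pgstattuple sketch, with placeholder object names:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('some_table');       -- exact live/dead tuple and free-space stats (scans the table)
SELECT * FROM pgstatindex('some_table_pkey');  -- btree density and fragmentation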
Steven 
 Original message  From: Stefan Fercot  Date:
2017/11/27 17:17 (GMT+08:00)  To: Vikas Sharma  Cc:
pgsql-gene...@postgresql.org  Subject: Re: Table and Index bloating
Hi,

You can track that with some monitoring tools like check_pgactivity. The
queries can be found here :
https://github.com/ioguix/pgsql-bloat-estimation.

Tables :
https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql
Indexes :
https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat.sql

Backups should also be part of your maintenance plan.
Vacuum/Analyse/Reindex is already a good basis.


Kind regards


On 11/27/2017 10:10 AM, Vikas Sharma wrote:
> Hi All,
>
> Could someone please provide the query/queries  to find table and
> Index bloating in PgSql 9.3 onwards? 
> and are there any other maintenance tasks inPgSql except vacuum & analyse.
>
> Regards
> Vikas

-- 
Stefan FERCOT
http://dalibo.com - http://dalibo.org




Re: a question about oddities in the data directory

2017-11-27 Thread Melvin Davidson
The directory "base" contains filenames(numbers) that correspond to the
oid of the postgreSQL databases. Those databases have subdirectories
that contain the filenames(numbers) that correspond to the oid of objects
(sequences, tables, etc) in those databases.

You may find the attached queries helpul in navigating those directories


On Mon, Nov 27, 2017 at 10:27 AM, Tomas Vondra  wrote:

> Hi,
>
> I think you need to read this:
>
> https://www.postgresql.org/docs/current/static/storage-file-layout.html
>
> On 11/27/2017 04:19 PM, Martin Mueller wrote:
> > Apologies if I asked this question before.
> >
> > I’m a newcomer to Postgres, having migrated from Mysql.  Information
> > about tables seems harder to get at in Postgres. That data directory
> > on my machine is suspiciously large—well over 100 GB.  The directory
> > Postgres/var-9.5/base/ contains a number of subdirectories with
> > Arabic numerals.
>
> Each of those directories stores data for a single database. Each object
> (including databases) has a 32-bit identifier, mapping it to object on
> the filesystem.
>
> databases: SELECT oid, datname FROM pg_database;
> tables etc.: SELECT relfilenode, relname FROM pg_class;
>
> > Directory 16385 has a subdirectory 17463 with a size of 1.07 GB.
> That's not a subdirectory, but a datafile segment.
>
> > But there are also 17 subdirectories with names from 17463.1 to
> > 17.463.17.  There are also other entries with similar forms of
> > duplication and suspiciously identical file sizes of 1.07GB.
> >
>
> Again, those are files, not subdirectories. Large datafiles are split
> into 1GB segments, so for example 10GB table with relfilenode 17463 will
> be stored in files 17463, 17463.1, 17463.2, ..., 17463.9
>
> > Is this normal behavior?  Where in the postgres documentation do I
> > read up on this? Postgres strikes me as superior to MySQl, especially
> > with regard to string functions and regular expressions, but it’s
> > harder to look under the hood.
>
> https://www.postgresql.org/docs/current/static/storage-file-layout.html
>
> > How, for instance, do I figure out what number corresponds to the
> > table that I know as ‘earlyprinttuples
> >
>
> SELECT relfilenode FROM pg_class WHERE relname = 'earlyprinttuples';
>
>
> regards
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
SELECT c.oid,
   n.nspname as schema,
   c.relname as table, 
   (SELECT oid FROM pg_database WHERE datname = current_database()) as db_dir,
   c.relfilenode as filename
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
WHERE relname NOT LIKE 'pg_%' AND
  relname NOT LIKE 'information%' AND
  relname NOT LIKE 'sql_%' AND
  relkind = 'r'
ORDER BY 2, relname;


SELECT c.oid, 
   n.nspname as schema,
   c.relname as table, 
   pg_stat_get_last_vacuum_time(c.oid) as last_vacuum,
   pg_stat_get_tuples_inserted(c.oid) as inserted,
   pg_stat_get_tuples_updated(c.oid) as updated,
   pg_stat_get_tuples_deleted(c.oid) as deleted
  FROM pg_class c
  JOIN pg_namespace n ON (n.oid = c.relnamespace)
 WHERE nspname NOT IN ('information_schema', 'pg_toast', 'pg_catalog')
   and relkind = 'r'
 ORDER BY 2, 3;
SELECT db.oid, 
   db.datname,
   au.rolname as datdba,
   pg_encoding_to_char(db.encoding) as encoding,
   db.datallowconn,
   db.datconnlimit,
   db.datfrozenxid,
   tb.spcname as tblspc,
--   db.datconfig,
   db.datacl
  FROM pg_database db
  JOIN pg_authid au ON au.oid = db.datdba
  JOIN pg_tablespace tb ON tb.oid = db.dattablespace
 ORDER BY 2; 
SELECT oid,
   datname,
   pg_size_pretty(pg_database_size(datname)) as size_pretty,
   pg_database_size(datname) as size,
   (SELECT pg_size_pretty(SUM(pg_database_size(datname))::bigint)
  FROM pg_database) AS total,
   ((pg_database_size(datname) / (SELECT SUM(pg_database_size(datname))
   FROM pg_database)) * 100)::numeric(6,3) AS pct
  FROM pg_database
  ORDER BY datname;

Re: a question about oddities in the data directory

2017-11-27 Thread Adam Tauno Williams
On Mon, 2017-11-27 at 15:19 +, Martin Mueller wrote:
>  Apologies if I asked this question before.
> Is this normal behavior? 

Yes.

>  Where in the postgres documentation do I read up on this? 

Start here - https://www.postgresql.org/docs/10/static/admin.html

> string functions and regular expressions, but it’s harder to look
> under the hood.

I would strongly recommend staying out from under the hood.  PostgreSQL
has really great tools for visualizing what is going on in the
database(s);  use the tools.

-- 
Meetings Coordinator, Michigan Association of Railroad Passengers
537 Shirley St NE Grand Rapids, MI 49503-1754 Phone: 616.581.8010
E-mail: awill...@whitemice.org GPG#D95ED383 Web: http://www.marp.org



Re: a question about oddities in the data directory

2017-11-27 Thread Tomas Vondra
Hi,

I think you need to read this:

https://www.postgresql.org/docs/current/static/storage-file-layout.html

On 11/27/2017 04:19 PM, Martin Mueller wrote:
> Apologies if I asked this question before.
> 
> I’m a newcomer to Postgres, having migrated from Mysql.  Information
> about tables seems harder to get at in Postgres. That data directory
> on my machine is suspiciously large—well over 100 GB.  The directory
> Postgres/var-9.5/base/ contains a number of subdirectories with
> Arabic numerals.

Each of those directories stores data for a single database. Each object
(including databases) has a 32-bit identifier, mapping it to an object on
the filesystem.

databases: SELECT oid, datname FROM pg_database;
tables etc.: SELECT relfilenode, relname FROM pg_class;

> Directory 16385 has a subdirectory 17463 with a size of 1.07 GB.
That's not a subdirectory, but a datafile segment.

> But there are also 17 subdirectories with names from 17463.1 to
> 17.463.17.  There are also other entries with similar forms of
> duplication and suspiciously identical file sizes of 1.07GB.
> 

Again, those are files, not subdirectories. Large datafiles are split
into 1GB segments, so for example a 10GB table with relfilenode 17463 will
be stored in files 17463, 17463.1, 17463.2, ..., 17463.9

> Is this normal behavior?  Where in the postgres documentation do I
> read up on this? Postgres strikes me as superior to MySQl, especially
> with regard to string functions and regular expressions, but it’s
> harder to look under the hood.

https://www.postgresql.org/docs/current/static/storage-file-layout.html

> How, for instance, do I figure out what number corresponds to the
> table that I know as ‘earlyprinttuples
> 

SELECT relfilenode FROM pg_class WHERE relname = 'earlyprinttuples';
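
A related shortcut, if helpful, is pg_relation_filepath(), which does the
lookup and path construction in one step (returning something like
base/16385/17463):

SELECT pg_relation_filepath('earlyprinttuples');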


regards

-- 
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



a question about oddities in the data directory

2017-11-27 Thread Martin Mueller
 Apologies if I asked this question before.

I’m a newcomer to Postgres, having migrated from Mysql.  Information about 
tables seems harder to get at in Postgres. That data directory on my machine is 
suspiciously large—well over 100 GB.  The directory Postgres/var-9.5/base/ 
contains a number of subdirectories with Arabic numerals. Directory 16385 has a 
subdirectory 17463 with a size of 1.07 GB. But there are also 17 subdirectories 
with names from 17463.1 to 17463.17.  There are also other entries with 
similar forms of duplication and suspiciously identical file sizes of 1.07GB.

Is this normal behavior?  Where in the postgres documentation do I read up on 
this? Postgres strikes me as superior to MySQl, especially with regard to 
string functions and regular expressions, but it’s harder to look under the 
hood. How, for instance, do I figure out what number corresponds to the table 
that I know as ‘earlyprinttuples

With thanks in advance for any advice

Martin Mueller
Professor emeritus of English and Classics



Re: Table and Index bloating

2017-11-27 Thread Carlos Martinez
Hi.

Can we use pg_repack with standard community edition of postgresql? Yes.
and can it be used on the databases with streaming replication? Yes.

Best regards.

Carlos Martinez



On Mon, Nov 27, 2017 at 10:08 AM, Vikas Sharma  wrote:
> Thank you Stefan for the queries.
>
> Thanks you Carlos for pg_repack suggestion, Can we use pg_repack with
> standard community edition of postgresql? and can it be used on the
> databases with streaming replication?
>
> Regards
> Vikas Sharma
>
> On 27 November 2017 at 14:58, Carlos Martinez  wrote:
>>
>> Hi.
>>
>> You can consider use pg_repack to remove bloat
>> (http://reorg.github.io/pg_repack/)
>>
>> Best regards.
>>
>> Carlos Martinez
>>
>>
>> On Mon, Nov 27, 2017 at 4:17 AM, Stefan Fercot 
>> wrote:
>> > Hi,
>> >
>> > You can track that with some monitoring tools like check_pgactivity. The
>> > queries can be found here :
>> > https://github.com/ioguix/pgsql-bloat-estimation.
>> >
>> > Tables :
>> >
>> > https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql
>> > Indexes :
>> >
>> > https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat.sql
>> >
>> > Backups should also be part of your maintenance plan.
>> > Vacuum/Analyse/Reindex is already a good basis.
>> >
>> >
>> > Kind regards
>> >
>> >
>> > On 11/27/2017 10:10 AM, Vikas Sharma wrote:
>> >> Hi All,
>> >>
>> >> Could someone please provide the query/queries  to find table and
>> >> Index bloating in PgSql 9.3 onwards?
>> >> and are there any other maintenance tasks inPgSql except vacuum &
>> >> analyse.
>> >>
>> >> Regards
>> >> Vikas
>> >
>> > --
>> > Stefan FERCOT
>> > http://dalibo.com - http://dalibo.org
>> >
>> >
>>
>



Re: Table and Index bloating

2017-11-27 Thread Vikas Sharma
Thank you Stefan for the queries.

Thank you Carlos for the pg_repack suggestion. Can we use pg_repack with
standard community edition of postgresql? and can it be used on the
databases with streaming replication?

Regards
Vikas Sharma

On 27 November 2017 at 14:58, Carlos Martinez  wrote:

> Hi.
>
> You can consider use pg_repack to remove bloat
> (http://reorg.github.io/pg_repack/)
>
> Best regards.
>
> Carlos Martinez
>
>
> On Mon, Nov 27, 2017 at 4:17 AM, Stefan Fercot 
> wrote:
> > Hi,
> >
> > You can track that with some monitoring tools like check_pgactivity. The
> > queries can be found here :
> > https://github.com/ioguix/pgsql-bloat-estimation.
> >
> > Tables :
> > https://github.com/ioguix/pgsql-bloat-estimation/blob/
> master/table/table_bloat.sql
> > Indexes :
> > https://github.com/ioguix/pgsql-bloat-estimation/blob/
> master/btree/btree_bloat.sql
> >
> > Backups should also be part of your maintenance plan.
> > Vacuum/Analyse/Reindex is already a good basis.
> >
> >
> > Kind regards
> >
> >
> > On 11/27/2017 10:10 AM, Vikas Sharma wrote:
> >> Hi All,
> >>
> >> Could someone please provide the query/queries  to find table and
> >> Index bloating in PgSql 9.3 onwards?
> >> and are there any other maintenance tasks inPgSql except vacuum &
> analyse.
> >>
> >> Regards
> >> Vikas
> >
> > --
> > Stefan FERCOT
> > http://dalibo.com - http://dalibo.org
> >
> >
>
>


Re: Table and Index bloating

2017-11-27 Thread Carlos Martinez
Hi.

You can consider using pg_repack to remove bloat
(http://reorg.github.io/pg_repack/)
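
A sketch of the usual workflow, with placeholder names (pg_repack is installed
as an extension in the target database and then driven by its command-line
client):

CREATE EXTENSION pg_repack;
-- then from the shell:  pg_repack --dbname=mydb --table=public.some_table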

Best regards.

Carlos Martinez


On Mon, Nov 27, 2017 at 4:17 AM, Stefan Fercot  wrote:
> Hi,
>
> You can track that with some monitoring tools like check_pgactivity. The
> queries can be found here :
> https://github.com/ioguix/pgsql-bloat-estimation.
>
> Tables :
> https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql
> Indexes :
> https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat.sql
>
> Backups should also be part of your maintenance plan.
> Vacuum/Analyse/Reindex is already a good basis.
>
>
> Kind regards
>
>
> On 11/27/2017 10:10 AM, Vikas Sharma wrote:
>> Hi All,
>>
>> Could someone please provide the query/queries  to find table and
>> Index bloating in PgSql 9.3 onwards?
>> and are there any other maintenance tasks inPgSql except vacuum & analyse.
>>
>> Regards
>> Vikas
>
> --
> Stefan FERCOT
> http://dalibo.com - http://dalibo.org
>
>



Install numpy to use within plpython3u in postgres 9.6 for Windows 2012 (64 bit)

2017-11-27 Thread PeterS
I am trying to install numpy to use within plpython3u in postgres 9.6 for
Windows 2012 (64 bit) but I run into dependency problems.

1) I have installed Postgres 9.6 from EnterpriseDB and the language pack,
which installs Python 3.3.  <- is there a way to install a newer version of
Python?

I have done CREATE EXTENSION plpython3u; and it works correctly.

I tried to install python modules with:   pip3.exe install numpy   and
python -m pip install --user numpy   but numpy-1.12.1 returns an error:
raise RuntimeError("Python version 2.7 or >= 3.4 required.")

If I try to install an older version of numpy==1.10.4, there are a number
of errors about blas and atlas and fortran compilers not being available -
so I didn't pursue that avenue much further.

2) If I install Python 3.6 separately, when CREATing EXTENSION plpython3u,
I get the ERROR:  could not load library "C:/Program
Files/PostgreSQL/9.6/lib/plpython3.dll": The specified module could not be
found.
The DLL *is* in that location, but the utility "depends" shows that it is
compiled for python33 and not python36, and those dependencies are
missing.  Could I replace the plpython3.dll file somehow to get it to work
with Python 3.6?


Surely someone has installed numpy for plpython3u in postgres for Windows,
but I could not find any solutions so far, so I'm reaching out to the
experts.
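
(Once the module does install, a quick sanity check from inside the server;
the function name is just an example:)

CREATE OR REPLACE FUNCTION numpy_version() RETURNS text AS $$
import numpy
return numpy.__version__
$$ LANGUAGE plpython3u;

SELECT numpy_version();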

Thanks

Peter


Re: ERROR: too many dynamic shared memory segments

2017-11-27 Thread Jakub Glapa
Hi Thomas,
log excerpt:

...
2017-11-27 12:21:14 CET:192.168.10.83(33424):user@db:[27291]: ERROR:  too
many dynamic shared memory segments
2017-11-27 12:21:14 CET:192.168.10.83(33424):user@db:[27291]: STATEMENT:
 SELECT << REMOVED>>
2017-11-27 12:21:14 CET:192.168.10.83(35182):user@db:[28281]: ERROR:  too
many dynamic shared memory segments
2017-11-27 12:21:14 CET:192.168.10.83(35182):user@db:[28281]: STATEMENT:
 SELECT <
2017-11-27 12:21:14 CET::@:[28618]: ERROR:  could not map dynamic shared
memory segment
2017-11-27 12:21:14 CET::@:[28619]: ERROR:  could not map dynamic shared
memory segment
2017-11-27 12:21:14 CET::@:[25645]: LOG:  worker process: parallel worker
for PID 27291 (PID 28618) exited with exit code 1
2017-11-27 12:21:14 CET::@:[25645]: LOG:  worker process: parallel worker
for PID 27291 (PID 28619) exited with exit code 1
...

this time the db didn't crash but the queries failed to execute.

The queries are somehow special.
We are still using the old style partitioning (list type) but we abuse it a
bit when querying.
When querying a set of partitions instead of doing it via parent table we
stitch together the required tables with UNION ALL (this requires less
locking) and was more performant in our benchmark (the new native
partitioning might improve this but we didn't research that yet).

The queries are in form of
SELECT col1,col2,col3 FROM
(SELECT *
  FROM par1
  WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
'2017-11-26 23:59:59.999'
  UNION ALL SELECT *
FROM par2
WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
'2017-11-26 23:59:59.999'
  UNION ALL SELECT *
FROM  par2
WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
'2017-11-26 23:59:59.999'

  WHERE rangestart >= '2017-01-01 00:00:00.0' AND rangeend <=
'2017-11-26 23:59:59.999'

UNION ALL
...
) unionalias
WHERE 



and indeed the query planner shows the usage of Bitmap Heap Scan

 ->  Subquery Scan on "*SELECT* 2"
 (cost=3068.58..19793.94 rows=1 width=1284)
   ->  Gather  (cost=3068.58..19793.93
rows=1 width=5014)
 Workers Planned: 2
 ->  Parallel Bitmap Heap Scan
on par_6  (cost=2068.58..18793.83 rows=1 width=5014)
   Recheck Cond:
<>
   Filter: <>
   ->  BitmapAnd
 (cost=2068.58..2068.58 rows=30955 width=0)
 ->  BitmapOr
 (cost=999.30..999.30 rows=42989 width=0)
   ->  Bitmap
Index Scan on par_6_datasource  (cost=0.00..990.21 rows=42922 width=0)
 Index
Cond: ((datasource)::text = 'one'::text)
   ->  Bitmap
Index Scan on par_6_datasource  (cost=0.00..4.30 rows=1 width=0)
 Index
Cond: ((datasource)::text = 'two'::text)
   ->  Bitmap
Index Scan on par_6_datasource  (cost=0.00..4.79 rows=67 width=0)
 Index
Cond: ((datasource)::text = 'three'::text)
 ->  Bitmap Index
Scan on par_6_rangestart  (cost=0.00..1069.02 rows=47564 width=0)
   Index Cond:
(rangestart >= '2017-01-01 00:00:00+01'::timestamp with time zone)
 ->  Subquery Scan on "*SELECT* 3"
 (cost=761.33..7944.99 rows=1 width=1086)
   ->  Bitmap Heap Scan on par_7
 (cost=761.33..7944.98 rows=1 width=4816)
   Recheck Cond:
<>
   Filter: <>
 ->  BitmapAnd
 (cost=761.33..761.33 rows=7045 width=0)
   ->  Bitmap Index Scan on
par_7_rangestart  (cost=0.00..380.35 rows=14942 width=0)
 Index Cond:
(rangestart >= '2017-01-01 00:00:00+01'::timestamp with time zone)
   ->  BitmapOr
 (cost=380.72..380.72 rows=12248 width=0)
 ->  Bitmap Index
Scan on par_7_datasource  (cost=0.00..372.00 rows=12228 width=0)
   Index Cond:
((datasource)::text = 'one'::text)
 ->  Bitmap Index
Scan on par_7_datasource  (cost=0.00

Re: ERROR: too many dynamic shared memory segments

2017-11-27 Thread Thomas Munro
On Mon, Nov 27, 2017 at 10:54 PM, Jakub Glapa  wrote:
> The DB enters recovery mode after that.

That's not good.  So it actually crashes?  Can you please show the
full error messages?

> 2017-11-23 07:20:39 CET::@:[24823]: ERROR:  could not attach to dynamic
> shared area

From src/backend/utils/mmgr/dsa.c.  The same message is used for
dsa_attach() and for attach_internal(), but in this case it must be
the latter because we use in-place DSA areas for parallel query.  This
means that when the worker tried to attach it found that
control->refcnt == 0, meaning 'too late, every other backend including
the leader has already detached'.

> 2017-11-23 07:20:39 CET::@:[24822]: ERROR:  could not map dynamic shared
> memory segment

From src/backend/access/transam/parallel.c when dsm_attach returns
null.  I think this also means 'too late'.

So those are both workers that have started up and found that the
leader has abandoned the parallel query already, but they discovered
it at different stages.  PID 24823 didn't manage to attach to the DSM
segment, while PID 24822 did but found that the other(s) had already
detached from the per-query DSA area inside it.

> 2017-11-23 07:20:40 CET:192.168.xx,xx(33974):u(at)db:[24209]: ERROR:  too
> many
> dynamic shared memory segments
>
> The errors happen when the parallel execution is enabled and multiple
> queries are executed simultaneously.
> If I set the max_parallel_workers_per_gather = 0 the error doesn't occur.
> The queries are rather big, each executes over 10sec.
> I see the error when 4 or more queries are started at the same time.
>
> my postgresql.conf:
>
> max_connections = 100

So the two questions we need to figure out are: (1) how do we manage
to use up all 64 + 2 * 100 DSM slots (or somehow corrupt things so it
looks that way) by running only 4 queries, and (2) why would we be
crashing rather than failing and reporting an error?  I'm not able to
reproduce the problem from your description running lots of parallel
queries running at the same time.  Can you provide reproduction steps?
 Does your query plan happen to include a Parallel Bitmap Heap Scan?

-- 
Thomas Munro
http://www.enterprisedb.com



ERROR: too many dynamic shared memory segments

2017-11-27 Thread Jakub Glapa
Hi,
I started seeing those errors on Postgresql 10.1 running on Ubuntu 16.04.3
(64GB/8CPU) server.
The DB enters recovery mode after that.

2017-11-23 07:20:39 CET::@:[24823]: ERROR:  could not attach to dynamic
shared area
2017-11-23 07:20:39 CET::@:[24822]: ERROR:  could not map dynamic shared
memory segment
2017-11-23 07:20:40 CET:192.168.xx,xx(33974):u(at)db:[24209]: ERROR:  too
many
dynamic shared memory segments



The errors happen when the parallel execution is enabled and multiple
queries are executed simultaneously.
If I set the max_parallel_workers_per_gather = 0 the error doesn't occur.
The queries are rather big, each executes over 10sec.
I see the error when 4 or more queries are started at the same time.

my postgresql.conf:

max_connections = 100
shared_buffers = 12GB
effective_cache_size = 51GB
work_mem = 83886kB
maintenance_work_mem = 2GB
max_wal_size = 8GB
min_wal_size = 4GB
checkpoint_completion_target = 0.9
wal_buffers = 32MB
default_statistics_target = 1000
max_locks_per_transaction = 128
#max_parallel_workers_per_gather = 0




PS. I've posted this question on the pgsql-admin list but didn't get any
feedback.


--
regards,
Jakub Glapa


Re: Table and Index bloating

2017-11-27 Thread Stefan Fercot
Hi,

You can track that with some monitoring tools like check_pgactivity. The
queries can be found here :
https://github.com/ioguix/pgsql-bloat-estimation.

Tables :
https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql
Indexes :
https://github.com/ioguix/pgsql-bloat-estimation/blob/master/btree/btree_bloat.sql

Backups should also be part of your maintenance plan.
Vacuum/Analyse/Reindex is already a good basis.
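
As a quick first pass before the full estimation queries, the statistics
collector's dead-tuple counters already give a rough picture (thresholds are
up to you):

SELECT relname, n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / greatest(n_live_tup + n_dead_tup, 1), 1) AS dead_pct
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 20;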


Kind regards


On 11/27/2017 10:10 AM, Vikas Sharma wrote:
> Hi All,
>
> Could someone please provide the query/queries  to find table and
> Index bloating in PgSql 9.3 onwards? 
> and are there any other maintenance tasks inPgSql except vacuum & analyse.
>
> Regards
> Vikas

-- 
Stefan FERCOT
http://dalibo.com - http://dalibo.org




signature.asc
Description: OpenPGP digital signature


Table and Index bloating

2017-11-27 Thread Vikas Sharma
Hi All,

Could someone please provide the query/queries  to find table and Index
bloating in PgSql 9.3 onwards?
and are there any other maintenance tasks in PgSql except vacuum & analyse.

Regards
Vikas


Re: equalant of msdb in sql server

2017-11-27 Thread chandra sekhar
What is the query to get the list of archive log files, to capture the start
date and time of each archive log?

On Sun, Nov 26, 2017 at 5:31 PM Craig Ringer  wrote:

> On 27 November 2017 at 04:59, Daniel Gustafsson  wrote:
>
>> > On 26 Nov 2017, at 17:54, chandra sekhar 
>> wrote:
>> >
>> > sql server has msdb database which stores the history of backups taken.
>> > is there any table in postgres which keep track of when the archive log
>> files are archived ?
>>
>
> No, there isn't.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: equalant of msdb in sql server

2017-11-27 Thread Craig Ringer
On 27 November 2017 at 12:10, chandra sekhar 
wrote:

> What is the query to get list of the archive log files  to capture start
> date and time of each archive log
>

There isn't one.

You may be looking for a tool like PgBarman or other archive and backup
managers.
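
For reference, the statistics collector does expose the most recently archived
WAL file and its timestamp (though not a full per-file history):

SELECT archived_count, last_archived_wal, last_archived_time
  FROM pg_stat_archiver;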

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services