On Tue, Dec 1, 2015 at 2:25 AM, Kyotaro HORIGUCHI
wrote:
> Yeah, it is actually restricted in that length. But if we allow
> the buffer to store the whole qualified names, it will need 64 *
> 2 + 1 + 1 = 130 bytes * 10 = 1300 bytes for each beentry... It might
> be
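The buffer arithmetic being discussed can be written out explicitly; a sketch with the values quoted above (the macro names here are illustrative, not from the patch):

```c
#include <assert.h>

#define NAMEDATALEN 64          /* PostgreSQL's identifier length limit */
#define N_PROGRESS_PARAM 10     /* string parameters per beentry, per the thread */

/* schema (64) + '.' (1) + relation (64) + terminating NUL (1) = 130 bytes */
#define QUALIFIED_NAME_BUFSIZE (NAMEDATALEN * 2 + 1 + 1)

/* storing a full qualified name for every parameter costs this per beentry */
#define PER_BEENTRY_COST (QUALIFIED_NAME_BUFSIZE * N_PROGRESS_PARAM)
```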
On Mon, Nov 30, 2015 at 9:10 PM, Vinayak wrote:
> Thanks for the v7.
> Please check the comment below.
> -Table name in the vacuum progress
>
> + snprintf(progress_message[0], PROGRESS_MESSAGE_LENGTH, "%s.%s",
> schemaname,relname);
Uh, I hope that line doesn't appear in the
Vinayak wrote:
> In the vacuum progress, column table_name is showing first 30 characters of
> table name.
> postgres=# create table test_vacuum_progress_in_postgresql(c1 int,c2 text);
> postgres=# select * from pg_stat_vacuum_progress ;
> -[ RECORD 1 ]---+--
> pid
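The 30-character truncation can be reproduced in isolation. A sketch assuming PROGRESS_MESSAGE_LENGTH is 30 (the buffer size implied by the report), using the table name from Vinayak's example:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define PROGRESS_MESSAGE_LENGTH 30   /* assumed buffer size from the patch */

/* Build "schema.relname" the way the quoted snprintf call does; snprintf
 * silently truncates anything past PROGRESS_MESSAGE_LENGTH - 1 characters
 * and returns the length it would have written. */
static int
format_qualified_name(char *buf, const char *schemaname, const char *relname)
{
    return snprintf(buf, PROGRESS_MESSAGE_LENGTH, "%s.%s", schemaname, relname);
}
```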
Hello,
At Thu, 3 Dec 2015 14:18:50 +0900, Amit Langote
wrote in <565fd0ba.5020...@lab.ntt.co.jp>
> On 2015/12/03 13:47, Kyotaro HORIGUCHI wrote:
> > At Wed, 2 Dec 2015 15:48:20 -0300, Alvaro Herrera
> > wrote
> >>
> >> Actually, do we
Hello,
At Wed, 2 Dec 2015 15:48:20 -0300, Alvaro Herrera
wrote in <20151202184820.GL2763@alvherre.pgsql>
> Vinayak wrote:
>
> > In the vacuum progress, column table_name is showing first 30 characters of
> > table name.
> > postgres=# create table
Hello, sorry for the clobbered CC list.
# I restored it manually from upthread..
At Wed, 2 Dec 2015 13:42:01 -0500, Robert Haas wrote in
> On Tue, Dec 1, 2015 at 2:25 AM, Kyotaro HORIGUCHI
>
Hi,
On 2015/12/03 15:27, Kyotaro HORIGUCHI wrote:
> At Thu, 3 Dec 2015 14:18:50 +0900, Amit Langote wrote
>> On 2015/12/03 13:47, Kyotaro HORIGUCHI wrote:
>>>
>>> Apart from the representation of the relation, OID would be
>>> better as a field in beentry.
>>
>> I wonder if the field should be a
On 2015/12/03 13:47, Kyotaro HORIGUCHI wrote:
> At Wed, 2 Dec 2015 15:48:20 -0300, Alvaro Herrera
> wrote
>>
>> Actually, do we really need to have the table name as a string at all
>> here? Why not just report the table OID? Surely whoever wants to check
>> the
On 2015/12/01 16:25, Kyotaro HORIGUCHI wrote:
> At Mon, 30 Nov 2015 19:10:44 -0700 (MST), Vinayak wrote
>> Thanks for the v7.
>> Please check the comment below.
>> -Table name in the vacuum progress
>>
>> + snprintf(progress_message[0], PROGRESS_MESSAGE_LENGTH, "%s.%s",
>>
Thanks for the v7.
Please check the comment below.
-Table name in the vacuum progress
+ snprintf(progress_message[0], PROGRESS_MESSAGE_LENGTH, "%s.%s",
schemaname,relname);
In the vacuum progress, column table_name is showing first 30 characters of
table name.
postgres=# create table
Sorry for the confusing description and the chopped sentence.
At Tue, 01 Dec 2015 16:25:57 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20151201.162557.184519961.horiguchi.kyot...@lab.ntt.co.jp>
> Hello,
>
> At Mon, 30 Nov 2015 19:10:44 -0700
Hello,
At Mon, 30 Nov 2015 19:10:44 -0700 (MST), Vinayak wrote
in <1448935844520-5875614.p...@n5.nabble.com>
> Thanks for the v7.
> Please check the comment below.
> -Table name in the vacuum progress
>
> + snprintf(progress_message[0], PROGRESS_MESSAGE_LENGTH, "%s.%s",
>
Hello,
Thank you for your comments.
Please find attached a patch addressing the following comments:
>- duplicate_oids error in HEAD.
Check.
>- a compiler warning:
>pgstat.c:2898: warning: no previous prototype for
‘pgstat_reset_activityflag’
Check.
>One more change you could do is
On 11/24/15 2:02 AM, Amit Langote wrote:
It just occurred to me that we could do the instrumentation in
lazy_tid_reaped(). It might seem bad to do an increment for every tuple in
an index, but we're already doing a bsearch over the dead tuple list.
Presumably that's going to be a lot more
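The suggestion above amounts to piggybacking a counter on the per-tuple bsearch. A simplified sketch (integer TIDs instead of ItemPointerData; the function and counter names are illustrative):

```c
#include <assert.h>
#include <stdlib.h>

static long index_tuples_processed = 0;  /* the proposed progress counter */

static int
tid_cmp(const void *a, const void *b)
{
    long la = *(const long *) a;
    long lb = *(const long *) b;
    return (la > lb) - (la < lb);
}

/* Simplified stand-in for lazy_tid_reaped(): each index tuple already
 * pays for a bsearch over the dead-tuple array, so one extra increment
 * is cheap by comparison. */
static int
tid_reaped(long tid, const long *dead_tuples, size_t n_dead)
{
    index_tuples_processed++;            /* the instrumentation in question */
    return bsearch(&tid, dead_tuples, n_dead, sizeof(long), tid_cmp) != NULL;
}
```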
On 2015/11/25 9:32, Jim Nasby wrote:
> On 11/24/15 2:02 AM, Amit Langote wrote:
>> Just to clarify, does this mean we report index vacuum progress in terms
>> of index items processed (not pages)? If so, how do we get total number of
>> index items to process (presumably across all indexes) for a
On 11/24/15 7:02 PM, Amit Langote wrote:
You'd get it from pg_class.reltuples for each index. Since all index
vacuuming is done strictly on a per-index-tuple basis, that's probably the
most accurate way to do it anyway.
Important to remember though that the reltuples would be latest as of the
On 2015/11/21 14:38, Jim Nasby wrote:
> On 11/19/15 7:29 PM, Amit Langote wrote:
>>> Another option is to provide the means for the index scan routines to
>>> report their progress. Maybe every index AM won't use it, but it'd
>>> certainly be a lot better than staring at a long_running boolean.
On 2015/11/21 5:46, Robert Haas wrote:
> On Thu, Nov 19, 2015 at 2:18 AM, Amit Langote
> wrote:
>> As someone pointed out upthread, the final heap truncate phase can take
>> arbitrarily long and is outside the scope of lazy_scan_heap() to
>> instrument. Perhaps a
On Sat, Nov 21, 2015 at 12:38 AM, Jim Nasby wrote:
> On 11/19/15 7:29 PM, Amit Langote wrote:
>>>
>>> Another option is to provide the means for the index scan routines to
>>> report their progress. Maybe every index AM won't use it, but it'd
>>> certainly be a lot
On Thu, Nov 19, 2015 at 2:18 AM, Amit Langote
wrote:
> As someone pointed out upthread, the final heap truncate phase can take
> arbitrarily long and is outside the scope of lazy_scan_heap() to
> instrument. Perhaps a bool, say, waiting_heap_trunc could be reported
On 11/19/15 7:29 PM, Amit Langote wrote:
Another option is to provide the means for the index scan routines to
report their progress. Maybe every index AM won't use it, but it'd
certainly be a lot better than staring at a long_running boolean.
The boolean would be a workaround for sure. I'm
Hello Michael,
I am planning to continue contributing to this feature in any way, be it by
reviewing the patch or making one. I haven't been able to reply to the
comments or post an updated patch lately, but I plan to do that soon.
Thank you,
Rahila
On Thu, Nov 19, 2015 at 1:09 PM, Michael
On 2015/11/20 0:57, Jim Nasby wrote:
> On 11/19/15 1:18 AM, Amit Langote wrote:
>> 1) General purpose interface for (maintenance?) commands to report a set
>
> I'm surprised no one has picked up on using this for DML. Certainly anyone
> who works with ETL processes would love to be able to get some
On 11/19/15 1:18 AM, Amit Langote wrote:
1) General purpose interface for (maintenance?) commands to report a set
I'm surprised no one has picked up on using this for DML. Certainly
anyone who works with ETL processes would love to be able to get some clue
on the status of a long running
On 2015/11/10 17:02, Amit Langote wrote:
> On 2015/10/29 23:22, Syed, Rahila wrote:
>> Please find attached an updated patch.
>
> A few more comments on v6:
I backed up a little, studied the proposal and the patch in a little more
detail. Here are still more comments -
Going through the
On 2015/11/19 16:18, Amit Langote wrote:
> I'm marking this as "Waiting on author" in the commitfest app. Also, let's
> hear from more people.
Well, it seems Michael beat me to it.
Thanks,
Amit
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your
On Thu, Nov 19, 2015 at 4:30 PM, Amit Langote
wrote:
> On 2015/11/19 16:18, Amit Langote wrote:
>> I'm marking this as "Waiting on author" in the commitfest app. Also, let's
>> hear from more people.
>
> Well, it seems Michael beat me to it.
Yeah, some other folks
On Tue, Nov 10, 2015 at 5:02 PM, Amit Langote
wrote:
> On 2015/10/29 23:22, Syed, Rahila wrote:
> How about the following instead -
>
> + snprintf(progress_message[0], PROGRESS_MESSAGE_LENGTH, "%s",
> +
On 2015/10/29 23:22, Syed, Rahila wrote:
> Please find attached an updated patch.
A few more comments on v6:
> relname = RelationGetRelationName(onerel);
> + schemaname = get_namespace_name(RelationGetNamespace(onerel));
> ereport(elevel,
>
On 2015/10/29 23:22, Syed, Rahila wrote:
>
> Please find attached an updated patch.
>
Thanks for the v6. A few quick comments:
- duplicate_oids error in HEAD.
- a compiler warning:
pgstat.c:2898: warning: no previous prototype for ‘pgstat_reset_activityflag’
To fix that use void for empty
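The warning means the definition appeared with no prior declaration. The conventional fix (function name taken from the quoted warning; the flag variable is illustrative) is a prototype with an explicit void parameter list:

```c
/* In the header (e.g. pgstat.h): declare before the definition, using
 * (void) rather than empty parentheses so the full signature is known. */
extern void pgstat_reset_activityflag(void);

static int activity_flag = 1;   /* illustrative state being reset */

/* In pgstat.c: the definition now matches a previous prototype, which
 * silences "no previous prototype for 'pgstat_reset_activityflag'". */
void
pgstat_reset_activityflag(void)
{
    activity_flag = 0;
}
```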
Hello, I have some random comments on this patch addition to
Amit's comments.
- Type of the flag of vacuum activity.
ACTIVITY_IS_VACUUM is the alone entry in the enum, and the
variable to store it is named as *flag. If you don't have any
plan to extend this information, the name of this variable
Hello,
Please find attached an updated patch.
>Flag isn't reset on error.
Corrected in the attached.
> + pgstat_reset_activityflag;
>Does this actually compile?
It does compile but with no effect. It has been corrected.
>snprintf()? I don't think you need to keep track of schemaname_len
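Why `pgstat_reset_activityflag;` compiles but has no effect: a bare function name is a valid expression statement that evaluates (and discards) the function's address, so no call happens. A minimal demonstration (most compilers warn about the statement with no effect, but still accept it):

```c
#include <assert.h>

static int calls = 0;

static void
reset_flag(void)
{
    calls++;
}

static void
demo(void)
{
    reset_flag;      /* legal C: evaluates the function pointer, discards it */
    reset_flag();    /* this one actually calls the function */
}
```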
On Tue, Oct 20, 2015 at 4:58 AM, Syed, Rahila wrote:
>>I think that you should add the flag or something which indicates whether
>>this backend is running VACUUM or not, into PgBackendStatus.
>>pg_stat_vacuum_progress should display the entries of only backends with that
Syed, Rahila wrote:
> @@ -355,6 +356,7 @@ vacuum(int options, RangeVar *relation, Oid relid,
> VacuumParams *params,
> vac_update_datfrozenxid();
> }
>
> + pgstat_reset_activityflag;
> /*
>* Clean up working storage --- note we must do this after
>
Hello,
>I think that you should add the flag or something which indicates whether this
>backend is running VACUUM or not, into PgBackendStatus.
>pg_stat_vacuum_progress should display the entries of only backends with that
>flag set true. This design means that you need to set the flag to true
Hello Fujii-san,
>Here are another review comments
Thank you for review. Please find attached an updated patch.
> You removed some empty lines, for example, in vacuum.h.
>Which seems useless to me.
Has been corrected in the attached.
>Why did you use an array to store the progress information
Hello,
Please check the attached patch, as the earlier one had a typo in the regression
test output.
>+#define PG_STAT_GET_PROGRESS_COLS 30
>Why did you use 30?
That has come from N_PROGRESS_PARAM * 3 where N_PROGRESS_PARAM = 10 is the
number of progress parameters of each type stored in shared
Hi!
On 2015-09-22 15:24:38 +, Syed, Rahila wrote:
> Please find attached patch with bugs reported by Thom and Sawada-san solved.
This thread has seen a bunch of reviews and new patch versions, but
doesn't yet seem to have arrived in a committable state. As the
commitfest ended and this patch
On Fri, Sep 25, 2015 at 2:03 AM, Fujii Masao wrote:
> On Wed, Sep 23, 2015 at 12:24 AM, Syed, Rahila
> wrote:
>> Hello,
>>
>> Please find attached patch with bugs reported by Thom and Sawada-san solved.
>
> The regression test failed on my
On Fri, Oct 2, 2015 at 3:14 AM, Amit Langote
wrote:
> On 2015/10/02 15:38, Fujii Masao wrote:
>>
>> +uint32 progress_param[N_PROGRESS_PARAM];
>>
>> Why did you use an array to store the progress information of VACUUM?
>> I think that it's better to use separate
On 2015/10/02 15:38, Fujii Masao wrote:
>
> +uint32 progress_param[N_PROGRESS_PARAM];
>
> Why did you use an array to store the progress information of VACUUM?
> I think that it's better to use separate specific variables for them for
> better code readability, for example, variables
On Thu, Sep 24, 2015 at 8:37 AM, Masahiko Sawada wrote:
> When we run "VACUUM;", all the tables of the current database will be vacuumed.
> So pg_stat_vacuum_progress should have these OIDs in order to show
> which table is vacuumed now.
Hmm, I would tend to instead show the
On Mon, Sep 28, 2015 at 11:03 PM, Robert Haas wrote:
> On Thu, Sep 24, 2015 at 8:37 AM, Masahiko Sawada
> wrote:
>> When we run "VACUUM;", all the tables of the current database will be vacuumed.
>> So pg_stat_vacuum_progress should have these OIDs in
On Wed, Sep 23, 2015 at 12:24 AM, Syed, Rahila wrote:
> Hello,
>
> Please find attached patch with bugs reported by Thom and Sawada-san solved.
>
>>* The progress of vacuum by autovacuum seems not to be displayed.
> The progress is stored in shared variables during
On 9/24/15 7:37 AM, Masahiko Sawada wrote:
* The progress of VACUUM FULL seems wrong.
When I run VACUUM FULL for a table, I got following progress.
It never occurred to me that this patch was attempting to measure the
progress of a CLUSTER (aka VACUUM FULL). I'm not sure that's such a
great
On Wed, Sep 23, 2015 at 12:24 AM, Syed, Rahila wrote:
> Hello,
>
> Please find attached patch with bugs reported by Thom and Sawada-san solved.
The regression test failed on my machine, so you need to update the
regression test,
I think.
Regards,
--
Fujii Masao
--
Hello,
Please find attached patch with bugs reported by Thom and Sawada-san solved.
>* The progress of vacuum by autovacuum seems not to be displayed.
The progress is stored in shared variables during autovacuum. I guess the
reason they are not visible is that the entries are deleted as soon as
On Tue, Sep 15, 2015 at 11:35 PM, Syed, Rahila wrote:
>
> Hello Thom,
>
>>Okay, I've just tested this with a newly-loaded table (1,252,973 of jsonb
>>data),
> Thanks a lot!
>
>>but after it's finished, I end up with this:
>>json=# select * from pg_stat_vacuum_progress;
Hello Thom,
>Okay, I've just tested this with a newly-loaded table (1,252,973 of jsonb
>data),
Thanks a lot!
>but after it's finished, I end up with this:
>json=# select * from pg_stat_vacuum_progress;
>-[ RECORD 1 ]---+---
>pid | 5569
>total_pages | 217941
On 11 September 2015 at 15:43, Syed, Rahila wrote:
>
> Hello,
>
> Please find attached updated VACUUM progress checker patch.
> Following have been accomplished in the patch
>
> 1. Accounts for index pages count while calculating total progress of
> VACUUM.
> 2. Common
On 11 September 2015 at 15:43, Syed, Rahila wrote:
>
> Hello,
>
> Please find attached updated VACUUM progress checker patch.
> Following have been accomplished in the patch
>
> 1. Accounts for index pages count while calculating total progress of
> VACUUM.
> 2. Common
>This doesn't seem to compile
Oh. It compiled successfully when applied on HEAD on my machine. Anyways,
the OID is changed to 3309 in the attached patch. 3308 and 3309 are both
in the unused OID list.
Thank you,
Rahila Syed
Vacuum_progress_checker_v2.patch
Description: Binary data
--
Rahila Syed wrote:
> >This doesn't seem to compile
> Oh. It compiled successfully when applied on HEAD on my machine. Anyways,
> the OID is changed to 3309 in the attached patch. 3308 / 3309 both are part
> of OIDs in unused OID list.
I think Thom may have patched on top of some other patch.
--
On 11 September 2015 at 22:34, Alvaro Herrera
wrote:
> Rahila Syed wrote:
> > >This doesn't seem to compile
> > Oh. It compiled successfully when applied on HEAD on my machine. Anyways,
> > the OID is changed to 3309 in the attached patch. 3308 / 3309 both are
> part
>
Hello,
Please find attached updated VACUUM progress checker patch.
Following have been accomplished in the patch
1. Accounts for index pages count while calculating total progress of VACUUM.
2. Common location for storing progress parameters for any command. Idea is
every command which needs
Hello,
On Jul 16, 2015 1:48 AM, "Rahila Syed" wrote:
>
> Hello,
>
> Please find attached updated patch with an interface to calculate
> command progress in pgstat.c. This interface currently
> implements VACUUM progress tracking.
I have added this patch to CommitFest
On Mon, Aug 10, 2015 at 12:36 AM, Rahila Syed rahilasye...@gmail.com wrote:
Hello,
Say, 6 bigint counters, 6 float8
counters, and 3 strings up to 80 characters each. So we have a
fixed-size chunk of shared memory per backend, and each backend that
wants to expose progress information can fill
In case of vacuum, I think we need to track the number of scanned heap
pages at least, and the information about index scan is the additional
information
Actually the progress of the heap page scan depends on index scans. So complete
VACUUM progress
needs to have a count of index pages scanned too.
Hello,
Autovacuum knows what % of a table needs to be cleaned - that is how it is
triggered.
When a vacuum runs we should calculate how many TIDs we will collect and
therefore how many trips to the indexes we need for given memory.
We can use the VM to find out how many blocks we'll need to
On 10 August 2015 at 17:50, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Masahiko Sawada wrote:
This topic may have been already discussed but, why don't we use just
total scanned pages and total pages?
Because those numbers don't extrapolate nicely. If the density of dead
tuples is
On Tue, Aug 11, 2015 at 1:50 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Masahiko Sawada wrote:
This topic may have been already discussed but, why don't we use just
total scanned pages and total pages?
Because those numbers don't extrapolate nicely. If the density of dead
tuples is
Masahiko Sawada wrote:
This topic may have been already discussed but, why don't we use just
total scanned pages and total pages?
Because those numbers don't extrapolate nicely. If the density of dead
tuples is irregular across the table, such absolute numbers might be
completely meaningless:
Hello,
When we're in Phase2 or 3, don't we need to report the number of total page
scanned or percentage of how many table pages scanned, as well?
The total heap pages scanned need to be reported with phase 2 or 3. A complete
progress report needs to have numbers from each phase when applicable.
On Tue, Aug 11, 2015 at 12:20 AM, Simon Riggs si...@2ndquadrant.com wrote:
On 10 August 2015 at 15:59, Syed, Rahila rahila.s...@nttdata.com wrote:
Hello,
When we're in Phase2 or 3, don't we need to report the number of total
page scanned or percentage of how many table pages scanned, as
On 10 August 2015 at 15:59, Syed, Rahila rahila.s...@nttdata.com wrote:
Hello,
When we're in Phase2 or 3, don't we need to report the number of total
page scanned or percentage of how many table pages scanned, as well?
The total heap pages scanned need to be reported with phase 2 or 3.
On Mon, Aug 10, 2015 at 1:36 PM, Rahila Syed rahilasye...@gmail.com wrote:
Hello,
Say, 6 bigint counters, 6 float8
counters, and 3 strings up to 80 characters each. So we have a
fixed-size chunk of shared memory per backend, and each backend that
wants to expose progress information can fill in
Hello,
Say, 6 bigint counters, 6 float8
counters, and 3 strings up to 80 characters each. So we have a
fixed-size chunk of shared memory per backend, and each backend that
wants to expose progress information can fill in those fields however
it likes, and we expose the results.
This would be
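The fixed-size chunk proposal above could be sketched as a per-backend struct (field and type names are illustrative, not from any posted patch):

```c
#include <assert.h>
#include <stdint.h>

#define PROGRESS_N_INT    6
#define PROGRESS_N_FLOAT  6
#define PROGRESS_N_STR    3
#define PROGRESS_STR_LEN  80

/* One fixed-size chunk of shared memory per backend; any command that
 * wants to expose progress fills in whatever subset of these it likes. */
typedef struct BackendCommandProgress
{
    int64_t st_progress_int[PROGRESS_N_INT];
    double  st_progress_float[PROGRESS_N_FLOAT];
    char    st_progress_str[PROGRESS_N_STR][PROGRESS_STR_LEN + 1];
} BackendCommandProgress;
```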
The total number of heap pages is known, and the total number of index
pages is also known, so it's possible to derive a percentage out of
this part.
The total number of index pages scanned during the entire vacuum will depend on
the number of index scans that happen.
In order to extrapolate percent
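Since the number of index passes is unknown up front, any percentage has to assume an estimated pass count. One hedged way to write the extrapolation (everything here is illustrative; the final heap pass is ignored for simplicity):

```c
#include <assert.h>

/* Total work = heap pages + index pages times the estimated number of
 * index scans; the estimate can only be firmed up as vacuum proceeds. */
static double
vacuum_percent_done(long heap_scanned, long heap_total,
                    long index_scanned, long index_total,
                    int est_index_scans)
{
    double total = (double) heap_total + (double) index_total * est_index_scans;
    double done  = (double) heap_scanned + (double) index_scanned;
    return total > 0 ? 100.0 * done / total : 0.0;
}
```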
On 07/31/2015 11:21 AM, Alvaro Herrera wrote:
This looks pretty complicated to understand from the user POV, but
anything other than this seems to me too simplistic to be of any use.
I would agree and I don't think it is all that complicated. This is an
RDBMS not a web browser downloading
I think the only way to produce usable estimates is to report more than
one number. And in the particular case of lazy vacuuming, ISTM the way
to do it is to consider heap scanning as one phase, index cleanup as
another phase; these two phases can be interleaved. And there's a final
heap scan
On Fri, Jul 31, 2015 at 2:21 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
I think the only way to produce usable estimates is to report more than
one number. And in the particular case of lazy vacuuming, ISTM the way
to do it is to consider heap scanning as one phase, index cleanup as
Robert Haas wrote:
On Fri, Jul 31, 2015 at 2:21 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
I think the only way to produce usable estimates is to report more than
one number. And in the particular case of lazy vacuuming, ISTM the way
to do it is to consider heap scanning as one
I think it's related to the problem of figuring out how many dead tuples
you expect to find in the overall heap, which you need to do to have any
hope of this being a comprehensive estimate.
An estimate of the number of index scans while vacuuming can be done using an
estimate of total dead tuples in
2015-07-25 0:41 GMT+02:00 Josh Berkus j...@agliodbs.com:
On 07/24/2015 11:06 AM, Jim Nasby wrote:
On 7/23/15 5:18 AM, Thakur, Sameer wrote:
Hello,
logged 25 times
Sorry, it is much lower at 7 times. Does not change overall point though
I think it's related to the problem of figuring
On 7/23/15 5:18 AM, Thakur, Sameer wrote:
Hello,
logged 25 times
Sorry, it is much lower at 7 times. Does not change overall point though
I think it's related to the problem of figuring out how many dead tuples
you expect to find in the overall heap, which you need to do to have any
hope
On Fri, Jul 24, 2015 at 2:00 PM, Jim Nasby jim.na...@bluetreble.com wrote:
You need to restrict the amount of info, because you've got to
preallocate enough shared memory to store all the data that somebody
might report.
I was thinking your DSM stuff would come into play here. We wouldn't
On 7/23/15 2:43 PM, Robert Haas wrote:
On Wed, Jul 22, 2015 at 11:28 AM, Jim Nasby jim.na...@bluetreble.com wrote:
If we want to expose that level of detail, I think either JSON or arrays
would make more sense, so we're not stuck with a limited amount of info.
Perhaps DDL would be OK with the
On 07/24/2015 11:06 AM, Jim Nasby wrote:
On 7/23/15 5:18 AM, Thakur, Sameer wrote:
Hello,
logged 25 times
Sorry, it is much lower at 7 times. Does not change overall point though
I think it's related to the problem of figuring out how many dead tuples
you expect to find in the overall
Something like that would be helpful. I just had to stop one after an
hour and have no idea how much longer it would have taken.
Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR
On Wed, Jul 22, 2015 at 11:28 AM, Jim Nasby jim.na...@bluetreble.com wrote:
If we want to expose that level of detail, I think either JSON or arrays
would make more sense, so we're not stuck with a limited amount of info.
Perhaps DDL would be OK with the numbers you suggested, but
Hello,
Yes. Any percent completion calculation will have to account for the case of
needing multiple passes through all the indexes.
Each dead tuple requires 6 bytes (IIRC) of maintenance work mem. So if you're
deleting 5M rows with m_w_m=1MB you should be getting many passes through the
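The arithmetic in that example: at 6 bytes of maintenance_work_mem per dead TID (the figure quoted above), the memory budget fixes how many dead tuples fit per index-cleanup pass, and hence how many passes a given delete forces. A sketch using the usual ceiling-division idiom:

```c
#include <assert.h>

#define TID_BYTES 6   /* bytes of maintenance_work_mem per dead tuple, per the thread */

/* Dead tuples that fit per pass is mwm_bytes / TID_BYTES; the number of
 * index-cleanup passes is the ceiling of n_dead over that capacity. */
static long
index_cleanup_passes(long n_dead, long mwm_bytes)
{
    long per_pass = mwm_bytes / TID_BYTES;
    return (n_dead + per_pass - 1) / per_pass;
}
```

With the numbers from the message, 5M dead rows and m_w_m = 1MB (174,762 TIDs per pass) come out to 29 passes through the indexes.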
Hello,
logged 25 times
Sorry, it is much lower at 7 times. Does not change overall point though
regards
Sameer Thakur | Senior Software Specialist | NTTDATA Global Delivery Services
Private Ltd | w. +91.20.6641.7146 | VoIP: 8834.8146 | m. +91 989.016.6656 |
sameer.tha...@nttdata.com | Follow us
On 7/22/15 6:58 AM, Amit Langote wrote:
On Wed, Jul 22, 2015 at 8:19 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Not sure what Jim meant. Maybe he meant to be aware of when spilling to
disk happens? Obviously, things become slower, so maybe you need to
consider it for progress
On 7/22/15 9:15 AM, Robert Haas wrote:
I'm not proposing this feature, I'm merely asking for it to be defined in a
way that makes it work for more than just VACUUM. Once we have a way of
reporting useful information, other processes can be made to follow that
mechanism, like REINDEX, ALTER TABLE
On 21 July 2015 at 21:24, Robert Haas robertmh...@gmail.com wrote:
On Tue, Jun 30, 2015 at 4:32 AM, Simon Riggs si...@2ndquadrant.com
wrote:
Yes, I suggest just a single column on pg_stat_activity called
pct_complete
trace_completion_interval = 5s (default)
Every interval, we report
Hello,
I think it'd be better to combine both numbers into one report:
It'd also be good to standardize on where the * 100 is happening.
Done
can be replaced by
(itemptr->ip_blkid != vacrelstats->last_scanned_page)
Get compiler error: invalid operands to binary != (have ‘BlockIdData’ and
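The error arises because BlockIdData is a struct and C has no structure comparison with !=. The usual fix is to compare extracted block numbers via the accessor macro. A simplified reconstruction (the struct layout mirrors PostgreSQL's; the surrounding names come from the quoted snippet):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified versions of PostgreSQL's block-id types and accessor. */
typedef struct BlockIdData { uint16_t bi_hi; uint16_t bi_lo; } BlockIdData;
typedef uint32_t BlockNumber;

#define BlockIdGetBlockNumber(blockId) \
    ((BlockNumber) (((uint32_t) (blockId)->bi_hi << 16) | (blockId)->bi_lo))

/* (itemptr->ip_blkid != vacrelstats->last_scanned_page) fails to compile;
 * comparing the extracted block numbers works: */
static int
on_new_page(const BlockIdData *ip_blkid, BlockNumber last_scanned_page)
{
    return BlockIdGetBlockNumber(ip_blkid) != last_scanned_page;
}
```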
Thakur, Sameer wrote:
Hello,
I think it'd be better to combine both numbers into one report:
It'd also be good to standardize on where the * 100 is happening.
Done
can be replaced by
(itemptr->ip_blkid != vacrelstats->last_scanned_page)
Get compiler error: invalid operands to binary != (have
On Wed, Jul 22, 2015 at 8:19 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Not sure what Jim meant. Maybe he meant to be aware of when spilling to
disk happens? Obviously, things become slower, so maybe you need to
consider it for progress reporting purposes.
Perhaps the m_w_m
On Wed, Jul 22, 2015 at 3:02 AM, Simon Riggs si...@2ndquadrant.com wrote:
For me, the user workflow looks like these
Worried: Task X is taking ages? When is it expected to finish?
Ops: 13:50
sometime later, about 14:00
Worried: Task X is still running? But I thought its ETA was 13:50?
On 22 July 2015 at 13:00, Robert Haas robertmh...@gmail.com wrote:
On Wed, Jul 22, 2015 at 3:02 AM, Simon Riggs si...@2ndquadrant.com
wrote:
For me, the user workflow looks like these
Worried: Task X is taking ages? When is it expected to finish?
Ops: 13:50
sometime later, about
On Wed, Jul 22, 2015 at 8:24 AM, Simon Riggs si...@2ndquadrant.com wrote:
* An estimate of the estimated time of completion - I liked your view that
this prediction may be costly to request
I'm saying it may be massively unreliable, not that it may be costly.
(Someone else may have said that it
On 7/20/15 4:32 AM, Thakur, Sameer wrote:
Hello,
Does this actually handle multiple indexes? It doesn't appear so, which I'd
think is a significant problem... :/
Please find v2 attached which does this.
I think it'd be better to combine both numbers into one report:
On Tue, Jun 30, 2015 at 4:32 AM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, I suggest just a single column on pg_stat_activity called pct_complete
trace_completion_interval = 5s (default)
Every interval, we report the current % complete for any operation that
supports it. We just show
Hello,
Does this actually handle multiple indexes? It doesn't appear so, which I'd
think is a significant problem... :/
Please find v2 attached which does this.
I'm also not seeing how this will deal with exhausting maintenance_work_mem.
ISTM that when that happens you'd definitely want a better
On 7/15/15 11:38 PM, Thakur, Sameer wrote:
Hello,
I am not really willing to show up as the picky guy here, but could it be possible
to receive those patches as attached to emails instead of having them referenced
by URL? I imagine that you are directly using the nabble interface.
Just
Hello,
Naming the GUC pgstat* seems a little inconsistent.
Sorry, there is a typo in the mail. The GUC name is 'track_activity_progress'.
Also, adding the new GUC to src/backend/utils/misc/postgresql.conf.sample
might be helpful
Yes. I will update.
Thank you,
Rahila Syed
Hi
On Wed, Jul 15, 2015 at 9:27 PM, Fujii Masao masao.fu...@gmail.com wrote:
On Thu, Jul 16, 2015 at 5:18 AM, Rahila Syed rahilasye...@gmail.com
wrote:
Hello,
Please find attached updated patch with an interface to calculate command
progress in pgstat.c.
Thanks for updating the
On Thu, Jul 16, 2015 at 5:18 AM, Rahila Syed rahilasye...@gmail.com wrote:
Hello,
Please find attached updated patch with an interface to calculate command
progress in pgstat.c.
Thanks for updating the patch!
I got the following compiler warning.
guc.c:2316: warning: initialization makes
Hello,
Your current design completely misses the time taken to scan indexes, which
is significant.
I tried to address this issue in the attached patch.
The patch calculates index scan progress by measuring against scanned pages
in LVRelStats. It checks for a change in the current page being scanned and
On Thu, Jul 16, 2015 at 1:30 PM, Sameer Thakur-2 wrote:
Thoughts?
regards
Sameer IndexScanProgress.patch
http://postgresql.nabble.com/file/n5858109/IndexScanProgress.patch
I am not really willing to show up as the picky guy here, but could it
be possible to receive those patches as attached