On Sat, 30 Nov 2019 at 22:11, Mahendra Singh <mahi6...@gmail.com> wrote:
>
> On Sat, 30 Nov 2019 at 19:18, Sergei Kornilov <s...@zsrv.org> wrote:
>>
>> Hello
>>
>> Is it possible to change the order of index processing by the parallel
>> leader? In the v35 patchset I see the following order:
>> - start parallel processes
>> - the leader and parallel workers process the index list and may skip
>> some entries
>> - after that, the parallel leader rechecks the index list and processes
>> the skipped indexes
>> - WaitForParallelWorkersToFinish
>>
>> I think it would be better to:
>> - start parallel processes
>> - the parallel leader goes through the index list and processes only the
>> indexes with skip_parallel_index_vacuum = true
>> - the parallel workers process the indexes with skip_parallel_index_vacuum = false
>> - the parallel leader then participates in processing the remaining
>> parallel-safe indexes
>> - WaitForParallelWorkersToFinish
>>
>> This would give a shorter running time and a better load balance across
>> the leader and workers when there are a few non-parallel and a few
>> parallel indexes.
>> (If the current order is intentional for some reason, we need a comment
>> in the code.)
>>
>> Also, a few notes on vacuumdb:
>> It seems we need a server version check, at least in vacuum_one_database
>> and prepare_vacuum_command, similar to the SKIP_LOCKED and
>> DISABLE_PAGE_SKIPPING features.
>> Discussion question: will the difference between the --parallel and
>> --jobs parameters be confusing? We need a better description of these
>> options.
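>>
>> To illustrate the difference (a rough sketch; t1 and t2 are made-up
>> table names, and the PARALLEL option syntax is the one from this patch):
>>
>>   -- --jobs=2: two concurrent connections, each running a plain VACUUM
>>   -- on its own table
>>   VACUUM t1;              -- connection 1
>>   VACUUM t2;              -- connection 2
>>
>>   -- --parallel=2: each VACUUM itself uses up to two parallel workers
>>   -- for index processing
>>   VACUUM (PARALLEL 2) t1;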
>
>
> While testing with different server configuration settings, I am getting
> an error (ERROR:  no unpinned buffers available) in parallel vacuum, but
> normal vacuum works fine.
>
> Test Setup:
> max_worker_processes = 40
> autovacuum = off
> shared_buffers = 128kB
> max_parallel_workers = 40
> max_parallel_maintenance_workers = 40
> vacuum_cost_limit = 2000
> vacuum_cost_delay = 10
>
> Table description: the table has 16 indexes (14 btree, 1 hash, 1 BRIN)
> and 1,000,000 tuples in total. I delete all the tuples and then run the
> vacuum command.
> Run the attached .sql file (test_16_indexes.sql):
> $ ./psql postgres
> postgres=# \i test_16_indexes.sql
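>
> (For reference, a rough equivalent of the attached script; the actual
> test_16_indexes.sql may differ:)
>
>   CREATE TABLE test (a int);
>   -- 14 btree indexes on the same column
>   DO $$
>   BEGIN
>     FOR i IN 1..14 LOOP
>       EXECUTE format('CREATE INDEX test_btree_%s ON test USING btree (a)', i);
>     END LOOP;
>   END $$;
>   CREATE INDEX test_hash ON test USING hash (a);
>   CREATE INDEX test_brin ON test USING brin (a);
>   INSERT INTO test SELECT generate_series(1, 1000000);
>   DELETE FROM test;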
>
> Restart the server and run vacuum.
> Case 1) normal vacuum:
> postgres=# vacuum test ;
> VACUUM
> Time: 115174.470 ms (01:55.174)
>
> Case 2) parallel vacuum using 10 parallel workers:
> postgres=# vacuum (parallel 10)test ;
> ERROR:  no unpinned buffers available
> CONTEXT:  parallel worker
> postgres=#
>
> This error comes from the 128kB shared buffer setting. I launched 10
> parallel workers, and since they are all working in parallel with such a
> small shared buffer, I get this error.
>

Thank you for testing!

> Is this expected behavior with a small shared buffer size, or should we
> try to come up with a solution for it? Please let me know your thoughts.

I think this is normal behavior when the shared buffers are not large
enough. Since the 10 workers (plus the leader) were processing different
pages at the same time and you set shared_buffers to a very small value,
the shared buffers fill up easily, and you got the appropriate error.
With the default 8kB block size, 128kB of shared_buffers is only 16
buffers, so a handful of concurrent backends can easily pin them all at
once. So in this case I think we should consider either increasing the
shared buffer size or decreasing the parallel degree. I guess you could
get this error even by vacuuming 10 different tables concurrently
instead.
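
For example, something like this (the 128MB value here is just an
illustration):

  SHOW shared_buffers;                        -- 128kB in this setup
  ALTER SYSTEM SET shared_buffers = '128MB';  -- requires a server restart
  -- or, alternatively, reduce the parallel degree:
  VACUUM (PARALLEL 2) test;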

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

