Looking at the size of the persistence files isn’t necessarily a good
metric, as Ilya notes. Still, the files physically occupy disk space, so in
this case I don’t know whether I need to increase disk space or not.
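As a rough back-of-envelope check (a sketch only: it assumes a perfectly even rebalance and takes the ~120GB per-node figure from the 2-node logs further down; the function name is made up for illustration):

```python
# Toy estimate of the per-node persistence footprint after an even rebalance.
# Each entry is stored (1 + backups) times across the cluster, and an even
# rebalance spreads that total over all server nodes.
def expected_per_node_gb(total_primary_gb, backups, nodes):
    total_stored = total_primary_gb * (1 + backups)
    return total_stored / nodes

print(expected_per_node_gb(120, 1, 2))  # 120.0 -> matches the old 2-node logs
print(expected_per_node_gb(120, 1, 3))  # 80.0  -> expected after adding a node
```

On that arithmetic, ~80GB per node after adding the third node would be the even-distribution target, which is roughly what servers 1 and 2 show; server-3’s ~16GB is the outlier.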

On Tue, 9 Nov 2021 at 14:17, Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> It shows how the data is distributed.
>
> Looking at the size of the persistence files isn’t necessarily a good
> metric, as Ilya notes.
>
> On 9 Nov 2021, at 10:21, Ibrahim Altun <ibrahim.al...@segmentify.com>
> wrote:
>
> @ilya We did not force the system to remove any keys, we just added a new
> node to the cluster, so my expectation was that the system would distribute
> the ~120GB of data across the nodes and that we would see ~80GB of data on
> each node (a total of ~60GB of primary data (without backups), divided into
> 3 nodes, ~40GB primary + ~40GB backup).
>
> @Stephen what will the distribution indicate? Is there a way to force the
> cluster to redistribute all the data?
>
> On Tue, 9 Nov 2021 at 12:31, Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> You can check the distribution with the control script:
>>
>> ./control.sh --cache distribution null
>>
>> This displays the number of records per partition for all caches and
>> nodes.
>>
>> On 8 Nov 2021, at 20:24, Shishkov Ilya <shishkovi...@gmail.com> wrote:
>>
>> Hi!
>> As far as I know, if you remove some cache entries (keys), their
>> corresponding segments in the data pages remain empty in the persistent
>> store until you insert those keys back into Ignite, so it looks like
>> voids in the data pages. Rebalanced data, however, is written to
>> persistent storage without any such "voids", i.e. it is already compacted
>> during the rebalance routine.
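Ilya’s point can be illustrated with a toy model (illustration only: the page size and functions below are hypothetical, not Ignite internals):

```python
# Toy model of a page-based store: a partition file is a run of fixed-size
# pages, some holding live entries and some freed by deletes.
PAGE_KB = 4  # arbitrary page size for the sketch

def size_on_old_node_kb(live_pages, freed_pages):
    # Removing entries only marks pages as free; the file keeps its length,
    # so the freed pages still occupy disk as "voids".
    return (live_pages + freed_pages) * PAGE_KB

def size_after_rebalance_kb(live_pages, freed_pages):
    # Rebalance streams only live data, so the copy on the new node is
    # written compactly, without the voids.
    return live_pages * PAGE_KB

print(size_on_old_node_kb(600, 400))      # 4000 KB: voids still occupy disk
print(size_after_rebalance_kb(600, 400))  # 2400 KB: compact copy
```

This is why a freshly rebalanced node can legitimately report a much smaller persistence footprint than nodes that have accumulated deletes.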
>>
>> On Mon, 8 Nov 2021 at 17:36, Ibrahim Altun <ibrahim.al...@segmentify.com>
>> wrote:
>>
>>> Hi,
>>>
>>> We had a 2-node cluster with persistence enabled and 1 backup configured;
>>> this morning we added a new node to the cluster.
>>> Although rebalancing has finished, the Ignite persistence values are not
>>> evenly distributed:
>>>
>>> server-1:
>>> [2021-11-08T14:31:29,371][INFO ][grid-timeout-worker-#13][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=d9a3fb2f, uptime=01:35:00.491]
>>>     ^-- Cluster [hosts=8, CPUs=32, servers=3, clients=7, topVer=36,
>>> minorTopVer=3]
>>>     ^-- Network [addrs=[10.156.0.112, 127.0.0.1], discoPort=47500,
>>> commPort=47100]
>>>     ^-- CPU [CPUs=4, curLoad=0.33%, avgLoad=3.48%, GC=0%]
>>>     ^-- Heap [used=5418MB, free=33.85%, comm=8192MB]
>>>     ^-- Off-heap memory [used=6025MB, free=7.92%, allocated=6344MB]
>>>     ^-- Page memory [pages=1524682]
>>>     ^--   sysMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.99%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   default region [type=default, persistence=true,
>>> lazyAlloc=true,
>>>       ...  initCfg=256MB, maxCfg=6144MB, usedRam=6025MB, freeRam=1.93%,
>>> allocRam=6144MB, allocTotal=80891MB]
>>>     ^--   metastoreMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.81%,
>>> allocRam=0MB, allocTotal=0MB]
>>>     ^--   TxLog region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   volatileDsMemPlc region [type=user, persistence=false,
>>> lazyAlloc=true,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=0MB]
>>>     ^-- Ignite persistence [used=80891MB]
>>>     ^-- Outbound messages queue [size=0]
>>>     ^-- Public thread pool [active=0, idle=0, qSize=0]
>>>     ^-- System thread pool [active=0, idle=4, qSize=0]
>>>
>>> server-2:
>>> [2021-11-08T14:31:20,475][INFO ][grid-timeout-worker-#13][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=d001436d, uptime=00:46:00.231]
>>>     ^-- Cluster [hosts=8, CPUs=32, servers=3, clients=7, topVer=36,
>>> minorTopVer=3]
>>>     ^-- Network [addrs=[10.156.0.113, 127.0.0.1], discoPort=47500,
>>> commPort=47100]
>>>     ^-- CPU [CPUs=4, curLoad=4.43%, avgLoad=5.11%, GC=0%]
>>>     ^-- Heap [used=6468MB, free=21.04%, comm=8192MB]
>>>     ^-- Off-heap memory [used=6025MB, free=7.92%, allocated=6344MB]
>>>     ^-- Page memory [pages=1524684]
>>>     ^--   sysMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.99%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   default region [type=default, persistence=true,
>>> lazyAlloc=true,
>>>       ...  initCfg=256MB, maxCfg=6144MB, usedRam=6025MB, freeRam=1.93%,
>>> allocRam=6144MB, allocTotal=82852MB]
>>>     ^--   metastoreMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.8%,
>>> allocRam=0MB, allocTotal=0MB]
>>>     ^--   TxLog region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   volatileDsMemPlc region [type=user, persistence=false,
>>> lazyAlloc=true,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=0MB]
>>>     ^-- Ignite persistence [used=82852MB]
>>>     ^-- Outbound messages queue [size=0]
>>>     ^-- Public thread pool [active=0, idle=0, qSize=0]
>>>     ^-- System thread pool [active=0, idle=4, qSize=0]
>>>
>>> server-3:
>>> [2021-11-08T14:31:21,364][INFO ][grid-timeout-worker-#13][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=186395d1, uptime=03:36:01.279]
>>>     ^-- Cluster [hosts=8, CPUs=32, servers=3, clients=7, topVer=36,
>>> minorTopVer=3]
>>>     ^-- Network [addrs=[10.156.0.10, 127.0.0.1], discoPort=47500,
>>> commPort=47100]
>>>     ^-- CPU [CPUs=4, curLoad=0.4%, avgLoad=8.55%, GC=0%]
>>>     ^-- Heap [used=6153MB, free=24.89%, comm=8192MB]
>>>     ^-- Off-heap memory [used=6025MB, free=7.92%, allocated=6344MB]
>>>     ^-- Page memory [pages=1524749]
>>>     ^--   sysMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.98%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   default region [type=default, persistence=true,
>>> lazyAlloc=true,
>>>       ...  initCfg=256MB, maxCfg=6144MB, usedRam=6025MB, freeRam=1.93%,
>>> allocRam=6144MB, allocTotal=16164MB]
>>>     ^--   metastoreMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.56%,
>>> allocRam=0MB, allocTotal=0MB]
>>>     ^--   TxLog region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   volatileDsMemPlc region [type=user, persistence=false,
>>> lazyAlloc=true,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=0MB]
>>>     ^-- Ignite persistence [used=16165MB]
>>>     ^-- Outbound messages queue [size=0]
>>>     ^-- Public thread pool [active=0, idle=0, qSize=0]
>>>     ^-- System thread pool [active=0, idle=4, qSize=0]
>>>
>>> Before the new node was added to the cluster, the data was evenly
>>> distributed:
>>>
>>> server-1:
>>> [2021-11-08T06:00:52,779][INFO ][grid-timeout-worker-#13][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=cde7abcb, uptime=4 days, 03:38:05.557]
>>>     ^-- Cluster [hosts=7, CPUs=28, servers=2, clients=7, topVer=17,
>>> minorTopVer=0]
>>>     ^-- Network [addrs=[10.156.0.112, 127.0.0.1], discoPort=47500,
>>> commPort=47100]
>>>     ^-- CPU [CPUs=4, curLoad=100%, avgLoad=5.78%, GC=117%]
>>>     ^-- Heap [used=7669MB, free=6.37%, comm=8192MB]
>>>     ^-- Off-heap memory [used=6025MB, free=7.92%, allocated=6344MB]
>>>     ^-- Page memory [pages=1524669]
>>>     ^--   sysMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.99%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   default region [type=default, persistence=true,
>>> lazyAlloc=true,
>>>       ...  initCfg=256MB, maxCfg=6144MB, usedRam=6025MB, freeRam=1.93%,
>>> allocRam=6144MB, allocTotal=121061MB]
>>>     ^--   metastoreMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.86%,
>>> allocRam=0MB, allocTotal=0MB]
>>>     ^--   TxLog region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   volatileDsMemPlc region [type=user, persistence=false,
>>> lazyAlloc=true,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=0MB]
>>>     ^-- Ignite persistence [used=121061MB]
>>>     ^-- Outbound messages queue [size=1]
>>>     ^-- Public thread pool [active=0, idle=0, qSize=0]
>>>     ^-- System thread pool [active=1, idle=3, qSize=1]
>>>
>>> server-2:
>>> [2021-11-08T06:00:37,491][INFO ][grid-timeout-worker-#13][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=90f15a32, uptime=4 days, 03:37:54.391]
>>>     ^-- Cluster [hosts=7, CPUs=28, servers=2, clients=7, topVer=17,
>>> minorTopVer=0]
>>>     ^-- Network [addrs=[10.156.0.113, 127.0.0.1], discoPort=47500,
>>> commPort=47100]
>>>     ^-- CPU [CPUs=4, curLoad=2.6%, avgLoad=7.38%, GC=0%]
>>>     ^-- Heap [used=7877MB, free=3.83%, comm=8192MB]
>>>     ^-- Off-heap memory [used=6025MB, free=7.92%, allocated=6344MB]
>>>     ^-- Page memory [pages=1524670]
>>>     ^--   sysMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.99%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   default region [type=default, persistence=true,
>>> lazyAlloc=true,
>>>       ...  initCfg=256MB, maxCfg=6144MB, usedRam=6025MB, freeRam=1.93%,
>>> allocRam=6144MB, allocTotal=121310MB]
>>>     ^--   metastoreMemPlc region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=99.86%,
>>> allocRam=0MB, allocTotal=0MB]
>>>     ^--   TxLog region [type=internal, persistence=true,
>>> lazyAlloc=false,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=100MB, allocTotal=0MB]
>>>     ^--   volatileDsMemPlc region [type=user, persistence=false,
>>> lazyAlloc=true,
>>>       ...  initCfg=40MB, maxCfg=100MB, usedRam=0MB, freeRam=100%,
>>> allocRam=0MB]
>>>     ^-- Ignite persistence [used=121310MB]
>>>     ^-- Outbound messages queue [size=0]
>>>     ^-- Public thread pool [active=0, idle=0, qSize=0]
>>>     ^-- System thread pool [active=0, idle=4, qSize=0]
>>>
>>>
>>> My expectation is that the data would be distributed evenly.
>>>
>>> What am I missing?
>>>
>>> Regards.
>>>
>>> --
>>> <https://www.segmentify.com/> İbrahim Halil Altun, Senior Software
>>> Engineer, +90 536 3327510 • segmentify.com → <https://www.segmentify.com/>
>>> UK • Germany • Turkey <https://www.segmentify.com/ecommerce-growth-show>
>>> <https://www.g2.com/products/segmentify/reviews>
>>>
>>
>>
>>
>

