Realtime CDC demo

2024-05-20 Thread 38797715

Hi team,

We have seen that Ignite 2.16 released a real-time CDC feature, but we 
have not found the corresponding documentation or demo code. Does the 
community have any demo code to share?
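
Not an official demo, but for context: CDC in Ignite is WAL-based and was introduced around 2.12; capture is enabled per data region and a separate consumer process reads the WAL archive. A hedged configuration sketch (property names reflect our reading of the docs and should be verified against the 2.16 release):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <property name="defaultDataRegionConfiguration">
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <!-- CDC requires native persistence on the region. -->
          <property name="persistenceEnabled" value="true"/>
          <!-- Enables WAL-based change data capture for this region. -->
          <property name="cdcEnabled" value="true"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```

With this in place, a consumer (an implementation of the CdcConsumer interface, run via the ignite-cdc.sh utility from the CDC extension) receives the change events; see the ignite-extensions repository for consumer examples.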


Ignite process memory usage continues to grow in docker

2024-04-20 Thread 38797715

Hi team,

We start a plain Ignite process in Docker with the default configuration 
and no workload. After a while, the memory occupied by the Ignite 
process keeps growing until the OOM killer is triggered.


If Docker is not used and the Ignite process is started directly on the 
host, the problem does not occur.


Our testing shows the issue is unrelated to the Docker version, JDK 
version, or Ignite version.


Through monitoring we can see that the JVM heap, non-heap, and off-heap 
spaces are all normal.


Has anyone in the community encountered the same problem?
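
One commonly reported cause of container-only RSS growth in JVM processes, with heap/off-heap metrics all normal, is native-memory fragmentation from glibc's per-thread malloc arenas. A hedged mitigation sketch (the image tag is illustrative, and 2 is a commonly tried value rather than a tuned one):

```dockerfile
# Hypothetical Dockerfile fragment: cap glibc malloc arenas to limit
# native-memory fragmentation; compare RSS growth before and after.
FROM apacheignite/ignite:2.16.0
ENV MALLOC_ARENA_MAX=2
```

If RSS still grows, comparing `pmap -x <pid>` snapshots over time, or enabling JVM Native Memory Tracking, can help attribute the growth.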


Does SQL Query support TouchedExpiryPolicy?

2023-12-14 Thread 38797715
From the Javadoc: "An ExpiryPolicy that defines the expiry Duration of a 
Cache Entry based on when it was last touched. A touch includes 
creation, update or access."


So, do SQL queries support this expiration policy?
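
For reference, the policy itself is attached to a cache through the standard JCache factory. A hedged Spring XML sketch, mirroring the pattern Ignite's documentation uses for CreatedExpiryPolicy (the cache name is illustrative, and whether a SQL read counts as a "touch" is exactly the open question of this thread):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="myCache"/>
  <!-- Entry expires 30 s after the last create/update/read touch. -->
  <property name="expiryPolicyFactory">
    <bean class="javax.cache.expiry.TouchedExpiryPolicy" factory-method="factoryOf">
      <constructor-arg>
        <bean class="javax.cache.expiry.Duration">
          <constructor-arg value="SECONDS"/>
          <constructor-arg value="30"/>
        </bean>
      </constructor-arg>
    </bean>
  </property>
</bean>
```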


How to switch a node from normal mode to maintenance mode?

2023-09-17 Thread 38797715

Hi,

We found that control.sh has '--persistence' and '--defragmentation 
schedule' commands; however, these commands require Ignite to run in 
maintenance mode.


So, how do we switch a node from normal mode to maintenance mode?


How to delete data of a specified partition with high performance

2023-02-28 Thread 38797715

hi,

How can we delete the data of a specified partition (or a specified 
affinity key) with high performance (e.g. multi-threaded)?
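
For the affinity-key case, a plain SQL DELETE on the affinity column removes exactly the rows co-located under that key; since distinct affinity keys map to distinct partitions, deletes for different keys touch disjoint data and can be issued from different threads. A hedged sketch using the City table that appears in later threads of this digest (CountryCode is its affinity key):

```sql
-- Each affinity key maps to one partition, so these two statements
-- operate on disjoint partitions and may run concurrently.
DELETE FROM City WHERE CountryCode = 'AFG';
DELETE FROM City WHERE CountryCode = 'NLD';
```

For whole-partition scans without SQL, ScanQuery.setPartition(int) plus key-based removal is another option, though there is no single "drop partition" primitive in the public API as far as we know.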




CachePartialUpdateCheckedException: Failed to update keys (Tail not found: 0)

2022-10-19 Thread 38797715

Hi team,

We encountered the error shown in the title.
We then found a JIRA ticket whose description exactly matches what we 
encountered:


https://issues.apache.org/jira/browse/IGNITE-17734

What are the possible causes of this issue?


Re: partitionLossPolicy confused

2022-10-12 Thread 38797715

https://issues.apache.org/jira/browse/IGNITE-17835

On 2022/9/30 18:14, Вячеслав Коптилин wrote:

Hello,

In general there are two possible ways to handle lost partitions for a 
cluster that uses Ignite Native Persistence:

1.
   - Return all failed nodes to baseline topology.
   - Call resetLostPartitions

2.
   - Stop all remaining nodes in the cluster.
   - Start all nodes in the cluster (including previously failed 
nodes) and activate a cluster.


It's important to return all failed nodes to the topology before 
calling resetLostPartitions; otherwise the cluster could end up with 
stale data.


If some owners cannot be returned to the topology for some reason, they 
should be excluded from the baseline before attempting to reset 
lost-partition state, or a ClusterTopologyCheckedException will be thrown
with a message "Cannot reset lost partitions because no baseline nodes 
are online [cache=someCache, partition=someLostPart]" indicating that 
safe recovery is not possible.


In your particular case the cache has no backups, so returning the node 
that holds a lost partition should not lead to data inconsistencies.
This particular case can be detected and automatically "resolved". I 
will file a JIRA ticket to address this improvement.


Thanks,
Slava.

On Mon, Sep 26, 2022 at 16:51, 38797715 <38797...@qq.com> wrote:

hello,

Start two nodes with native persistence enabled, then activate the cluster.

Create a table with no backups; the SQL is as follows:

CREATE TABLE City (
  ID INT,
  Name VARCHAR,
  CountryCode CHAR(3),
  District VARCHAR,
  Population INT,
  PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, affinityKey=CountryCode,
CACHE_NAME=City, KEY_TYPE=demo.model.CityKey,
VALUE_TYPE=demo.model.City";

INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (1,'Kabul','AFG','Kabol',178);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (2,'Qandahar','AFG','Qandahar',237500);

then execute SELECT COUNT(*) FROM city;

normal.

then kill one node.

then execute SELECT COUNT(*) FROM city;

Failed to execute query because cache partition has been lostPart
[cacheName=City, part=0]

This is also normal.

Next, start the node that was shut down before.

then execute SELECT COUNT(*) FROM city;

Failed to execute query because cache partition has been lostPart
[cacheName=City, part=0]

At this time all partitions have been recovered and all baseline
nodes are ONLINE. Why is this error still reported? It is very
confusing. Executing the reset_lost_partitions operation at this
point seems redundant. Are there any special considerations here?

If we restart the whole cluster at this point and then execute
SELECT COUNT(*) FROM city; it succeeds. The state is the same as
before, but the behavior is different.






partitionLossPolicy confused

2022-09-26 Thread 38797715

hello,

Start two nodes with native persistence enabled, then activate the cluster.

Create a table with no backups; the SQL is as follows:

CREATE TABLE City (
  ID INT,
  Name VARCHAR,
  CountryCode CHAR(3),
  District VARCHAR,
  Population INT,
  PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, affinityKey=CountryCode, CACHE_NAME=City, 
KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Kabul','AFG','Kabol',178);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(2,'Qandahar','AFG','Qandahar',237500);


then execute SELECT COUNT(*) FROM city;

normal.

then kill one node.

then execute SELECT COUNT(*) FROM city;

Failed to execute query because cache partition has been lostPart 
[cacheName=City, part=0]


This is also normal.

Next, start the node that was shut down before.

then execute SELECT COUNT(*) FROM city;

Failed to execute query because cache partition has been lostPart 
[cacheName=City, part=0]


At this time all partitions have been recovered and all baseline nodes 
are ONLINE. Why is this error still reported? It is very confusing. 
Executing the reset_lost_partitions operation at this point seems 
redundant. Are there any special considerations here?


If we restart the whole cluster at this point and then execute SELECT 
COUNT(*) FROM city; it succeeds. The state is the same as before, but 
the behavior is different.







SQL execute method on replicated tables

2022-09-26 Thread 38797715
If all the tables referenced by a SQL query are replicated tables, is 
the SQL executed on all nodes or only on one?


If the SQL is executed on one node, what is the algorithm for selecting 
the execution node?


Re: How to calculate the amount of memory used by a cache or dataregion

2022-09-19 Thread 38797715

We know Ignite has an automatic memory defragmentation feature:

https://ignite.apache.org/docs/latest/memory-architecture#memory-defragmentation

What is the threshold for triggering this feature?

On 2022/9/19 22:36, Alexander Polovtcev wrote:
Hm, I don't know, honestly. We need to look at the code or somebody 
else can give us a hint.


On Mon, Sep 19, 2022 at 2:55 PM 38797715 <38797...@qq.com> wrote:

If native persistence is enabled, the result is the same.

It seems there is a bug in the calculation of the
"PagesFillFactor" metric?

On 2022/9/19 17:30, Alexander Polovtcev wrote:

Do you have native persistence enabled?

On Mon, Sep 19, 2022 at 11:57 AM 38797715 <38797...@qq.com> wrote:

2.10 and 2.13, the same result.

On 2022/9/19 16:54, Alexander Polovtcev wrote:

Which version of Ignite do you use?

On Mon, Sep 19, 2022 at 11:38 AM 38797715 <38797...@qq.com>
wrote:

for example:

start a node by ignite.sh.

then:

CREATE TABLE City (
  ID INT,
  Name VARCHAR,
  CountryCode CHAR(3),
  District VARCHAR,
  Population INT,
  PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, backups=1,
affinityKey=CountryCode, CACHE_NAME=City,
KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES (1,'Kabul','AFG','Kabol',178);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES (3,'Herat','AFG','Herat',186800);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES
(4,'Mazar-e-Sharif','AFG','Balkh',127800);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES
(5,'Amsterdam','NLD','Noord-Holland',731200);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES
(6,'Rotterdam','NLD','Zuid-Holland',593321);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES (7,'Haag','NLD','Zuid-Holland',440900);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES (8,'Utrecht','NLD','Utrecht',234323);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES
(9,'Eindhoven','NLD','Noord-Brabant',201843);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES
(10,'Tilburg','NLD','Noord-Brabant',193238);
INSERT INTO City(ID, Name, CountryCode, District,
Population) VALUES
(11,'Groningen','NLD','Groningen',172701);

SELECT * FROM sys.metrics WHERE name LIKE
'io.dataregion.default%';

io.dataregion.default.OffHeapSize 104857600
io.dataregion.default.PhysicalMemoryPages    2066
io.dataregion.default.EmptyDataPages 0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize 104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize 8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor 0.9997031688690186
io.dataregion.default.TotalAllocatedPages    2066
io.dataregion.default.PhysicalMemorySize    8511920
io.dataregion.default.PagesReplaceAge 0
io.dataregion.default.AllocationRate 2066
io.dataregion.default.OffheapUsedSize 8511920
io.dataregion.default.LargeEntriesPagesCount    0
io.dataregion.default.TotalAllocatedSize    8511920
io.dataregion.default.CheckpointBufferSize    0

TotalUsedSize - (TotalAllocatedSize - PagesFillFactor *
TotalAllocatedSize)

= 8511920 - (8511920 - 0.9997031688690186 * 8511920) =
8509392


delete from city;

SELECT * FROM sys.metrics WHERE name LIKE
'io.dataregion.default%';

io.dataregion.default.OffHeapSize 104857600
io.dataregion.default.PhysicalMemoryPages    2075
io.dataregion.default.EmptyDataPages 0
io.dataregion.default.UsedCheckpointBufferSize    0
io.datar

Re: How to calculate the amount of memory used by a cache or dataregion

2022-09-19 Thread 38797715

If native persistence is enabled, the result is the same.

It seems there is a bug in the calculation of the "PagesFillFactor" 
metric?


On 2022/9/19 17:30, Alexander Polovtcev wrote:

Do you have native persistence enabled?

On Mon, Sep 19, 2022 at 11:57 AM 38797715 <38797...@qq.com> wrote:

2.10 and 2.13, the same result.

On 2022/9/19 16:54, Alexander Polovtcev wrote:

Which version of Ignite do you use?

On Mon, Sep 19, 2022 at 11:38 AM 38797715 <38797...@qq.com> wrote:

for example:

start a node by ignite.sh.

then:

CREATE TABLE City (
  ID INT,
  Name VARCHAR,
  CountryCode CHAR(3),
  District VARCHAR,
  Population INT,
  PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, backups=1,
affinityKey=CountryCode, CACHE_NAME=City,
KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (1,'Kabul','AFG','Kabol',178);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (3,'Herat','AFG','Herat',186800);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (4,'Mazar-e-Sharif','AFG','Balkh',127800);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (5,'Amsterdam','NLD','Noord-Holland',731200);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (6,'Rotterdam','NLD','Zuid-Holland',593321);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (7,'Haag','NLD','Zuid-Holland',440900);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (8,'Utrecht','NLD','Utrecht',234323);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (9,'Eindhoven','NLD','Noord-Brabant',201843);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (10,'Tilburg','NLD','Noord-Brabant',193238);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (11,'Groningen','NLD','Groningen',172701);

SELECT * FROM sys.metrics WHERE name LIKE
'io.dataregion.default%';

io.dataregion.default.OffHeapSize    104857600
io.dataregion.default.PhysicalMemoryPages 2066
io.dataregion.default.EmptyDataPages    0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize    104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize    8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor 0.9997031688690186
io.dataregion.default.TotalAllocatedPages 2066
io.dataregion.default.PhysicalMemorySize 8511920
io.dataregion.default.PagesReplaceAge    0
io.dataregion.default.AllocationRate    2066
io.dataregion.default.OffheapUsedSize    8511920
io.dataregion.default.LargeEntriesPagesCount 0
io.dataregion.default.TotalAllocatedSize 8511920
io.dataregion.default.CheckpointBufferSize    0

TotalUsedSize - (TotalAllocatedSize - PagesFillFactor *
TotalAllocatedSize)

= 8511920 - (8511920 - 0.9997031688690186 * 8511920) = 8509392


delete from city;

SELECT * FROM sys.metrics WHERE name LIKE
'io.dataregion.default%';

io.dataregion.default.OffHeapSize    104857600
io.dataregion.default.PhysicalMemoryPages 2075
io.dataregion.default.EmptyDataPages    0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize    104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize    8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor    1.0
io.dataregion.default.TotalAllocatedPages 2075
io.dataregion.default.PhysicalMemorySize 8549000
io.dataregion.default.PagesReplaceAge    0
io.dataregion.default.AllocationRate    9
io.dataregion.default.OffheapUsedSize    8549000
io.dataregion.default.LargeEntriesPagesCount 0
io.dataregion.de

Re: How to calculate the amount of memory used by a cache or dataregion

2022-09-19 Thread 38797715

no, pure memory mode.

On 2022/9/19 17:30, Alexander Polovtcev wrote:

Do you have native persistence enabled?

On Mon, Sep 19, 2022 at 11:57 AM 38797715 <38797...@qq.com> wrote:

2.10 and 2.13, the same result.

On 2022/9/19 16:54, Alexander Polovtcev wrote:

Which version of Ignite do you use?

On Mon, Sep 19, 2022 at 11:38 AM 38797715 <38797...@qq.com> wrote:

for example:

start a node by ignite.sh.

then:

CREATE TABLE City (
  ID INT,
  Name VARCHAR,
  CountryCode CHAR(3),
  District VARCHAR,
  Population INT,
  PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, backups=1,
affinityKey=CountryCode, CACHE_NAME=City,
KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (1,'Kabul','AFG','Kabol',178);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (3,'Herat','AFG','Herat',186800);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (4,'Mazar-e-Sharif','AFG','Balkh',127800);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (5,'Amsterdam','NLD','Noord-Holland',731200);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (6,'Rotterdam','NLD','Zuid-Holland',593321);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (7,'Haag','NLD','Zuid-Holland',440900);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (8,'Utrecht','NLD','Utrecht',234323);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (9,'Eindhoven','NLD','Noord-Brabant',201843);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (10,'Tilburg','NLD','Noord-Brabant',193238);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (11,'Groningen','NLD','Groningen',172701);

SELECT * FROM sys.metrics WHERE name LIKE
'io.dataregion.default%';

io.dataregion.default.OffHeapSize    104857600
io.dataregion.default.PhysicalMemoryPages 2066
io.dataregion.default.EmptyDataPages    0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize    104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize    8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor 0.9997031688690186
io.dataregion.default.TotalAllocatedPages 2066
io.dataregion.default.PhysicalMemorySize 8511920
io.dataregion.default.PagesReplaceAge    0
io.dataregion.default.AllocationRate    2066
io.dataregion.default.OffheapUsedSize    8511920
io.dataregion.default.LargeEntriesPagesCount 0
io.dataregion.default.TotalAllocatedSize 8511920
io.dataregion.default.CheckpointBufferSize    0

TotalUsedSize - (TotalAllocatedSize - PagesFillFactor *
TotalAllocatedSize)

= 8511920 - (8511920 - 0.9997031688690186 * 8511920) = 8509392


delete from city;

SELECT * FROM sys.metrics WHERE name LIKE
'io.dataregion.default%';

io.dataregion.default.OffHeapSize    104857600
io.dataregion.default.PhysicalMemoryPages 2075
io.dataregion.default.EmptyDataPages    0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize    104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize    8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor    1.0
io.dataregion.default.TotalAllocatedPages 2075
io.dataregion.default.PhysicalMemorySize 8549000
io.dataregion.default.PagesReplaceAge    0
io.dataregion.default.AllocationRate    9
io.dataregion.default.OffheapUsedSize    8549000
io.dataregion.default.LargeEntriesPagesCount 0
io.dataregion.default.TotalAllocatedSize 8549000
io.dataregion.default.CheckpointBufferSize    0


TotalUsedSize - (TotalAllocatedSize - PagesFillFactor *

Re: How to calculate the amount of memory used by a cache or dataregion

2022-09-19 Thread 38797715

2.10 and 2.13, the same result.

On 2022/9/19 16:54, Alexander Polovtcev wrote:

Which version of Ignite do you use?

On Mon, Sep 19, 2022 at 11:38 AM 38797715 <38797...@qq.com> wrote:

for example:

start a node by ignite.sh.

then:

CREATE TABLE City (
  ID INT,
  Name VARCHAR,
  CountryCode CHAR(3),
  District VARCHAR,
  Population INT,
  PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, backups=1, affinityKey=CountryCode,
CACHE_NAME=City, KEY_TYPE=demo.model.CityKey,
VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (1,'Kabul','AFG','Kabol',178);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (3,'Herat','AFG','Herat',186800);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (4,'Mazar-e-Sharif','AFG','Balkh',127800);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (5,'Amsterdam','NLD','Noord-Holland',731200);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (6,'Rotterdam','NLD','Zuid-Holland',593321);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (7,'Haag','NLD','Zuid-Holland',440900);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (8,'Utrecht','NLD','Utrecht',234323);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (9,'Eindhoven','NLD','Noord-Brabant',201843);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (10,'Tilburg','NLD','Noord-Brabant',193238);
INSERT INTO City(ID, Name, CountryCode, District, Population)
VALUES (11,'Groningen','NLD','Groningen',172701);

SELECT * FROM sys.metrics WHERE name LIKE 'io.dataregion.default%';

io.dataregion.default.OffHeapSize    104857600
io.dataregion.default.PhysicalMemoryPages    2066
io.dataregion.default.EmptyDataPages    0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize    104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize    8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor 0.9997031688690186
io.dataregion.default.TotalAllocatedPages    2066
io.dataregion.default.PhysicalMemorySize    8511920
io.dataregion.default.PagesReplaceAge    0
io.dataregion.default.AllocationRate    2066
io.dataregion.default.OffheapUsedSize    8511920
io.dataregion.default.LargeEntriesPagesCount    0
io.dataregion.default.TotalAllocatedSize    8511920
io.dataregion.default.CheckpointBufferSize    0

TotalUsedSize - (TotalAllocatedSize - PagesFillFactor *
TotalAllocatedSize)

= 8511920 - (8511920 - 0.9997031688690186 * 8511920) = 8509392


delete from city;

SELECT * FROM sys.metrics WHERE name LIKE 'io.dataregion.default%';

io.dataregion.default.OffHeapSize    104857600
io.dataregion.default.PhysicalMemoryPages    2075
io.dataregion.default.EmptyDataPages    0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize    104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize    8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor    1.0
io.dataregion.default.TotalAllocatedPages    2075
io.dataregion.default.PhysicalMemorySize    8549000
io.dataregion.default.PagesReplaceAge    0
io.dataregion.default.AllocationRate    9
io.dataregion.default.OffheapUsedSize    8549000
io.dataregion.default.LargeEntriesPagesCount    0
io.dataregion.default.TotalAllocatedSize    8549000
io.dataregion.default.CheckpointBufferSize    0


TotalUsedSize - (TotalAllocatedSize - PagesFillFactor *
TotalAllocatedSize)

=8549000 - (8549000 - 1.0 * 8549000) = 8549000

Instead, the value becomes larger?


On 2022/9/19 15:13, Alexander Polovtcev wrote:

Sorry, I messed up the metric names a little bit, we should use
the size metrics, not the page metrics. So the correct formula
would be: `TotalUsedSize - (TotalAllocatedSize - PagesFillFactor
* TotalAllocatedSize)`

On Sun, Sep 18, 2022 at 4:29 PM 38797715 <38797...@qq.com> wrote:

I've checked TotalUsedPages in ig

Re: How to calculate the amount of memory used by a cache or dataregion

2022-09-19 Thread 38797715

for example:

start a node by ignite.sh.

then:

CREATE TABLE City (
  ID INT,
  Name VARCHAR,
  CountryCode CHAR(3),
  District VARCHAR,
  Population INT,
  PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, backups=1, affinityKey=CountryCode, 
CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";



INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Kabul','AFG','Kabol',178);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(2,'Qandahar','AFG','Qandahar',237500);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(3,'Herat','AFG','Herat',186800);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(4,'Mazar-e-Sharif','AFG','Balkh',127800);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(5,'Amsterdam','NLD','Noord-Holland',731200);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(6,'Rotterdam','NLD','Zuid-Holland',593321);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(7,'Haag','NLD','Zuid-Holland',440900);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(8,'Utrecht','NLD','Utrecht',234323);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(9,'Eindhoven','NLD','Noord-Brabant',201843);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(10,'Tilburg','NLD','Noord-Brabant',193238);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(11,'Groningen','NLD','Groningen',172701);


SELECT * FROM sys.metrics WHERE name LIKE 'io.dataregion.default%';

io.dataregion.default.OffHeapSize    104857600
io.dataregion.default.PhysicalMemoryPages    2066
io.dataregion.default.EmptyDataPages    0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize    104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize    8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor    0.9997031688690186
io.dataregion.default.TotalAllocatedPages    2066
io.dataregion.default.PhysicalMemorySize    8511920
io.dataregion.default.PagesReplaceAge    0
io.dataregion.default.AllocationRate    2066
io.dataregion.default.OffheapUsedSize    8511920
io.dataregion.default.LargeEntriesPagesCount    0
io.dataregion.default.TotalAllocatedSize    8511920
io.dataregion.default.CheckpointBufferSize    0

TotalUsedSize - (TotalAllocatedSize - PagesFillFactor * TotalAllocatedSize)

= 8511920 - (8511920 - 0.9997031688690186 * 8511920) = 8509392


delete from city;

SELECT * FROM sys.metrics WHERE name LIKE 'io.dataregion.default%';

io.dataregion.default.OffHeapSize    104857600
io.dataregion.default.PhysicalMemoryPages    2075
io.dataregion.default.EmptyDataPages    0
io.dataregion.default.UsedCheckpointBufferSize    0
io.dataregion.default.TotalThrottlingTime    0
io.dataregion.default.PagesReplaced    0
io.dataregion.default.EvictionRate    0
io.dataregion.default.InitialSize    104857600
io.dataregion.default.DirtyPages    0
io.dataregion.default.MaxSize    8589934592
io.dataregion.default.PagesWritten    0
io.dataregion.default.PagesReplaceRate    0
io.dataregion.default.PagesRead    0
io.dataregion.default.PagesFillFactor    1.0
io.dataregion.default.TotalAllocatedPages    2075
io.dataregion.default.PhysicalMemorySize    8549000
io.dataregion.default.PagesReplaceAge    0
io.dataregion.default.AllocationRate    9
io.dataregion.default.OffheapUsedSize    8549000
io.dataregion.default.LargeEntriesPagesCount    0
io.dataregion.default.TotalAllocatedSize    8549000
io.dataregion.default.CheckpointBufferSize    0


TotalUsedSize - (TotalAllocatedSize - PagesFillFactor * TotalAllocatedSize)

=8549000 - (8549000 - 1.0 * 8549000) = 8549000

Instead, the value becomes larger?
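
One observation on the formula itself: when TotalUsedSize equals TotalAllocatedSize (as in the readings above, where OffheapUsedSize and TotalAllocatedSize are identical), the expression algebraically reduces to PagesFillFactor * TotalAllocatedSize. After the delete the fill factor is 1.0, so the formula returns the full allocated size, which is why the value grows instead of shrinking. A quick check of the arithmetic:

```java
// Checks that TotalUsedSize - (TotalAllocatedSize - PagesFillFactor *
// TotalAllocatedSize) collapses to PagesFillFactor * TotalAllocatedSize
// when used == allocated, with the metric values quoted in this thread.
public class FillFactorCheck {
    static double usedEstimate(double totalUsed, double totalAllocated, double fillFactor) {
        return totalUsed - (totalAllocated - fillFactor * totalAllocated);
    }

    public static void main(String[] args) {
        // Before the delete: fill factor 0.9997..., allocated 8511920 bytes.
        double before = usedEstimate(8511920, 8511920, 0.9997031688690186);
        // After the delete: fill factor 1.0, allocated 8549000 bytes.
        double after = usedEstimate(8549000, 8549000, 1.0);
        System.out.println((long) before); // ~8509393
        System.out.println((long) after);  // 8549000 - the full allocated size
    }
}
```

So the formula tracks page allocation, not live data, which is consistent with the observation that deletes do not shrink the reported value.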


On 2022/9/19 15:13, Alexander Polovtcev wrote:
Sorry, I messed up the metric names a little bit, we should use the 
size metrics, not the page metrics. So the correct formula would be: 
`TotalUsedSize - (TotalAllocatedSize - PagesFillFactor * 
TotalAllocatedSize)`


On Sun, Sep 18, 2022 at 4:29 PM 38797715 <38797...@qq.com> wrote:

I've checked TotalUsedPages in ignite.dataRegionMetrics(), it is
found that the logic of this metric is consistent with
OffheapUsedSize, that is, the value will not be updated after data
deletion.

On 2022/9/18 19:42, Alexander Polovtcev wrote:

Hello, you can check out "TotalUsedPages" and "TotalUsedSize"
metrics that can be found in the Data Region metrics. However,
please keep in mind that this does not take page fragmentation
into account.

On Sat, Sep 17, 2022 at 2:35 PM 38797715 <38797...@qq.com> wrote:

Hi team,

   

Re: How to calculate the amount of memory used by a cache or dataregion

2022-09-18 Thread 38797715
I've checked TotalUsedPages in ignite.dataRegionMetrics(); it turns out 
that the logic of this metric is consistent with OffheapUsedSize, i.e., 
the value is not updated after data deletion.


On 2022/9/18 19:42, Alexander Polovtcev wrote:
Hello, you can check out "TotalUsedPages" and "TotalUsedSize" metrics 
that can be found in the Data Region metrics. However, please keep in 
mind that this does not take page fragmentation into account.


On Sat, Sep 17, 2022 at 2:35 PM 38797715 <38797...@qq.com> wrote:

Hi team,

We found that if the delete operation is executed, the memory
space corresponding to the data will not be released, and this
space will be reused later.

Therefore, metrics such as OffheapUsedSize will become inaccurate.

So, how to calculate the exact amount of memory occupied by data
in a cache or a dataregion?



--
With regards,
Aleksandr Polovtcev

Re: How to calculate the amount of memory used by a cache or dataregion

2022-09-18 Thread 38797715

hello,

there is no metric named "TotalUsedSize" in SYS.METRICS; how can we get it?

On 2022/9/18 19:42, Alexander Polovtcev wrote:
Hello, you can check out "TotalUsedPages" and "TotalUsedSize" metrics 
that can be found in the Data Region metrics. However, please keep in 
mind that this does not take page fragmentation into account.


On Sat, Sep 17, 2022 at 2:35 PM 38797715 <38797...@qq.com> wrote:

Hi team,

We found that if the delete operation is executed, the memory
space corresponding to the data will not be released, and this
space will be reused later.

Therefore, metrics such as OffheapUsedSize will become inaccurate.

So, how to calculate the exact amount of memory occupied by data
in a cache or a dataregion?



--
With regards,
Aleksandr Polovtcev

How to calculate the amount of memory used by a cache or dataregion

2022-09-17 Thread 38797715

Hi team,

We found that after a delete operation is executed, the memory space 
occupied by the deleted data is not released; the space is reused 
later.


Therefore, metrics such as OffheapUsedSize will become inaccurate.

So, how can we calculate the exact amount of memory occupied by the 
data in a cache or a data region?


Page replacement priority

2022-08-30 Thread 38797715

hello,

When memory is insufficient and page replacement occurs, are data pages 
swapped out of memory first, or can index pages be swapped out as well?


Or is there an optimized algorithm for this?


Re: Cursor in ThinClient failure

2022-07-07 Thread 38797715

Does the thin client have a session-like mechanism?

Cursor in ThinClient failure

2022-07-01 Thread 38797715

Hi team,

FieldsQueryCursor<List<?>> cursor = client.query(new SqlFieldsQuery(
    "SELECT name FROM Person WHERE id=?").setArgs(key).setSchema("PUBLIC"));


If the thin client fails during the execution of the above code, will it 
leave an unclosed cursor on the server side?




Re: HASH_JOIN: Index "HASH_JOIN_IDX" not found

2022-01-11 Thread 38797715

Yes, you are right, GridGain CE is OK.

Therefore, either remove the description from Ignite's documentation or 
migrate the feature from GridGain CE back to Ignite.


On 2022/1/11 15:20, Ilya Korol wrote:
I've checked the latest master and didn't find any mention of 
HASH_JOIN_IDX in it (except in documentation sources). Meanwhile, in the 
GridGain Community repository you can find the HashJoinIndex class 
(https://github.com/gridgain/gridgain/blob/master/modules/h2/src/main/java/org/gridgain/internal/h2/index/HashJoinIndex.java) 
and other code like:


// Parser
private IndexHints parseIndexHints(Table table) {
    read(OPEN_PAREN);
    LinkedHashSet<String> indexNames = new LinkedHashSet<>();
    if (!readIf(CLOSE_PAREN)) {
        do {
            String indexName = readIdentifierWithSchema();
            if (HashJoinIndex.HASH_JOIN_IDX.equalsIgnoreCase(indexName)) {
                indexNames.add(HashJoinIndex.HASH_JOIN_IDX);
            }
            else {
                Index index = table.getIndex(indexName);
                indexNames.add(index.getName());
            }
        } while (readIfMore(true));
    }
    return IndexHints.createUseIndexHints(indexNames);
}

So it looks like this feature's implementation is absent in Ignite (or 
was removed for some reason).


On 2022/01/06 09:34:41 38797715 wrote:
> Execute the following script and the error will occur:
>
> CREATE TABLE Country (
>   Code CHAR(3) PRIMARY KEY,
>   Name VARCHAR,
>   Continent VARCHAR,
>   Region VARCHAR,
>   SurfaceArea DECIMAL(10,2),
>   IndepYear SMALLINT,
>   Population INT,
>   LifeExpectancy DECIMAL(3,1),
>   GNP DECIMAL(10,2),
>   GNPOld DECIMAL(10,2),
>   LocalName VARCHAR,
>   GovernmentForm VARCHAR,
>   HeadOfState VARCHAR,
>   Capital INT,
>   Code2 CHAR(2)
> ) WITH "template=partitioned, backups=1, CACHE_NAME=Country,
> VALUE_TYPE=demo.model.Country";
>
> CREATE TABLE City (
>   ID INT,
>   Name VARCHAR,
>   CountryCode CHAR(3),
>   District VARCHAR,
>   Population INT,
>   PRIMARY KEY (ID, CountryCode)
> ) WITH "template=partitioned, backups=1, CACHE_NAME=City,
> KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";
>
>
> CREATE INDEX idx_country_code ON city (CountryCode);
>
> INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES
> (1,'Kabul','AFG','Kabol',178);
> INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES
> (2,'Qandahar','AFG','Qandahar',237500);
>
> INSERT INTO Country(Code, Name, Continent, Region, SurfaceArea,
> IndepYear, Population, LifeExpectancy, GNP, GNPOld, LocalName,
> GovernmentForm, HeadOfState, Capital, Code2) VALUES
> ('AFG','Afghanistan','Asia','Southern and Central
> Asia',652090.00,1919,2272,45.9,5976.00,NULL,'Afganistan/Afqanestan','Islamic
> Emirate','Mohammad Omar',1,'AF');
>
> SELECT *
> FROM city,country USE INDEX(HASH_JOIN_IDX)
> WHERE city.CountryCode = country.code;
>
> see the doc:
>
> https://ignite.apache.org/docs/latest/SQL/distributed-joins#hash-joins
>
> why?
>
> is a bug?
>

HASH_JOIN: Index "HASH_JOIN_IDX" not found

2022-01-06 Thread 38797715

Execute the following script and the error will occur:

CREATE TABLE Country (
  Code CHAR(3) PRIMARY KEY,
  Name VARCHAR,
  Continent VARCHAR,
  Region VARCHAR,
  SurfaceArea DECIMAL(10,2),
  IndepYear SMALLINT,
  Population INT,
  LifeExpectancy DECIMAL(3,1),
  GNP DECIMAL(10,2),
  GNPOld DECIMAL(10,2),
  LocalName VARCHAR,
  GovernmentForm VARCHAR,
  HeadOfState VARCHAR,
  Capital INT,
  Code2 CHAR(2)
) WITH "template=partitioned, backups=1, CACHE_NAME=Country, 
VALUE_TYPE=demo.model.Country";


CREATE TABLE City (
  ID INT,
  Name VARCHAR,
  CountryCode CHAR(3),
  District VARCHAR,
  Population INT,
  PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, backups=1, CACHE_NAME=City, 
KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";



CREATE INDEX idx_country_code ON city (CountryCode);

INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Kabul','AFG','Kabol',178);
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(2,'Qandahar','AFG','Qandahar',237500);


INSERT INTO Country(Code, Name, Continent, Region, SurfaceArea, 
IndepYear, Population, LifeExpectancy, GNP, GNPOld, LocalName, 
GovernmentForm, HeadOfState, Capital, Code2) VALUES 
('AFG','Afghanistan','Asia','Southern and Central 
Asia',652090.00,1919,2272,45.9,5976.00,NULL,'Afganistan/Afqanestan','Islamic 
Emirate','Mohammad Omar',1,'AF');


SELECT *
FROM city,country USE INDEX(HASH_JOIN_IDX)
WHERE city.CountryCode = country.code;

see the doc:

https://ignite.apache.org/docs/latest/SQL/distributed-joins#hash-joins

why?

is a bug?


IgniteBiPredicate in ThinClient ScanQuery:java.lang.ClassNotFoundException

2021-12-10 Thread 38797715

An exception is thrown when the following code is executed:

Does ScanQuery in the thin client not support IgniteBiPredicate?


public class ThinClient {
    public static void main(String[] args) throws ClientException, Exception {
        ClientConfiguration cfg =
            new ClientConfiguration().setAddresses("localhost:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, Person> cache2 = client.getOrCreateCache("cache2");
            for (int i = 1; i <= 10; i++) {
                cache2.put(i, new Person((long) i, "a", "b"));
            }
            ClientCache<BinaryObject, BinaryObject> cache3 =
                client.getOrCreateCache("cache2").withKeepBinary();
            IgniteBiPredicate<BinaryObject, BinaryObject> filter =
                new IgniteBiPredicate<BinaryObject, BinaryObject>() {
                    @Override public boolean apply(BinaryObject key, BinaryObject person) {
                        return person.<Long>field("id") > 6;
                    }
                };
            try (QueryCursor<Cache.Entry<BinaryObject, BinaryObject>> cur3 =
                    cache3.query(new ScanQuery<>(filter))) {
                for (Cache.Entry<BinaryObject, BinaryObject> entry : cur3) {
                    System.out.println(entry.getValue());
                }
            }
        }
    }
}

Re: The code inside the CacheEntryProcessor executes multiple times. Why?

2021-10-19 Thread 38797715
As far as I know, this method was added in the following ticket and has 
not been released yet:


https://issues.apache.org/jira/browse/IGNITE-15065

Is there any alternative solution in the existing release?

On 2021/10/19 15:34, Maksim Timonin wrote:

Hi!

You use an unregistered class in your entry processor: ItemClass1. You 
should register it before using it; the following code does the job.

`ignite.binary().registerClass(ItemClass1.class);`

When a class isn't registered, Ignite registers it by itself. But that 
requires replaying your code after it finds that the class is 
unregistered. You can help the cluster by registering this class yourself.





On Tue, Oct 19, 2021 at 3:52 AM 38797715 <38797...@qq.com> wrote:

Any feedback?

On 2021/10/14 15:03, 38797715 wrote:


Hi,

The internal code of CacheEntryProcessor in the attachment has
been executed multiple times. Why?
Is there any simple way to solve this problem?
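The replay behavior described in this thread (an entry processor re-executing after Ignite registers an unknown class) can be simulated without a cluster. The sketch below is not Ignite API; invokeWithRetry is an illustrative stand-in that shows why side effects inside an entry processor should be idempotent:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class RetryDemo {
    // Illustrative stand-in for the server-side invoke path: the first
    // attempt hits "class not registered", the class gets registered,
    // and the processor closure is replayed from scratch.
    static <T, R> R invokeWithRetry(T entry, Function<T, R> processor) {
        processor.apply(entry);        // first attempt, result discarded
        return processor.apply(entry); // replay after registration
    }

    public static void main(String[] args) {
        AtomicInteger sideEffects = new AtomicInteger();
        invokeWithRetry("key", e -> sideEffects.incrementAndGet());
        // A side effect inside the processor runs twice:
        System.out.println(sideEffects.get());
    }
}
```

Registering the class up front (as suggested above) removes the retry, so the closure runs once.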


Re: The code inside the CacheEntryProcessor executes multiple times. Why?

2021-10-18 Thread 38797715

Any feedback?

On 2021/10/14 15:03, 38797715 wrote:


Hi,

The internal code of CacheEntryProcessor in the attachment has been 
executed multiple times. Why?

Is there any simple way to solve this problem?
package com.test;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IgniteTestDemo {

    public static String TEST_DATA = "TEST_DATA";
    public static CacheConfiguration<Long, UserInfoData> cfg1 = new CacheConfiguration<>();
    public static IgniteCache<Long, UserInfoData> TEST_CACHE;

    private static Ignite ignite = null;

    public static void main(String[] args) {
        init();
        initCache();
        TEST_CACHE.put(1L, new UserInfoData());
        testInvoke(1L);
    }

    private static void init() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMetricsLogFrequency(0);
        cfg.setPeerClassLoadingEnabled(true);
        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1"));
        spi.setIpFinder(ipFinder);
        cfg.setDiscoverySpi(spi);

        ignite = Ignition.start(cfg);
    }

    private static void initCache() {
        cfg1.setIndexedTypes(Long.class, UserInfoData.class);
        cfg1.setName(TEST_DATA);
        TEST_CACHE = ignite.getOrCreateCache(cfg1);
    }

    public static void testInvoke(long uid) {
        System.out.println(" ... pre  testInvoke ... ");
        TEST_CACHE.invoke(
            uid, (entry, arguement) -> {
                System.out.println("\n-");
                UserInfoData value;
                value = new UserInfoData();
                value.getMap1().put(1, new Object());
                value.getMap2().put(1, new ItemClass1());
                entry.setValue(value);
                return null;
            }
        );
        System.out.println(" ... post  testInvoke ... ");
    }

    static class UserInfoData {
        public Map<Integer, Object> getMap1() {
            return map1;
        }

        public void setMap1(Map<Integer, Object> map1) {
            this.map1 = map1;
        }

        private Map<Integer, Object> map1 = new HashMap<>();

        public Map<Integer, ItemClass1> getMap2() {
            return map2;
        }

        public void setMap2(Map<Integer, ItemClass1> map2) {
            this.map2 = map2;
        }

        private Map<Integer, ItemClass1> map2 = new HashMap<>();
    }

    static class ItemClass1 {
    }
}


The code inside the CacheEntryProcessor executes multiple times. Why?

2021-10-14 Thread 38797715

Hi,

The internal code of CacheEntryProcessor in the attachment has been 
executed multiple times. Why?

Is there any simple way to solve this problem?
package com.test;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IgniteTestDemo {

    public static String TEST_DATA = "TEST_DATA";
    public static CacheConfiguration<Long, UserInfoData> cfg1 = new CacheConfiguration<>();
    public static IgniteCache<Long, UserInfoData> TEST_CACHE;

    private static Ignite ignite = null;

    public static void main(String[] args) {
        init();
        initCache();
        TEST_CACHE.put(1L, new UserInfoData());
        testInvoke(1L);
    }

    private static void init() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMetricsLogFrequency(0);
        cfg.setPeerClassLoadingEnabled(true);
        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1"));
        spi.setIpFinder(ipFinder);
        cfg.setDiscoverySpi(spi);

        ignite = Ignition.start(cfg);
    }

    private static void initCache() {
        cfg1.setIndexedTypes(Long.class, UserInfoData.class);
        cfg1.setName(TEST_DATA);
        TEST_CACHE = ignite.getOrCreateCache(cfg1);
    }

    public static void testInvoke(long uid) {
        System.out.println(" ... pre  testInvoke ... ");
        TEST_CACHE.invoke(
            uid, (entry, arguement) -> {
                System.out.println("\n-");
                UserInfoData value;
                value = new UserInfoData();
                value.getMap1().put(1, new Object());
                value.getMap2().put(1, new ItemClass1());
                entry.setValue(value);
                return null;
            }
        );
        System.out.println(" ... post  testInvoke ... ");
    }

    static class UserInfoData {
        public Map<Integer, Object> getMap1() {
            return map1;
        }

        public void setMap1(Map<Integer, Object> map1) {
            this.map1 = map1;
        }

        private Map<Integer, Object> map1 = new HashMap<>();

        public Map<Integer, ItemClass1> getMap2() {
            return map2;
        }

        public void setMap2(Map<Integer, ItemClass1> map2) {
            this.map2 = map2;
        }

        private Map<Integer, ItemClass1> map2 = new HashMap<>();
    }

    static class ItemClass1 {
    }
}


Re: Does ignite cluster need at least 3 nodes?

2021-09-28 Thread 38797715

No, it's entirely the default implementation.

My understanding is that if the cluster has only two nodes, node1 (the 
coordinator) and node2, and the two nodes get disconnected due to a long 
GC pause, node2 should shut down due to split-brain, but that does not 
seem to happen now.

AFAIK, with a 3-node cluster this behavior does not occur.


Re: Does ignite cluster need at least 3 nodes?

2021-09-28 Thread 38797715
My question is: isn't the current behavior inconsistent with the default 
SegmentationPolicy = STOP?


Re: Does ignite cluster need at least 3 nodes?

2021-09-27 Thread 38797715

Any feedback?

Does ignite cluster need at least 3 nodes?

2021-09-14 Thread 38797715

Hi team,

There is a cluster with 2 server nodes and 1 client node.
It can be seen from the attached log that the two server nodes were 
disconnected at about 20:18:15.

However, strangely, the node with IP 10.97.32.53 did not shut down due 
to split-brain. Instead, the cluster split into two clusters with one 
server node each.


We know that the default SegmentationPolicy value is STOP, so here are 
the questions:

1. This cluster's behavior shows no split-brain handling; is there a bug 
here?

2. The default SegmentationPolicy does not take effect for a cluster 
with two nodes. Must an Ignite cluster have at least three nodes before 
the policy becomes effective?
[14:00:48,909][INFO][main][IgniteKernal] 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 2.10.0#20210310-sha1:bc24f6ba
>>> 2021 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

[14:00:48,910][INFO][main][IgniteKernal] Config URL: file:/data/ignite/apache-ignite-2.10.0-bin/config/ignite-config-cluster-1.xml
[14:00:48,946][INFO][main][IgniteKernal] IgniteConfiguration [igniteInstanceName=null, pubPoolSize=32, svcPoolSize=32, callbackPoolSize=32, stripedPoolSize=32, sysPoolSize=32, mgmtPoolSize=4, dataStreamerPoolSize=32, utilityCachePoolSize=32, utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=32, buildIdxPoolSize=4, igniteHome=/data/ignite/apache-ignite-2.10.0-bin, igniteWorkDir=/data/ignite/apache-ignite-2.10.0-bin/node1/work, mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@564718df, nodeId=bb2aa568-9b5c-4079-bd5a-62c35e7b1ada, marsh=BinaryMarshaller [], marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000, netCompressionLevel=1, sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1, metricsUpdateFreq=2000, metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=60, soLinger=0, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, skipAddrsRandomization=false], segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1, commSpi=TcpCommunicationSpi [connectGate=org.apache.ignite.spi.communication.tcp.internal.ConnectGateway@186f8716, ctxInitLatch=java.util.concurrent.CountDownLatch@1d8bd0de[Count = 1], stopping=false, clientPool=null, nioSrvWrapper=null, stateProvider=null], evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@45ca843, colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [], indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@3cc1435c, addrRslvr=null, encryptionSpi=org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi@6bf0219d, tracingSpi=org.apache.ignite.spi.tracing.NoopTracingSpi@dd0c991, clientMode=false, rebalanceThreadPoolSize=4, rebalanceTimeout=1, rebalanceBatchesPrefetchCnt=3, rebalanceThrottle=0, rebalanceBatchSize=524288, txCfg=TransactionConfiguration [txSerEnabled=false, dfltIsolation=REPEATABLE_READ, dfltConcurrency=PESSIMISTIC, 
dfltTxTimeout=0, txTimeoutOnPartitionMapExchange=0, deadlockTimeout=1, pessimisticTxLogSize=0, pessimisticTxLogLinger=1, tmLookupClsName=null, txManagerFactory=null, useJtaSync=false], cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED, p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100, failureDetectionTimeout=1, sysWorkerBlockedTimeout=null, clientFailureDetectionTimeout=3, metricsLogFreq=6, connectorCfg=ConnectorConfiguration [jettyPath=null, host=null, port=11211, noDelay=true, directBuf=false, sndBufSize=32768, rcvBufSize=32768, idleQryCurTimeout=60, idleQryCurCheckFreq=6, sndQueueLimit=0, selectorCnt=4, idleTimeout=7000, sslEnabled=false, sslClientAuth=false, sslCtxFactory=null, sslFactory=null, portRange=100, threadPoolSize=32, msgInterceptor=null], odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null, grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null, binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040, sysRegionMaxSize=104857600, pageSize=0, concLvl=0, dfltDataRegConf=DataRegionConfiguration [name=Default_Region, maxSize=214748364800, initSize=1073741824, swapPath=null, pageEvictionMode=DISABLED, evictionThreshold=0.9, emptyPagesPoolSize=100, metricsEnabled=false, metricsSubIntervalCount=5, metricsRateTimeInterval=6, persistenceEnabled=true, checkpointPageBufSize=0, lazyMemoryAllocation=true, warmUpCfg=null], dataRegions=null, storagePath=null, checkpointFreq=18, lockWaitTime=1, checkpointThreads=4, checkpointWriteOrder=SEQUENTIAL, walHistSize=20, maxWalArchiveSize=1073741824, walSegments=10, walSegmentSize=1073741824, walPath=db/wal, 
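As a general note on the two-node question above: a majority-based split-brain resolver can never choose a survivor in a 1/1 split, so at least three nodes are needed before majority arithmetic can pick a side. The sketch below is Ignite-independent and is not how Ignite's pluggable segmentation resolvers are implemented; it only illustrates the quorum arithmetic:

```java
public class QuorumDemo {
    // True if a segment of segmentSize nodes out of clusterSize
    // nodes can claim a strict majority.
    static boolean hasQuorum(int segmentSize, int clusterSize) {
        return segmentSize > clusterSize / 2;
    }

    public static void main(String[] args) {
        System.out.println(hasQuorum(1, 2)); // 2-node cluster split 1/1: neither side wins
        System.out.println(hasQuorum(2, 3)); // 3-node cluster split 2/1: majority side survives
    }
}
```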

Received incoming connection from remote node while connecting to this node, rejecting (2.7.5)

2021-07-08 Thread 38797715

Hi team,

Our cluster had a node failure after the following messages occurred 
frequently:

2021-07-07_12:54:04.773[INFO] 
[grid-nio-worker-tcp-comm-25-#353%PROD_JET_default_SZ%] 
[o.a.i.s.c.tcp.TcpCommunicationSpi] Accepted incoming communication 
connection [locAddr=/29.21.68.48:47100, rmtAddr=/29.21.44.37:50128]
2021-07-07_12:54:04.773[INFO] 
[grid-nio-worker-tcp-comm-25-#353%PROD_JET_default_SZ%] 
[o.a.i.s.c.tcp.TcpCommunicationSpi] Received incoming connection from 
remote node while connecting to this node, rejecting 
[locNode=95f8163c-d349-40e9-b719-fab24057d2e8, locNodeOrder=3, 
rmtNode=e8193f9a-d1d6-48f8-b1b6-3cf28b914b78, rmtNodeOrder=10]


I want to know: what kind of error does "rejecting" indicate here?










2021-07-07_12:53:08.032 [INFO ] [grid-timeout-worker-#327%PROD_JET_default_SZ%] [o.a.i.i.IgniteKernal%PROD_JET_default_SZ] FreeList [name=PROD_JET_default_SZ, buckets=256, dataPages=77686637, reusePages=0]
2021-07-07_12:53:14.614 [WARN ] [grid-nio-worker-tcp-comm-21-#349%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Communication SPI session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=/29.21.44.37:47100, writeTimeout=2000]
2021-07-07_12:53:15.991 [INFO ] [grid-nio-worker-tcp-comm-22-#350%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/29.21.68.48:47100, rmtAddr=/29.21.44.37:49616]
2021-07-07_12:53:15.991 [INFO ] [grid-nio-worker-tcp-comm-22-#350%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Received incoming connection from remote node while connecting to this node, rejecting [locNode=95f8163c-d349-40e9-b719-fab24057d2e8, locNodeOrder=3, rmtNode=e8193f9a-d1d6-48f8-b1b6-3cf28b914b78, rmtNodeOrder=10]
2021-07-07_12:53:16.192 [INFO ] [grid-nio-worker-tcp-comm-23-#351%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/29.21.68.48:47100, rmtAddr=/29.21.44.37:49618]
2021-07-07_12:53:16.192 [INFO ] [grid-nio-worker-tcp-comm-23-#351%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Received incoming connection from remote node while connecting to this node, rejecting [locNode=95f8163c-d349-40e9-b719-fab24057d2e8, locNodeOrder=3, rmtNode=e8193f9a-d1d6-48f8-b1b6-3cf28b914b78, rmtNodeOrder=10]
2021-07-07_12:53:16.393 [INFO ] [grid-nio-worker-tcp-comm-24-#352%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/29.21.68.48:47100, rmtAddr=/29.21.44.37:49620]
2021-07-07_12:53:16.393 [INFO ] [grid-nio-worker-tcp-comm-24-#352%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Received incoming connection from remote node while connecting to this node, rejecting [locNode=95f8163c-d349-40e9-b719-fab24057d2e8, locNodeOrder=3, rmtNode=e8193f9a-d1d6-48f8-b1b6-3cf28b914b78, rmtNodeOrder=10]
2021-07-07_12:53:16.594 [INFO ] [grid-nio-worker-tcp-comm-25-#353%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/29.21.68.48:47100, rmtAddr=/29.21.44.37:49622]
2021-07-07_12:53:16.594 [INFO ] [grid-nio-worker-tcp-comm-25-#353%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Received incoming connection from remote node while connecting to this node, rejecting [locNode=95f8163c-d349-40e9-b719-fab24057d2e8, locNodeOrder=3, rmtNode=e8193f9a-d1d6-48f8-b1b6-3cf28b914b78, rmtNodeOrder=10]
2021-07-07_12:53:16.794 [INFO ] [grid-nio-worker-tcp-comm-26-#354%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/29.21.68.48:47100, rmtAddr=/29.21.44.37:49624]
2021-07-07_12:53:16.795 [INFO ] [grid-nio-worker-tcp-comm-26-#354%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Received incoming connection from remote node while connecting to this node, rejecting [locNode=95f8163c-d349-40e9-b719-fab24057d2e8, locNodeOrder=3, rmtNode=e8193f9a-d1d6-48f8-b1b6-3cf28b914b78, rmtNodeOrder=10]
2021-07-07_12:53:16.995 [INFO ] [grid-nio-worker-tcp-comm-27-#355%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/29.21.68.48:47100, rmtAddr=/29.21.44.37:49626]
2021-07-07_12:53:16.995 [INFO ] [grid-nio-worker-tcp-comm-27-#355%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Received incoming connection from remote node while connecting to this node, rejecting [locNode=95f8163c-d349-40e9-b719-fab24057d2e8, locNodeOrder=3, rmtNode=e8193f9a-d1d6-48f8-b1b6-3cf28b914b78, rmtNodeOrder=10]
2021-07-07_12:53:17.196 [INFO ] [grid-nio-worker-tcp-comm-28-#356%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Accepted incoming communication connection [locAddr=/29.21.68.48:47100, rmtAddr=/29.21.44.37:49628]
2021-07-07_12:53:17.196 [INFO ] [grid-nio-worker-tcp-comm-28-#356%PROD_JET_default_SZ%] [o.a.i.s.c.tcp.TcpCommunicationSpi] Received incoming connection from remote node while 

Thin client access Ignite cluster with ZooKeeper Discovery

2021-05-24 Thread 38797715

Hello,

For an Ignite cluster with ZooKeeper Discovery, if we want to access the 
cluster through a thin client, is there an example of ClientAddressFinder?




How to deal with data expiration time flexibly

2021-05-17 Thread 38797715

Hello team,

At present, only a few simple expiration policies can be configured, 
such as CreatedExpiryPolicy.


If we want to use a field value to determine the expiration time of the 
data, what should we do? Or what interface is there for extension?
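One workaround along these lines: compute the TTL from the value's own field before each put, and pass it via IgniteCache.withExpiryPolicy(...), since the JCache ExpiryPolicy callbacks never see the entry's value. Below is a minimal sketch of just the TTL computation; the Order class and its expireAtMillis field are hypothetical:

```java
public class ExpiryFromField {
    // Hypothetical value class carrying its own absolute expiry time.
    static class Order {
        final long expireAtMillis;
        Order(long expireAtMillis) { this.expireAtMillis = expireAtMillis; }
    }

    // Remaining TTL to hand to something like
    //   cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(MILLISECONDS, ttl)))
    // right before the put; clamped at 0 for already-expired values.
    static long ttlMillis(Order order, long nowMillis) {
        return Math.max(0, order.expireAtMillis - nowMillis);
    }

    public static void main(String[] args) {
        Order order = new Order(10_000);
        System.out.println(ttlMillis(order, 4_000));
    }
}
```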




Re: Data synchronization after node restart

2021-05-17 Thread 38797715

Hello,

We know that the control script has an idle_verify command that can be 
used to verify potential inconsistency between primary and backup 
partitions. Since there are WAL files and WAL archives that can be used 
for historical rebalancing, why can't the WAL ensure consistency between 
the primary and the backup? Do backup updates also write to the WAL file?


On 2021/5/10 9:57 PM, akorensh wrote:

Hi,
You are referring to a persistent node failing while being a part of a
baseline topology.
When that same node comes back, it will load only the delta(differential
per your definition) from
the time that it was down. This is called historical rebalancing.
Read more here:
https://www.gridgain.com/docs/latest/developers-guide/historical-rebalancing
More on baseline topology:
https://ignite.apache.org/docs/latest/clustering/baseline-topology
Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


when persistence enabled,MaxDirectMemorySize = walSegmentSize * 4

2021-05-12 Thread 38797715

hello team,

doc says:

If you use Ignite native persistence, we recommend that you set the 
MaxDirectMemorySize JVM parameter to walSegmentSize * 4


why?

In addition, for the following configuration and code:

"vmArgs": "-XX:+PrintGCDetails -XX:MaxDirectMemorySize=5m"

ByteBuffer bb = ByteBuffer.allocateDirect(6 * 1024 * 1024);

This code throws the following error:
Caused by: java.lang.OutOfMemoryError: Direct buffer memory

Why isn't ignite affected by the MaxDirectMemorySize parameter?
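As far as I understand, -XX:MaxDirectMemorySize only limits allocations made through ByteBuffer.allocateDirect, while Ignite's durable memory allocates off-heap pages via sun.misc.Unsafe (see GridUnsafe), which is not counted against that limit. A minimal sketch of the difference, assuming sun.misc.Unsafe is accessible on your JDK:

```java
import java.lang.reflect.Field;
import java.nio.ByteBuffer;

import sun.misc.Unsafe;

public class DirectMemoryDemo {
    public static void main(String[] args) throws Exception {
        // Counted against -XX:MaxDirectMemorySize:
        ByteBuffer bb = ByteBuffer.allocateDirect(1024);
        bb.putLong(0, 42L);

        // NOT counted against -XX:MaxDirectMemorySize; this is the
        // kind of allocation Ignite's durable memory uses internally:
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long addr = unsafe.allocateMemory(1024);
        unsafe.putLong(addr, bb.getLong(0));
        System.out.println(unsafe.getLong(addr));
        unsafe.freeMemory(addr);
    }
}
```

The walSegmentSize * 4 recommendation presumably accounts for the direct buffers that are subject to the limit, such as WAL segment buffers.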



Data synchronization after node restart

2021-05-10 Thread 38797715

Hi team,

If persistence is enabled and the number of backups is 1, suppose a node 
fails while data writes continue normally.

If the previously failed node restarts, it joins the cluster again.

So does the restored node synchronize all data from the other nodes, or 
only the differential data written during the failure?




Re: [2.10]If affinityKey is used as a where condition, cannot retrieve the result set

2021-04-21 Thread 38797715

Yes, you are right!

On 2021/4/22 12:49 AM, Igor Belyakov wrote:

Seems like it could be related to the following issue:
https://issues.apache.org/jira/browse/IGNITE-14451

Which should be fixed in the 2.11 release.

As workaround you can try to change fields order in the PK to the same 
which is used in the cache fields list:


CREATE TABLE IF NOT EXISTS VIEWSORTCONTROL (
 TABLEID VARCHAR NOT NULL,
 ID VARCHAR,
 ...
 PRIMARY KEY (TABLEID, ID)
)

Igor

On Mon, Apr 12, 2021 at 1:03 PM 38797715 <38797...@qq.com> wrote:


Hi team,

test.zip is a single-node snapshot compressed file.

Unzip the file into the Ignite work directory, then use example.xml to
start a node and activate the cluster.

You will see a table called viewsortcontrol.

Execute the following SQL, the result is normal:

SELECT * FROM viewsortcontrol

However, if you execute the following SQL, there is no query result:

SELECT * FROM viewsortcontrol WHERE tableid= '0x8d20e'

The tableid field is the affinitykey of the table.

What are the possible reasons?



Re: NPE on control.sh --cache indexes_force_rebuild

2021-04-14 Thread 38797715

2.10

at 
org.apache.ignite.internal.commandline.cache.CacheIndexesForceRebuild.parseArguments(CacheIndexesForceRebuild.java:210)


From the code, this line can indeed throw a null pointer exception.


On 2021/4/14 3:56 PM, Ilya Kazakov wrote:

Hello! What version do you use?

Ilya

On Wed, Apr 14, 2021 at 14:57, 38797715 <38797...@qq.com> wrote:


 Hello,

Does indexes_force_rebuild command have a demo to run?

The following error will be thrown during current execution:

./control.sh --cache indexes_force_rebuild --cache_names City

Time: 2021-04-14T14:46:10.485
java.lang.NullPointerException
Command [] finished with code: 4
Error stack trace:
java.lang.NullPointerException
    at

org.apache.ignite.internal.commandline.cache.CacheIndexesForceRebuild.parseArguments(CacheIndexesForceRebuild.java:210)
    at

org.apache.ignite.internal.commandline.cache.CacheCommands.parseArguments(CacheCommands.java:97)
    at

org.apache.ignite.internal.commandline.CommonArgParser.parseAndValidate(CommonArgParser.java:241)
    at

org.apache.ignite.internal.commandline.CommandHandler.execute(CommandHandler.java:244)
    at

org.apache.ignite.internal.commandline.CommandHandler.main(CommandHandler.java:141)

Control utility has completed execution at: 2021-04-14T14:46:10.739
Execution time: 253 ms



NPE on control.sh --cache indexes_force_rebuild

2021-04-14 Thread 38797715

 Hello,

Does indexes_force_rebuild command have a demo to run?

The following error will be thrown during current execution:

./control.sh --cache indexes_force_rebuild --cache_names City

Time: 2021-04-14T14:46:10.485
java.lang.NullPointerException
Command [] finished with code: 4
Error stack trace:
java.lang.NullPointerException
    at 
org.apache.ignite.internal.commandline.cache.CacheIndexesForceRebuild.parseArguments(CacheIndexesForceRebuild.java:210)
    at 
org.apache.ignite.internal.commandline.cache.CacheCommands.parseArguments(CacheCommands.java:97)
    at 
org.apache.ignite.internal.commandline.CommonArgParser.parseAndValidate(CommonArgParser.java:241)
    at 
org.apache.ignite.internal.commandline.CommandHandler.execute(CommandHandler.java:244)
    at 
org.apache.ignite.internal.commandline.CommandHandler.main(CommandHandler.java:141)


Control utility has completed execution at: 2021-04-14T14:46:10.739
Execution time: 253 ms



[2.10]If affinityKey is used as a where condition, cannot retrieve the result set

2021-04-12 Thread 38797715

Hi team,

test.zip is a single-node snapshot compressed file.

Unzip the file into the Ignite work directory, then use example.xml to 
start a node and activate the cluster.


You will see a table called viewsortcontrol.

Execute the following SQL, the result is normal:

SELECT * FROM viewsortcontrol

However, if you execute the following SQL, there is no query result:

SELECT * FROM viewsortcontrol WHERE tableid= '0x8d20e'

The tableid field is the affinitykey of the table.

What are the possible reasons?


[Attachment: example.xml, a Spring beans configuration; the XML markup 
was stripped by the mailing-list archive.]


VIEWSORTCONTROL.sql
Description: application/sql


[2.10]Binary type has different affinity key fields

2021-04-01 Thread 38797715
Hello Ilya,

Our approach is not to empty the binary_meta directory, because if you 
empty this directory, the data of all tables may be lost.

We used `cat *.bin` to find the specific .bin file of the table, and then 
deleted it on all nodes. After this operation, the following error appeared.

We don't want to lose all the data; it's acceptable to rebuild only one table.





[2.10]Binary type has different affinity key fields

2021-03-31 Thread 38797715

Hello,
Ignite has native persistence enabled, 3 nodes.
We changed the name of the table's primary key column (an unintended 
operation), as follows:


CREATE TABLE WORKSPACE (
    NAME VARCHAR,
    WORKSPACEID VARCHAR,
    PRIMARY KEY (WORKSPACEID)
) WITH "template=cache-replicated, cache_name=Workspace, 
key_type=WorkspaceKey,value_type=Workspace";


INSERT 

DROP TABLE WORKSPACE

then:

CREATE TABLE WORKSPACE (
    NAME VARCHAR,
    ID VARCHAR,
    PRIMARY KEY (ID)
) WITH "template=cache-replicated, cache_name=Workspace, 
key_type=WorkspaceKey,value_type=Workspace";


At this time, you will find that the data cannot be written (affinity 
key conflict error).


Next, delete the file in the binary_meta directory.

At this time, start a client node, and the following error will be thrown:

What I want to ask is: is there a recommended standard procedure for 
recovering from such an operational error and returning the cluster to 
normal?


Caused by: org.springframework.beans.BeanInstantiationException: Failed 
to instantiate [org.apache.ignite.Ignite]: Factory method 'ignite' threw 
exception; nested exception is class 
org.apache.ignite.IgniteCheckedException: New binary metadata is 
incompatible with binary metadata persisted locally. Consider cleaning 
up persisted metadata from /db/binary_meta directory.
at 
org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) 
~[spring-beans-5.2.9.RELEASE.jar!/:5.2.9.RELEASE]
at 
org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:650) 
~[spring-beans-5.2.9.RELEASE.jar!/:5.2.9.RELEASE]

... 44common frames omitted
Caused by: org.apache.ignite.IgniteCheckedException: New binary metadata 
is incompatible with binary metadata persisted locally. Consider 
cleaning up persisted metadata from /db/binary_meta directory.
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1455) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2112) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1758) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1143) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:641) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at org.apache.ignite.IgniteSpring.start(IgniteSpring.java:66) 
~[ignite-spring-2.10.0.jar!/:2.10.0]
Caused by: org.apache.ignite.binary.BinaryObjectException: New binary 
metadata is incompatible with binary metadata persisted locally. 
Consider cleaning up persisted metadata from /db/binary_meta 
directory.
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.addMetaLocally(CacheObjectBinaryProcessorImpl.java:698) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.addMetaLocally(CacheObjectBinaryProcessorImpl.java:297) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:826) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.binary.BinaryContext.registerDescriptor(BinaryContext.java:784) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.binary.BinaryContext.registerClass(BinaryContext.java:581) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.registerTypeLocally(GridQueryProcessor.java:1284) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.registerBinaryMetadata(GridQueryProcessor.java:1181) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.registerMetadataForRegisteredCaches(GridQueryProcessor.java:1143) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.onCacheKernalStart(GridQueryProcessor.java:330) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:677) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1407) 
~[ignite-core-2.10.0.jar!/:2.10.0]

... 61 common frames omitted
Caused by: org.apache.ignite.binary.BinaryObjectException: Binary type 
has different affinity key fields [typeName=WorkspaceKey, 
affKeyFieldName1=ID, affKeyFieldName2=null]
at 
org.apache.ignite.internal.binary.BinaryUtils.mergeMetadata(BinaryUtils.java:999) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.binary.BinaryUtils.mergeMetadata(BinaryUtils.java:959) 
~[ignite-core-2.10.0.jar!/:2.10.0]
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.addMetaLocally(CacheObjectBinaryProcessorImpl.java:690) 
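
The "different affinity key fields" failure comes from Ignite's metadata merge check (`BinaryUtils.mergeMetadata` in the trace above). Below is a minimal pure-Java sketch of that rule — not Ignite's actual code, just an illustration of why an `ID` vs. `null` affinity field pair is rejected when the old `WorkspaceKey` metadata is still persisted:

```java
import java.util.Objects;

// Sketch of the affinity-key compatibility rule (illustration only, not Ignite source).
public class AffinityKeyMergeSketch {
    /**
     * Merging binary metadata succeeds only if both sides agree on the
     * affinity key field name; otherwise the new metadata is rejected,
     * which is the error seen in the stack trace above.
     */
    public static String mergeAffinityKey(String persisted, String incoming) {
        if (!Objects.equals(persisted, incoming))
            throw new IllegalStateException(
                "Binary type has different affinity key fields [affKeyFieldName1="
                    + persisted + ", affKeyFieldName2=" + incoming + ']');
        return persisted;
    }

    public static void main(String[] args) {
        // Same key field on both sides: merge succeeds.
        System.out.println(mergeAffinityKey("WORKSPACEID", "WORKSPACEID"));

        // The recreated table changed the key column while reusing
        // key_type=WorkspaceKey, so persisted and incoming metadata disagree.
        try {
            mergeAffinityKey("ID", null);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```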

Re: [2.10 branch]cpp thin client transaction :Transaction with id 1 not found.

2021-03-05 Thread 38797715

Hello Igor,

Thank you very much for your hard work!

On 2021/3/5 6:50 PM, Igor Sapego wrote:
Guys, I just want to notify you that the issue is fixed and is 
included in Ignite-2.10


Best Regards,
Igor


On Thu, Feb 18, 2021 at 3:41 AM 18624049226 <18624049...@163.com> wrote:


Hello Ilya,

https://issues.apache.org/jira/browse/IGNITE-14204


On 2021/2/18 12:14 AM, Ilya Kasnacheev wrote:

Hello!

I confirm that I see this issue. Can you please file a ticket
against IGNITE JIRA?

Thanks,
-- 
Ilya Kasnacheev



On Tue, Feb 16, 2021 at 11:58, jjimeno <jjim...@omp.com> wrote:

Hello!

In fact, it's very simple:

int main()
   {
   IgniteClientConfiguration cfg;

   cfg.SetEndPoints("10.250.0.10, 10.250.0.4");

   try
      {
      IgniteClient client = IgniteClient::Start(cfg);

      CacheClient<int32_t, int32_t> cache =
         client.GetOrCreateCache<int32_t, int32_t>("vds");

      ClientTransactions transactions = client.ClientTransactions();

      ClientTransaction tx = transactions.TxStart(PESSIMISTIC,
         READ_COMMITTED);

      cache.Put(1, 1);

      tx.Commit();
      }
   catch (IgniteError& err)
      {
      std::cout << "An error occurred: " << err.GetText() << std::endl;

      return err.GetCode();
      }

   return 0;
   }

Not always, but sometimes, I get a "stack overflow" error, which makes
me think there is a concurrency problem in the code.

Cluster configuration:


Error:


Just in case, the C++ version I'm currently using is:
685c1b70ca (HEAD -> master, origin/master, origin/HEAD)
IGNITE-13865 Support
DateTime as a key or value in .NET and Java (#8580)

Let me know if you need anything else



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: [2.9.1]Failed to find DHT update future for deferred update response

2021-03-05 Thread 38797715

Hi Ilya,

It's not easy to build a reproducer; this is probably a usage problem, 
not necessarily a bug.


In particular, I want to know how DataStreamer behaves when a node 
fails, and whether it supports failover.


On 2021/3/5 8:16 PM, Ilya Kasnacheev wrote:

Hello!

Do you happen to have a reproducer for this issue? I've not seen 
anything similar.


Regards,
--
Ilya Kasnacheev


On Wed, Mar 3, 2021 at 12:31, 38797715 <38797...@qq.com> wrote:


Hi team,

When using DataStreamer to write a large amount of data at high
speed, if one server node fails, the other nodes print a lot of
the following messages and eventually fail as well:

[2021-03-03 15:49:44,516][WARN ][sys-stripe-6-#7][atomic] Failed
to find DHT update future for deferred update response
[futId=142939814, nodeId=bfe0d1a9-8e0c-4a9e-8b62-041f2f252a80,
res=GridDhtAtomicDeferredUpdateResponse [futIds=GridLongList
[idx=256,

arr=[142939814,142731432,142827914,142939816,142731434,142939818,142939820,142939822,142939824,142939826,142939828,142731436,142939830,142731438,142939832,142939834,142731440,142827916,142731442,142827918,142827920,142939836,142827922,142939838,142827924,142827926,142939840,142939842,142827928,142731444,142939844,142827930,142731446,142939846,142827932,142939848,142731448,142827934,142939850,142827936,142939852,142827938,142827940,142939854,142827942,142939856,142939858,142827944,142827946,142939860,142827948,142939862,142827950,142731450,142939864,142827952,142731452,142939866,142939868,142827954,142731454,142827956,142731456,142827958,142939870,142827960,142731458,142939872,142827962,142939874,142731460,142939876,142939878,142731462,142731464,142939880,142731466,142939882,142939884,142731468,142939886,142731470,142939888,142731472,142731474,142939890,142731476,142939892,142731478,142939894,142939896,142731480,142731482,142731484,142731486,142731488,142731490,142731492,142731494,142939898,142939900,142731496,142731498,142939902,142939904,142731500,142939906,142939908,142731502,142939910,142731504,142939912,142939914,142731506,142939916,142731508,142731510,142939918,142939920,142939922,142731512,142939924,142731514,142939926,142731516,142939928,142731518,142939930,142731520,142939932,142939934,142939936,142731522,142939938,142731524,142939940,142731526,142939942,142939944,142731528,142939946,142731530,142939948,142939950,142731532,142939952,142939954,142731534,142939956,142731536,142939958,142731538,142939960,142731540,142731542,142939962,142939964,142731544,142731546,142731548,142939966,142731550,142939968,142731552,142939970,142731554,142731556,142939972,142731558,142731560,142731562,142939974,142731564,142939976,142939978,142939980,

What I want to ask is:

1.Are these logs related to node failures during DataStreamer
writes?

2.Does DataStreamer have a failover mechanism? We know that
DataStreamer sends data to specific nodes in batches. When a node
fails, what is the behavior of DataStreamer?



[2.9.1]Failed to find DHT update future for deferred update response

2021-03-03 Thread 38797715

Hi team,

When using DataStreamer to write a large amount of data at high speed, 
if one server node fails, the other nodes print a lot of the following 
messages and eventually fail as well:


[2021-03-03 15:49:44,516][WARN ][sys-stripe-6-#7][atomic] Failed to find 
DHT update future for deferred update response [futId=142939814, 
nodeId=bfe0d1a9-8e0c-4a9e-8b62-041f2f252a80, 
res=GridDhtAtomicDeferredUpdateResponse [futIds=GridLongList [idx=256, 
arr=[142939814,142731432,142827914,142939816,142731434,142939818,142939820,142939822,142939824,142939826,142939828,142731436,142939830,142731438,142939832,142939834,142731440,142827916,142731442,142827918,142827920,142939836,142827922,142939838,142827924,142827926,142939840,142939842,142827928,142731444,142939844,142827930,142731446,142939846,142827932,142939848,142731448,142827934,142939850,142827936,142939852,142827938,142827940,142939854,142827942,142939856,142939858,142827944,142827946,142939860,142827948,142939862,142827950,142731450,142939864,142827952,142731452,142939866,142939868,142827954,142731454,142827956,142731456,142827958,142939870,142827960,142731458,142939872,142827962,142939874,142731460,142939876,142939878,142731462,142731464,142939880,142731466,142939882,142939884,142731468,142939886,142731470,142939888,142731472,142731474,142939890,142731476,142939892,142731478,142939894,142939896,142731480,142731482,142731484,142731486,142731488,142731490,142731492,142731494,142939898,142939900,142731496,142731498,142939902,142939904,142731500,142939906,142939908,142731502,142939910,142731504,142939912,142939914,142731506,142939916,142731508,142731510,142939918,142939920,142939922,142731512,142939924,142731514,142939926,142731516,142939928,142731518,142939930,142731520,142939932,142939934,142939936,142731522,142939938,142731524,142939940,142731526,142939942,142939944,142731528,142939946,142731530,142939948,142939950,142731532,142939952,142939954,142731534,142939956,142731536,142939958,142731538,142939960,142731540,142731542,142939962,142939964,142731544,142731546,142731548,142939966,142731550,142939968,142731552,142939970,142731554,142731556,142939972,142731558,142731560,142731562,142939974,142731564,142939976,142939978,142939980,


What I want to ask is:

1.Are these logs related to node failures during DataStreamer writes?

2.Does DataStreamer have a failover mechanism? We know that 
DataStreamer sends data to specific nodes in batches. When a node 
fails, what is the behavior of DataStreamer?
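
As far as I understand, DataStreamer buffers entries per target node and, on a topology change, remaps pending batches to the new topology and retries them — treat that as an assumption rather than a statement of Ignite's exact guarantees. A toy pure-Java sketch (not Ignite's DataStreamer) of that remap-and-retry shape:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of per-node batching with remap-on-failure (illustration only).
public class StreamerFailoverSketch {
    private final List<String> nodes = new ArrayList<>();
    private final Map<String, List<Integer>> buffers = new HashMap<>();   // pending batches
    public final Map<String, List<Integer>> delivered = new HashMap<>();  // "flushed" data

    public StreamerFailoverSketch(String... ns) {
        nodes.addAll(Arrays.asList(ns));
    }

    // Stand-in for the affinity function: key -> owning node.
    private String map(int key) {
        return nodes.get(Math.floorMod(key, nodes.size()));
    }

    public void add(int key) {
        buffers.computeIfAbsent(map(key), n -> new ArrayList<>()).add(key);
    }

    // A node leaves: its pending (unacknowledged) batch is remapped to the
    // surviving topology and retried instead of being dropped.
    public void fail(String node) {
        nodes.remove(node);
        List<Integer> pending = buffers.remove(node);
        if (pending != null)
            for (int k : pending)
                add(k);
    }

    public void flush() {
        buffers.forEach((n, batch) ->
            delivered.computeIfAbsent(n, x -> new ArrayList<>()).addAll(batch));
        buffers.clear();
    }

    public static void main(String[] args) {
        StreamerFailoverSketch s = new StreamerFailoverSketch("A", "B");
        for (int k = 0; k < 6; k++)
            s.add(k);    // keys 0,2,4 buffered for A; 1,3,5 for B
        s.fail("B");     // B's pending batch is re-routed to A
        s.flush();
        System.out.println(s.delivered.get("A")); // all six keys survive on A
    }
}
```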




Cluster-wide snapshot operation finished successfully

2021-02-24 Thread 38797715

Hello team,

You can see the following message in the log:

[17:30:19,227][INFO][disco-notifier-worker-#44][IgniteSnapshotManager] 
Cluster-wide snapshot operation finished successfully 
[req=SnapshotOperationRequest 
[rqId=2d9baefa-eee4-4c13-ae78-750f0b1e1cd4, 
srcNodeId=dfa3ed59-4042-4431-8e0d-2346da6a59be, snpName=liyujue, 
grpIds=ArrayList [2100619], bltNodes=HashSet 
[bf803d29-caf2-4116-81b1-75e57445ad09, 
dfa3ed59-4042-4431-8e0d-2346da6a59be], err=null]]


On which node is this log printed? Is it random?




Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-02-01 Thread 38797715

That's strange; let's simplify the problem.

1.Start a node using the following configuration file

[Spring XML configuration stripped by the mailing-list archive]

2.Use sqlline to connect the current cluster:

./sqlline.sh --verbose=true -u jdbc:ignite:thin://localhost

3.Execute the following SQL from the 
`apache-ignite-2.9.1-bin/examples/sql/world.sql` file:


CREATE TABLE City (
    ID INT,
    Name VARCHAR,
    CountryCode CHAR(3),
    District VARCHAR,
    Population INT,
    PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, backups=1, affinityKey=CountryCode, 
CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, VALUE_TYPE=demo.model.City";

INSERT INTO City(ID, Name, CountryCode, District, Population) 
VALUES (1, 'Kabul', 'AFG', 'Kabol', 178);


4.Execute the following code:

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);
        cfg.setPeerClassLoadingEnabled(true);
        cfg.setMetricExporterSpi(new SqlViewMetricExporterSpi());
        Ignite ignite = Ignition.start(cfg);

        IgniteCache cache = ignite.cache("City");

        String sql = "SELECT count(*) FROM CITY;";
        cache.query(new SqlFieldsQuery(sql)).getAll();
5.Execute the following SQL in the SQLLine:
select * from sys.metrics where name like 'cache.City.Query%';
6.Got it!

On 2021/2/1 4:39 PM, Ilya Kasnacheev wrote:

Hello!

No, this is not correct.

Even if the table is created through JDBC, nodes will collect metrics for 
its SqlFieldsQuery's:
| cache.SQL_PUBLIC_PERSON.QueryCompleted   | 2  |
| cache.SQL_PUBLIC_PERSON.QueryExecuted    | 2  |
| cache.SQL_PUBLIC_PERSON.QueryFailed      | 0  |
| cache.SQL_PUBLIC_PERSON.QueryMaximumTime | 63 |
| cache.SQL_PUBLIC_PERSON.QueryMinimalTime | 21 |
| cache.SQL_PUBLIC_PERSON.QuerySumTime     | 84 |


Regards,
--
Ilya Kasnacheev


On Sat, Jan 30, 2021 at 05:10, 38797715 <38797...@qq.com> wrote:


Hello Ilya,

I got it.
If the table is created through JDBC, the metric data will be 0.
Only when the CREATE TABLE statement is executed through
SqlFieldsQuery is the data counted.

I think it's a bug. Ilya, can you confirm?

On 2021/1/30 12:06 AM, Ilya Kasnacheev wrote:

Hello!

I can see some values on the server when executing SqlFieldsQuery
on the same server:
| cache.foo.QueryCompleted   | 3   |
| cache.foo.QueryExecuted    | 3   |
| cache.foo.QueryFailed      | 0   |
| cache.foo.QueryMaximumTime | 350 |
| cache.foo.QueryMinimalTime | 101 |
| cache.foo.QuerySumTime     | 665 |

I can also see them on client, but you need to a) specify
ClientConnectorConfiguration on client node with non-default
port, b) connect to that port with JDBC, and c) Enable metrics
exporter SPI on the client node. Then I can see the same, after
running SqlFieldQuery's on client:
| cache.foo.QueryCompleted   | 3   |
| cache.foo.QueryExecuted    | 3   |
| cache.foo.QueryFailed      | 0   |
| cache.foo.QueryMaximumTime | 269 |
| cache.foo.QueryMinimalTime | 20  |
| cache.foo.QuerySumTime     | 424 |

Regards,
-- 
Ilya Kasnacheev



On Thu, Jan 28, 2021 at 14:42, 38797715 <38797...@qq.com> wrote:

Hello Ilya,

if I use sqlline to execute SQL, the feedback from the following
statement is correct.

SELECT * FROM sys.metrics WHERE name LIKE 'sql%' ORDER BY name;

I used SqlFieldsQuery again to do the above tests, and the results
were all 0. I think there may be an issue here.

On 2021/1/28

Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-01-29 Thread 38797715

Hello Ilya,

I got it.
If the table is created through JDBC, the metric data will be 0.
Only when the CREATE TABLE statement is executed through SqlFieldsQuery 
is the data counted.


I think it's a bug. Ilya, can you confirm?

On 2021/1/30 12:06 AM, Ilya Kasnacheev wrote:

Hello!

I can see some values on the server when executing SqlFieldsQuery on 
the same server:
| cache.foo.QueryCompleted   | 3   |
| cache.foo.QueryExecuted    | 3   |
| cache.foo.QueryFailed      | 0   |
| cache.foo.QueryMaximumTime | 350 |
| cache.foo.QueryMinimalTime | 101 |
| cache.foo.QuerySumTime     | 665 |


I can also see them on client, but you need to a) specify 
ClientConnectorConfiguration on client node with non-default port, b) 
connect to that port with JDBC, and c) Enable metrics exporter SPI on 
the client node. Then I can see the same, after running 
SqlFieldQuery's on client:
| cache.foo.QueryCompleted   | 3   |
| cache.foo.QueryExecuted    | 3   |
| cache.foo.QueryFailed      | 0   |
| cache.foo.QueryMaximumTime | 269 |
| cache.foo.QueryMinimalTime | 20  |
| cache.foo.QuerySumTime     | 424 |


Regards,
--
Ilya Kasnacheev


On Thu, Jan 28, 2021 at 14:42, 38797715 <38797...@qq.com> wrote:


Hello Ilya,

if I use sqlline to execute SQL, the feedback from the following
statement is correct.

SELECT * FROM sys.metrics WHERE name LIKE 'sql%' ORDER BY name;

I used SqlFieldsQuery again to do the above tests, and the results
were all 0. I think there may be an issue here.

On 2021/1/28 6:52 PM, Ilya Kasnacheev wrote:

Hello!

I think these metrics will be gathered for
ScanQuery/SqlFieldsQuery executed via native Java Query API, but
they will not be gathered for statements executed via JDBC.

One obvious reason is that Java Query API's queries are bound to
a specific cache. JDBC query is not bound to specific cache: JDBC
query may operate on zero or more caches. We could map these
queries back to participating caches, but I don't see that we do
that.

You could still use sql.queries.user. metrics:
SELECT * FROM sys.metrics WHERE name LIKE 'sql%' ORDER BY name;

Regards,
-- 
Ilya Kasnacheev



On Wed, Jan 27, 2021 at 15:29, 38797715 <38797...@qq.com> wrote:

Hello Ilya,

The test method is as follows:

Start 2 nodes on the localhost.
Use the CREATE TABLE statement to create a table;
Use the COPY command to load some data;
Access to cluster through sqlline;
Execute select count (*) from T;
Execute select * from sys.metrics  WHERE name LIKE '%cache.T%';
At this time, you will find that the relevant data are all 0,
but the value of OffHeapEntriesCount is still correct.

If you use sqlline to access another node, the result is the
same.

The configuration file to start the cluster is as follows:

[XML configuration stripped by the mailing-list archive]

On 2021/1/27 6:24 PM, Ilya Kasnacheev wrote:

Hello!

These values are per-node, as far as I know. Is it possible
that you have connected to a node which does not handle any
queries (as reducer, anyway)?

Regards,
-- 
Ilya Kasnacheev



On Tue, Jan 26, 2021 at 13:48, 38797715 <38797...@qq.com> wrote:

Hi,

We found that in the SYS.METRICS view some data is always 0,
such as QueryCompleted, QueryExecuted, QuerySumTime and
QueryMaximumTime.

Is this a bug? Is some configuration needed? Or have
the related functions not been implemented yet?



Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-01-28 Thread 38797715

Hello Ilya,

if I use sqlline to execute SQL, the feedback from the following 
statement is correct.


SELECT * FROM sys.metrics WHERE name LIKE 'sql%' ORDER BY name;

I used SqlFieldsQuery again to do the above tests, and the results were 
all 0. I think there may be an issue here.


On 2021/1/28 6:52 PM, Ilya Kasnacheev wrote:

Hello!

I think these metrics will be gathered for ScanQuery/SqlFieldsQuery 
executed via native Java Query API, but they will not be gathered for 
statements executed via JDBC.


One obvious reason is that Java Query API's queries are bound to a 
specific cache. JDBC query is not bound to specific cache: JDBC query 
may operate on zero or more caches. We could map these queries back to 
participating caches, but I don't see that we do that.


You could still use sql.queries.user. metrics:
SELECT * FROM sys.metrics WHERE name LIKE 'sql%' ORDER BY name;

Regards,
--
Ilya Kasnacheev


On Wed, Jan 27, 2021 at 15:29, 38797715 <38797...@qq.com> wrote:


Hello Ilya,

The test method is as follows:

Start 2 nodes on the localhost.
Use the CREATE TABLE statement to create a table;
Use the COPY command to load some data;
Access to cluster through sqlline;
Execute select count (*) from T;
Execute select * from sys.metrics  WHERE name LIKE '%cache.T%';
At this time, you will find that the relevant data are all 0, but
the value of OffHeapEntriesCount is still correct.

If you use sqlline to access another node, the result is the same.

The configuration file to start the cluster is as follows:

[XML configuration stripped by the mailing-list archive]

On 2021/1/27 6:24 PM, Ilya Kasnacheev wrote:

Hello!

These values are per-node, as far as I know. Is it possible that
you have connected to a node which does not handle any queries
(as reducer, anyway)?

Regards,
-- 
Ilya Kasnacheev



On Tue, Jan 26, 2021 at 13:48, 38797715 <38797...@qq.com> wrote:

Hi,

We found that in the SYS.METRICS view some data is always 0, such as
QueryCompleted, QueryExecuted, QuerySumTime and QueryMaximumTime.

Is this a bug? Is some configuration needed? Or have the related
functions not been implemented yet?



Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-01-27 Thread 38797715

Hello,

I know about SQL_QUERIES and SQL_QUERY_HISTORY.
I enabled org.apache.ignite.spi.metric.sql.SqlViewMetricExporterSpi in 
the configuration file, which adds a new METRICS view.

On 2021/1/28 7:02 AM, akorensh wrote:

Hi, there is no view called METRICS, only NODE_METRICS; see below

Here are the docs for those views:
https://ignite.apache.org/docs/latest/monitoring-metrics/system-views


Looks like you might be referring to an SQL view, in which case there is a
realtime one which only shows values while the query is executing, and a
historical one which shows data for queries that have already run.

https://ignite.apache.org/docs/latest/monitoring-metrics/system-views#sql_queries
https://ignite.apache.org/docs/latest/monitoring-metrics/system-views#sql_queries_history







Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-01-27 Thread 38797715

Hello Ilya,

The test method is as follows:

Start 2 nodes on the localhost.
Use the CREATE TABLE statement to create a table;
Use the COPY command to load some data;
Access to cluster through sqlline;
Execute select count (*) from T;
Execute select * from sys.metrics  WHERE name LIKE '%cache.T%';
At this time, you will find that the relevant data are all 0, but the 
value of OffHeapEntriesCount is still correct.


If you use sqlline to access another node, the result is the same.

The configuration file to start the cluster is as follows:

[XML configuration stripped by the mailing-list archive]

On 2021/1/27 6:24 PM, Ilya Kasnacheev wrote:

Hello!

These values are per-node, as far as I know. Is it possible that you 
have connected to a node which does not handle any queries (as 
reducer, anyway)?


Regards,
--
Ilya Kasnacheev


On Tue, Jan 26, 2021 at 13:48, 38797715 <38797...@qq.com> wrote:


Hi,

We found that in the SYS.METRICS view some data is always 0, such as
QueryCompleted, QueryExecuted, QuerySumTime and QueryMaximumTime.
Is this a bug? Is some configuration needed? Or have the related
functions not been implemented yet?



[2.9.1]in SYS.METRICS View some data is always 0

2021-01-26 Thread 38797715

Hi,

We found that in the SYS.METRICS view some data is always 0, such as 
QueryCompleted, QueryExecuted, QuerySumTime and QueryMaximumTime. 
Is this a bug? Is some configuration needed? Or have the related 
functions not been implemented yet?




Re: Regarding Partition Map exchange Triggers

2020-12-11 Thread 38797715

Hi,

According to the exception log in the topic below, a client node 
joining the cluster blocked a SQL query on a transactional cache. Is 
this true?


http://apache-ignite-users.70518.x6.nabble.com/Failed-to-wait-for-affinity-ready-future-for-topology-version-AffinityTopologyVersion-td34823.html

Now the related explanations seem contradictory.

On 2020/12/11 8:21 PM, Pavel Kovalenko wrote:

Hi,

I think the wiki is wrong in saying that PME is not triggered in some 
cases; it should be fixed.
Actually, PME is triggered in all cases, but for some of them it does 
not block cache operations, or the blocking time is minimized.
Most optimizations for minimizing the blocking time of PME were done 
in Ignite 2.8.


Thick client join/left PME - doesn't block operations at all.

Other events can be ordered by their potential blocking time:
1. Non-baseline node left/join - minimal
2. Baseline node stop/left
3. Baseline node join
4. Baseline change - heaviest operation

> *for the end user , is this invoked when we do ignite.getOrCreate( xx )
and ignite.cache(xx )*

Yes.

On Fri, Dec 11, 2020 at 14:55, VeenaMithare wrote:


Hi ,


I can see the triggers for PME initiation here :

https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood



Triggers
Events which causes exchange

Topology events:

Node Join (EVT_NODE_JOINED) - new node discovered and joined topology
(exchange is done after a node is included into the ring). This event
doesn't trigger the PME if a thick client connects the cluster and
an Ignite
version is 2.8 or later.


--> *This means in Ignite 2.8 or higher, this is triggered only if
nodes that participate in the baseline topology are added?*


Node Left (EVT_NODE_LEFT) - correct shutdown with call
ignite.close. This
event doesn't trigger the PME in Ignite 2.8 and later versions if
a node
belonging to an existing baseline topology leaves.

--> *This means this is not triggered at all in 2.8.1 or higher if
shut down cleanly? i.e. if this is called: Ignition.stop(false)*


Node Failed (EVT_NODE_FAILED) - detected unresponsive node,
probably crashed
and is considered failed

--> *This means this is triggered in 2.8.1 or higher for baseline
nodes or any thick client node?*

Custom events:

Activation / Deactivation / Baseline topology set -
ChangeGlobalStateMessage
Dynamic cache start / Dynamic cache stop - DynamicCacheChangeBatch

--> *for the end user, is this invoked when we do
ignite.getOrCreate(xx) and ignite.cache(xx)?*


Snapshot create / restore - SnapshotDiscoveryMessage
Global WAL enable / disable - WalStateAbstractMessage
Late affinity assignment - CacheAffinityChangeMessage


regards,
Veena.







Re: Failed to wait for affinity ready future for topology version: AffinityTopologyVersion

2020-12-09 Thread 38797715

Hi Andrei,

Another question: what is the starting point of PME? Does PME start 
when a new node join request is received?


According to the log, the error occurred at '[19:15:34,959]' while 
waiting for '[topVer=57, minorTopVer=0]', but 'Added new node to 
topology: TcpDiscoveryNode' occurred at '[19:15:54,661]', indicating 
that PME started very early.



On 2020/12/9 10:31 PM, andrei wrote:

Hello,

This means that some partition map exchange was blocked or took a 
long time. SQL queries cannot run during this period.


Most likely you are using a transaction with no timeouts. If so, set 
the default tx timeout and the tx timeout for PME:


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/TransactionConfiguration.html#setTxTimeoutOnPartitionMapExchange-long- 



https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/TransactionConfiguration.html#setDefaultTxTimeout-long- 



If not, your logs are required.

BR,
Andrei
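
The two timeouts suggested above map to Spring XML roughly like this (a sketch; the values are illustrative only):

```xml
<property name="transactionConfiguration">
    <bean class="org.apache.ignite.configuration.TransactionConfiguration">
        <!-- Illustrative value: roll back transactions that block PME after 20 s -->
        <property name="txTimeoutOnPartitionMapExchange" value="20000"/>
        <!-- Illustrative value: give every transaction a 60 s default timeout -->
        <property name="defaultTxTimeout" value="60000"/>
    </bean>
</property>
```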

On 12/9/2020 4:15 PM, 38797715 wrote:

Hi community,

When a new client node is joining the cluster, other client nodes 
cannot perform SQL queries. Is this true? The logs are as follows:


[19:15:34,959][SEVERE][query-#44319][GridMapQueryExecutor] Failed to 
execute local query.
class org.apache.ignite.IgniteException: Failed to wait for affinity 
ready future for topology version: AffinityTopologyVersion 
[topVer=57, minorTopVer=0]
    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.awaitTopologyVersion(GridAffinityAssignmentCache.java:909) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:784) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:764) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.primaryPartitions(GridAffinityAssignmentCache.java:690) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryPartitions(GridCacheAffinityManager.java:387) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.PartitionReservationManager.reservePartitions(PartitionReservationManager.java:216) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:313) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:241) 

    at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2186) 

    at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17(IgniteH2Indexing.java:2139) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:3386) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:229) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367) 

    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 

    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 


    at java.lang.Thread.run(Thread.java:748)
Caused by: class 
org.apache.ignite.internal.processors.cache.CacheStoppedException: 
Failed to perform cache operation (cache is stopped): Failed to wait 
for topology update, cache group is stopping.
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$processCacheStopRequestOnExchangeDone$629e8679$1(GridCacheProcessor.java:2723) 

    at 
org.apache.ignite.internal.util.IgniteUtils.doInParallel(IgniteUtils.java:11157) 

    at 
org.apache.ignite.internal.util.IgniteUtils.doInParallel(IgniteUtils.java:11059) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCacheStopRequestOnExchangeDone(GridCacheProcessor.java:2706) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onExchangeDone(GridCacheProcessor.java:2862) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:2330) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processFullMessage(GridDhtPartitionsExchangeFuture.java:4375) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$1500(GridDhtPartitionsExchangeFuture.java:148) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$4.apply

Re: Failed to wait for affinity ready future for topology version: AffinityTopologyVersion

2020-12-09 Thread 38797715

Hi,

What I need to confirm is whether no SQL at all can be executed during 
PME, or only some types of SQL (DDL, DML, SELECT)?


Or is it strongly related to some cache parameters, such as atomicityMode?

On 2020/12/9 10:31 PM, andrei wrote:

Hello,

This means that some partition map exchange was blocked or took a 
long time. SQL queries cannot run during this period.


Most likely you are using a transaction with no timeouts. If so, set 
the default tx timeout and the tx timeout for PME:


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/TransactionConfiguration.html#setTxTimeoutOnPartitionMapExchange-long- 



https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/TransactionConfiguration.html#setDefaultTxTimeout-long- 



If not, your logs are required.

BR,
Andrei

On 12/9/2020 4:15 PM, 38797715 wrote:

Hi community,

When a new client node is joining the cluster, other client nodes 
cannot perform SQL queries. Is this true? The logs are as follows:


[19:15:34,959][SEVERE][query-#44319][GridMapQueryExecutor] Failed to 
execute local query.
class org.apache.ignite.IgniteException: Failed to wait for affinity 
ready future for topology version: AffinityTopologyVersion 
[topVer=57, minorTopVer=0]
    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.awaitTopologyVersion(GridAffinityAssignmentCache.java:909) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:784) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:764) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.primaryPartitions(GridAffinityAssignmentCache.java:690) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryPartitions(GridCacheAffinityManager.java:387) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.PartitionReservationManager.reservePartitions(PartitionReservationManager.java:216) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:313) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:241) 

    at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2186) 

    at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17(IgniteH2Indexing.java:2139) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:3386) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:229) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367) 

    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 

    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 


    at java.lang.Thread.run(Thread.java:748)
Caused by: class 
org.apache.ignite.internal.processors.cache.CacheStoppedException: 
Failed to perform cache operation (cache is stopped): Failed to wait 
for topology update, cache group is stopping.
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$processCacheStopRequestOnExchangeDone$629e8679$1(GridCacheProcessor.java:2723) 

    at 
org.apache.ignite.internal.util.IgniteUtils.doInParallel(IgniteUtils.java:11157) 

    at 
org.apache.ignite.internal.util.IgniteUtils.doInParallel(IgniteUtils.java:11059) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCacheStopRequestOnExchangeDone(GridCacheProcessor.java:2706) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onExchangeDone(GridCacheProcessor.java:2862) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:2330) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processFullMessage(GridDhtPartitionsExchangeFuture.java:4375) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$1500(GridDhtPartitionsExchangeFuture.java:148) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$4.apply(GridDhtPartitionsExchangeFuture.java:4054

Failed to wait for affinity ready future for topology version: AffinityTopologyVersion

2020-12-09 Thread 38797715

Hi community,

When a new client node is joining the cluster, other client nodes cannot
perform SQL queries. Is this true? The logs are as follows:


[19:15:34,959][SEVERE][query-#44319][GridMapQueryExecutor] Failed to 
execute local query.
class org.apache.ignite.IgniteException: Failed to wait for affinity 
ready future for topology version: AffinityTopologyVersion [topVer=57, 
minorTopVer=0]
    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.awaitTopologyVersion(GridAffinityAssignmentCache.java:909) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:784) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:764) 

    at 
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.primaryPartitions(GridAffinityAssignmentCache.java:690) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryPartitions(GridCacheAffinityManager.java:387) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.PartitionReservationManager.reservePartitions(PartitionReservationManager.java:216) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0(GridMapQueryExecutor.java:313) 

    at 
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:241) 

    at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage(IgniteH2Indexing.java:2186) 

    at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17(IgniteH2Indexing.java:2139) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:3386) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:229) 

    at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367) 

    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 

    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 


    at java.lang.Thread.run(Thread.java:748)
Caused by: class 
org.apache.ignite.internal.processors.cache.CacheStoppedException: 
Failed to perform cache operation (cache is stopped): Failed to wait for 
topology update, cache group is stopping.
    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$processCacheStopRequestOnExchangeDone$629e8679$1(GridCacheProcessor.java:2723) 

    at 
org.apache.ignite.internal.util.IgniteUtils.doInParallel(IgniteUtils.java:11157) 

    at 
org.apache.ignite.internal.util.IgniteUtils.doInParallel(IgniteUtils.java:11059) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCacheStopRequestOnExchangeDone(GridCacheProcessor.java:2706) 

    at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onExchangeDone(GridCacheProcessor.java:2862) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:2330) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processFullMessage(GridDhtPartitionsExchangeFuture.java:4375) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$1500(GridDhtPartitionsExchangeFuture.java:148) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$4.apply(GridDhtPartitionsExchangeFuture.java:4054) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$4.apply(GridDhtPartitionsExchangeFuture.java:4042) 

    at 
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399) 

    at 
org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354) 

    at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onReceiveFullMessage(GridDhtPartitionsExchangeFuture.java:4042) 

    at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.processFullPartitionUpdate(GridCachePartitionExchangeManager.java:1886) 

    at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$3.onMessage(GridCachePartitionExchangeManager.java:429) 

    at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$3.onMessage(GridCachePartitionExchangeManager.java:416) 

   

Re: [2.9.0]NPE on invoke IgniteCache.destroy()

2020-12-08 Thread 38797715

Hi Ilya,

This issue is not easy to reproduce.

However, judging from the exception stack, the issue may be related to 
the checkpoint process during the destruction of the cache.


On 2020/12/9 at 9:33 AM, Ilya Kazakov wrote:

Hello! Can you provide some details, or show some short reproducer?

-
Ilya Kazakov

On Mon, Dec 7, 2020 at 21:31, 38797715 <38797...@qq.com> wrote:


Hi community,

Calling the IgniteCache.destroy() method results in an NPE; logs are as follows:

2020-12-07 17:32:18.870 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.e.time - Started exchange init
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=279],
crd=true, evt=DISCOVERY_CUSTOM_EVT,
evtNode=320935f6-3516-4b0b-9e5f-e80768696522,
customEvt=DynamicCacheChangeBatch
[id=1f2a90e2671-562b8f51-fa6a-4094-928c-6976ce87614a,
reqs=ArrayList [DynamicCacheChangeRequest [cacheName=PksQuota,
hasCfg=false, nodeId=320935f6-3516-4b0b-9e5f-e80768696522,
clientStartOnly=false, stop=true, destroy=false,
disabledAfterStart=false]], exchangeActions=ExchangeActions
[startCaches=null, stopCaches=[PksQuota], startGrps=[],
stopGrps=[PksQuota, destroy=true], resetParts=null,
stateChangeRequest=null], startCaches=false], allowMerge=false,
exchangeFreeSwitch=false]
2020-12-07 17:32:18.873 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Finished
waiting for partition release future
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=279],
waitTime=0ms, futInfo=NA, mode=DISTRIBUTED]
2020-12-07 17:32:18.873 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Finished
waiting for partitions release latch: ServerLatch [permits=0,
pendingAcks=HashSet [], super=CompletableLatch
[id=CompletableLatchUid [id=exchange,
topVer=AffinityTopologyVersion [topVer=1, minorTopVer=279
2020-12-07 17:32:18.873 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Finished
waiting for partition release future
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=279],
waitTime=0ms, futInfo=NA, mode=LOCAL]
2020-12-07 17:32:19.037 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture -
finishExchangeOnCoordinator [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=279], resVer=AffinityTopologyVersion
[topVer=1, minorTopVer=279]]
2020-12-07 17:32:19.438 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Finish
exchange future [startVer=AffinityTopologyVersion [topVer=1,
minorTopVer=279], resVer=AffinityTopologyVersion [topVer=1,
minorTopVer=279], err=null, rebalanced=true, wasRebalanced=true]
2020-12-07 17:32:20.870 [] [db-checkpoint-thread-#75%tradecore%]
INFO o.a.i.i.p.c.p.GridCacheDatabaseSharedManager - Checkpoint
started [checkpointId=f001018a-100e-4154-98c9-547dabf5015f,
startPtr=FileWALPointer [idx=16, fileOff=1059360696, len=4770137],
checkpointBeforeLockTime=549ms, checkpointLockWait=0ms,
checkpointListenersExecuteTime=529ms,
checkpointLockHoldTime=853ms, walCpRecordFsyncDuration=17ms,
writeCheckpointEntryDuration=7ms, splitAndSortCpPagesDuration=4ms,
pages=10775, reason='caches stop']
2020-12-07 17:32:21.255 [] [checkpoint-runner-#79%tradecore%]
WARN o.a.i.i.p.c.p.GridCacheDatabaseSharedManager - 1 checkpoint
pages were not written yet due to unsuccessful page write lock
acquisition and will be retried
2020-12-07 17:32:21.261 [] [exchange-worker-#54%tradecore%]
ERROR o.a.i.i.p.c.GridCacheProcessor - Failed to wait for checkpoint
finish during cache stop.
org.apache.ignite.IgniteCheckedException: Compound exception for
CountDownFuture.
at

org.apache.ignite.internal.util.future.CountDownFuture.addError(CountDownFuture.java:72)
~[ignite-core-2.9.0.jar!/:2.9.0]
at

org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:46)
~[ignite-core-2.9.0.jar!/:2.9.0]
at

org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:28)
~[ignite-core-2.9.0.jar!/:2.9.0]
at

org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:478)
~[ignite-core-2.9.0.jar!/:2.9.0]
at

org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:4546)
~[ignite-core-2.9.0.jar!/:2.9.0]
at

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
~[?:1.8.0_272]
at

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
~[?:1.8.0_272]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_272]
Suppressed: java.lang.NullPoin

[2.9.0]NPE on invoke IgniteCache.destroy()

2020-12-07 Thread 38797715

Hi community,

Calling the IgniteCache.destroy() method results in an NPE; logs are as follows:

2020-12-07 17:32:18.870 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.e.time - Started exchange init
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=279], crd=true, 
evt=DISCOVERY_CUSTOM_EVT, evtNode=320935f6-3516-4b0b-9e5f-e80768696522, 
customEvt=DynamicCacheChangeBatch 
[id=1f2a90e2671-562b8f51-fa6a-4094-928c-6976ce87614a, reqs=ArrayList 
[DynamicCacheChangeRequest [cacheName=PksQuota, hasCfg=false, 
nodeId=320935f6-3516-4b0b-9e5f-e80768696522, clientStartOnly=false, 
stop=true, destroy=false, disabledAfterStart=false]],
exchangeActions=ExchangeActions [startCaches=null, 
stopCaches=[PksQuota], startGrps=[], stopGrps=[PksQuota, destroy=true], 
resetParts=null, stateChangeRequest=null], startCaches=false], 
allowMerge=false, exchangeFreeSwitch=false]
2020-12-07 17:32:18.873 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Finished waiting
for partition release future [topVer=AffinityTopologyVersion [topVer=1, 
minorTopVer=279], waitTime=0ms, futInfo=NA, mode=DISTRIBUTED]
2020-12-07 17:32:18.873 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Finished waiting
for partitions release latch: ServerLatch [permits=0, 
pendingAcks=HashSet [], super=CompletableLatch [id=CompletableLatchUid 
[id=exchange, topVer=AffinityTopologyVersion [topVer=1, minorTopVer=279
2020-12-07 17:32:18.873 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Finished waiting
for partition release future [topVer=AffinityTopologyVersion [topVer=1, 
minorTopVer=279], waitTime=0ms, futInfo=NA, mode=LOCAL]
2020-12-07 17:32:19.037 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture -
finishExchangeOnCoordinator [topVer=AffinityTopologyVersion [topVer=1, 
minorTopVer=279], resVer=AffinityTopologyVersion [topVer=1, 
minorTopVer=279]]
2020-12-07 17:32:19.438 [] [exchange-worker-#54%tradecore%]
INFO o.a.i.i.p.c.d.d.p.GridDhtPartitionsExchangeFuture - Finish exchange
future [startVer=AffinityTopologyVersion [topVer=1, minorTopVer=279], 
resVer=AffinityTopologyVersion [topVer=1, minorTopVer=279], err=null, 
rebalanced=true, wasRebalanced=true]
2020-12-07 17:32:20.870 [] [db-checkpoint-thread-#75%tradecore%]
INFO o.a.i.i.p.c.p.GridCacheDatabaseSharedManager - Checkpoint started
[checkpointId=f001018a-100e-4154-98c9-547dabf5015f, 
startPtr=FileWALPointer [idx=16, fileOff=1059360696, len=4770137], 
checkpointBeforeLockTime=549ms, checkpointLockWait=0ms, 
checkpointListenersExecuteTime=529ms, checkpointLockHoldTime=853ms, 
walCpRecordFsyncDuration=17ms, writeCheckpointEntryDuration=7ms, 
splitAndSortCpPagesDuration=4ms, pages=10775, reason='caches stop']
2020-12-07 17:32:21.255 [] [checkpoint-runner-#79%tradecore%]
WARN o.a.i.i.p.c.p.GridCacheDatabaseSharedManager - 1 checkpoint pages were
not written yet due to unsuccessful page write lock acquisition and will 
be retried
2020-12-07 17:32:21.261 [] [exchange-worker-#54%tradecore%]
ERROR o.a.i.i.p.c.GridCacheProcessor - Failed to wait for checkpoint
finish during cache stop.
org.apache.ignite.IgniteCheckedException: Compound exception for 
CountDownFuture.
at 
org.apache.ignite.internal.util.future.CountDownFuture.addError(CountDownFuture.java:72) 
~[ignite-core-2.9.0.jar!/:2.9.0]
at 
org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:46) 
~[ignite-core-2.9.0.jar!/:2.9.0]
at 
org.apache.ignite.internal.util.future.CountDownFuture.onDone(CountDownFuture.java:28) 
~[ignite-core-2.9.0.jar!/:2.9.0]
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:478) 
~[ignite-core-2.9.0.jar!/:2.9.0]
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:4546) 
~[ignite-core-2.9.0.jar!/:2.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_272]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_272]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_272]
Suppressed: java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.writePages(GridCacheDatabaseSharedManager.java:4584) 
~[ignite-core-2.9.0.jar!/:2.9.0]
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$WriteCheckpointPages.run(GridCacheDatabaseSharedManager.java:4540) 
~[ignite-core-2.9.0.jar!/:2.9.0]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
~[?:1.8.0_272]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
~[?:1.8.0_272]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_272]
2020-12-07 17:32:21.265 [] [exchange-worker-#54%tradecore%]

Re: [2.8.1]Checking optimistic transaction state on remote nodes

2020-11-23 Thread 38797715

Hi Ilya,

To confirm again: according to the log message, are OPTIMISTIC
transactions with READ_COMMITTED isolation used for single-entry
operations on a transactional cache?


And if transactions are explicitly started, are the default concurrency
mode and isolation level PESSIMISTIC and REPEATABLE_READ?
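For reference, those defaults for explicitly started transactions can also be overridden through TransactionConfiguration; a minimal Spring XML sketch (the shown values match Ignite's documented defaults for explicit transactions):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="transactionConfiguration">
        <bean class="org.apache.ignite.configuration.TransactionConfiguration">
            <!-- Applied when a transaction is started without explicit arguments. -->
            <property name="defaultTxConcurrency" value="PESSIMISTIC"/>
            <property name="defaultTxIsolation" value="REPEATABLE_READ"/>
        </bean>
    </property>
</bean>
```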


On 2020/11/20 at 7:50 PM, Ilya Kasnacheev wrote:

Hello!

It will happen when the node has left but the transaction has to be 
committed.


Most operations on a transactional cache involve implicit
transactions, so there you go.


Regards,
--
Ilya Kasnacheev


On Thu, Nov 19, 2020 at 16:46, 38797715 <38797...@qq.com> wrote:


Hi community,

Although there is a transactional cache, no transaction operations are
performed, yet there is a lot of the output below in the log. Why?

[2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
Checking optimistic transaction state on remote nodes
[tx=GridDhtTxLocal
[nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=KeySetView [],
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
super=IgniteTxLocalAdapter [completedBase=null,
sndTransformedVals=false, depEnabled=false,
txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
[topVer=216485010, order=1607062856849, nodeOrder=1],
writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
nodeOrder=1], implicit=true, loc=true, threadId=24070,
startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
[topVer=117, minorTopVer=0], mvccSnapshot=null,
skipCompletedVers=false,
parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
fut=GridCacheTxRecoveryFuture [trackable=true,
futId=81c3b7af571-1093b7fe-20ae-4c3f-9adb-4ecac23c136e,
tx=GridDhtTxLocal [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=KeySetView [],
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false,
super=IgniteTxLocalAdapter [completedBase=null,
sndTransformedVals=false, depEnabled=false,
txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false,
useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion
[topVer=216485010, order=1607062856849, nodeOrder=1],
writeVer=GridCacheVersion [topVer=216485023, order=1607062856850,
nodeOrder=1], implicit=true, loc=true, threadId=24070,
startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5,
startVer=GridCacheVersion [topVer=216485010, order=1607062856849,
nodeOrder=1], endVer=null, isolation=READ_COMMITTED,
concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false,
plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null,
state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion
[topVer=117, minorTopVer=0], mvccSnapshot=null,
skipCompletedVers=false,
parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]],
failedNodeIds=SingletonSet [a7eded9b-4078-4ee5-a1dd-426b8debc203],
nearTxCheck=false, innerFuts=EmptyList [],
super=GridCompoundIdentityFuture [super=GridCompoundFuture [rdc=Bool
reducer: true, initFlag=0, lsnrCalls=0, done=false, cancelled=false,
err=null, futs=EmptyList []
[2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager]
Finishing prepared transaction [commit=true, tx=GridDhtTxLocal
[nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203,
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379,
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0,
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327,
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter
[nearOnOriginatingNode=false, nearNodes=KeySetView [],
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55,
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=fals

[2.8.1]Checking optimistic transaction state on remote nodes

2020-11-19 Thread 38797715

Hi community,

Although there is a transactional cache, no transaction operations are
performed, yet there is a lot of the output below in the log. Why?


[2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager] 
Checking optimistic transaction state on remote nodes [tx=GridDhtTxLocal 
[nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203, 
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379, 
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0, 
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327, 
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter 
[nearOnOriginatingNode=false, nearNodes=KeySetView [], 
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55, 
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false, 
super=IgniteTxLocalAdapter [completedBase=null, 
sndTransformedVals=false, depEnabled=false, 
txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false, 
useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion 
[topVer=216485010, order=1607062856849, nodeOrder=1], 
writeVer=GridCacheVersion [topVer=216485023, order=1607062856850, 
nodeOrder=1], implicit=true, loc=true, threadId=24070, 
startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5, 
startVer=GridCacheVersion [topVer=216485010, order=1607062856849, 
nodeOrder=1], endVer=null, isolation=READ_COMMITTED, 
concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false, 
plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null, 
state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion 
[topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false, 
parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]], 
fut=GridCacheTxRecoveryFuture [trackable=true, 
futId=81c3b7af571-1093b7fe-20ae-4c3f-9adb-4ecac23c136e, 
tx=GridDhtTxLocal [nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203, 
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379, 
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0, 
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327, 
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter 
[nearOnOriginatingNode=false, nearNodes=KeySetView [], 
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55, 
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false, 
super=IgniteTxLocalAdapter [completedBase=null, 
sndTransformedVals=false, depEnabled=false, 
txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false, 
useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion 
[topVer=216485010, order=1607062856849, nodeOrder=1], 
writeVer=GridCacheVersion [topVer=216485023, order=1607062856850, 
nodeOrder=1], implicit=true, loc=true, threadId=24070, 
startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5, 
startVer=GridCacheVersion [topVer=216485010, order=1607062856849, 
nodeOrder=1], endVer=null, isolation=READ_COMMITTED, 
concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false, 
plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null, 
state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion 
[topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false, 
parentTx=null, duration=370668ms, onePhaseCommit=false], size=1]]], 
failedNodeIds=SingletonSet [a7eded9b-4078-4ee5-a1dd-426b8debc203], 
nearTxCheck=false, innerFuts=EmptyList [], 
super=GridCompoundIdentityFuture [super=GridCompoundFuture [rdc=Bool 
reducer: true, initFlag=0, lsnrCalls=0, done=false, cancelled=false, 
err=null, futs=EmptyList []
[2020-11-16 14:01:44,947][INFO ][sys-stripe-8-#9][IgniteTxManager] 
Finishing prepared transaction [commit=true, tx=GridDhtTxLocal 
[nearNodeId=a7eded9b-4078-4ee5-a1dd-426b8debc203, 
nearFutId=e0576afd571-dbd82c53-1772-4c53-a4ea-38e601002379, 
nearMiniId=1, nearFinFutId=null, nearFinMiniId=0, 
nearXidVer=GridCacheVersion [topVer=216485010, order=1607062821327, 
nodeOrder=30], lb=null, super=GridDhtTxLocalAdapter 
[nearOnOriginatingNode=false, nearNodes=KeySetView [], 
dhtNodes=KeySetView [e4d4fc27-d2d9-47f9-8d21-dfac2c003b55, 
3060fc02-e94a-4b6d-851a-05d75ea751e0], explicitLock=false, 
super=IgniteTxLocalAdapter [completedBase=null, 
sndTransformedVals=false, depEnabled=false, 
txState=IgniteTxImplicitSingleStateImpl [init=true, recovery=false, 
useMvccCaching=false], super=IgniteTxAdapter [xidVer=GridCacheVersion 
[topVer=216485010, order=1607062856849, nodeOrder=1], 
writeVer=GridCacheVersion [topVer=216485023, order=1607062856850, 
nodeOrder=1], implicit=true, loc=true, threadId=24070, 
startTime=1605506134277, nodeId=2b0db4f4-86d1-42c2-babf-f6318bd932e5, 
startVer=GridCacheVersion [topVer=216485010, order=1607062856849, 
nodeOrder=1], endVer=null, isolation=READ_COMMITTED, 
concurrency=OPTIMISTIC, timeout=0, sysInvalidate=false, sys=false, 
plc=2, commitVer=null, finalizing=RECOVERY_FINISH, invalidParts=null, 
state=PREPARED, timedOut=false, topVer=AffinityTopologyVersion 
[topVer=117, minorTopVer=0], mvccSnapshot=null, skipCompletedVers=false, 

Re: High availability of local listeners for ContinuousQuery or Events

2020-10-30 Thread 38797715

Hi Igor,

We hope that if the local listener node fails, there is a failover-like
mechanism. Otherwise, if the local listener node fails and restarts,
the events that occurred during the failure are lost.


On 2020/10/30 at 12:04 AM, Igor Belyakov wrote:

Hi,

If the node that registered a continuous query fails, the
continuous query will be undeployed from the cluster. The cluster
state won't be changed.


It is not good practice to put business code in a remote
filter. Could you please provide more details about your use case?


Igor

On Thu, Oct 29, 2020 at 4:46 PM, 38797715 <38797...@qq.com> wrote:


Hi community,

For local listeners registered for ContinuousQuery and Events, is there
a corresponding high availability mechanism design? That is, if the node
registering the local listener fails, what state will the cluster be in?

If we do not register a local listener, but instead write the business
code in the remote filter and return false, is this a good practice?




High availability of local listeners for ContinuousQuery or Events

2020-10-29 Thread 38797715

Hi community,

For local listeners registered for ContinuousQuery and Events, is there
a corresponding high availability mechanism design? That is, if the node
registering the local listener fails, what state will the cluster be in?


If we do not register a local listener, but instead write the business
code in the remote filter and return false, is this a good practice?





Re: ZookeeperClusterNode memory leak(potential)

2020-10-14 Thread 38797715

Hi,

2.8.1

On 2020/10/14 at 4:06 PM, Stephen Darlington wrote:

What version of Ignite are you using?

On 14 Oct 2020, at 08:36, 38797715 <38797...@qq.com> wrote:


Hi team,

About 50 nodes (including client nodes) are discovered via ZooKeeper.

After running for about 2 days, the number of ZookeeperClusterNode
instances has increased significantly (to about 1.6 million). Running
for a longer time may cause an out-of-memory error.







ZookeeperClusterNode memory leak(potential)

2020-10-14 Thread 38797715

Hi team,

About 50 nodes (including client nodes) are discovered via ZooKeeper.

After running for about 2 days, the number of ZookeeperClusterNode
instances has increased significantly (to about 1.6 million). Running
for a longer time may cause an out-of-memory error.






Re: How to confirm that disk compression is in effect?

2020-09-08 Thread 38797715

Hi,

I tried to test the following scenario, but it didn't seem to improve.

pageSize=4096 & wal compression enabled & COPY command import for 6M data


I've looked at the following discussion and performance test results, 
and it seems that the throughput has been improved by 2x-4x.


https://issues.apache.org/jira/browse/IGNITE-11336

http://apache-ignite-developers.2346864.n4.nabble.com/Disk-page-compression-for-Ignite-persistent-store-td38009.html

According to my understanding, the execution time of the COPY command
should be greatly reduced, but this is not the case. Why?


On 2020/9/8 at 5:16 PM, Ilya Kasnacheev wrote:

Hello!

If your data does not compress at least 2x, then pageSize=8192 is 
useless. Frankly speaking I've never seen any beneficial deployments 
of page compression. I recommend turning it off and keeping WAL 
compression only.


Regards,
--
Ilya Kasnacheev


Tue, 8 Sep 2020 at 05:18, 38797715 <38797...@qq.com 
<mailto:38797...@qq.com>>:


Hi Ilya,

This module has already been imported.

We re-tested three scenarios:

1. pageSize=4096

2. pageSize=8192

3. pageSize=8192, with disk compression and WAL compression both enabled.

From the test results, the pageSize = 4096 scenario writes slightly
faster and occupies slightly less disk space, but the difference is
under 10%.

In the two scenarios with pageSize = 8192, there is no big
difference in write speed or disk space usage. However, each WAL
file is always 64 MB, so it is not clear whether more compressed
data is stored per file.

My test environment is:

a notebook computer (8 GB RAM, 256 GB SSD), Apache Ignite 2.8.1, with
the COPY command used to import 6M rows.

On 2020/9/7 10:06 PM, Ilya Kasnacheev wrote:

Hello!

Did you add the `ignite-compress` module to your classpath?

Have you tried WAL compression instead? Please check

https://apacheignite.readme.io/docs/write-ahead-log#section-wal-records-compression

<https://apacheignite.readme.io/docs/write-ahead-log#section-wal-records-compression>

Regards,
-- 
Ilya Kasnacheev



Fri, 28 Aug 2020 at 06:52, 38797715 <38797...@qq.com
<mailto:38797...@qq.com>>:

Hi,

The CREATE TABLE statement is as follows:

CREATE TABLE PI_COM_DAY (
COM_ID VARCHAR(30) NOT NULL,
ITEM_ID VARCHAR(30) NOT NULL,
DATE1 VARCHAR(8) NOT NULL,
KIND VARCHAR(1),
QTY_IOD DECIMAL(18, 6),
AMT_IOD DECIMAL(18, 6),
QTY_PURCH DECIMAL(18, 6),
AMT_PURCH DECIMAL(18, 6),
QTY_SOLD DECIMAL(18, 6),
AMT_SOLD DECIMAL(18, 6),
AMT_SOLD_NO_TAX DECIMAL(18, 6),
QTY_PROFIT DECIMAL(18, 6),
AMT_PROFIT DECIMAL(18, 6),
QTY_LOSS DECIMAL(18, 6),
AMT_LOSS DECIMAL(18, 6),
QTY_EOD DECIMAL(18, 6),
AMT_EOD DECIMAL(18, 6),
UNIT_COST DECIMAL(18, 8),
SUMCOST_SOLD DECIMAL(18, 6),
GROSS_PROFIT DECIMAL(18, 6),
QTY_ALLOCATION DECIMAL(18, 6),
AMT_ALLOCATION DECIMAL(18, 2),
AMT_ALLOCATION_NO_TAX DECIMAL(18, 2),
GROSS_PROFIT_ALLOCATION DECIMAL(18, 6),
SUMCOST_SOLD_ALLOCATION DECIMAL(18, 6),
PRIMARY KEY (COM_ID, ITEM_ID, DATE1))
WITH "template=cache-partitioned,CACHE_NAME=PI_COM_DAY";
CREATE INDEX IDX_PI_COM_DAY_ITEM_DATE ON PI_COM_DAY(ITEM_ID, DATE1);

I don't think there's anything special about it.
Then we imported 10 million rows using the COPY command. The data is
basically actual production data; I think the dispersion is OK, not
artificial data with high similarity.
I would like to know whether there are test results for the disk
compression feature? Most other in-memory databases also offer data
compression, but it doesn't seem to work here, or what am I doing
wrong?

On 2020/8/28 12:39 AM, Michael Cherkasov wrote:

Could you please share your benchmark code? I believe compression
depends on the data you write; if it is fully random, it is
difficult to compress.

On Wed, Aug 26, 2020, 8:26 PM 38797715 <38797...@qq.com
<mailto:38797...@qq.com>> wrote:

Hi,

We turned on disk compression to observe the trend of
execution time and disk space.

Our expectation was that with disk compression turned on,
more CPU would be used but less disk space would be
occupied. Because more data is written per unit of time,
the overall execution time should shorten when memory is
insufficient.

However, the execution time and disk consumption did not
change significantly. We tested the
diskPageComp

Re: How to confirm that disk compression is in effect?

2020-09-07 Thread 38797715

Hi Ilya,

This module has already been imported.

We re-tested three scenarios:

1. pageSize=4096

2. pageSize=8192

3. pageSize=8192, with disk compression and WAL compression both enabled.


From the test results, the pageSize = 4096 scenario writes slightly
faster and occupies slightly less disk space, but the difference is
under 10%.

In the two scenarios with pageSize = 8192, there is no big difference
in write speed or disk space usage. However, each WAL file is always
64 MB, so it is not clear whether more compressed data is stored per
file.


My test environment is:

a notebook computer (8 GB RAM, 256 GB SSD), Apache Ignite 2.8.1, with
the COPY command used to import 6M rows.


On 2020/9/7 10:06 PM, Ilya Kasnacheev wrote:

Hello!

Did you add the `ignite-compress` module to your classpath?

Have you tried WAL compression instead? Please check 
https://apacheignite.readme.io/docs/write-ahead-log#section-wal-records-compression 
<https://apacheignite.readme.io/docs/write-ahead-log#section-wal-records-compression>


Regards,
--
Ilya Kasnacheev


Fri, 28 Aug 2020 at 06:52, 38797715 <38797...@qq.com 
<mailto:38797...@qq.com>>:


Hi,

The CREATE TABLE statement is as follows:

CREATE TABLE PI_COM_DAY (
COM_ID VARCHAR(30) NOT NULL,
ITEM_ID VARCHAR(30) NOT NULL,
DATE1 VARCHAR(8) NOT NULL,
KIND VARCHAR(1),
QTY_IOD DECIMAL(18, 6),
AMT_IOD DECIMAL(18, 6),
QTY_PURCH DECIMAL(18, 6),
AMT_PURCH DECIMAL(18, 6),
QTY_SOLD DECIMAL(18, 6),
AMT_SOLD DECIMAL(18, 6),
AMT_SOLD_NO_TAX DECIMAL(18, 6),
QTY_PROFIT DECIMAL(18, 6),
AMT_PROFIT DECIMAL(18, 6),
QTY_LOSS DECIMAL(18, 6),
AMT_LOSS DECIMAL(18, 6),
QTY_EOD DECIMAL(18, 6),
AMT_EOD DECIMAL(18, 6),
UNIT_COST DECIMAL(18, 8),
SUMCOST_SOLD DECIMAL(18, 6),
GROSS_PROFIT DECIMAL(18, 6),
QTY_ALLOCATION DECIMAL(18, 6),
AMT_ALLOCATION DECIMAL(18, 2),
AMT_ALLOCATION_NO_TAX DECIMAL(18, 2),
GROSS_PROFIT_ALLOCATION DECIMAL(18, 6),
SUMCOST_SOLD_ALLOCATION DECIMAL(18, 6),
PRIMARY KEY (COM_ID, ITEM_ID, DATE1))
WITH "template=cache-partitioned,CACHE_NAME=PI_COM_DAY";
CREATE INDEX IDX_PI_COM_DAY_ITEM_DATE ON PI_COM_DAY(ITEM_ID, DATE1);

I don't think there's anything special about it.
Then we imported 10 million rows using the COPY command. The data is
basically actual production data; I think the dispersion is OK, not
artificial data with high similarity.
I would like to know whether there are test results for the disk
compression feature? Most other in-memory databases also offer data
compression, but it doesn't seem to work here, or what am I doing
wrong?

On 2020/8/28 12:39 AM, Michael Cherkasov wrote:

Could you please share your benchmark code? I believe compression
depends on the data you write; if it is fully random, it is
difficult to compress.

On Wed, Aug 26, 2020, 8:26 PM 38797715 <38797...@qq.com
<mailto:38797...@qq.com>> wrote:

Hi,

We turned on disk compression to observe the trend of execution
time and disk space.

Our expectation was that with disk compression turned on, more
CPU would be used but less disk space would be occupied. Because
more data is written per unit of time, the overall execution
time should shorten when memory is insufficient.

However, the execution time and disk consumption did not change
significantly. We tested diskPageCompressionLevel values of
0, 10 and 17.

Our test method is as follows:
The ignite-compress module has been introduced.

The configuration of ignite is as follows:


[Spring XML configuration: the bean definitions were stripped by the 
mail archive; only the schema URLs survived.]

Re: How to confirm that disk compression is in effect?

2020-08-27 Thread 38797715

Hi,

The CREATE TABLE statement is as follows:

CREATE TABLE PI_COM_DAY (
COM_ID VARCHAR(30) NOT NULL,
ITEM_ID VARCHAR(30) NOT NULL,
DATE1 VARCHAR(8) NOT NULL,
KIND VARCHAR(1),
QTY_IOD DECIMAL(18, 6),
AMT_IOD DECIMAL(18, 6),
QTY_PURCH DECIMAL(18, 6),
AMT_PURCH DECIMAL(18, 6),
QTY_SOLD DECIMAL(18, 6),
AMT_SOLD DECIMAL(18, 6),
AMT_SOLD_NO_TAX DECIMAL(18, 6),
QTY_PROFIT DECIMAL(18, 6),
AMT_PROFIT DECIMAL(18, 6),
QTY_LOSS DECIMAL(18, 6),
AMT_LOSS DECIMAL(18, 6),
QTY_EOD DECIMAL(18, 6),
AMT_EOD DECIMAL(18, 6),
UNIT_COST DECIMAL(18, 8),
SUMCOST_SOLD DECIMAL(18, 6),
GROSS_PROFIT DECIMAL(18, 6),
QTY_ALLOCATION DECIMAL(18, 6),
AMT_ALLOCATION DECIMAL(18, 2),
AMT_ALLOCATION_NO_TAX DECIMAL(18, 2),
GROSS_PROFIT_ALLOCATION DECIMAL(18, 6),
SUMCOST_SOLD_ALLOCATION DECIMAL(18, 6),
PRIMARY KEY (COM_ID, ITEM_ID, DATE1))
WITH "template=cache-partitioned,CACHE_NAME=PI_COM_DAY";

CREATE INDEX IDX_PI_COM_DAY_ITEM_DATE ON PI_COM_DAY(ITEM_ID, DATE1);

I don't think there's anything special about it.
Then we imported 10 million rows using the COPY command. The data is 
basically actual production data; I think the dispersion is OK, not 
artificial data with high similarity.
I would like to know whether there are test results for the disk 
compression feature? Most other in-memory databases also offer data 
compression, but it doesn't seem to work here, or what am I doing 
wrong?


On 2020/8/28 12:39 AM, Michael Cherkasov wrote:
Could you please share your benchmark code? I believe compression 
depends on the data you write; if it is fully random, it is difficult 
to compress.


On Wed, Aug 26, 2020, 8:26 PM 38797715 <38797...@qq.com 
<mailto:38797...@qq.com>> wrote:


Hi,

We turned on disk compression to observe the trend of execution time
and disk space.

Our expectation was that with disk compression turned on, more CPU
would be used but less disk space would be occupied. Because more
data is written per unit of time, the overall execution time should
shorten when memory is insufficient.

However, the execution time and disk consumption did not change
significantly. We tested diskPageCompressionLevel values of 0, 10
and 17.

Our test method is as follows:
The ignite-compress module has been introduced.

The configuration of ignite is as follows:


[Spring XML configuration: the bean definitions were stripped by the 
mail archive; only the schema URLs survived.]

How to confirm that disk compression is in effect?

2020-08-26 Thread 38797715

Hi,

We turned on disk compression to observe the trend of execution time and 
disk space.

Our expectation was that with disk compression turned on, more CPU would 
be used but less disk space would be occupied. Because more data is 
written per unit of time, the overall execution time should shorten when 
memory is insufficient.

However, the execution time and disk consumption did not change 
significantly. We tested diskPageCompressionLevel values of 0, 10 and 17.


Our test method is as follows:
The ignite-compress module has been introduced.

The configuration of ignite is as follows:


[Spring XML configuration: the bean definitions were stripped by the 
mail archive; only the schema URLs survived.]
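Since the XML in this thread did not survive the mail archive, here is a rough programmatic sketch of the settings under discussion: page size, per-cache disk page compression at level 10, and WAL page compression. This is a reconstruction under stated assumptions, not the poster's exact configuration; the cache name `PI_COM_DAY` is taken from the thread, and the choice of ZSTD is an illustration. Disk page compression also requires the ignite-compress module on the classpath and a page size larger than the file system block size.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.DiskPageCompression;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CompressionConfigSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        // Disk page compression needs a page size larger than the file
        // system block size (usually 4 KB), hence 8192 here.
        storageCfg.setPageSize(8192);
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        // WAL page snapshot compression (the IGNITE-11336 feature).
        storageCfg.setWalPageCompression(DiskPageCompression.ZSTD);
        cfg.setDataStorageConfiguration(storageCfg);

        // Per-cache disk page compression; level 10 as tested in the thread.
        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("PI_COM_DAY");
        cacheCfg.setDiskPageCompression(DiskPageCompression.ZSTD);
        cacheCfg.setDiskPageCompressionLevel(10);
        cfg.setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
    }
}
```

This is a configuration fragment only; it needs an Ignite server environment to actually run.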

Re: How to solve the problem of single quotes in SQL statements?

2020-08-26 Thread 38797715

OK, solved:
INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka''bul','AFG','Kabol',178);
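The rule applied here is the SQL-standard one: a single quote inside a string literal is written as two single quotes. As a minimal sketch, the escaping can be done mechanically (the `toSqlLiteral` helper is hypothetical, not an Ignite or JDBC API):

```java
public class SqlEscape {
    // Hypothetical helper: escape a value for use as a SQL string literal
    // by doubling embedded single quotes (the SQL-standard escape).
    public static String toSqlLiteral(String value) {
        return "'" + value.replace("'", "''") + "'";
    }

    public static void main(String[] args) {
        String name = "Ka'bul";
        System.out.println("INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES (1,"
            + toSqlLiteral(name) + ",'AFG','Kabol',178);");
    }
}
```

For statements built from untrusted input, a PreparedStatement with bind parameters remains the safer choice.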


On 2020/8/26 7:10 PM, 38797715 wrote:

Hi,

for example:

CREATE TABLE City (   ID INT(11),   Name CHAR(35),   CountryCode 
CHAR(3),   District CHAR(20),   Population INT(11),   PRIMARY KEY (ID, 
CountryCode) ) WITH "template=partitioned, backups=1, 
affinityKey=CountryCode, CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, 
VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka'bul','AFG','Kabol',178);


The Name field's value contains a single quote.

The following forms all throw an exception:

INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka\'bul','AFG','Kabol',178);


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka\\'bul','AFG','Kabol',178);


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,"Ka'bul",'AFG','Kabol',178);


I wonder if there are any other solutions besides using a PreparedStatement?



How to solve the problem of single quotes in SQL statements?

2020-08-26 Thread 38797715

Hi,

for example:

CREATE TABLE City (   ID INT(11),   Name CHAR(35),   CountryCode 
CHAR(3),   District CHAR(20),   Population INT(11),   PRIMARY KEY (ID, 
CountryCode) ) WITH "template=partitioned, backups=1, 
affinityKey=CountryCode, CACHE_NAME=City, KEY_TYPE=demo.model.CityKey, 
VALUE_TYPE=demo.model.City";


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka'bul','AFG','Kabol',178);


The Name field's value contains a single quote.

The following forms all throw an exception:

INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka\'bul','AFG','Kabol',178);


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,'Ka\\'bul','AFG','Kabol',178);


INSERT INTO City(ID, Name, CountryCode, District, Population) VALUES 
(1,"Ka'bul",'AFG','Kabol',178);


I wonder if there are any other solutions besides using a PreparedStatement?




Uncommitted data within the scope of a transaction cannot be read within the same thread(2.8.1)

2020-08-19 Thread 38797715

Hi guys,

If you execute the following code, you will find that cache.iterator() 
returns no results. Without starting the transaction, the returned 
results are correct.

Is this a bug or a known technical limitation?

On the server side, just start ignite.sh.

import java.io.Serializable;
import java.util.Iterator;

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.READ_COMMITTED;

public class CacheTransactionExample {

    public static void main(String[] args) throws IgniteException {
        Ignition.setClientMode(true);
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, Account> cfg = new CacheConfiguration<>("Account");
            cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            IgniteCache<Integer, Account> cache = ignite.getOrCreateCache(cfg);

            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, READ_COMMITTED)) {
                cache.put(1, new Account(1, 100));
                cache.put(2, new Account(1, 200));

                // Single-key reads see the transaction's own pending writes.
                System.out.println(">>> " + cache.get(1));
                System.out.println(">>> " + cache.get(2));

                // The scan iterator, however, returns nothing for the
                // uncommitted entries above.
                Iterator<Cache.Entry<Integer, Account>> it = cache.iterator();
                while (it.hasNext()) {
                    Cache.Entry<Integer, Account> acc = it.next();
                    System.out.println("<<< " + acc.getValue());
                }

                tx.commit();
            }
            finally {
                ignite.destroyCache("Account");
            }
        }
    }

    private static class Account implements Serializable {
        private int id;

        private double balance;

        Account(int id, double balance) {
            this.id = id;
            this.balance = balance;
        }

        void update(double amount) {
            balance += amount;
        }

        @Override public String toString() {
            return "Account [id=" + id + ", balance=$" + balance + ']';
        }
    }
}



Re: Enabling swapPath causes invoking shutdown hook

2020-08-13 Thread 38797715

Hi Denis,

We did a test in the same environment (8 GB RAM, 10 GB swap partition) 
with the same configuration (2 GB heap, persistence enabled, about 6 GB 
of data); the only difference was maxSize, configured as 5 GB and 12 GB 
respectively. We found that the maxSize = 12G scenario performed better 
than the maxSize = 5G scenario, with write performance improved by more 
than 10%.


I suspect that if the memory data region is large enough to hold all the 
data, Ignite's page replacement might never be triggered.

Our test scenarios are limited and may not be convincing. However, 
insufficient memory may well be the norm, and in that case making full 
use of the OS swap mechanism, which uses more disk space but achieves 
better performance, may be a good practice.


On 2020/8/14 8:22 AM, Denis Magda wrote:
Ignite swapping is based on the swapping mechanism of the OS, so you 
shouldn't see any difference if you enable the OS one directly in some way.


Generally, you should not use swapping of any form as a permanent 
persistence layer, due to the performance penalty. Once swapping kicks 
in, you should scale out your cluster and wait while the cluster 
rebalances part of the data to a new node. When the rebalancing 
completes, the performance will recover and swapping will no longer 
be needed.


Denis

On Thursday, August 13, 2020, 38797715 <38797...@qq.com 
<mailto:38797...@qq.com>> wrote:


Hi,

We retested and found that with swapPath configured, the write
speed became slower and slower as the amount of data increased.
With a large amount of data it is, on average, much slower than
the scenario where native persistence is enabled and the WAL is
disabled.

Seen this way, the swapPath property has no production value;
maybe it was an early development feature that is now a bit out
of date.

What I want to ask is: with little physical memory, is turning on
persistence and then configuring a larger maxSize (using the swap
mechanism of the OS) a viable solution? In other words, which is
better, the swap mechanism of the OS or Ignite's page replacement?

On 2020/8/6 9:23 PM, Ilya Kasnacheev wrote:

Hello!

I think the performance of swap space should be on par with
persistence with disabled WAL.

You can submit suggested updates to the documentation if you like.

Regards,
-- 
Ilya Kasnacheev



Wed, 5 Aug 2020 at 06:00, 38797715 <38797...@qq.com
<mailto:38797...@qq.com>>:

Hi Ilya,

If so, there are two ways to implement Ignite's swap space:
1. maxSize > physical memory, which uses the swap mechanism
of the OS and can be tuned via *vm.swappiness*.
2. Configure the *swapPath* property, which is implemented by
Ignite itself, is independent of the OS, and has no tuning
parameters.

There's a choice between these two modes, right? Then I
think there may be many problems in the document's
description; I hope you can check it again:
https://apacheignite.readme.io/docs/swap-space
<https://apacheignite.readme.io/docs/swap-space>

After our initial testing, the performance of swap space is
much better than native persistence, so I think this pattern
is valuable in some scenarios.

On 2020/8/4 10:16 PM, Ilya Kasnacheev wrote:

Hello!

From the docs:

To avoid this situation with the swapping capabilities, you
need to:

  * Set maxSize = bigger_than_RAM_size, in which case,
the OS will take care of the swapping.
  * Enable swapping by setting the
DataRegionConfiguration.swapPath property.


I actually think these are either-or. You should either do
the first (and configure OS swapping) or the second part.

Having said that, I recommend setting proper Native
Persistence instead.

Regards,
-- 
Ilya Kasnacheev



Sat, 25 Jul 2020 at 04:49, 38797715 <38797...@qq.com
<mailto:38797...@qq.com>>:

Hi,

https://apacheignite.readme.io/docs/swap-space
<https://apacheignite.readme.io/docs/swap-space>

According to the above document, if the physical memory
is small, you can solve this problem by enabling swap
space. The specific method is to configure maxSize to a
value larger than the physical memory, and to configure
the swapPath property.

But from the test results, the node is terminated.

I think the correct result should be that even if the
amount of data exceeds the physical memory, the node
should still be able to 

Re: Enabling swapPath causes invoking shutdown hook

2020-08-13 Thread 38797715

Hi,

We retested and found that with swapPath configured, the write speed 
became slower and slower as the amount of data increased. With a large 
amount of data it is, on average, much slower than the scenario where 
native persistence is enabled and the WAL is disabled.

Seen this way, the swapPath property has no production value; maybe it 
was an early development feature that is now a bit out of date.


What I want to ask is: with little physical memory, is turning on 
persistence and then configuring a larger maxSize (using the swap 
mechanism of the OS) a viable solution? In other words, which is better, 
the swap mechanism of the OS or Ignite's page replacement?


On 2020/8/6 9:23 PM, Ilya Kasnacheev wrote:

Hello!

I think the performance of swap space should be on par with 
persistence with disabled WAL.


You can submit suggested updates to the documentation if you like.

Regards,
--
Ilya Kasnacheev


Wed, 5 Aug 2020 at 06:00, 38797715 <38797...@qq.com 
<mailto:38797...@qq.com>>:


Hi Ilya,

If so, there are two ways to implement Ignite's swap space:
1. maxSize > physical memory, which uses the swap mechanism of
the OS and can be tuned via *vm.swappiness*.
2. Configure the *swapPath* property, which is implemented by
Ignite itself, is independent of the OS, and has no tuning
parameters.

There's a choice between these two modes, right? Then I think
there may be many problems in the document's description; I
hope you can check it again:
https://apacheignite.readme.io/docs/swap-space
<https://apacheignite.readme.io/docs/swap-space>

After our initial testing, the performance of swap space is much
better than native persistence, so I think this pattern is
valuable in some scenarios.

On 2020/8/4 10:16 PM, Ilya Kasnacheev wrote:

Hello!

From the docs:

To avoid this situation with the swapping capabilities, you need to:

  * Set maxSize = bigger_than_RAM_size, in which case, the OS
will take care of the swapping.
  * Enable swapping by setting the
DataRegionConfiguration.swapPath property.


I actually think these are either-or. You should either do the
first (and configure OS swapping) or the second part.

Having said that, I recommend setting proper Native Persistence
instead.

Regards,
-- 
Ilya Kasnacheev



Sat, 25 Jul 2020 at 04:49, 38797715 <38797...@qq.com
<mailto:38797...@qq.com>>:

Hi,

https://apacheignite.readme.io/docs/swap-space
<https://apacheignite.readme.io/docs/swap-space>

According to the above document, if the physical memory is
small, you can solve this problem by enabling swap space.
The specific method is to configure maxSize to a value
larger than the physical memory, and to configure the
swapPath property.

But from the test results, the node is terminated.

I think the correct result should be that even if the amount
of data exceeds the physical memory, the node should still
run normally, with data swapped out to disk.

I want to know which parameters affect this behavior:
*vm.swappiness* or others?

On 2020/7/24 9:55 PM, aealexsandrov wrote:

Hi,

Can you please clarify your expectations? You expected that the JVM
process would be killed instead of stopping gracefully? What are you
trying to achieve?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/  
<http://apache-ignite-users.70518.x6.nabble.com/>




Re: Enabling swapPath causes invoking shutdown hook

2020-08-04 Thread 38797715

Hi Ilya,

If so, there are two ways to implement Ignite's swap space:
1. maxSize > physical memory, which uses the swap mechanism of the 
OS and can be tuned via *vm.swappiness*.
2. Configure the *swapPath* property, which is implemented by Ignite 
itself, is independent of the OS, and has no tuning parameters.


There's a choice between these two modes, right? Then I think there may 
be many problems in the document's description; I hope you can check it 
again:

https://apacheignite.readme.io/docs/swap-space

After our initial testing, the performance of swap space is much better 
than native persistence, so I think this mode is valuable in some 
scenarios.
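To make the two modes concrete, here is a minimal programmatic sketch of a data region configured each way. The region name, sizes, and path are made-up illustrations, not recommendations; the two options are alternatives, so option 2 is shown commented out.

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SwapSpaceSketch {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("swap-region");

        // Option 1: oversize the region relative to RAM and let the OS
        // swap (tuned at the OS level via vm.swappiness).
        region.setMaxSize(16L * 1024 * 1024 * 1024); // e.g. 16 GB on an 8 GB host

        // Option 2: Ignite-managed swap files instead of OS swapping.
        // region.setSwapPath("/data/ignite-swap");

        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);
    }
}
```

This is a configuration fragment only; pass `cfg` to `Ignition.start(cfg)` in a real deployment.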


On 2020/8/4 10:16 PM, Ilya Kasnacheev wrote:

Hello!

From the docs:

To avoid this situation with the swapping capabilities, you need to:

  * Set maxSize = bigger_than_RAM_size, in which case, the OS will
take care of the swapping.
  * Enable swapping by setting the
DataRegionConfiguration.swapPath property.


I actually think these are either-or. You should either do the first 
(and configure OS swapping) or the second part.


Having said that, I recommend setting proper Native Persistence instead.

Regards,
--
Ilya Kasnacheev


Sat, 25 Jul 2020 at 04:49, 38797715 <38797...@qq.com 
<mailto:38797...@qq.com>>:


Hi,

https://apacheignite.readme.io/docs/swap-space
<https://apacheignite.readme.io/docs/swap-space>

According to the above document, if the physical memory is small,
you can solve this problem by enabling swap space. The specific
method is to configure maxSize to a value larger than the physical
memory, and to configure the swapPath property.

But from the test results, the node is terminated.

I think the correct result should be that even if the amount of
data exceeds the physical memory, the node should still run
normally, with data swapped out to disk.

I want to know which parameters affect this behavior:
*vm.swappiness* or others?

On 2020/7/24 9:55 PM, aealexsandrov wrote:

Hi,

Can you please clarify your expectations? You expected that the JVM process
would be killed instead of stopping gracefully? What are you trying to achieve?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/  
<http://apache-ignite-users.70518.x6.nabble.com/>




integrated with Ignite and HBase

2020-08-04 Thread 38797715

Hi community,

Does the community have a demo integrating Ignite and HBase, such as a 
CacheStore implementation or another integration pattern?




Re: Enabling swapPath causes invoking shutdown hook

2020-07-24 Thread 38797715

Hi,

https://apacheignite.readme.io/docs/swap-space

According to the above document, if the physical memory is small, you 
can solve this problem by enabling swap space. The specific method is 
to configure maxSize to a value larger than the physical memory, and 
to configure the swapPath property.


But from the test results, the node is terminated.

I think the correct result should be that even if the amount of data 
exceeds the physical memory, the node should still run normally, with 
data swapped out to disk.


I want to know which parameters affect this behavior: 
*vm.swappiness* or others?


On 2020/7/24 9:55 PM, aealexsandrov wrote:

Hi,

Can you please clarify your expectations? You expected that the JVM process
would be killed instead of stopping gracefully? What are you trying to achieve?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Enabling swapPath causes invoking shutdown hook

2020-07-23 Thread 38797715

Hi community,

When swapPath is configured in DataRegionConfiguration and maxSize is 
greater than the physical memory (that is, swap space is enabled), if 
the amount of data exceeds the physical memory, a node failure will 
occur. The log is as follows:


[08:29:14,212][INFO][Thread-24][G] Invoking shutdown hook...

I think the node process may be killed by the OS. What parameters can be 
adjusted?





GridIoManager compile error(2.8.0)

2020-06-21 Thread 38797715

Hi team,

I downloaded the source code of version 2.8.1, but found that it could 
not be compiled successfully, and there were compilation errors in the code.


for example:

org.apache.ignite.internal.managers.communication.GridIoManager

line 395:

The method register(String, BooleanSupplier, String) is ambiguous for 
the type MetricRegistry
    - The type of getSentMessagesCount() from the type CommunicationSpi 
is int, this is incompatible with the descriptor's return type:

     boolean

But it can be compiled successfully by executing mvn compile.

I use Eclipse, and I think this code is genuinely problematic. I want to 
know why mvn compile succeeds, and how this code could have been released.




Re: Binary recovery for a very long time

2020-05-26 Thread 38797715

Hi,

I enabled debug logging and found the following log output:

[2020-05-23T21:43:58,397][DEBUG][nio-acceptor-tcp-comm-#28%ClusterName1%][TcpCommunicationSpi] 
Balancing data [min0=0, minIdx=0, max0=-1, maxIdx=-1]
[2020-05-23T21:43:58,405][DEBUG][main][SchemaManager] Creating DB table 
with SQL: CREATE TABLE "PUBLIC"."CO_CUST"(_KEY VARCHAR INVISIBLE NOT 
NULL,_VAL OTHER INVISIBLE,"CUST_ID"VARCHAR(30) NOT 
NULL,"CUST_NAME"VARCHAR(200),"CUST_SHORT_NAME"VARCHAR(200),"CUST_SHORT_ID"VARCHAR(240),"LICENSE_CODE"VARCHAR(30),"STATUS"VARCHAR(16),"COM_ID"VARCHAR(30),"SALE_CENTER_ID"VARCHAR(30),"SALE_DEPT_ID"VARCHAR(30),"SLSMGR_ID"VARCHAR(30),"SLSMAN_ID"VARCHAR(30),"SLSMAN_MOBILE"VARCHAR(16),"MANAGER"VARCHAR(100),"IDENTITY_CARD_ID"VARCHAR(36),"ORDER_TEL"VARCHAR(80),"INV_TYPE"VARCHAR(18),"ORDER_WAY"VARCHAR(18),"PAY_TYPE"VARCHAR(2),"PERIODS"VARCHAR(200),"PRD_ST_DATE"VARCHAR(8),"ORDER_CUST_ID"VARCHAR(30),"BUSI_ADDR"VARCHAR(400),"WORK_PORT"VARCHAR(18),"BASE_TYPE"VARCHAR(24),"SALE_SCOPE"VARCHAR(1),"SCOPE"VARCHAR(200),"COM_CHARA"VARCHAR(2),"INNER_TYPE"VARCHAR(18),"CUST_KIND"VARCHAR(10),"CUST_KIND_NAME"VARCHAR(120),"CUST_TYPE"VARCHAR(6),"CUST_TYPE1"VARCHAR(6),"CUST_TYPE2"VARCHAR(6),"CUST_TYPE3"VARCHAR(6),"CUST_TYPE4"VARCHAR(6),"CUST_TYPE5"VARCHAR(6),"AREA_TYPE"VARCHAR(2),"IS_SEFL_CUST"VARCHAR(1),"IS_FUNC_CUST"VARCHAR(1),"MANAGER_BIRTHDAY"VARCHAR(8),"CELEBRATE_DATE"VARCHAR(8),"NATION_CUST_CODE"VARCHAR(30),"LONGITUDE"DECIMAL,"LATITUDE"DECIMAL,"AGENT_MESSAGE"VARCHAR(400),"NOTE"VARCHAR(1000),"UPDATE_TIME"TIMESTAMP,"INVTY_ID"VARCHAR(30),"STEP_ID"VARCHAR(20),"INV_CUST_ID"VARCHAR(60),"INV_UNIT_NAME"VARCHAR(200),"ACCOUNT"VARCHAR(200),"BANK"VARCHAR(100),"TAX_ACCOUNT"VARCHAR(120),"OTHER_ORDER_WAY"VARCHAR(32),"SALE_AVG"DECIMAL,"ITEM_ORD"DECIMAL,"QTY_SOLD"DECIMAL,"AMT_SOLD"DECIMAL,"MANAGER_TEL"VARCHAR(80),"IS_SALE_LARGE"VARCHAR(1),"TAX_TEL"VARCHAR(60),"TAX_ADDR"VARCHAR(400),"IS_TOR_TAX"VARCHAR(1),"CUST_TYPE6"VARCHAR(6),"CUST_TYPE7"VARCHAR(6),"CUST_TYPE8"VARCHAR(124),"CUST_TYPE9"VARCHAR(60),"CUST_TYPE10"VARCHAR(200),"CANT_ID"VARCHAR(20),"IS_ONLINE_PAY"VARCHAR(1),"IS_RAIL_CUST"VARCHAR(1),"CUST_SEG"VARCHAR(30),"QTY_MULTIPLE"VARCHAR(20),"TAX_ADRR"VARCHAR(200),"COLLECT_STAFF_ID"VARCHAR(30),"ITEM_HEIGHT"DECIMAL,"AREA_ID"VARCHAR(30),"BASE_TYPE_EXT"VARCHAR(30),"AREA_TYPE_EXT"VARCHAR(30),"WORK_PORT_EXT"VARCHAR(30),"CUST_SEG_EXT"VARCHAR(30),"IS_CIGAR_CUST"VARCHAR(1))
[2020-05-23T21:43:58,411][DEBUG][main][IgniteH2Indexing] Creating cache 
index [cacheId=1684722246, idxName=_key_PK]
[2020-05-23T21:43:59,081][DEBUG][main][IgniteH2Indexing] Creating cache 
index [cacheId=1684722246, idxName=IDX_CO_CUST_SALE_CENTER_ID]
[2020-05-23T21:43:59,088][DEBUG][main][IgniteH2Indexing] Creating cache 
index [cacheId=1684722246, idxName=IDX_CO_CUST_STATUS]
[2020-05-23T21:43:59,099][DEBUG][main][IgniteH2Indexing] Creating cache 
index [cacheId=1684722246, idxName=IDX_CO_CUST_SALE_DEPT_ID]
[2020-05-23T21:43:59,109][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=CO_CUST, id=1684722246, dataRegionName=default, 
mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]


From the log, it seems that all tables and indexes are re-created during 
node startup, just without loading the data?


On 2020/5/21 8:48 PM, Ilya Kasnacheev wrote:

Hello!

1. I guess that WAL is read.
2. Unfortunately, we do not have a truly graceful exit, as far as my 
understanding goes.


Regards,
--
Ilya Kasnacheev


Tue, 19 May 2020 at 10:22, 38797715 <38797...@qq.com 
<mailto:38797...@qq.com>>:


Hi,

the following log message:

[2020-05-12T18:17:57,071][INFO ][main][GridCacheProcessor] Started
cache in recovery mode [name=CO_CO_LINE_NEW, id=1742991829,
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC,
backups=1, mvcc=false]

I have the following questions:

1. What is done when a cache is started in recovery mode?

2. After testing, we found that even when the node stops normally (not an
abnormal shutdown), the recovery process is still performed during
startup. Why?

On 2020/5/18 9:58 PM, Ilya Kasnacheev wrote:

Hello!

Direct IO module is experimental and should not be used unless 
performance is tested first, in your specific use case.

Data loading during startup

2020-05-19 Thread 38797715

Hi community,

I know that during the startup of an Ignite node, cached data is not 
loaded, that is, there is no warm-up process.


However, the top command shows that memory usage keeps increasing during 
Ignite's startup, growing by more than 10 GB. What data is loaded during 
the start process?




Re: Binary recovery for a very long time

2020-05-19 Thread 38797715

Hi,

the following log message:

[2020-05-12T18:17:57,071][INFO ][main][GridCacheProcessor] Started cache 
in recovery mode [name=CO_CO_LINE_NEW, id=1742991829, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]


I have the following questions:

1. What is done when a cache is started in recovery mode?

2. After testing, we found that even when the node stops normally (not an 
abnormal shutdown), the recovery process is still performed during 
startup. Why?


On 2020/5/18 9:58 PM, Ilya Kasnacheev wrote:

Hello!

Direct IO module is experimental and should not be used unless 
performance is tested first, in your specific use case.


Regards,
--
Ilya Kasnacheev


On Mon, 18 May 2020 at 16:47, 38797715 <38797...@qq.com> wrote:


Hi,

If Direct IO is disabled, startup is about twice as fast, and several
other tests show the same pattern. I find that Direct IO has a
significant impact on read performance.

On 2020/5/14 5:16 AM, Evgenii Zhuravlev wrote:

Can you share full logs from all nodes?

On Tue, 12 May 2020 at 18:24, 38797715 <38797...@qq.com> wrote:

Hi Evgenii,

The storage used is not an SSD.

We will test with different versions of Ignite for comparison,
such as Ignite 2.8.
Ignite is configured as follows:


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- bean definitions were stripped by the mail archive -->
</beans>

On 2020/5/13 4:45 AM, Evgenii Zhuravlev wrote:

Hi,

Can you share full logs and configuration? What disk do you use?

Evgenii

On Tue, 12 May 2020 at 06:49, 38797715 <38797...@qq.com> wrote:

Among them:
CO_CO_NEW: ~ 48 minutes(partitioned,backup=1,33M)

Ignite sys cache: ~ 27 minutes

PLM_ITEM:~3 minutes(replicated,1.9K)


On 2020/5/12 9:08 PM, 38797715 wrote:


Hi community,

We have 5 servers, 16 cores, 256 GB memory, and 200 GB
off-heap memory.
We have 7 tables to test, with data volumes of, respectively:
31.8M, 495.2M, 552.3M, 33M, 873.3K, 28M, and 1.9K (replicated);
the others are partitioned (backups = 1).

VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch
-XX:+UseG1GC -XX:+ScavengeBeforeFullGC
-XX:+DisableExplicitGC -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=100M -Xloggc:/data/gc/logs/gclog.txt
-Djava.net.preferIPv4Stack=true
-XX:MaxDirectMemorySize=256M -XX:+PrintAdaptiveSizePolicy

Today, one of the servers was restarted (kill, then
start ignite.sh) for some reason, but the node took 1.5
hours to start, much longer than expected.

After analyzing the log, the following information is
found:


[2020-05-12T17:00:05,138][INFO][main][GridCacheDatabaseSharedManager]
Found last checkpoint marker
[cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb,
pos=FileWALPointer [idx=10511, fileOff=5134,
len=61193]]

[2020-05-12T17:00:05,151][INFO][main][GridCacheDatabaseSharedManager]
Binary memory state restored at node startup
[restoredPtr=FileWALPointer [idx=10511,
fileOff=51410110, len=0]]
[2020-05-12T17:00:05,152][INFO][main][FileWriteAheadLogManager]
Resuming logging to WAL segment
[file=/appdata/ignite/db/wal/24/0001.wal,
offset=51410110, ver=2]
[2020-05-12T17:00:06,448][INFO][main][PageMemoryImpl]
Started page memory [memoryAllocated=200.0GiB,
pages=50821088, tableSize=3.9GiB, checkpointBuffer=2.0GiB]
[2020-05-12T17:02:08,528][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO_NEW,
id=-189779360, dataRegionName=default,
mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T17:50:44,341][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO_LINE,
id=-158

Re: Binary recovery for a very long time

2020-05-18 Thread 38797715

Hi,

If Direct IO is disabled, startup is about twice as fast, and several 
other tests show the same pattern. I find that Direct IO has a 
significant impact on read performance.


On 2020/5/14 5:16 AM, Evgenii Zhuravlev wrote:

Can you share full logs from all nodes?

On Tue, 12 May 2020 at 18:24, 38797715 <38797...@qq.com> wrote:


Hi Evgenii,

The storage used is not an SSD.

We will test with different versions of Ignite for comparison, such
as Ignite 2.8.
Ignite is configured as follows:


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- bean definitions were stripped by the mail archive -->
</beans>

On 2020/5/13 4:45 AM, Evgenii Zhuravlev wrote:

Hi,

Can you share full logs and configuration? What disk do you use?

Evgenii

On Tue, 12 May 2020 at 06:49, 38797715 <38797...@qq.com> wrote:

Among them:
CO_CO_NEW: ~ 48 minutes(partitioned,backup=1,33M)

Ignite sys cache: ~ 27 minutes

PLM_ITEM:~3 minutes(replicated,1.9K)


On 2020/5/12 9:08 PM, 38797715 wrote:


Hi community,

We have 5 servers, 16 cores, 256 GB memory, and 200 GB off-heap
memory.
We have 7 tables to test, with data volumes of, respectively:
31.8M, 495.2M, 552.3M, 33M, 873.3K, 28M, and 1.9K (replicated);
the others are partitioned (backups = 1).

VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch
-XX:+UseG1GC -XX:+ScavengeBeforeFullGC
-XX:+DisableExplicitGC -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=100M -Xloggc:/data/gc/logs/gclog.txt
-Djava.net.preferIPv4Stack=true -XX:MaxDirectMemorySize=256M
-XX:+PrintAdaptiveSizePolicy

Today, one of the servers was restarted (kill, then start
ignite.sh) for some reason, but the node took 1.5 hours to
start, much longer than expected.

After analyzing the log, the following information is found:

[2020-05-12T17:00:05,138][INFO][main][GridCacheDatabaseSharedManager]
Found last checkpoint marker
[cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb,
pos=FileWALPointer [idx=10511, fileOff=5134, len=61193]]
[2020-05-12T17:00:05,151][INFO][main][GridCacheDatabaseSharedManager]
Binary memory state restored at node startup
[restoredPtr=FileWALPointer [idx=10511, fileOff=51410110,
len=0]]
[2020-05-12T17:00:05,152][INFO][main][FileWriteAheadLogManager]
Resuming logging to WAL segment
[file=/appdata/ignite/db/wal/24/0001.wal,
offset=51410110, ver=2]
[2020-05-12T17:00:06,448][INFO][main][PageMemoryImpl]
Started page memory [memoryAllocated=200.0GiB,
pages=50821088, tableSize=3.9GiB, checkpointBuffer=2.0GiB]
[2020-05-12T17:02:08,528][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO_NEW,
id=-189779360, dataRegionName=default, mode=PARTITIONED,
atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T17:50:44,341][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO_LINE,
id=-1588248812, dataRegionName=default, mode=PARTITIONED,
atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T17:50:44,366][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=ignite-sys-cache,
id=-2100569601, dataRegionName=sysMemPlc, mode=REPLICATED,
atomicity=TRANSACTIONAL, backups=2147483647, mvcc=false]
[2020-05-12T18:17:57,071][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO_LINE_NEW,
id=1742991829, dataRegionName=default, mode=PARTITIONED,
atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T18:19:54,910][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=PI_COM_DAY,
id=-1904194728, dataRegionName=default, mode=PARTITIONED,
atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T18:19:54,949][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=PLM_ITEM,
id=-1283854143, dataRegionName=default, mode=REPLICATED,
atomicity=ATOMIC, backups=2147483647, mvcc=false]
[2020-05-12T18:22:53,662][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO, id=64322847,
dataRegionName=default, mode

Re: About index inline size of primary key

2020-05-14 Thread 38797715

Hi,

I see this property.
If it is configured, does it have a global impact? What is the 
scope of influence of this parameter?


On 2020/5/14 9:41 PM, Stephen Darlington wrote:
Exactly as the warning says, with the IGNITE_MAX_INDEX_PAYLOAD_SIZE 
property:


./ignite.sh -J-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=33

Regards,
Stephen

On 14 May 2020, at 14:23, 38797715 <38797...@qq.com> wrote:


Hi,

Today, I see the following information in the log:

[2020-05-14T16:42:04,346][WARN][query-#7759][IgniteH2Indexing] 
Indexed columns of a row cannot be fully inlined into index what may 
lead to slowdown due to additional data page reads, increase index 
inline size if needed (set system property 
IGNITE_MAX_INDEX_PAYLOAD_SIZE with recommended size (be aware it will 
be used by default for all indexes without explicit inline size)) 
[cacheName=NEW, tableName=NEW, idxName=_key_PK, idxCols=(CO_NUM, 
CUST_ID), idxType=PRIMARY KEY, curSize=10, recommendedInlineSize=33]


I know that the CREATE INDEX statement has an INLINE_SIZE clause, 
but I want to ask: how do I adjust the inline size of the primary key index?
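For reference, the per-index inline size from the thread above can be set directly in DDL. A sketch, assuming Ignite's SQL dialect; note that the `PK_INLINE_SIZE` WITH-parameter for the primary-key index only exists in newer Ignite releases (an assumption to verify against your version), while older releases only offer the global system property:

```sql
-- Explicit inline size for a secondary index (CREATE INDEX supports INLINE_SIZE).
CREATE INDEX idx_new_co_num ON NEW (CO_NUM, CUST_ID) INLINE_SIZE 33;

-- Per-table primary-key inline size; the PK_INLINE_SIZE WITH-parameter
-- is only available in recent Ignite versions (assumption: check yours).
CREATE TABLE NEW2 (
  CO_NUM  VARCHAR(30),
  CUST_ID VARCHAR(30),
  QTY     DECIMAL,
  PRIMARY KEY (CO_NUM, CUST_ID)
) WITH "backups=1, PK_INLINE_SIZE=33";
```

Otherwise, the global default for all indexes without an explicit size can be raised at startup, as the warning itself suggests: ./ignite.sh -J-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=33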







About index inline size of primary key

2020-05-14 Thread 38797715

Hi,

Today, I see the following information in the log:

[2020-05-14T16:42:04,346][WARN][query-#7759][IgniteH2Indexing] Indexed 
columns of a row cannot be fully inlined into index what may lead to 
slowdown due to additional data page reads, increase index inline size 
if needed (set system property IGNITE_MAX_INDEX_PAYLOAD_SIZE with 
recommended size (be aware it will be used by default for all indexes 
without explicit inline size)) [cacheName=NEW, tableName=NEW, 
idxName=_key_PK, idxCols=(CO_NUM, CUST_ID), idxType=PRIMARY KEY, 
curSize=10, recommendedInlineSize=33]



I know that the CREATE INDEX statement has an INLINE_SIZE clause, but I 
want to ask: how do I adjust the inline size of the primary key index?




Re: Binary recovery for a very long time

2020-05-12 Thread 38797715

Hi Evgenii,

The storage used is not an SSD.

We will test with different versions of Ignite for comparison, such as 
Ignite 2.8.

Ignite is configured as follows:


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- bean definitions were stripped by the mail archive -->
</beans>

On 2020/5/13 4:45 AM, Evgenii Zhuravlev wrote:

Hi,

Can you share full logs and configuration? What disk do you use?

Evgenii

On Tue, 12 May 2020 at 06:49, 38797715 <38797...@qq.com> wrote:


Among them:
CO_CO_NEW: ~ 48 minutes(partitioned,backup=1,33M)

Ignite sys cache: ~ 27 minutes

PLM_ITEM:~3 minutes(replicated,1.9K)


On 2020/5/12 9:08 PM, 38797715 wrote:


Hi community,

We have 5 servers, 16 cores, 256 GB memory, and 200 GB off-heap memory.
We have 7 tables to test, with data volumes of, respectively:
31.8M, 495.2M, 552.3M, 33M, 873.3K, 28M, and 1.9K (replicated);
the others are partitioned (backups = 1).

VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch -XX:+UseG1GC
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=100M -Xloggc:/data/gc/logs/gclog.txt
-Djava.net.preferIPv4Stack=true -XX:MaxDirectMemorySize=256M
-XX:+PrintAdaptiveSizePolicy

Today, one of the servers was restarted (kill, then start
ignite.sh) for some reason, but the node took 1.5 hours to start,
much longer than expected.

After analyzing the log, the following information is found:

[2020-05-12T17:00:05,138][INFO][main][GridCacheDatabaseSharedManager]
Found last checkpoint marker
[cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb, pos=FileWALPointer
[idx=10511, fileOff=5134, len=61193]]
[2020-05-12T17:00:05,151][INFO][main][GridCacheDatabaseSharedManager]
Binary memory state restored at node startup
[restoredPtr=FileWALPointer [idx=10511, fileOff=51410110, len=0]]
[2020-05-12T17:00:05,152][INFO][main][FileWriteAheadLogManager]
Resuming logging to WAL segment
[file=/appdata/ignite/db/wal/24/0001.wal,
offset=51410110, ver=2]
[2020-05-12T17:00:06,448][INFO][main][PageMemoryImpl] Started
page memory [memoryAllocated=200.0GiB, pages=50821088,
tableSize=3.9GiB, checkpointBuffer=2.0GiB]
[2020-05-12T17:02:08,528][INFO][main][GridCacheProcessor] Started
cache in recovery mode [name=CO_CO_NEW, id=-189779360,
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC,
backups=1, mvcc=false]
[2020-05-12T17:50:44,341][INFO][main][GridCacheProcessor] Started
cache in recovery mode [name=CO_CO_LINE, id=-1588248812,
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC,
backups=1, mvcc=false]
[2020-05-12T17:50:44,366][INFO][main][GridCacheProcessor] Started
cache in recovery mode [name=ignite-sys-cache, id=-2100569601,
dataRegionName=sysMemPlc, mode=REPLICATED,
atomicity=TRANSACTIONAL, backups=2147483647, mvcc=false]
[2020-05-12T18:17:57,071][INFO][main][GridCacheProcessor] Started
cache in recovery mode [name=CO_CO_LINE_NEW, id=1742991829,
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC,
backups=1, mvcc=false]
[2020-05-12T18:19:54,910][INFO][main][GridCacheProcessor] Started
cache in recovery mode [name=PI_COM_DAY, id=-1904194728,
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC,
backups=1, mvcc=false]
[2020-05-12T18:19:54,949][INFO][main][GridCacheProcessor] Started
cache in recovery mode [name=PLM_ITEM, id=-1283854143,
dataRegionName=default, mode=REPLICATED, atomicity=ATOMIC,
backups=2147483647, mvcc=false]
[2020-05-12T18:22:53,662][INFO][main][GridCacheProcessor] Started
cache in recovery mode [name=CO_CO, id=64322847,
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC,
backups=1, mvcc=false]
[2020-05-12T18:22:54,876][INFO][main][GridCacheProcessor] Started
cache in recovery mode [name=CO_CUST, id=1684722246,
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC,
backups=1, mvcc=false]
[2020-05-12T18:22:54,892][INFO][main][GridCacheDatabaseSharedManager]
Binary recovery performed in 4970233ms.

Among them, binary recovery took 4970 seconds.

Our questions are:

1. Why is the start time so long?

2. In Ignite's current state, will the restart time keep growing
as the data volume on a single node grows?

3. Do you have any suggestions for optimizing the restart time?



Re: Binary recovery for a very long time

2020-05-12 Thread 38797715

Among them:
CO_CO_NEW: ~ 48 minutes(partitioned,backup=1,33M)

Ignite sys cache: ~ 27 minutes

PLM_ITEM:~3 minutes(replicated,1.9K)


On 2020/5/12 9:08 PM, 38797715 wrote:


Hi community,

We have 5 servers, 16 cores, 256 GB memory, and 200 GB off-heap memory.
We have 7 tables to test, with data volumes of, respectively: 
31.8M, 495.2M, 552.3M, 33M, 873.3K, 28M, and 1.9K (replicated); 
the others are partitioned (backups = 1).


VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch -XX:+UseG1GC 
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps 
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 
-XX:GCLogFileSize=100M -Xloggc:/data/gc/logs/gclog.txt 
-Djava.net.preferIPv4Stack=true -XX:MaxDirectMemorySize=256M 
-XX:+PrintAdaptiveSizePolicy


Today, one of the servers was restarted (kill, then start ignite.sh) 
for some reason, but the node took 1.5 hours to start, much longer 
than expected.


After analyzing the log, the following information is found:

[2020-05-12T17:00:05,138][INFO][main][GridCacheDatabaseSharedManager] 
Found last checkpoint marker 
[cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb, pos=FileWALPointer 
[idx=10511, fileOff=5134, len=61193]]
[2020-05-12T17:00:05,151][INFO][main][GridCacheDatabaseSharedManager] 
Binary memory state restored at node startup 
[restoredPtr=FileWALPointer [idx=10511, fileOff=51410110, len=0]]
[2020-05-12T17:00:05,152][INFO][main][FileWriteAheadLogManager] 
Resuming logging to WAL segment 
[file=/appdata/ignite/db/wal/24/0001.wal, offset=51410110, 
ver=2]
[2020-05-12T17:00:06,448][INFO][main][PageMemoryImpl] Started page 
memory [memoryAllocated=200.0GiB, pages=50821088, tableSize=3.9GiB, 
checkpointBuffer=2.0GiB]
[2020-05-12T17:02:08,528][INFO][main][GridCacheProcessor] Started 
cache in recovery mode [name=CO_CO_NEW, id=-189779360, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T17:50:44,341][INFO][main][GridCacheProcessor] Started 
cache in recovery mode [name=CO_CO_LINE, id=-1588248812, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T17:50:44,366][INFO][main][GridCacheProcessor] Started 
cache in recovery mode [name=ignite-sys-cache, id=-2100569601, 
dataRegionName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL, 
backups=2147483647, mvcc=false]
[2020-05-12T18:17:57,071][INFO][main][GridCacheProcessor] Started 
cache in recovery mode [name=CO_CO_LINE_NEW, id=1742991829, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T18:19:54,910][INFO][main][GridCacheProcessor] Started 
cache in recovery mode [name=PI_COM_DAY, id=-1904194728, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T18:19:54,949][INFO][main][GridCacheProcessor] Started 
cache in recovery mode [name=PLM_ITEM, id=-1283854143, 
dataRegionName=default, mode=REPLICATED, atomicity=ATOMIC, 
backups=2147483647, mvcc=false]
[2020-05-12T18:22:53,662][INFO][main][GridCacheProcessor] Started 
cache in recovery mode [name=CO_CO, id=64322847, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T18:22:54,876][INFO][main][GridCacheProcessor] Started 
cache in recovery mode [name=CO_CUST, id=1684722246, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T18:22:54,892][INFO][main][GridCacheDatabaseSharedManager] 
Binary recovery performed in 4970233ms.


Among them, binary recovery took 4970 seconds.

Our questions are:

1. Why is the start time so long?

2. In Ignite's current state, will the restart time keep growing as the 
data volume on a single node grows?

3. Do you have any suggestions for optimizing the restart time?



Binary recovery for a very long time

2020-05-12 Thread 38797715

Hi community,

We have 5 servers, 16 cores, 256 GB memory, and 200 GB off-heap memory.
We have 7 tables to test, with data volumes of, respectively: 
31.8M, 495.2M, 552.3M, 33M, 873.3K, 28M, and 1.9K (replicated); 
the others are partitioned (backups = 1).


VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch -XX:+UseG1GC 
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+PrintGCDetails 
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation 
-XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M 
-Xloggc:/data/gc/logs/gclog.txt -Djava.net.preferIPv4Stack=true 
-XX:MaxDirectMemorySize=256M -XX:+PrintAdaptiveSizePolicy


Today, one of the servers was restarted (kill, then start ignite.sh) 
for some reason, but the node took 1.5 hours to start, much longer 
than expected.


After analyzing the log, the following information is found:

[2020-05-12T17:00:05,138][INFO][main][GridCacheDatabaseSharedManager] 
Found last checkpoint marker [cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb, 
pos=FileWALPointer [idx=10511, fileOff=5134, len=61193]]
[2020-05-12T17:00:05,151][INFO][main][GridCacheDatabaseSharedManager] 
Binary memory state restored at node startup [restoredPtr=FileWALPointer 
[idx=10511, fileOff=51410110, len=0]]
[2020-05-12T17:00:05,152][INFO][main][FileWriteAheadLogManager] Resuming 
logging to WAL segment 
[file=/appdata/ignite/db/wal/24/0001.wal, offset=51410110, 
ver=2]
[2020-05-12T17:00:06,448][INFO][main][PageMemoryImpl] Started page 
memory [memoryAllocated=200.0GiB, pages=50821088, tableSize=3.9GiB, 
checkpointBuffer=2.0GiB]
[2020-05-12T17:02:08,528][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=CO_CO_NEW, id=-189779360, dataRegionName=default, 
mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T17:50:44,341][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=CO_CO_LINE, id=-1588248812, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T17:50:44,366][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=ignite-sys-cache, id=-2100569601, 
dataRegionName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL, 
backups=2147483647, mvcc=false]
[2020-05-12T18:17:57,071][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=CO_CO_LINE_NEW, id=1742991829, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T18:19:54,910][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=PI_COM_DAY, id=-1904194728, 
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1, 
mvcc=false]
[2020-05-12T18:19:54,949][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=PLM_ITEM, id=-1283854143, dataRegionName=default, 
mode=REPLICATED, atomicity=ATOMIC, backups=2147483647, mvcc=false]
[2020-05-12T18:22:53,662][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=CO_CO, id=64322847, dataRegionName=default, 
mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T18:22:54,876][INFO][main][GridCacheProcessor] Started cache 
in recovery mode [name=CO_CUST, id=1684722246, dataRegionName=default, 
mode=PARTITIONED, atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T18:22:54,892][INFO][main][GridCacheDatabaseSharedManager] 
Binary recovery performed in 4970233ms.


Among them, binary recovery took 4970 seconds.

Our questions are:

1. Why is the start time so long?

2. In Ignite's current state, will the restart time keep growing as the 
data volume on a single node grows?

3. Do you have any suggestions for optimizing the restart time?
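As the log shows, binary recovery replays the WAL from the last checkpoint marker, so a long tail between checkpoints translates directly into a long restart. A minimal sketch of the storage settings that usually influence this, assuming a standard Spring XML configuration (property names from DataStorageConfiguration; the values are illustrative, not recommendations):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Checkpoint more often than the 180000 ms default:
                 a shorter WAL tail means less to replay on restart. -->
            <property name="checkpointFrequency" value="60000"/>
            <!-- More checkpoint threads can flush dirty pages faster. -->
            <property name="checkpointThreads" value="8"/>
            <!-- WAL mode trades durability guarantees for write and
                 recovery speed; test before changing. -->
            <property name="walMode" value="LOG_ONLY"/>
        </bean>
    </property>
</bean>
```

Slow non-SSD disks, as noted earlier in the thread, amplify the replay cost; the Direct IO module, reported above to double startup time when enabled, is another lever worth testing.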