[jira] [Created] (IGNITE-13821) Explain how to include a scope into another tracing scope

2020-12-04 Thread Denis A. Magda (Jira)
Denis A. Magda created IGNITE-13821:
---

 Summary: Explain how to include a scope into another tracing scope
 Key: IGNITE-13821
 URL: https://issues.apache.org/jira/browse/IGNITE-13821
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Denis A. Magda


The documentation explains how to activate a tracing scope:
https://ignite.apache.org/docs/latest/monitoring-metrics/tracing#using-control-script

Also, it's possible to include another scope inside a primary one. For instance, you can include the communication scope into the tx scope. [Refer to this article in Russian|https://habr.com/ru/company/gridgain/blog/528836/] for details: search the text for the phrase "Мы также можем увеличить степень детализации трейсинга транзакций, включив трейсинг коммуникационного протокола." ("We can also increase the level of detail of transaction tracing by enabling tracing of the communication protocol."), after which the article shows how to include one scope inside another.
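The control-script invocation looks roughly like the following sketch (syntax as of Ignite 2.9; verify the exact flag names against the documentation page linked above before relying on them):

```shell
# Enable tracing for transactions and additionally trace the communication
# protocol inside the TX scope. Flag names are taken from the Ignite 2.9
# control script tracing configuration; check the docs for your version.
control.sh --tracing-configuration set --scope TX --included-scopes COMMUNICATION
```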



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13820) Ignore letters' case for tracing scopes' names

2020-12-04 Thread Denis A. Magda (Jira)
Denis A. Magda created IGNITE-13820:
---

 Summary: Ignore letters' case for tracing scopes' names
 Key: IGNITE-13820
 URL: https://issues.apache.org/jira/browse/IGNITE-13820
 Project: Ignite
  Issue Type: Improvement
Affects Versions: 2.9
Reporter: Denis A. Magda


Presently, scope names must be provided in uppercase (DISCOVERY, 
EXCHANGE, COMMUNICATION, TX):
https://ignite.apache.org/docs/latest/monitoring-metrics/tracing#using-control-script

Remove this limitation so that scope names can be written in any case 
("communication", "tx", "Exchange", etc.)
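Case-insensitive parsing could look roughly like this minimal sketch (the `Scope` enum and `parse` helper are illustrative stand-ins, not the actual Ignite classes):

```java
import java.util.Locale;

/** Hypothetical sketch: case-insensitive lookup of a tracing scope name. */
public class ScopeParser {
    /** Stand-in for the real tracing scope enumeration. */
    public enum Scope { DISCOVERY, EXCHANGE, COMMUNICATION, TX }

    /** Parses a scope name ignoring letter case; returns null if unknown. */
    public static Scope parse(String name) {
        if (name == null)
            return null;

        try {
            // Normalize to uppercase so "tx", "Tx" and "TX" all resolve.
            return Scope.valueOf(name.trim().toUpperCase(Locale.US));
        }
        catch (IllegalArgumentException e) {
            return null;
        }
    }
}
```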





[MTCGA]: new failures in builds [5771592] need to be handled

2020-12-04 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issues on TeamCity that need to be handled. You are more than 
welcome to help.

 *Test with high flaky rate in master 
AdditionalSecurityCheckWithGlobalAuthTest.testClientInfoIgniteClientFail 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=9076431272210280787&branch=%3Cdefault%3E&tab=testDetails

 *Test with high flaky rate in master 
ThinClientSslPermissionCheckTest.testSysOperation 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=-3513208105769035399&branch=%3Cdefault%3E&tab=testDetails

 *Test with high flaky rate in master 
ThinClientSslPermissionCheckTest.testCacheTaskPermOperations 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=-7196422311341378385&branch=%3Cdefault%3E&tab=testDetails

 *Test with high flaky rate in master 
AdditionalSecurityCheckTest.testClientInfoIgniteClientFail 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=4021374214183552074&branch=%3Cdefault%3E&tab=testDetails

 *Test with high flaky rate in master 
AdditionalSecurityCheckWithGlobalAuthTest.testClientInfo 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=8692223864803933315&branch=%3Cdefault%3E&tab=testDetails

 *Test with high flaky rate in master 
AdditionalSecurityCheckTest.testClientInfo 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=-3622669029910370937&branch=%3Cdefault%3E&tab=testDetails

 *Test with high flaky rate in master 
MultipleSSLContextsTest.testThinClients 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=-8572658404923326411&branch=%3Cdefault%3E&tab=testDetails
 No changes in the build

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 01:16:06 05-12-2020 


[jira] [Created] (IGNITE-13819) Cluster prints Coordinator changed messages after clients left topology

2020-12-04 Thread Maria Makedonskaya (Jira)
Maria Makedonskaya created IGNITE-13819:
---

 Summary: Cluster prints Coordinator changed messages after clients 
left topology
 Key: IGNITE-13819
 URL: https://issues.apache.org/jira/browse/IGNITE-13819
 Project: Ignite
  Issue Type: Bug
Reporter: Maria Makedonskaya
Assignee: Maria Makedonskaya


2019-12-18 23:16:12,875[INFO 
][disco-event-worker-#47%datafabric-prd-101.24hourfit.com%][GridDiscoveryManager]
 *Node left topology:* TcpDiscoveryNode 
[id=88dac803-687d-48ce-bc64-e594b848c514, addrs=ArrayList [10.0.91.121, 
127.0.0.1], sockAddrs=HashSet 
[cacheloaderdatasvcs-prd-01.24hourfit.com/10.0.91.121:0, /127.0.0.1:0], 
discPort=0, order=69, intOrder=69, lastExchangeTime=1576736417448, loc=false, 
ver=2.5.5#20190322-sha1:ee4d5bcb, *isClient=true*]
2019-12-18 23:16:12,876[INFO 
][disco-event-worker-#47%datafabric-prd-101.24hourfit.com%][GridDiscoveryManager]
 Topology snapshot [ver=552, servers=7, clients=163, CPUs=106, offheap=320.0GB, 
heap=190.0GB]
2019-12-18 23:16:12,876[INFO 
][disco-event-worker-#47%datafabric-prd-101.24hourfit.com%][GridDiscoveryManager]
 *Coordinator changed* [*prev=*TcpDiscoveryNode 
[id=88dac803-687d-48ce-bc64-e594b848c514, addrs=ArrayList [10.0.91.121, 
127.0.0.1], sockAddrs=HashSet 
[cacheloaderdatasvcs-prd-01.24hourfit.com/10.0.91.121:0, /127.0.0.1:0], 
discPort=0, order=69, intOrder=69, lastExchangeTime=1576736417448, loc=false, 
ver=2.5.5#20190322-sha1:ee4d5bcb, *isClient=true*], cur=TcpDiscoveryNode 
[id=2e88bf53-8b70-41a0-b0ea-b0f0a61273e9, addrs=ArrayList [10.0.91.129, 
127.0.0.1], sockAddrs=HashSet 
[datafabric-prd-101.24hourfit.com/10.0.91.129:48500, /127.0.0.1:48500], 
discPort=48500, order=163, intOrder=161, lastExchangeTime=1576739772860, 
loc=true, ver=2.5.5#20190322-sha1:ee4d5bcb, isClient=false]]

A client node should not be reported as a coordinator, even though its order is 
lower than the current coordinator's. Changes from IGNITE-8738 caused this issue.
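The missing guard could be sketched like this (the class and field names are illustrative stand-ins for the relevant pieces of TcpDiscoveryNode, not the actual Ignite API):

```java
/**
 * Hypothetical sketch of the guard: a "Coordinator changed" message should
 * only be logged when the departed previous coordinator was a server node,
 * since client nodes are never coordinators.
 */
public class CoordinatorChangeGuard {
    /** Minimal stand-in for the TcpDiscoveryNode fields used here. */
    public static class Node {
        final long order;
        final boolean client;

        public Node(long order, boolean client) {
            this.order = order;
            this.client = client;
        }
    }

    /** Returns true only when a real coordinator change should be logged. */
    public static boolean shouldLogCoordinatorChange(Node prev, Node cur) {
        // A client can never be a coordinator, so its departure is not a change.
        return prev != null && !prev.client && prev.order != cur.order;
    }
}
```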





[jira] [Created] (IGNITE-13818) Add extended topology logging at node start

2020-12-04 Thread Maria Makedonskaya (Jira)
Maria Makedonskaya created IGNITE-13818:
---

 Summary: Add extended topology logging at node start
 Key: IGNITE-13818
 URL: https://issues.apache.org/jira/browse/IGNITE-13818
 Project: Ignite
  Issue Type: Improvement
Reporter: Maria Makedonskaya
Assignee: Maria Makedonskaya


At node start, the topology is currently logged only as the number of server and 
client nodes. I would like to expand this to the full list of nodes (with each 
node's order, id, host, and IP). This will help when logs are available from only 
one node; when logs from all nodes are available, it will speed up finding the 
coordinator or the id of the problem node that participated in the transaction.
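The proposed extended log line could be formatted roughly like this sketch (class and method names are illustrative, not the actual GridDiscoveryManager code):

```java
import java.util.List;

/** Hypothetical sketch of the extended startup topology log line. */
public class TopologyLogger {
    /** Per-node details proposed above: order, id, host, IP. */
    public static class NodeInfo {
        final long order;
        final String id;
        final String host;
        final String ip;

        public NodeInfo(long order, String id, String host, String ip) {
            this.order = order;
            this.id = id;
            this.host = host;
            this.ip = ip;
        }
    }

    /** Builds one multi-line message listing every node in the topology. */
    public static String format(List<NodeInfo> nodes) {
        StringBuilder sb = new StringBuilder("Topology nodes:");

        for (NodeInfo n : nodes)
            sb.append(String.format("%n  [order=%d, id=%s, host=%s, ip=%s]",
                n.order, n.id, n.host, n.ip));

        return sb.toString();
    }
}
```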





[MTCGA]: new failures in builds [5774477] need to be handled

2020-12-04 Thread dpavlov . tasks
Hi Igniters,

 I've detected some new issues on TeamCity that need to be handled. You are more than 
welcome to help.

 *New test failure in master 
CpTriggeredWalDeltaConsistencyTest.testPutRemoveCacheDestroy 
https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=-5540222145410122511&branch=%3Cdefault%3E&tab=testDetails
 No changes in the build

 - Here's a reminder of what contributors agreed to do 
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute 
 - Should you have any questions please contact dev@ignite.apache.org 

Best Regards,
Apache Ignite TeamCity Bot 
https://github.com/apache/ignite-teamcity-bot
Notification generated at 21:46:07 04-12-2020 


[jira] [Created] (IGNITE-13817) Calcite bug: SELECT COUNT(*) throws AssertionError

2020-12-04 Thread Stanilovsky Evgeny (Jira)
Stanilovsky Evgeny created IGNITE-13817:
---

 Summary: Calcite bug: SELECT COUNT(*) throws AssertionError
 Key: IGNITE-13817
 URL: https://issues.apache.org/jira/browse/IGNITE-13817
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Affects Versions: 2.9
Reporter: Stanilovsky Evgeny


The query:
{noformat}
SELECT count(*)
FROM 
 RISK R,
 TRADE T,
 BATCH B
WHERE R.BATCHKEY = B.BATCHKEY
AND R.TRADEIDENTIFIER = T.TRADEIDENTIFIER
AND R.TRADEVERSION = T.TRADEVERSION
AND T.BOOK = 'RBCEUR'
AND B.ISLATEST = TRUE
{noformat}

crashes the Calcite planner:


{noformat}
java.lang.AssertionError
at org.apache.calcite.plan.volcano.RelSet.mergeWith(RelSet.java:457)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.merge(VolcanoPlanner.java:1046)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.rename(VolcanoPlanner.java:903)
at org.apache.calcite.plan.volcano.RelSet.mergeWith(RelSet.java:432)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.merge(VolcanoPlanner.java:1046)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerSubset(VolcanoPlanner.java:1287)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1166)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:589)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:604)
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:148)
at 
org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:268)
at 
org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:283)
at 
org.apache.calcite.rel.convert.ConverterRule.onMatch(ConverterRule.java:169)
at 
org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:229)
at 
org.apache.calcite.plan.volcano.TopDownRuleDriver.applyGenerator(TopDownRuleDriver.java:142)
at 
org.apache.calcite.plan.volcano.TopDownRuleDriver.access$600(TopDownRuleDriver.java:47)
at 
org.apache.calcite.plan.volcano.TopDownRuleDriver$ApplyRule.perform(TopDownRuleDriver.java:519)
at 
org.apache.calcite.plan.volcano.TopDownRuleDriver.drive(TopDownRuleDriver.java:101)
at 
org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:510)
at 
org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:312)
at 
org.apache.ignite.internal.processors.query.calcite.prepare.IgnitePlanner.transform(IgnitePlanner.java:258)
at 
org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.optimize(ExecutionServiceImpl.java:616)
at 
org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.prepareQuery(ExecutionServiceImpl.java:568)
at 
org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.prepareSingle(ExecutionServiceImpl.java:542)
at 
org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.prepareQuery(ExecutionServiceImpl.java:501)
at 
org.apache.ignite.internal.processors.query.calcite.prepare.QueryPlanCacheImpl.queryPlan(QueryPlanCacheImpl.java:84)
at 
org.apache.ignite.internal.processors.query.calcite.exec.ExecutionServiceImpl.executeQuery(ExecutionServiceImpl.java:378)
at 
org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessor.query(CalciteQueryProcessor.java:240)
at 
org.apache.ignite.internal.processors.query.calcite.CalciteQueryProcessorTest.test0(CalciteQueryProcessorTest.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.apache.ignite.testframework.junits.GridAbstractTest$7.run(GridAbstractTest.java:2373)
at java.lang.Thread.run(Thread.java:748)


{noformat}







[jira] [Created] (IGNITE-13816) Add top-level dirs "config" and "logs" for .ignitecfg

2020-12-04 Thread Kirill Gusakov (Jira)
Kirill Gusakov created IGNITE-13816:
---

 Summary: Add top-level dirs "config" and "logs" for .ignitecfg
 Key: IGNITE-13816
 URL: https://issues.apache.org/jira/browse/IGNITE-13816
 Project: Ignite
  Issue Type: Sub-task
Reporter: Kirill Gusakov








Re: [DISCUSS] Use GridNioServer in Java thin client

2020-12-04 Thread Igor Sapego
Agree. Great job.

Best Regards,
Igor


On Thu, Nov 26, 2020 at 3:10 PM Ivan Daschinsky  wrote:

> Pavel, good job and great benchmark results!
>
> Thu, Nov 26, 2020 at 15:01, Pavel Tupitsyn :
>
> > PR is ready for review [1]
> >
> > I've added a simple put/get benchmark, there is some performance
> > improvement over existing implementation, see results in the PR
> > description.
> >
> > [1] https://github.com/apache/ignite/pull/8483
> >
> > On Fri, Nov 20, 2020 at 10:39 AM Pavel Tupitsyn 
> > wrote:
> >
> > > Since there are no objections, I've updated the IEP accordingly [1]
> > > and started working on it [2]
> > >
> > > [1]
> > >
> >
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-60%3A+Java+Thin+Client+Non-Blocking+Async+IO
> > > [2] https://github.com/apache/ignite/pull/8483
> > >
> > > On Mon, Nov 9, 2020 at 4:07 PM Ivan Daschinsky 
> > > wrote:
> > >
> > >> I suppose that the best variant -- ability to switch to netty if this
> > lib
> > >> is in classpath
> > >>
> > >> Mon, Nov 9, 2020 at 15:58, Igor Sapego :
> > >>
> > >> > Sounds like a good idea to me.
> > >> >
> > >> > Best Regards,
> > >> > Igor
> > >> >
> > >> >
> > >> > On Mon, Nov 9, 2020 at 3:32 PM Alex Plehanov <
> plehanov.a...@gmail.com
> > >
> > >> > wrote:
> > >> >
> > >> > > +1 for using GridNioServer as java thin client communication
> layer.
> > >> > >
> > >> > > > Sun, Nov 8, 2020 at 19:12, Pavel Tupitsyn  >:
> > >> > >
> > >> > > > Igniters,
> > >> > > >
> > >> > > > This is a continuation of "Use Netty for Java thin client" [1],
> > >> > > > I'm starting a new thread for better visibility.
> > >> > > >
> > >> > > > The problems with current Java thin client are:
> > >> > > > * Socket writes block user threads
> > >> > > > * Every connection uses a separate listener thread (with
> partition
> > >> > > > awareness there is a thread for every server node within a
> single
> > >> > > > IgniteClient)
> > >> > > >
> > >> > > > GridNioServer can work in client mode and solves both of these
> > >> > problems.
> > >> > > > It is the most practical choice as well at the moment - no extra
> > >> > > > dependencies required.
> > >> > > >
> > >> > > > A potential drawback is increased coupling between thin client
> and
> > >> core
> > >> > > > code,
> > >> > > > which I'm going to mitigate by abstracting GridNioServer behind
> a
> > >> > simpler
> > >> > > > facade,
> > >> > > > so we can replace it with Netty or something else easier if we
> > >> decide
> > >> > to
> > >> > > > split the code.
> > >> > > >
> > >> > > > Thoughts, objections?
> > >> > > >
> > >> > > > [1]
> > >> > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> >
> http://apache-ignite-developers.2346864.n4.nabble.com/DISCUSS-Use-Netty-for-Java-thin-client-td49732.html
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >>
> > >> --
> > >> Sincerely yours, Ivan Daschinskiy
> > >>
> > >
> >
>
>
> --
> Sincerely yours, Ivan Daschinskiy
>
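The facade idea from the thread above (hiding GridNioServer behind a minimal interface so the transport can later be swapped for Netty) might be sketched like this; the names are illustrative, not the actual Ignite API:

```java
import java.util.concurrent.CompletableFuture;

/** Hypothetical sketch of a transport facade for the Java thin client. */
public class TransportFacadeSketch {
    /** Minimal async transport contract: writes never block the caller. */
    public interface ClientTransport {
        CompletableFuture<byte[]> send(byte[] request);
    }

    /**
     * Trivial in-memory implementation standing in for a GridNioServer-backed
     * (or Netty-backed) transport; it simply echoes the request back.
     */
    public static class EchoTransport implements ClientTransport {
        @Override public CompletableFuture<byte[]> send(byte[] request) {
            return CompletableFuture.completedFuture(request.clone());
        }
    }
}
```

With such a facade, switching implementations is a matter of binding a different `ClientTransport`, which matches the mitigation described in the thread.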


Remove ability to delete segments from the middle of WAL archive

2020-12-04 Thread ткаленко кирилл
Hello to all!

I found that we have the option to remove segments from the middle of the WAL 
archive and thus make it invalid due to gaps. 
I'll fix it in https://issues.apache.org/jira/browse/IGNITE-13815.


[jira] [Created] (IGNITE-13815) Remove ability to delete segments from the middle of WAL archive

2020-12-04 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-13815:


 Summary: Remove ability to delete segments from the middle of WAL 
archive
 Key: IGNITE-13815
 URL: https://issues.apache.org/jira/browse/IGNITE-13815
 Project: Ignite
  Issue Type: Improvement
  Components: persistence
Reporter: Kirill Tkalenko
Assignee: Kirill Tkalenko
 Fix For: 2.10


At the moment it is possible to delete segments from the middle of the archive 
via *FileWriteAheadLogManager#truncate*. This creates gaps in the archive and 
makes it invalid.

It should only be possible to delete segments sequentially up to the upper 
boundary. It has also been found that there is no protection against deletion of 
segments that may be needed for binary recovery.

We also need to get rid of the physical check when reserving segments through 
*FileWriteAheadLogManager#reserve*.
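The sequential-only rule could be sketched like this (the class and method names are illustrative, not the actual FileWriteAheadLogManager API):

```java
/**
 * Hypothetical sketch of gap-free WAL archive truncation: only the current
 * lowest archived segment may be deleted, and only below the requested upper
 * boundary, so segments are always removed as a contiguous prefix.
 */
public class WalArchiveTruncator {
    /**
     * Returns true if deleting segment {@code idx} keeps the archive gap-free.
     *
     * @param idx Segment index requested for deletion.
     * @param lowestArchivedIdx Lowest segment index currently in the archive.
     * @param highBound Upper boundary (exclusive) of the truncation request.
     */
    public static boolean canDelete(long idx, long lowestArchivedIdx, long highBound) {
        // Deleting anything other than the lowest segment would leave a gap
        // in the middle of the archive and make it invalid.
        return idx == lowestArchivedIdx && idx < highBound;
    }
}
```

A truncation loop would then delete segments one by one, re-checking the lowest index after each removal, instead of deleting arbitrary segments from the middle.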





[jira] [Created] (IGNITE-13814) Long restorePartitionStates triggers FailureHandler on node startup

2020-12-04 Thread Ivan Bessonov (Jira)
Ivan Bessonov created IGNITE-13814:
--

 Summary: Long restorePartitionStates triggers FailureHandler on 
node startup
 Key: IGNITE-13814
 URL: https://issues.apache.org/jira/browse/IGNITE-13814
 Project: Ignite
  Issue Type: Bug
 Environment: {noformat}
Thread [name="sys-stripe-4-#5%EPE_CLUSTER_PERF%", id=24, state=WAITING, 
blockCnt=4, waitCnt=70836]
at java.base@11.0.8/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11.0.8/java.util.concurrent.locks.LockSupport.park(LockSupport.java:323)
at 
app//o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:186)
at 
app//o.a.i.i.util.future.GridFutureAdapter.getUninterruptibly(GridFutureAdapter.java:154)
at 
app//o.a.i.i.processors.cache.persistence.file.AsyncFileIO.read(AsyncFileIO.java:128)
at 
app//o.a.i.i.processors.cache.persistence.file.AbstractFileIO$2.run(AbstractFileIO.java:89)
at 
app//o.a.i.i.processors.cache.persistence.file.AbstractFileIO.fully(AbstractFileIO.java:52)
at 
app//o.a.i.i.processors.cache.persistence.file.AbstractFileIO.readFully(AbstractFileIO.java:87)
at 
app//o.a.i.i.processors.cache.persistence.file.FilePageStore.readWithFailover(FilePageStore.java:794)
at 
app//o.a.i.i.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:418)
at 
app//o.a.i.i.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:519)
at 
app//o.a.i.i.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:503)
at 
app//o.a.i.i.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:874)
at 
app//o.a.i.i.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:700)
at 
app//o.a.i.i.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:689)
at 
app//o.a.i.i.processors.cache.persistence.DataStructure.acquirePage(DataStructure.java:157)
at 
app//o.a.i.i.processors.cache.persistence.freelist.PagesList.init(PagesList.java:274)
at 
app//o.a.i.i.processors.cache.persistence.freelist.AbstractFreeList.(AbstractFreeList.java:390)
at 
app//o.a.i.i.processors.cache.persistence.freelist.CacheFreeList.(CacheFreeList.java:57)
at 
app//o.a.i.i.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore$1.(GridCacheOffheapManager.java:1806)
at 
app//o.a.i.i.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:1805)
at 
app//o.a.i.i.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init(GridCacheOffheapManager.java:2130)
at 
app//o.a.i.i.processors.cache.persistence.GridCacheOffheapManager.restorePartitionStates(GridCacheOffheapManager.java:544)
at 
app//o.a.i.i.processors.cache.GridCacheProcessor$CacheRecoveryLifecycle.lambda$restorePartitionStates$0(GridCacheProcessor.java:5253)
at 
app//o.a.i.i.processors.cache.GridCacheProcessor$CacheRecoveryLifecycle$$Lambda$633/0x000800717040.run(Unknown
 Source)
at 
app//o.a.i.i.util.StripedExecutor$Stripe.body(StripedExecutor.java:559)
at app//o.a.i.i.util.worker.GridWorker.run(GridWorker.java:119)
at java.base@11.0.8/java.lang.Thread.run(Thread.java:834){noformat}
In this case, warm-up is on, but the client also reports this happening without 
warm-up. I don't think that restoring partition states should trigger the failure 
handler: it may take a lot of time with PDS. Also, why do we run it in the striped 
pool? Imagine two large caches getting the same stripe: the restore time doubles.
Reporter: Ivan Bessonov
Assignee: Ivan Bessonov


The following would be printed to the log:
{noformat}
[2020-10-30T17:32:26,190][WARN ][grid-timeout-worker-#22%EPE_CLUSTER_PERF%][] 
Possible failure suppressed accordingly to a configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet 
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], 
failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class 
o.a.i.IgniteException: GridWorker [name=sys-stripe-4, 
igniteInstanceName=EPE_CLUSTER_PERF, finished=false, 
heartbeatTs=1604104192954]]]
org.apache.ignite.IgniteException: GridWorker [name=sys-stripe-4, 
igniteInstanceName=EPE_CLUSTER_PERF, finished=false, heartbeatTs=1604104192954]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1859)
 [ignite-core-8.7.28.jar:8.7.28]
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1854)
 [ignite-core-8.7.28.jar:8.7.28]
at 
org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
 [ignite-core-8.7.28.jar:8.7.28

[jira] [Created] (IGNITE-13813) SKIP_GARBAGE WAL compression doesn't work for binary recovery

2020-12-04 Thread Ivan Bessonov (Jira)
Ivan Bessonov created IGNITE-13813:
--

 Summary: SKIP_GARBAGE WAL compression doesn't work for binary 
recovery
 Key: IGNITE-13813
 URL: https://issues.apache.org/jira/browse/IGNITE-13813
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Bessonov
Assignee: Ivan Bessonov


{noformat}
class org.apache.ignite.IgniteCheckedException: Failed to apply page snapshot

at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$performBinaryMemoryRestore$14(GridCacheDatabaseSharedManager.java:2419)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$stripedApplyPage$18(GridCacheDatabaseSharedManager.java:2603)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$stripedApply$19(GridCacheDatabaseSharedManager.java:2641)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:559)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:119)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.AssertionError: 4096
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.applyPageSnapshot(GridCacheDatabaseSharedManager.java:2671)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.lambda$performBinaryMemoryRestore$14(GridCacheDatabaseSharedManager.java:2412)
... 5 more{noformat}


