Re: Apache Ignite 2.8 RELEASE [Time, Scope, Manager]

2019-10-30 Thread Akash Shinde
Hi,
 Could you please include IGNITE-10884 in the 2.8 release. This issue is a
blocker for me.

Thanks,
Akash

On Wed, Oct 30, 2019 at 7:38 PM Maxim Muzafarov  wrote:

> Folks,
>
>
> It seems that a week ago I replied with the release info only to Artem.
> Sorry about that :-)
>
> Here is what I've collected.
> Let's discuss!
>
>
> Igniters,
>
>
> I've prepared the Apache Ignite 2.8 release page [1] with the list of
> known issues related to the 2.8 release and additional release
> information. If I've missed something, please feel free to set
> `fix version` to `2.8`.
>
> Details:
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8
>
>
> * WAITING FOR COMPLETION *
>
> Here is the list of major features which must be completed before
> creating the release branch.
> - Apache Ignite new monitoring
> - ML
> - Spark 2.4
>
> Details:
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8#ApacheIgnite2.8-Awatingfeaturescompletion
>
>
> * TO DISCUSS *
>
> Some of these features were mentioned previously, but the discussion has
> not been finalized yet:
> - Automatic modules support for Apache Ignite: find and resolve
> packages conflicts
>   https://issues.apache.org/jira/browse/IGNITE-11461
> - Support Java 11 for Apache Ignite
>   https://issues.apache.org/jira/browse/IGNITE-11189
> - Callbacks from the striped pool due to async/await may hang a cluster
>   https://issues.apache.org/jira/browse/IGNITE-12033
>
>
> * KNOWN ISSUES *
>
> Bugs and features sorted by priority. If anyone has additional
> information about any `blocker` issue, please step in.
>
> - Unable to use date as primary key
>https://issues.apache.org/jira/browse/IGNITE-8552
> - Cluster hangs during concurrent node client and server nodes restart
>https://issues.apache.org/jira/browse/IGNITE-9184
>
> Details:
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8#ApacheIgnite2.8-Unresolvedissues(notrelatedtodocumentation)
>
>
> * DOCUMENTATION *
>
> The list of issues/tasks related to Apache Ignite documentation sorted
> by priority.
>
> Details:
> https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8#ApacheIgnite2.8-Unresolveddocumentationtasks
>
> On Mon, 21 Oct 2019 at 16:53, Maxim Muzafarov  wrote:
> > On Mon, 14 Oct 2019 at 18:18, Artem Budnikov
> >  wrote:
> > >
> > > Hi Maxim,
> > >
> > > I'm glad to see that you care about documentation. Way to go! Here are a
> > > couple of points that can help:
> > >
> > > 1) I think it's safe to assume that Prachi will not work on the Ignite
> > > documentation any longer. You can take up the issues assigned to her and
> > > prioritize them whichever way is convenient for you. In fact, you can
> > > take up others' issues as well.
> > >
> > > 2) I'll try my best to finish as many documentation issues as I can by
> > > the release date.
> > >
> > > So, it looks like that in

Re: Apache Ignite 2.8 RELEASE [Time, Scope, Manager]

2019-10-30 Thread Akash Shinde
Because I didn't see it in this list:
https://cwiki.apache.org/confluence/display/IGNITE/Apache+Ignite+2.8#ApacheIgnite2.8-Releaseissuesgroupedbystatus

Just wanted to make sure it's part of 2.8.

Thanks,
Akash

On Wed, Oct 30, 2019 at 8:20 PM Ivan Pavlukhin  wrote:

> Hi Akash,
>
> Why do you think it is not included? I see fix version 2.8 in the ticket [1].
>
> [1] https://issues.apache.org/jira/browse/IGNITE-10884
>

Re: Countdown latch issue with 2.6.0

2020-06-08 Thread Akash Shinde
Hi, I have created jira for this issue
https://issues.apache.org/jira/browse/IGNITE-13132

Thanks,
Akash

On Sun, Jun 7, 2020 at 9:29 AM Akash Shinde  wrote:

> Can someone please help me with this issue.
>
> On Sat, Jun 6, 2020 at 6:45 PM Akash Shinde  wrote:
>
>> Hi,
>> Issue: The countdown latch gets reinitialized to its original value (4) when
>> one or more (but not all) nodes go down. (Partition loss happened.)
>>
>> We are using Ignite's distributed countdown latch to make sure that cache
>> loading is completed on all server nodes, so that our Kafka consumers start
>> only after cache loading is complete everywhere. This is the basic criterion
>> that needs to be fulfilled before actual processing starts.
>>
>>
>>  We have 4 server nodes, and the countdown latch is initialized to 4. We use
>> the "cache.loadCache" method to start cache loading. When each server
>> completes cache loading, it reduces the count by 1 using the countDown
>> method, so when all the nodes have completed cache loading, the count reaches
>> zero. When this count reaches zero, we start Kafka consumers on all server
>> nodes.
>>
>>  But we saw weird behavior in the prod env. Three server nodes were shut
>> down at the same time, but one node was still alive. When this happened, the
>> countdown was reinitialized to its original value, i.e. 4. I am not able to
>> reproduce this in the dev env.
>>
>>  Is this a bug that when one or more (but not all) nodes go down, the count
>> reinitializes back to its original value?
>>
>> Thanks,
>> Akash
>>
>
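
The gating pattern described in this thread — N loaders counting down a shared
latch, consumers starting only once it reaches zero — can be sketched in a
single JVM with java.util.concurrent.CountDownLatch. Note this is only an
analogy: Ignite's distributed IgniteCountDownLatch exposes the same
countDown()/await() contract across nodes, but nothing below models the
partition-loss behavior under discussion.

```java
import java.util.concurrent.CountDownLatch;

public class CacheLoadGate {
    // Blocks until all n simulated "server nodes" have finished loading,
    // then reports whether the latch actually drained to zero.
    static boolean awaitAllNodes(int n) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            // Each thread stands in for one node finishing cache.loadCache().
            new Thread(latch::countDown).start();
        }
        latch.await(); // the point where Kafka consumers would be started
        return latch.getCount() == 0;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(awaitAllNodes(4)); // prints true
    }
}
```

In Ignite itself the latch would come from something like
`ignite.countDownLatch("cacheLoaded", 4, false, true)` rather than being a
local object, which is why node failures can affect its state cluster-wide.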


Re: After upgrading 2.7 getting Unexpected error occurred during unmarshalling

2019-01-09 Thread Akash Shinde
Added  dev@ignite.apache.org.

Should I log a Jira ticket for this issue?

Thanks,
Akash



On Tue, Jan 8, 2019 at 6:16 PM Akash Shinde  wrote:

> Hi,
>
> No, both nodes, client and server, are running Ignite version 2.7. I am
> starting both the server and the client from the IntelliJ IDE.
>
> Version printed in Server node log:
> Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>
> Version in client node log:
> Ignite ver. 2.7.0#20181201-sha1:256ae4012cb143b4855b598b740a6f3499ead4db
>
> Thanks,
> Akash
>
> On Tue, Jan 8, 2019 at 5:18 PM Mikael  wrote:
>
>> Hi!
>>
>> Any chance you might have one node running 2.6 or something like that?
>>
>> It looks like it gets a different object that does not match the one
>> expected in 2.7.
>>
>> Mikael
>> Den 2019-01-08 kl. 12:21, skrev Akash Shinde:
>>
>> Before submitting the affinity task, Ignite first gets the cached affinity
>> function (AffinityInfo) by submitting the cluster-wide task "AffinityJob".
>> While retrieving the output of this AffinityJob, Ignite deserializes it, and
>> that deserialization is where I get the exception. In the
>> TcpDiscoveryNode.readExternal() method, while deserializing the CacheMetrics
>> object from the input stream, I get the following exception on the 14th
>> iteration. The complete stack trace is given in this mail chain.
>>
>> Caused by: java.io.IOException: Unexpected error occurred during
>> unmarshalling of an instance of the class:
>> org.apache.ignite.internal.processors.cache.CacheMetricsSnapshot.
>>
>> This works fine on Ignite 2.6 but causes a problem on 2.7.
>>
>> Is this a bug or am I doing something wrong?
>>
>> Can someone please help?
>>
>> On Mon, Jan 7, 2019 at 9:41 PM Akash Shinde 
>> wrote:
>>
>>> Hi,
>>>
>>> When I execute affinity.partition(key), I get the following exception
>>> on Ignite 2.7.
>>>
>>> Stacktrace:
>>>
>>> 2019-01-07 21:23:03,093 6699878 [mgmt-#67%springDataNode%] ERROR
>>> o.a.i.i.p.task.GridTaskWorker - Error deserializing job response:
>>> GridJobExecuteResponse [nodeId=c0c832cb-33b0-4139-b11d-5cafab2fd046,
>>> sesId=4778e982861-31445139-523d-4d44-b071-9ca1eb2d73df,
>>> jobId=5778e982861-31445139-523d-4d44-b071-9ca1eb2d73df, gridEx=null,
>>> isCancelled=false, retry=null]
>>> org.apache.ignite.IgniteCheckedException: Failed to unmarshal object
>>> with optimized marshaller
>>>  at
>>> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10146)
>>>  at
>>> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:831)
>>>  at
>>> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1081)
>>>  at
>>> org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1316)
>>>  at
>>> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569)
>>>  at
>>> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197)
>>>  at
>>> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>>>  at
>>> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093)
>>>  at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>  at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>  at java.lang.Thread.run(Thread.java:748)
>>> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to
>>> unmarshal object with optimized marshaller
>>>  at
>>> org.apache.ignite.internal.binary.BinaryUtils.doReadOptimized(BinaryUtils.java:1765)
>>>  at
>>> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1964)
>>>  at
>>> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
>>>  at
>>> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
>>>  at
>>> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
>>>  at
>>> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
>>> 
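
Mikael's mixed-version hypothesis above — a reader whose readExternal() expects
a different wire layout than the writer produced — can be illustrated in a
single JVM. The classes and field layout below are hypothetical; this does not
reproduce Ignite's actual CacheMetricsSnapshot format, it only shows why a
layout mismatch surfaces as an IOException mid-unmarshalling.

```java
import java.io.*;

public class VersionMismatchDemo {
    // "Old" (v1) writer serializes two int fields.
    static byte[] writeV1() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bos)) {
            out.writeInt(42); // field A
            out.writeInt(7);  // field B
        }
        return bos.toByteArray();
    }

    // A v1 reader consumes exactly what v1 wrote: unmarshalling succeeds.
    static boolean readV1(byte[] data) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            in.readInt();
            in.readInt();
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    // A "new" (v2) reader expects a third field, mirroring a readExternal()
    // whose layout changed between releases: unmarshalling fails mid-stream.
    static boolean readV2(byte[] data) {
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            in.readInt();
            in.readInt();
            in.readInt(); // field C is absent from the v1 stream -> EOFException
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] v1 = writeV1();
        System.out.println(readV1(v1)); // prints true
        System.out.println(readV2(v1)); // prints false
    }
}
```

Since both of Akash's nodes report the same 2.7.0 build, a genuine 2.7 bug in
the metrics snapshot (de)serialization remains the other candidate explanation.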

[jira] [Created] (IGNITE-13132) Countdown latch gets reinitialized to original value (4) when one or more (but not all) nodes go down.

2020-06-08 Thread Akash Shinde (Jira)
Akash Shinde created IGNITE-13132:
-

 Summary: Countdown latch gets reinitialized to original value (4)
when one or more (but not all) nodes go down.
 Key: IGNITE-13132
 URL: https://issues.apache.org/jira/browse/IGNITE-13132
 Project: Ignite
  Issue Type: Bug
  Components: data structures
Reporter: Akash Shinde


We are using Ignite's distributed countdown latch to make sure that cache
loading is completed on all server nodes, so that our Kafka consumers start
only after cache loading is complete everywhere. This is the basic criterion
that needs to be fulfilled before actual processing starts.

 We have 4 server nodes, and the countdown latch is initialized to 4. We use
the "cache.loadCache" method to start cache loading. When each server completes
cache loading, it reduces the count by 1 using the countDown method, so when
all the nodes have completed cache loading, the count reaches zero. When this
count reaches zero, we start Kafka consumers on all server nodes.

 But we saw weird behavior in the prod env. Three server nodes were shut down
at the same time, but one node was still alive. When this happened, the
countdown was reinitialized to its original value, i.e. 4. I am not able to
reproduce this in the dev env.

Note: Partition loss occurred when the three nodes went down at the same time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)