Optimizing data model to be stored in data grid with less cache size

2016-02-12 Thread Ferry Syafei Sapei
If I have for instance the following data model to be persisted in the data 
grid:

class Person {
String accountNumber;   // accountNumber is also the cache key
String name;
String birthday;
String addressPostcode;
}

Should I replace the String data type with another data type in order to reduce
the cache size?

Does Ignite perform some optimization when it persists the data in the cache?

I tried the following optimizations, but they did not really affect the cache 
size:
- replace accountNumber’s data type with long
- replace name's data type with byte[]
- replace birthday’s data type with long

Re: Exception in Kerberos Yarn cluster

2016-02-12 Thread Ivan Veselovsky
Hi, harishraj,
does the initial problem persist?

I tried the ignite yarn module in a kerberized Hortonworks sandbox environment,
and it works (see listing below), which means that the module is able to run
Ignite nodes in containers, and all the logs are visible through the Yarn web
console (http://:8088/cluster), as the doc page
(https://apacheignite.readme.io/docs/yarn-deployment) prescribes.

An exception similar to the one you initially reported can be observed if the
user does not have a valid Kerberos ticket. So, the question is: did you
"kinit" the user before running the yarn application?





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Exception-in-Kerberos-Yarn-cluster-tp1950p2978.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Exception in Kerberos Yarn cluster

2016-02-12 Thread lrea.sys
Hi,
I suppose it works because it's not a cluster but a standalone installation,
so the datanode service running on the same host already has the token and
can use it. It's a different thing in a cluster, where the token needs to be
shared and retrieved.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Exception-in-Kerberos-Yarn-cluster-tp1950p2979.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Exception in Kerberos Yarn cluster

2016-02-12 Thread Ivan Veselovsky
Hi, Irea, can you please reply to this IGNITE-2525 comment:
https://issues.apache.org/jira/browse/IGNITE-2525?focusedCommentId=15144461&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15144461
I'll try to run the yarn module in a multi-node cluster and share the
results.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Exception-in-Kerberos-Yarn-cluster-tp1950p2980.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Distributed queue problem with peerClassLoading enabled

2016-02-12 Thread mp
Hi Denis,

But my test still fails in version 1.5 with default (ie, binary)
marshaller. See my message from January 7, and your reply in which you
mentioned a new Jira ticket for a bug concerning the new binary marshaller:
https://issues.apache.org/jira/browse/IGNITE-2339

Basically, my test case (see
https://issues.apache.org/jira/browse/IGNITE-1823 ) fails in all of the
scenarios I tried:

1. Binary marshaller + default deployment mode
2. Binary marshaller + shared deployment mode
3. Binary marshaller + private deployment mode
4. Optimized marshaller + default deployment mode
5. Optimized marshaller + shared deployment mode
6. Optimized marshaller + private deployment mode

Would you have any hint/advice on how I could proceed? Is there any chance
of fixing the issues related to my test case?

Thanks for your help,
-Mateusz


On Wed, Feb 10, 2016 at 4:46 PM, Denis Magda  wrote:

> Hi Mateusz,
>
> In version 1.5 we released the binary objects [1] format, which allows
> storing cache data in a class-version-independent form. Thus you don't need
> to have any classes on the server side.
> This ability allows dynamically changing an object's structure, and even
> allows multiple clients with different versions of class definitions to
> co-exist.
>
> In my understanding if you switch to this format you will be able to
> support your use case.
>
> If something is unclear don't hesitate to ask.
>
> [1] https://apacheignite.readme.io/docs/binary-marshaller
>
> --
> Denis
>
>
> On 2/10/2016 4:06 PM, mp wrote:
>
> Hi Denis,
>
> Thanks for your reply.
> So, summing up, it seems that in the context of my use case, version 1.5
> does not differ from 1.4? Which means that I still cannot achieve my goal:
> different versions of the same class (from different clients) running on
> the cluster at the same time?
>
> As far as I understand this involves:
> 1. https://issues.apache.org/jira/browse/IGNITE-1823
> 2. https://issues.apache.org/jira/browse/IGNITE-2339
> 3. Removing the requirement for caches to work only with SHARED and
> CONTINUOUS deployment modes (this was announced by Dmitriy in
> http://apache-ignite-users.70518.x6.nabble.com/Distributed-queue-problem-with-peerClassLoading-enabled-tp1762p1829.html
> )
>
> Is there any chance the above use case will be possible in near future
> (any upcoming version)?
>
> I really like the API and concept of Ignite. If only I could achieve the
> above scenario...
>
> Cheers,
> -Mateusz
>
>
>
> On Thu, Jan 7, 2016 at 5:25 PM, Denis Magda  wrote:
>
>> Mateusz,
>>
>> It doesn’t work for now because peerClassLoading doesn’t work for objects
>> that are stored in the binary format in a cache.
>> Since BinaryMarshaller is the default starting from 1.5, all objects
>> are stored in this format in caches by default.
>>
>> If you prefer to turn this behavior off, you can set
>> IgniteConfiguration.setMarshaller(new OptimizedMarshaller()) on every node,
>> and your test should work as before.
>>
>> —
>> Denis
>>
>> On 7 Jan 2016, at 17:09, mp < mjj...@gmail.com>
>> wrote:
>>
>> Hello Denis,
>>
>> Thanks a lot for your reply!
>> Concerning point 2: does it mean that "peerClassLoading" simply does not
>> work in 1.5?
>> It used to work (partially) in 1.4 (details described earlier in the
>> message thread).
>>
>> Cheers,
>> -Mateusz
>>
>>
>>
>> On Thu, Jan 7, 2016 at 1:38 PM, Denis Magda < 
>> dma...@gridgain.com> wrote:
>>
>>> Hi Mateusz,
>>>
>>> 1. It seems that distributed cache is still *not* available in
>>> PRIVATE/ISOLATED modes. Is this correct?
>>>
>>> Right, it hasn't been fixed yet. I've just followed up the related
>>> discussion on the dev list. Please follow it to see the most up-to-date
>>> information
>>>
>>> http://apache-ignite-developers.2346864.n4.nabble.com/Fwd-Distributed-queue-problem-with-peerClassLoading-enabled-tp4521p6440.html
>>>
>>> 2. When I run my simple test code in the default SHARED mode (the same as
>>> specified in 
>>> https://issues.apache.org/jira/browse/IGNITE-1823 jira issue),
>>> I still get an error. However the cause exception seems to be different.
>>> Please see attached server log.
>>>
>>> The reason is that there is an attempt to deserialize a binary object
>>> stored on a server node and the server node doesn't have object's class
>>> definition in its class path.
>>> I've opened a ticket
>>> https://issues.apache.org/jira/browse/IGNITE-2339
>>>
>>> As a workaround you can put a class definition on server's class path
>>> and the problem will disappear.
>>>
>>> Regards,
>>> Denis
>>>
>>> On 1/7/2016 1:30 PM, mjjp wrote:
>>>
 Hello,

 I have just downloaded 1.5.0-final to check if my problem has been
 resolved.
 Either I'm doing something wrong, or version 1.5 has the same behavior
 in
 this context:

 1. It seems that distributed cache is still *not* available in
 PRIVATE/ISOLATED modes. Is this correct?

 2. When I run my simple test code in the default SHARED mode (the same as
 specified in the https://issues.apache.org/jira/browse/IGNITE-1823 jira issue),
 I still get an error.

Re: Version issue with concurrent cache updates (EntryProcessor)

2016-02-12 Thread Myron Chelyada
So,

Please find attached a test which reproduces the issue.
It is very plain and much simpler than what I initially described.
I was confused before because I was able to reproduce this issue in one
environment and couldn't in another. The reason for that is that
assertions were not enabled there.




2016-02-11 19:31 GMT+02:00 Myron Chelyada :

> Hi Alexey,
>
> Will try to extract the main logic into a test that would allow
> reproducing it.
> But in the meantime I figured out that the issue appears only on a cache that
> is store-backed. I.e. as soon as I set either "readThrough" or "writeThrough"
> to "false" (or both), the issue disappears.
> But in my case I actually need both enabled.
>
> 2016-02-10 14:24 GMT+02:00 Alexey Goncharuk :
>
>> Myron,
>>
>> I tried to reproduce this assertion on ignite-1.5, but with no luck. Can
>> you share your full cache configuration, the number of nodes in your
>> cluster, and a code snippet that reproduces the issue?
>>
>> Thanks,
>> AG
>>
>
>


CacheEntryProcessorTest.java
Description: Binary data


Re: Version issue with concurrent cache updates (EntryProcessor)

2016-02-12 Thread Alexey Goncharuk
Myron,

Thank you for reporting the issue. The assertion happens when the value is
present in the store, absent in the cache and you run invokeAll(). As a
temporary solution, you can either call invoke() for each particular key
individually, or call getAll() for the keys prior to calling invokeAll()
(this will pre-load the values to the cache).
Since the issue is pretty critical, I believe it will be fixed in 1.6 (if
not earlier).

Yakov, Sam,
I created a ticket [1] and suggested a fix there, can you take a look and
check if the fix is ok?

Thanks,
AG

[1] https://issues.apache.org/jira/browse/IGNITE-2645
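The suggested workaround can be illustrated with a small stand-in sketch (a plain map playing the role of a read-through cache; all names here are hypothetical stand-ins, not the actual Ignite API):

```java
import java.util.*;
import java.util.function.Function;

public class PreloadSketch {
    // Hypothetical stand-ins: a backing store and a read-through cache.
    static Map<Integer, String> store = new HashMap<>(Map.of(1, "a", 2, "b"));
    static Map<Integer, String> cache = new HashMap<>();

    // getAll(): read-through, pre-loads absent entries from the store.
    static void getAll(Set<Integer> keys) {
        for (Integer k : keys)
            cache.computeIfAbsent(k, store::get);
    }

    // invokeAll(): the processor now only sees entries already in the cache,
    // avoiding the store-present/cache-absent situation that trips the assertion.
    static void invokeAll(Set<Integer> keys, Function<String, String> proc) {
        for (Integer k : keys)
            cache.computeIfPresent(k, (key, v) -> proc.apply(v));
    }

    public static void main(String[] args) {
        Set<Integer> keys = Set.of(1, 2);
        getAll(keys);                  // workaround: pre-load values first
        invokeAll(keys, v -> v + "!");
        System.out.println(cache.get(1) + cache.get(2)); // a!b!
    }
}
```

The alternative workaround (invoke() per key) works because a single-key invoke takes a different code path that loads the value itself.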


Re: Migrating From Hazelcast Service Interface To Apache Ignite Service Interface

2016-02-12 Thread gcollins
Thanks very much for the responses! Those are all important features of
Ignite, and these features are among the big reasons why we want to move
away from Hazelcast (in Hazelcast, if you want to deploy different services
to different sets of nodes, you need to set up different Hazelcast
clusters - ugh).

We effectively have implemented our own in-memory database with its own
specific query language. To minimize memory usage an "insert" into this
database will potentially update dictionaries, then append to a byte array
(or off-heap memory area) rather than create a new record with associated
overhead, then update Cassandra persistence (updating both summary and
detail tables). 

On a service restart, summary tables (with dictionaries and counts) are
automatically loaded into the right partitions. When we do a query, the
queries are done where the data is. If detail data is not already loaded, we
load the partitions we need.

We also do extra optimizing functions like in-memory compression (for
partitions which are queried but do not have inserts) and query result
caching on a per-partition basis. The partition may go and edit the query
(like modifying the query time range to match the partition time range) to
make it more likely we get a cache hit with subsequent queries.

Hazelcast just makes sure that partitions move around correctly and makes
sure that partition access is single-threaded. Since we can also manage what
migrates we select what really gets migrated to minimize the chance of OOM
exceptions (e.g. in a three node cluster, each node may be using 6 GB of
memory - if a node goes down and data migrates the remaining two nodes may
go to 9GB which could cause an OOM).

Does it sound at all possible that I could do something similar in Ignite?
Would it perhaps be possible to write my own "IgniteCache" using lower-level
building blocks that would allow me to do this?

thanks in advance,
Gareth





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Migrating-From-Hazelcast-Service-Interface-To-Apache-Ignite-Service-Interface-tp2970p2984.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Sharing Spark RDDs with Ignite

2016-02-12 Thread Andrey Gura
Dmitry,

I repeated your test. On my laptop it took about 2300 ms.

Bearing in mind that an RDD is lazy by nature, I suspected that a DataFrame is
lazy too. So I added a df.rdd().count() call in the code before the RDD caching
in order to measure execution time, and got about 670 ms.
After that, the igniteRDD.saveValues(df.rdd()) call takes about 1500 ms.

For more accurate results I measured these operations in a loop and got
about 700 ms for RDD caching on a warmed-up JVM.

I created pull request for clarity:
https://github.com/erasmas/ignite-playground/pull/1
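The measurement pitfall here, a lazy computation's cost landing in whatever action triggers it first, can be sketched with a plain memoized supplier (a hypothetical stand-in, not the actual Spark code):

```java
import java.util.function.Supplier;

public class LazyTimingSketch {
    // Stand-in for a lazy DataFrame: the work happens on first access only.
    static long[] materialized;
    static Supplier<long[]> lazyData = () -> {
        if (materialized == null) {
            materialized = new long[1_000_000];
            for (int i = 0; i < materialized.length; i++) materialized[i] = i;
        }
        return materialized;
    };

    // Stand-in for the "save" step we actually want to time.
    static long save() {
        long sum = 0;
        for (long v : lazyData.get()) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        // First call pays for materialization as well as the save itself.
        long t0 = System.nanoTime();
        long sum = save();
        long naiveNanos = System.nanoTime() - t0;

        // Forcing materialization up front (analogous to df.rdd().count())
        // means subsequent timings measure only the save step.
        t0 = System.nanoTime();
        save();
        long isolatedNanos = System.nanoTime() - t0;

        // Timings vary by machine, so only the computation itself is checked.
        System.out.println(sum == 499999500000L);
    }
}
```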

On Thu, Feb 11, 2016 at 3:20 PM, Dmitriy Morozov  wrote:

> Hi Valentin,
>
> Sorry, I realize I didn't get it right. I'm using IgniteRDD to save RDD
> values now and IgniteCache to cache StructType.
> I'm using a ~1mb Parquet file for testing which has ~75K rows. I noticed
> that saving IgniteRDD is expensive, it takes about 4 seconds on my laptop.
>  I tried both client and server mode for IgniteContext but still couldn't
> make it faster.
>
> Here's the code
> 
> that I tried. I'd appreciate it if somebody could give a hint on how to make
> it faster.
>
> Thanks!
>
> On 10 February 2016 at 21:55, vkulichenko 
> wrote:
>
>> Hi Dmitry,
>>
>> What are you trying to achieve by putting the RDD into the cache as a
>> single
>> entry? If you want to save RDD data into the Ignite cache, it's better to
>> create IgniteRDD and use its savePairs() or saveValues() methods. See [1]
>> for details.
>>
>> [1]
>>
>> https://apacheignite-fs.readme.io/docs/ignitecontext-igniterdd#section-saving-values-to-ignite
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Sharing-Spark-RDDs-with-Ignite-tp2805p2941.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Kind regards,
> Dima
>



-- 
Andrey Gura
GridGain Systems, Inc.
www.gridgain.com


Re: Exception in Kerberos Yarn cluster

2016-02-12 Thread Dmitriy Setrakyan
Ivan,

I think it makes sense to update the documentation with your explanation.
Can you please do it?

Thanks,
D.

On Fri, Feb 12, 2016 at 2:07 AM, Ivan Veselovsky 
wrote:

> Hi, harishraj,
> does the initial problem persist?
>
> I tried ignite yarn module in kerberized Hortonworks sandbox environment,
> and it works (see listing below), which means that the module is able to
> run
> Ignite nodes in containers, and all the logs are visible through Yarn web
> console (http://:8088/cluster), as the doc page
> (https://apacheignite.readme.io/docs/yarn-deployment) prescribes.
>
> An exception similar to that you initially reported can be observed if the
> user does not have a valid ticket in Kerberos. So, the question is, did you
> "kinit" the user before running the yarn application?
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Exception-in-Kerberos-Yarn-cluster-tp1950p2978.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Optimizing data model to be stored in data grid with less cache size

2016-02-12 Thread vkulichenko
Hi,

Ignite stores your classes in a binary format, which is very compact, so these
changes should not make much difference from a memory-consumption standpoint. I
would recommend using types that fit particular fields best from a business-case
standpoint (e.g., long for the account number, Date for the birthday, int for
the postcode, etc.).

Also, I would not recommend using strings as keys if that's avoidable. String's
equals method has to do a char-by-char comparison, which is obviously much
slower than comparing two longs.
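Put together, a revised model along these lines might look as follows (a sketch only; the field choices are illustrative and assume the account number and birthday can be represented numerically):

```java
// Hypothetical revision of the Person model (illustrative, not prescriptive).
public class Person {
    long accountNumber;   // cache key: comparing longs is cheaper than String.equals()
    String name;
    long birthday;        // e.g. epoch milliseconds instead of a String date
    int addressPostcode;  // works for purely numeric postcodes; keep String otherwise

    Person(long accountNumber, String name, long birthday, int addressPostcode) {
        this.accountNumber = accountNumber;
        this.name = name;
        this.birthday = birthday;
        this.addressPostcode = addressPostcode;
    }

    public static void main(String[] args) {
        Person p = new Person(12345L, "Alice", 0L, 10115);
        System.out.println(p.accountNumber == 12345L && p.addressPostcode == 10115);
    }
}
```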

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Optimizing-data-model-to-be-stored-in-data-grid-with-less-cache-size-tp2977p2987.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Exception in Kerberos Yarn cluster

2016-02-12 Thread Ivan V.
Hi, Dmitriy,
but we don't seem to have a fix yet. It looks like the problem exists, but
I still was not able to see it because it is only reproducible in a real
multi-node kerberized cluster.

On Fri, Feb 12, 2016 at 8:26 PM, Dmitriy Setrakyan 
wrote:

> Ivan,
>
> I think it makes sense to update the documentation with your explanation.
> Can you please do it?
>
> Thanks,
> D.
>
>
> On Fri, Feb 12, 2016 at 2:07 AM, Ivan Veselovsky <
> iveselovs...@gridgain.com> wrote:
>
>> Hi, harishraj,
>> does the initial problem persist?
>>
>> I tried ignite yarn module in kerberized Hortonworks sandbox environment,
>> and it works (see listing below), which means that the module is able to
>> run
>> Ignite nodes in containers, and all the logs are visible through Yarn web
>> console (http://:8088/cluster), as the doc page
>> (https://apacheignite.readme.io/docs/yarn-deployment) prescribes.
>>
>> An exception similar to that you initially reported can be observed if the
>> user does not have a valid ticket in Kerberos. So, the question is, did
>> you
>> "kinit" the user before running the yarn application?
>>
>>
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Exception-in-Kerberos-Yarn-cluster-tp1950p2978.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Sharing Spark RDDs with Ignite

2016-02-12 Thread Dmitriy Morozov
Thanks Andrey!

It totally makes sense. I should have done a more accurate test. Appreciate
your help!

On 12 February 2016 at 17:31, Andrey Gura  wrote:

> Dmitry,
>
> I repeated your test. On my laptop it took about 2300 ms.
>
> Having in mind that RDD is lazy by nature I suggested that DataFrame is
> lazy too. So I add df.rdd().count() call in the code before RDD caching in
> order to measure execution time and got about 670 ms.
> After it igniteRDD.saveValues(df.rdd()) call takes about 1500 ms.
>
> For more accurate results I measured this operations in a loop and got
> about 700 ms for RDD caching on warmed up JVM.
>
> I created pull request for clarity:
> https://github.com/erasmas/ignite-playground/pull/1
>
> On Thu, Feb 11, 2016 at 3:20 PM, Dmitriy Morozov 
> wrote:
>
>> Hi Valentin,
>>
>> Sorry, I realize I didn't get it right. I'm using IgniteRDD to save RDD
>> values now and IgniteCache to cache StructType.
>> I'm using a ~1mb Parquet file for testing which has ~75K rows. I noticed
>> that saving IgniteRDD is expensive, it takes about 4 seconds on my laptop.
>>  I tried both client and server mode for IgniteContext but still couldn't
>> make it faster.
>>
>> Here's the code
>> 
>> that I tried. I'd appreciate if somebody could give a hint on how to make
>> it faster.
>>
>> Thanks!
>>
>> On 10 February 2016 at 21:55, vkulichenko 
>> wrote:
>>
>>> Hi Dmitry,
>>>
>>> What are you trying to achieve by putting the RDD into the cache as a
>>> single
>>> entry? If you want to save RDD data into the Ignite cache, it's better to
>>> create IgniteRDD and use its savePairs() or saveValues() methods. See [1]
>>> for details.
>>>
>>> [1]
>>>
>>> https://apacheignite-fs.readme.io/docs/ignitecontext-igniterdd#section-saving-values-to-ignite
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-ignite-users.70518.x6.nabble.com/Sharing-Spark-RDDs-with-Ignite-tp2805p2941.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>>
>> --
>> Kind regards,
>> Dima
>>
>
>
>
> --
> Andrey Gura
> GridGain Systems, Inc.
> www.gridgain.com
>



-- 
Kind regards,
Dima


Re: Basic Spark integration question

2016-02-12 Thread vkulichenko
Hi,

You're right, currently there is no integration with R, and it would require
creating a new R package. In my understanding, it should almost replicate
the IgniteContext/IgniteRDD API and provide a saveValues() operation to store
the data and sql() to execute fast indexed queries against Ignite. What do
you think? Have you already started working on this? Is it something that
can be included in Ignite? I believe it would be a very useful contribution.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Basic-Spark-integration-question-tp2944p2990.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Distributed queue problem with peerClassLoading enabled

2016-02-12 Thread Denis Magda

Hi Mateusz,

I assigned both tickets that you have problems with to myself. They will
be fixed as part of the next release.

https://issues.apache.org/jira/browse/IGNITE-2339
https://issues.apache.org/jira/browse/IGNITE-1823

There is one more issue that was reproduced locally and refers to 
unexpected cache undeployment when the binary marshaller is used.

https://issues.apache.org/jira/browse/IGNITE-2647

Thanks for your patience and for your continued interest in Ignite.

Regards,
Denis

On 2/12/2016 4:41 PM, mp wrote:

Hi Denis,

But my test still fails in version 1.5 with default (ie, binary) 
marshaller. See my message from January 7, and your reply in which you 
mentioned a new Jira ticket for a bug concerning the new binary 
marshaller: https://issues.apache.org/jira/browse/IGNITE-2339


Basically, my test case (see 
https://issues.apache.org/jira/browse/IGNITE-1823 ) fails in all of 
the scenarios I tried:


1. Binary marshaller + default deployment mode
2. Binary marshaller + shared deployment mode
3. Binary marshaller + private deployment mode
4. Optimized marshaller + default deployment mode
5. Optimized marshaller + shared deployment mode
6. Optimized marshaller + private deployment mode

Would you have any hint/advice on how I could proceed? Is there any 
chance of fixing the issues related to my test case?


Thanks for your help,
-Mateusz


On Wed, Feb 10, 2016 at 4:46 PM, Denis Magda wrote:


Hi Mateusz,

In version 1.5 we released the binary objects [1] format that
allows to store cache in class version independent form. Thus you
don't need to have any classes on server side.
This ability allows dynamic change to an objects structure, and
even allows multiple clients with different versions of class
definitions to co-exist.

In my understanding if you switch to this format you will be able
to support your use case.

If something is unclear don't hesitate to ask.

[1] https://apacheignite.readme.io/docs/binary-marshaller

--
Denis


On 2/10/2016 4:06 PM, mp wrote:

Hi Denis,

Thanks for your reply.
So, summing up, it seems that in the context of my use case,
version 1.5 does not differ from 1.4? Which means that I still
cannot achieve my goal: different versions of the same class
(from different clients) running on the cluster at the same time?

As far as I understand this involves:
1. https://issues.apache.org/jira/browse/IGNITE-1823
2. https://issues.apache.org/jira/browse/IGNITE-2339
3. Removing the requirement for caches to work only with SHARED
and CONTINUOUS deployment modes (this was announced by Dmitriy in

http://apache-ignite-users.70518.x6.nabble.com/Distributed-queue-problem-with-peerClassLoading-enabled-tp1762p1829.html
)

Is there any chance the above use case will be possible in near
future (any upcoming version)?

I really like the API and concept of Ignite. If only I could
achieve the above scenario...

Cheers,
-Mateusz



On Thu, Jan 7, 2016 at 5:25 PM, Denis Magda <dma...@gridgain.com> wrote:

Mateusz,

It doesn’t work for now because peerClassLoading doesn’t work
for objects that are stored in the binary format in a cache.
Since starting from 1.5 BinaryMarshaller is a default one all
the objects are stored in a such format in caches by default.

If you prefer to turn off such a behavior you can set
IgniteConfiguration.setMarshaller(new OptimizedMarshaller())
for every node and your test should work as before.

—
Denis


On 7 Jan 2016, at 17:09, mp <mjj...@gmail.com> wrote:

Hello Denis,

Thanks a lot for your reply!
Concerning point 2: does it mean that "peerClassLoading"
simply does not work in 1.5?
It used to work (partially) in 1.4 (details described
earlier in the message thread).

Cheers,
-Mateusz



On Thu, Jan 7, 2016 at 1:38 PM, Denis Magda
<dma...@gridgain.com> wrote:

Hi Mateusz,

1. It seems that distributed cache is still *not*
available in
PRIVATE/ISOLATED modes. Is this correct?

Right, it hasn't been fixed yet. I've just followed up
the related discussion on the dev list. Please follow it
to see the most up-to-date information

http://apache-ignite-developers.2346864.n4.nabble.com/Fwd-Distributed-queue-problem-with-peerClassLoading-enabled-tp4521p6440.html

2. When I run my simple test code in the default SHARED
mode (the same as
specified in
https://issues.apache.org/jira/browse/IGNITE-1823 jira
issue),
I still get an error. However the cause exception seems
to be different.
Please see attached server log.


Re: Migrating From Hazelcast Service Interface To Apache Ignite Service Interface

2016-02-12 Thread vkulichenko
Hi Gareth,

First of all, Ignite provides IndexingSpi interface [1] that you can
implement to support custom query language and custom indexing
implementation. The SPI will be used when a special type of query, SpiQuery
[2], is executed. Is this something that can help you? Let us know if you
have follow up questions.

Also it's not completely clear which Hazelcast APIs you are referring to.
Since you mentioned partitions, it looks like you're using caches. Is there
something on the cache API that is provided by Hazelcast, but not Ignite?
Internally Ignite cache does not follow thread-per-partition model, but this
is just an implementation detail. We use other mechanisms to ensure data
consistency, and I am not sure I understand your concern here. Am I missing
something?

[1]
https://ignite.apache.org/releases/1.5.0.final/javadoc/org/apache/ignite/spi/indexing/IndexingSpi.html
[2]
https://ignite.apache.org/releases/1.5.0.final/javadoc/org/apache/ignite/cache/query/SpiQuery.html

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Migrating-From-Hazelcast-Service-Interface-To-Apache-Ignite-Service-Interface-tp2970p2992.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Uberjar for IGFS 1.5.0

2016-02-12 Thread Kobe
I am likely asking a question that pertains more to Maven than to IGFS.
I am trying to find if there is an uberjar containing all dependencies
required by IGFS 1.5.0 
that I could include in my project.

Thanks,

/Kobe



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Uberjar-for-IGFS-1-5-0-tp2993.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
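
As far as I know, no official all-in-one jar is published for the 1.5.0 release; a common approach is to build one yourself with the maven-shade-plugin. A minimal pom.xml sketch (the `ignite-core` and `ignite-hadoop` coordinates are assumptions; check which Ignite modules your IGFS setup actually needs):

```xml
<!-- Hypothetical pom.xml fragment: bundle Ignite/IGFS dependencies into one jar. -->
<dependencies>
  <dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>1.5.0.final</version>
  </dependency>
  <dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-hadoop</artifactId>
    <version>1.5.0.final</version>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.4.3</version>
      <executions>
        <execution>
          <!-- "mvn package" then produces a single shaded jar. -->
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```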


Re: Version issue with concurrent cache updates (EntryProcessor)

2016-02-12 Thread Myron Chelyada
Will try to apply some workaround, and looking forward to the fix.

2016-02-12 16:58 GMT+02:00 Alexey Goncharuk :

> Myron,
>
> Thank you for reporting the issue. The assertion happens when the value is
> present in the store, absent in the cache and you run invokeAll(). As a
> temporary solution, you can either call invoke() for each particular key
> individually, or call getAll() for the keys prior to calling invokeAll()
> (this will pre-load the values to the cache).
> Since the issue is pretty critical, I believe it will be fixed in 1.6 (if
> not earlier).
>
> Yakov, Sam,
> I created a ticket [1] and suggested a fix there, can you take a look and
> check if the fix is ok?
>
> Thanks,
> AG
>
> [1] https://issues.apache.org/jira/browse/IGNITE-2645
>